You Asked
Dear Tom,
A. Your new book is illuminating.
B. Could you please post a 10053 trace file and explain in your own words what is going on there?
C. I tried to hack it myself - it didn't work for me, too many numbers.
Regards,
Amiel
and we said...
Do you have access to my book "Effective Oracle by Design"? In there I write (among lots of other stuff):
Use the 10053 Event to Trace CBO Choices
Have you ever wondered why the CBO is making the choices it is making? It is not because it is arbitrary or capricious; rather, the opposite. It is a piece of predictable software that comes to conclusions based on inputs. Given the same inputs, it will steadfastly give the same outputs (well, until you upgrade, that is). It appears arbitrary and capricious at times because what it does is among the most complex things Oracle itself does. Cost-based query optimization is a hard thing to do - truly computer science. The myriad of facts, parameters, and choices the CBO is faced with makes it difficult for a human to predict what will happen.
When I'm faced with a query plan I just cannot explain, I use the 10053 trace event. I use this after I've tried all of the normal paths, such as using hints to get the query plan I felt I should have gotten in the first place. Most of the time, the COST/CARD output resulting from that exercise is sufficient to see where the plan went wrong and what statistics I might be missing.
Note that this trace event is undocumented, unsupported, and works only in certain circumstances. However, it is widely known outside Oracle Corporation. A simple Google search for event 10053 will return more than 1,500 documents on the Web and hundreds of hits on the newsgroups. There are even Oracle support notes on this topic available on metalink.oracle.com, the Oracle support web site, if you search for 10053. I do not make a regular practice of reading these trace files. In fact, I most often use them in the filing of a Technical Assistance Request (TAR) with Oracle support. They provide information to the support analysts that can be useful in diagnosing optimizer-related issues. However, an example will show you what you might expect to see in the trace file generated by this event. In order to generate the CBO trace file, we
need to set an event and then simply parse a query. One technique is as follows:
big_table@ORA920> ALTER SESSION SET EVENTS
2 '10053 trace name context forever, level 1';
Session altered.
big_table@ORA920> explain plan for
2 select * from big_table where object_id = 55;
Explained.
Now we are ready to inspect the trace file. You can use the same technique outlined in the "TKPROF" section of Chapter 2 to get a trace filename for your session (see that chapter for details if you do not know how to identify your session's trace file). Upon exiting SQL*Plus and editing the trace file, you will see something like this:
/usr/oracle/ora920/OraHome1/admin/ora920/udump/ora920_ora_23183.trc
Oracle9i Enterprise Edition Release 9.2.0.1.0 - Production
With the Partitioning, OLAP and Oracle Data Mining options
JServer Release 9.2.0.1.0 - Production
ORACLE_HOME = /usr/oracle/ora920/OraHome1
System name: Linux
Node name: tkyte-pc-isdn.us.oracle.com
Release: 2.4.18-14
Version: #1 Wed Sep 4 13:35:50 EDT 2002
Machine: i686
Instance name: ora920
Redo thread mounted by this instance: 1
Oracle process number: 18
Unix process pid: 23183, image: oracle@tkyte-pc-isdn.us.oracle.com
This is just the standard trace file header. The interesting stuff comes next:
*** SESSION ID:(15.1158) 2003-01-26 16:54:53.834
QUERY
explain plan for select * from big_table where object_id = 55
***************************************
PARAMETERS USED BY THE OPTIMIZER
********************************
OPTIMIZER_FEATURES_ENABLE = 9.2.0
OPTIMIZER_MODE/GOAL = Choose
lots chopped out here
DB_FILE_MULTIBLOCK_READ_COUNT = 16
_NEW_SORT_COST_ESTIMATE = TRUE
_GS_ANTI_SEMI_JOIN_ALLOWED = TRUE
_CPU_TO_IO = 0
_PRED_MOVE_AROUND = TRUE
***************************************
There, you will find the parameters that affect the CBO. The parameters that start with "_" are undocumented parameters. Most of the other parameters are discussed in the previous sections. These are the parameters you have free access to modify as you see fit.
Caution: Never set _ parameters in a real system without the express guidance and consent of support. This is not just me "stating the company line" here; this is for real. Undocumented parameters have unanticipated side effects (they could be seriously damaging to your data, your security, and many other things). Undocumented parameters change in meaning from release to release. Every time you upgrade, you need to ask support, "Do I still need this undocumented parameter?" I will not discuss the meaning, range of values, or use of any of these undocumented parameters.
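As an aside, on newer releases you do not have to parse the parameter dump out of the trace file at all; you can query the optimizer environment directly. A sketch, assuming the V$SES_OPTIMIZER_ENV view exists in your release (it appeared around 10g):

```sql
-- Sketch: optimizer parameters in effect for the current session.
-- Assumes V$SES_OPTIMIZER_ENV is available (10g and later).
select name, value, isdefault
  from v$ses_optimizer_env
 where sid = (select sid from v$mystat where rownum = 1)
 order by name;
```

This shows the same documented parameters the 10053 dump lists, along with whether each one is at its default.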
Next in the trace file you'll find the base statistical information used:
BASE STATISTICAL INFORMATION
***********************
Table stats Table: BIG_TABLE Alias: BIG_TABLE
TOTAL :: CDN: 1833857 NBLKS: 22188 AVG_ROW_LEN: 84
-- Index stats
INDEX NAME: OBJECT_ID_IDX COL#: 4
TOTAL :: LVLS: 2 #LB: 4202 #DK: 1833792 LB/K: 1 DB/K: 1 CLUF: 21921
INDEX NAME: OBJECT_TYPE_IDX COL#: 6
TOTAL :: LVLS: 2 #LB: 5065 #DK: 27 LB/K: 187 DB/K: 2414 CLUF: 65187
_OPTIMIZER_PERCENT_PARALLEL = 0
You'll see this for all of the referenced objects in the query. This in itself can help you diagnose the problem right away. If you know that the cardinality of the table BIG_TABLE is really 1 and the number of blocks is 1, right here you can see what the problem is. The optimizer sees cardinality (CDN) of 1,833,857 rows and 22,188 blocks (NBLKS). So, the statistics would be out of date if there were really just one row.
Here's a quick explanation of what the abbreviations mean:
CDN Cardinality, a count of rows
NBLKS Number of blocks
AVG_ROW_LEN The computed average row length
COL# Column numbers in the table the index is on (select * from user_tab_columns where column_id = 4 and table_name = 'BIG_TABLE' would reveal the name of the column in this case)
LVLS Number of levels in the B*Tree
#LB Number of leaf blocks
#DK Number of distinct keys
LB/K Number of leaf blocks per key value on average
DB/K Number of base table data blocks per key value; how many table accesses (logical I/Os) would be made using an equality predicate on this index. Directly related to the clustering factor below.
CLUF Clustering factor of this index; a measure of how sorted the base table is with respect to this index.
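These trace abbreviations map directly onto statistics you can query from the data dictionary, which is a quick way to sanity-check the trace. A sketch of that cross-check (the columns below are the standard USER_INDEXES columns):

```sql
-- Cross-check the 10053 index stats against the dictionary:
--   LVLS ~ BLEVEL, #LB ~ LEAF_BLOCKS, #DK ~ DISTINCT_KEYS,
--   LB/K ~ AVG_LEAF_BLOCKS_PER_KEY, DB/K ~ AVG_DATA_BLOCKS_PER_KEY,
--   CLUF ~ CLUSTERING_FACTOR
select index_name, blevel, leaf_blocks, distinct_keys,
       avg_leaf_blocks_per_key, avg_data_blocks_per_key,
       clustering_factor
  from user_indexes
 where table_name = 'BIG_TABLE';
```

If the dictionary numbers and the trace numbers disagree, the trace was generated against older statistics than the ones currently in place.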
Many times, a quick scan of this information can pinpoint the problem right away. A smack to the forehead, followed by a quick analysis of the table, and you've fixed the problem. If not, it starts to get more complex.
The next section shows the table-access cost, presenting the best access method for each table. Again, if the information appears incorrect to you, that would point to bad statistics or insufficient statistics (for example, no histograms, where histograms would have played an important role). In this example, you would see the following:
***************************************
SINGLE TABLE ACCESS PATH
Column: OBJECT_ID Col#: 4 Table: BIG_TABLE Alias: BIG_TABLE
NDV: 1833792 NULLS: 0 DENS: 5.4534e-07
HEIGHT BALANCED HISTOGRAM: # BKT: 75 #VAL: 76
TABLE: BIG_TABLE ORIG CDN: 1833857 ROUNDED CDN: 1 CMPTD CDN: 1
Access path: tsc Resc: 2136 Resp: 2136
Access path: index (equal)
Index: OBJECT_ID_IDX
TABLE: BIG_TABLE
RSC_CPU: 0 RSC_IO: 4
IX_SEL: 0.0000e+00 TB_SEL: 5.4534e-07
BEST_CST: 4.00 PATH: 4 Degree: 1
***************************************
Here, you can see more basic information and the costs associated with the various access paths into this table. This example shows two access paths:
Access Path: tsc A table scan, with a "serial" cost of 2136 (resc) and a parallel cost of 2136 (resp) in this case.
Access Path: index (equal) An index access based on an equality predicate. Other paths might be index unique, no sta/sto keys-unbounded range scan (start/stop keys), index, and so on. This has a serial cost of 4 based on estimated I/O (RSC_IO).
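The numbers in that trace snippet hang together arithmetically: the computed cardinality is essentially the table's row count times the column density, and the best path is simply the cheapest candidate. A minimal sketch of that arithmetic using the trace values above (the never-below-one rounding floor is my inference from the output, not a documented formula):

```python
# Reproduce the cardinality and best-path arithmetic from the trace above.
num_rows = 1833857        # CDN from the table stats
density = 5.4534e-07      # DENS for OBJECT_ID

computed_cdn = num_rows * density            # CMPTD CDN, ~1.0
rounded_cdn = max(1, round(computed_cdn))    # ROUNDED CDN (assumed floor of 1)

# Serial costs (Resc) of the two access paths shown in the trace
paths = {"tsc": 2136.0, "index (equal)": 4.0}
best = min(paths, key=paths.get)

print(rounded_cdn)          # 1
print(best, paths[best])    # index (equal) 4.0
```

This is exactly the comparison the BEST_CST line records: the index path at cost 4 beats the table scan at cost 2136.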
Many times, what you cannot see is as important as what you can - for example, missing access paths: indexes you thought should be considered but were not. Their absence may help you figure out why they were not being used. For example, I recently received a question on AskTom:
We have two fact tables in our data warehouse: one for transactions of type V, another for transactions of type M. Each table is partitioned by day and there are 90 days online. There are 145 million rows of type V, 133 million of type M. They are structured in exactly the same way (same partition key, same indexes, etc.). The only real difference is that the row size of the table of type V is larger (203 bytes) than the row size of the table of type M (141 bytes). We have a query to look for transactions of type V for customer X. Performance is good. We have an "identical" query to look for transactions of type M for customer X. Performance is horrible. The difference between the two is that the query against V is using an INDEX RANGE SCAN. The query against M will only use an INDEX FULL SCAN. What could be the reason?
Well, we went back and forth on this - tried hints, tried everything. Spent a while scratching our heads. Finally, I asked them to email me the 10053 trace files. What I discovered was that the index was not only not being range scanned for the query against "M", it wasn't even being considered. That was a great clue - something was precluding this index from being used in that fashion. I immediately asked for the CREATE TABLE and CREATE INDEX statements - something I should have asked for in the beginning; I was overanalyzing the problem and didn't rule out the simple things first. When I got them, I discovered that the indexed column was a NUMBER(12) in the V table and a VARCHAR2(12) in the M table. It was a simple datatype conversion that was precluding the index from being used in a range scan! They were joining these V and M tables to some other table by this column - when it was a NUMBER, the index could be range scanned; when it was a VARCHAR2, it could not. In the end, what was assumed to be an "optimizer problem" turned out to be an implementation mistake - the column was supposed to be a NUMBER, and it was implemented wrong. Fixing that issue solved the problem immediately. I'm not sure that if I had looked at the CREATE TABLE statements I would have caught it right away myself - it was something they overlooked easily enough, and it was their data! The 10053 trace certainly helped here by showing that the access path was not even considered.
Now, continuing on in this trace file, you can also see the ORIG CDN, which is the original cardinality of the base table, as well as the CMPTD CDN, the computed cardinality - how many rows the optimizer expects to get back from this object (regardless of the access method).
Now, unfortunately, the interesting output of this is the PATH, which is a nebulous 4 in this case. The 4 just happens to mean index-range scan. We know that because the least-cost access path is the index (equal), and that it is really an index-range scan (it has to be). OBJECT_ID_IDX is a nonunique index, and the predicate was where object_id = 55.
Basically, in this section you will be looking to see if the raw numbers even make sense, given your knowledge of the real situation. If not, it probably points to bad statistics, something you need to correct before assuming that something is wrong with the optimizer. Or, as noted, you'll be looking for what isn't there - and then trying to figure out "why" as I did.
The rest of this report, while interesting, is mostly useful only to Oracle support analysts and the optimizer developers themselves. It can be interesting to look at, to try and get a better understanding of the complexity of the software that is the optimizer. Here, you will see a review of all of the possible plans, including their costs and what they are doing. At the very bottom, you'll ultimately see the chosen plan and its cost.
Now, the trace file in this example is fairly small, concise, and easy to understand. If you take a nontrivial query, however, you'll see how complex this file can truly be. I did a 150-table join and traced it. The resulting trace file was almost 80,000 lines of text, simply due to the massive permutations by which the query could be processed after reordering the tables. So, my recommendation with this 10053 trace event is that you use it only when you believe the optimizer to be very wrong. You would generate the trace file, look at the first few pages, eyeball the numbers and init.ora settings, and make sure that they are realistic. If not, you would start there by correcting the problem - adjusting the parameter you thought was already set, or fixing the statistics by updating out-of-date ones or doing a more comprehensive gather, to include histograms, for example.
Lastly, if all else fails, you have the trace file you need to open a TAR with Oracle support, to find out why a particular plan is being selected. At this point, it would be best if you actually have two traces: one for the original query and another for a hinted query that massively outperforms your existing one. This information will be more than enough for the support technicians to discover the scientific reason for the performance differences and probably to suggest a workaround to solve the problem.
Ask Tom
I did a little research and found out that the rounded cardinality is original cardinality/distinct values. Is that right? I knew sqlplus autotrace is not the true execution plan. In Oracle 10g, the rounded cardinality in the 10053 trace file matches the rows in the execution plan I saw in sqlplus autotrace traceonly. When I use 'alter session set sql_trace=true' and tkprof, I see another, different execution plan and the true number of rows returned. What I cannot understand is how Oracle estimates the rows in autotrace one way and actually picks another execution plan when executing the SQL. Am I missing something here?
Followup May 23, 2006 - 7am UTC:
... What I cannot understand is how Oracle estimates the rows in autotrace one way and actually picks another execution plan when executing the SQL. ... I don't know what you meant by that comment. Unless you are asking "why does autotrace sometimes not show me the actual plan used" - in which case there are many reasons:
a) explain plan does not do bind variable peeking; a real query would.
b) explain plan sees all binds as varchar2s - in real life they are numbers, dates, varchar2s and so on. Hence, implicit conversions may or may not be observed by explain plan that would be observed in real life.
c) explain plan does a hard parse and uses the statistics/dynamic samples that are IN PLACE RIGHT NOW. A query that was run for real "a little while ago" or "a little while from now" might see different inputs.
d) explain plan uses the current operational environment to come up with a plan. You might be explaining in a FIRST_ROWS session, whereas the query was actually executed in an ALL_ROWS session (or any one of the optimizer parameters could be different).
e) any combination of a, b, c, d...
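If what you want is the plan that was actually used, rather than explain plan's guess, on 10g and later you can pull it from the cursor cache after executing the statement for real. A sketch, assuming DBMS_XPLAN.DISPLAY_CURSOR is available in your release:

```sql
-- Execute the statement for real with a genuine bind, then ask for
-- the plan of the last statement this session executed (10g+):
variable n number
exec :n := 0
select * from t where id1 = :n;
select * from table(dbms_xplan.display_cursor(null, null, 'TYPICAL'));
```

Because the statement is actually parsed and executed, bind peeking, implicit conversions, and the session's real optimizer environment are all reflected in the plan you get back.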
As for the rounded - no, not even close. It is the "guessed-at cardinality".
Consider a pair of scripts, test.sql and test2.sql:
---------------- test.sql --------------------
set echo on
drop table t;
create table t
as
select rownum id1, case when mod(rownum,100)<80 then 0 else mod(rownum,100) end id2 from all_objects;
exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all columns size 254' );
select count(case when id1=0 then 1 end) id1_0,
count(case when id2=0 then 1 end) id2_0,
count(case when id1=42 then 1 end) id1_42,
count(case when id2=42 then 1 end) id2_42,
count(case when id1=99 then 1 end) id1_99,
count(case when id2=99 then 1 end) id2_99
from t
/
@test2 "id1=0"
@test2 "id2=0"
@test2 "id1=42"
@test2 "id2=42"
@test2 "id1=99"
@test2 "id2=99"
------------------------------------------
-------------- test2.sql -----------------
column trace new_val TRACE
select c.value || '/' || d.instance_name || '_ora_' || a.spid || '.trc' trace
from v$process a, v$session b, v$parameter c, v$instance d
where a.addr = b.paddr
and b.audsid = userenv('sessionid')
and c.name = 'user_dump_dest'
/
alter session set events '10053 trace name context forever, level 1';
set autotrace traceonly explain
select * from t where &1.;
set autotrace off
disconnect
connect /
!echo &trace
!grep 'Card: Original: ' &TRACE
------------------------------------------------------
When run, it produces the following output. Note how the autotrace estimate is the "rounded card" - it is the GUESS, the estimated cardinality flowing out of that step.
ops$tkyte@ORA10GR2> create table t
2 as
3 select rownum id1, case when mod(rownum,100)<80 then 0 else mod(rownum,100) end id2
4 from all_objects;
Table created.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> exec dbms_stats.gather_table_stats( user, 'T', method_opt => 'for all columns
size 254' );
PL/SQL procedure successfully completed.
ops$tkyte@ORA10GR2>
ops$tkyte@ORA10GR2> select count(case when id1=0 then 1 end) id1_0,
2 count(case when id2=0 then 1 end) id2_0,
3 count(case when id1=42 then 1 end) id1_42,
4 count(case when id2=42 then 1 end) id2_42,
5 count(case when id1=99 then 1 end) id1_99,
6 count(case when id2=99 then 1 end) id2_99
7 from t
8 /
ID1_0 ID2_0 ID1_42 ID2_42 ID1_99 ID2_99
---------- ---------- ---------- ---------- ---------- ----------
0 40079 1 0 1 500
ops$tkyte@ORA10GR2> @test2 "id1=0"
ops$tkyte@ORA10GR2> set echo off
TRACE
-------------------------------------------------------------------------------
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12273.trc
Session altered.
old 1: select * from t where &1.
new 1: select * from t where id1=0
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 24 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 6 | 24 (9)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID1"=0)
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Connected.
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12273.trc
Card: Original: 50087 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
Card: Original: 50087 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
ops$tkyte@ORA10GR2> @test2 "id2=0"
ops$tkyte@ORA10GR2> set echo off
TRACE
-------------------------------------------------------------------------------
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12277.trc
Session altered.
old 1: select * from t where &1.
new 1: select * from t where id2=0
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 39812 | 233K| 24 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 39812 | 233K| 24 (9)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID2"=0)
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -
Production
With the Partitioning, OLAP and Data Mining options
Connected.
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12277.trc
Card: Original: 50087 Rounded: 39812 Computed: 39811.60 Non Adjusted: 39811.60
Card: Original: 50087 Rounded: 39812 Computed: 39811.60 Non Adjusted: 39811.60
ops$tkyte@ORA10GR2> @test2 "id1=42"
ops$tkyte@ORA10GR2> set echo off
TRACE
-------------------------------------------------------------------------------
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12280.trc
Session altered.
old 1: select * from t where &1.
new 1: select * from t where id1=42
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 24 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 6 | 24 (9)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID1"=42)
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Connected.
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12280.trc
Card: Original: 50087 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
Card: Original: 50087 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
ops$tkyte@ORA10GR2> @test2 "id2=42"
ops$tkyte@ORA10GR2> set echo off
TRACE
-------------------------------------------------------------------------------
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12285.trc
Session altered.
old 1: select * from t where &1.
new 1: select * from t where id2=42
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 24 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 6 | 24 (9)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID2"=42)
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - Production
With the Partitioning, OLAP and Data Mining options
Connected.
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12285.trc
Card: Original: 50087 Rounded: 1 Computed: 0.49 Non Adjusted: 0.49
Card: Original: 50087 Rounded: 1 Computed: 0.49 Non Adjusted: 0.49
ops$tkyte@ORA10GR2> @test2 "id1=99"
ops$tkyte@ORA10GR2> set echo off
TRACE
-------------------------------------------------------------------------------
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12288.trc
Session altered.
old 1: select * from t where &1.
new 1: select * from t where id1=99
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 6 | 24 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 1 | 6 | 24 (9)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID1"=99)
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -
Production
With the Partitioning, OLAP and Data Mining options
Connected.
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12288.trc
Card: Original: 50087 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
Card: Original: 50087 Rounded: 1 Computed: 1.00 Non Adjusted: 1.00
ops$tkyte@ORA10GR2> @test2 "id2=99"
ops$tkyte@ORA10GR2> set echo off
TRACE
-------------------------------------------------------------------------------
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12291.trc
Session altered.
old 1: select * from t where &1.
new 1: select * from t where id2=99
Execution Plan
----------------------------------------------------------
Plan hash value: 1601196873
--------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
--------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 583 | 3498 | 24 (9)| 00:00:01 |
|* 1 | TABLE ACCESS FULL| T | 583 | 3498 | 24 (9)| 00:00:01 |
--------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("ID2"=99)
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 -
Production
With the Partitioning, OLAP and Data Mining options
Connected.
/home/ora10gr2/admin/ora10gr2/udump/ora10gr2_ora_12291.trc
Card: Original: 50087 Rounded: 583 Computed: 582.72 Non Adjusted: 582.72
Card: Original: 50087 Rounded: 583 Computed: 582.72 Non Adjusted: 582.72
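A pattern worth noticing across the runs above: the Rounded value is just the Computed estimate rounded to the nearest whole number, but never below 1 (id2=42 computes to 0.49 yet rounds to 1). A sketch of that rule, checked against the trace pairs above; the never-below-1 floor is my inference from this output, not a documented CBO formula:

```python
# Assumed rounding rule inferred from the 10053 output above:
# rounded = round(computed), floored at 1.
def rounded_card(computed):
    return max(1, round(computed))

# (Computed, Rounded) pairs taken from the trace runs above
pairs = [(1.00, 1), (39811.60, 39812), (0.49, 1), (582.72, 583)]
for computed, expected in pairs:
    assert rounded_card(computed) == expected
print("all pairs match")
```

The id2=42 case is the interesting one: a computed cardinality under 0.5 still produces a rounded estimate of 1, since the optimizer has to plan for at least one row.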
Ask Tom
Are you aware of any reason (other than a bug) that can produce the following 10053?
SELECT this_.ASE_A_UID as y0_
FROM SB_AGENT_SCHEDULE_ENTRY this_
INNER JOIN SB_AGENT agent1_ ON this_.ASE_A_UID = agent1_.A_UID
WHERE this_.ASE_CCS_UID = :1
AND this_.ASE_SGS_UID = :2
AND this_.ASE_IS_ACTIVE = :3
AND agent1_.A_MSG_REQ_GUID is null
Version is 10.2.0.4.4
No hints, but the following parameters were changed:
*************************************
PARAMETERS WITH ALTERED VALUES
******************************
_pga_max_size = 1048560 KB
optimizer_mode = first_rows
sqlstat_enabled = true
_b_tree_bitmap_plans = false
optimizer_index_cost_adj = 10
optimizer_index_caching = 90
_optim_peek_user_binds = false
optimizer_dynamic_sampling = 0
statistics_level = all
*********************************
the explain plan produced is :
---------------------------------------------------------------+------------------------------+
| Id | Operation | Name |Rows |Bytes|Cost | Time |
---------------------------------------------------------------+------------------------------+
| 0 | SELECT STATEMENT | | | | 62K | |
| 1 | NESTED LOOPS | | 4056| 606K| 62K | 00:13:39 |
| 2 | TABLE ACCESS FULL | SB_AGENT | 113K| 13M| 16K | 00:03:21 |
| 3 | TABLE ACCESS BY INDEX ROWID | SB_AGENT_SCHEDULE_ENTRY| 1| 37| 1 | 00:00:01 |
| 4 | INDEX RANGE SCAN | IX_ASE_A_UID_CCS_UID | 3| | 1 | 00:00:01 |
---------------------------------------------------------------+------------------------------+
Predicate Information:
----------------------
2 - filter("AGENT1_"."A_MSG_REQ_GUID" IS NULL)
3 - filter(("THIS_"."ASE_SGS_UID"=TO_NUMBER(:2) AND "THIS_"."ASE_IS_ACTIVE"=TO_NUMBER(:3)))
4 - access("THIS_"."ASE_A_UID"="AGENT1_"."A_UID" AND "THIS_"."ASE_CCS_UID"=TO_NUMBER(:1))
Changing optimizer_mode to first_rows_10 or _100 or _1000 fixes the problem.
I'm trying to find legitimate reason for such 10053, not to fix the query performance - it was fixed by a hint. The correct plan is :
---------------------------------------------------------------+-------------------------------+
| Id | Operation | Name |Rows |Bytes| Cost | Time |
---------------------------------------------------------------+-------------------------------+
| 0 | SELECT STATEMENT | | | | 2 | |
| 1 | NESTED LOOPS | | 11| 1683| 2 | 00:00:01 |
| 2 | TABLE ACCESS BY INDEX ROWID | SB_AGENT_SCHEDULE_ENTRY| 285| 10K| 1 | 00:00:01 |
| 3 | INDEX RANGE SCAN | IX_ASE_CCS_UID | 22K| | 1 | 00:00:01 |
| 4 | TABLE ACCESS BY INDEX ROWID | SB_AGENT | 11| 1276| 1 | 00:00:01 |
| 5 | INDEX UNIQUE SCAN | SYS_C0011186 | 1| | 1 | 00:00:01 |
---------------------------------------------------------------+-------------------------------+
Predicate Information:
----------------------
2 - filter(("THIS_"."ASE_SGS_UID"=TO_NUMBER(:2) AND "THIS_"."ASE_IS_ACTIVE"=TO_NUMBER(:3)))
3 - access("THIS_"."ASE_CCS_UID"=TO_NUMBER(:1))
4 - filter("AGENT1_"."A_MSG_REQ_GUID" IS NULL)
5 - access("THIS_"."ASE_A_UID"="AGENT1_"."A_UID")
SELECT this_.ASE_A_UID as y0_
Looks like the Best Path is the path with maximum (not minimum) cost:
***************************************
SINGLE TABLE ACCESS PATH
-----------------------------------------
BEGIN Single Table Cardinality Estimation
-----------------------------------------
Column (#2): ASE_CCS_UID(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 178094 Nulls: 0 Density: 5.6150e-06
Column (#4): ASE_SGS_UID(NUMBER) NO STATISTICS (using defaults)
AvgLen: 13.00 NDV: 178094 Nulls: 0 Density: 5.6150e-06
Column (#5): ASE_IS_ACTIVE(NUMBER)
AvgLen: 3.00 NDV: 2 Nulls: 0 Density: 0.0017993 Min: 0 Max: 1
Histogram: Freq #Bkts: 2 UncompBkts: 4724 EndPtVals: 2
Table: SB_AGENT_SCHEDULE_ENTRY Alias: THIS_
Card: Original: 5699014 Rounded: 285 Computed: 284.95 Non Adjusted: 284.95
-----------------------------------------
END Single Table Cardinality Estimation
-----------------------------------------
Access Path: TableScan
Cost: 8678.53 Resp: 8678.53 Degree: 0
Cost_io: 8399.00 Cost_cpu: 2395052452
Resp_io: 8399.00 Resp_cpu: 2395052452
kkofmx: index filter:"THIS_"."ASE_IS_ACTIVE"=TO_NUMBER(:B1) AND "AGENT1_"."A_MSG_REQ_GUID" IS NULL
AND "THIS_"."ASE_A_UID"="AGENT1_"."A_UID"
kkofmx: index filter:"THIS_"."ASE_CCS_UID"=TO_NUMBER(:B1) AND "THIS_"."ASE_SGS_UID"=TO_NUMBER(:B2)
AND "THIS_"."ASE_IS_ACTIVE"=TO_NUMBER(:B3) AND "AGENT1_"."A_MSG_REQ_GUID" IS NULL AND
"THIS_"."ASE_A_UID"="AGENT1_"."A_UID"
Access Path: index (skip-scan)
SS sel: 0.5 ANDV (#skips): 19356
SS io: 1935.55 vs. table scan io: 8399.00
Skip Scan chosen
Access Path: index (SkipScan)
Index: IX_ASE_A_UID
resc_io: 2511233.50 resc_cpu: 19588485753
ix_sel: 0.5 ix_sel_with_filters: 0.5
Cost: 251351.97 Resp: 251351.97 Degree: 1
Access Path: index (FullScan)
Index: IX_ASE_A_UID_CCS_UID
resc_io: 47008.00 resc_cpu: 1429460564
ix_sel: 1 ix_sel_with_filters: 0.0041982
Cost: 4726.99 Resp: 4726.99 Degree: 1
Access Path: index (AllEqGuess)
Index: IX_ASE_CCS_UID
resc_io: 2291.00 resc_cpu: 29571532
ix_sel: 0.0040461 ix_sel_with_filters: 0.0040461
Cost: 229.45 Resp: 229.45 Degree: 1
Best:: AccessPath: IndexRange Index: IX_ASE_A_UID
Cost: 251351.97 Degree: 1 Resp: 251351.97 Card: 284.95 Bytes: 0
***************************************
Followup December 22, 2011 - 3pm UTC:
... optimizer_mode = first_rows
...
there you go, that is it.
Not a bug, but what you requested. You got the plan that gets the first row as fast as possible - even though it might take days to get the last row.
first_rows, a setting you probably should not use anymore, is sort of a hybrid of rule-based and cost-based optimization. first_rows(n) is true cost-based optimization.
http://docs.oracle.com/cd/E11882_01/server.112/e25513/initparams168.htm#REFRN10145
Values:
first_rows_n
The optimizer uses a cost-based approach and optimizes with a goal of best response time to return the first n rows (where n = 1, 10, 100, 1000).
first_rows
The optimizer uses a mix of costs and heuristics to find a best plan for fast delivery of the first few rows.
all_rows
The optimizer uses a cost-based approach for all SQL statements in the session and optimizes with a goal of best throughput (minimum resource use to complete the entire statement).
The rule (heuristic) used is typically "an index is best, use them". However, it is often not the case that this is best.
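The fix the poster found can be expressed as a one-line session change, or per statement with a hint. A sketch (first_rows_10 is chosen only as an example value of n):

```sql
-- Replace the heuristic-driven first_rows goal with a true cost-based one:
alter session set optimizer_mode = first_rows_10;

-- Or, per statement, without touching the session:
select /*+ first_rows(10) */ this_.ASE_A_UID as y0_
  from SB_AGENT_SCHEDULE_ENTRY this_
       inner join SB_AGENT agent1_ on this_.ASE_A_UID = agent1_.A_UID
 where this_.ASE_CCS_UID = :1
   and this_.ASE_SGS_UID = :2
   and this_.ASE_IS_ACTIVE = :3
   and agent1_.A_MSG_REQ_GUID is null;
```

Either way, the optimizer costs plans honestly for the first n rows instead of applying the "an index is best" heuristic.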
Reference: http://asktom.oracle.com/pls/asktom/f?p=100:11:0%3A%3A%3A%3AP11_QUESTION_ID:63445044804318