Oracle 12c introduced partial indexing, which works well for simple partitioned tables with literals. However, it has several significant issues.
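For context, the basic 12c syntax looks like this (hypothetical object names): partitions are marked INDEXING ON/OFF and the index is created with INDEXING PARTIAL.

-- hypothetical example of 12c partial indexing
create table sales_p (
   id  number,
   dt  date,
   amt number
)
partition by range (dt) (
   partition p_old  values less than (date'2020-01-01') indexing off,
   partition p_2020 values less than (date'2021-01-01') indexing on,
   partition p_max  values less than (maxvalue)         indexing on
);

-- only partitions with INDEXING ON get usable index partitions
create index ix_sales_p_amt on sales_p(amt) local indexing partial;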
Slow index access “COL=:N” where :N is NULL
Every Oracle specialist knows that a predicate X=NULL can never be true and that we should use “X is NULL” in such cases. The Oracle optimizer knows this too, so if we create a table like this:
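A minimal sketch of such a setup (the table and column names here are hypothetical, not the ones from the original post): a column that is NULL in most rows, a B*Tree index on it, and a query whose bind variable happens to be NULL.

-- hypothetical setup: PAD is NULL in almost all rows, so the index on it is tiny
create table t_nulls(id, pad) as
  select level, case when level <= 10 then level end
  from dual connect by level <= 1e5;

create index ix_t_nulls_pad on t_nulls(pad);

-- the bind is NULL: "PAD = :N" can never be true, but the plan chosen
-- at parse time still has to be executed
var n number
exec :n := null;
select count(*) from t_nulls where pad = :n;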
Where does the commit or rollback happen in PL/SQL code?
One of the easiest ways is to use diagnostic events:
alter session set events 'sql_trace {callstack: fname xctend} errorstack(1)';
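A quick way to see it in action (a minimal sketch; run it in a test session and then check that session’s trace file for the XCTEND record and the accompanying stack dump):

alter session set events 'sql_trace {callstack: fname xctend} errorstack(1)';

-- hypothetical demo: the commit inside do_commit should show up in the trace
-- as an XCTEND record together with an errorstack dump taken at that point
declare
   procedure do_commit is
   begin
      commit;
   end;
begin
   do_commit;
end;
/

alter session set events 'sql_trace off';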
ORA exceptions that can’t be caught by exception handler
I know two “special” exceptions that can’t be caught by an exception handler:
- “ORA-01013: user requested cancel of current operation”
- “ORA-03113: end-of-file on communication channel”
- and, added by Matthias Rogel, “ORA-00028: your session has been killed”
Tanel Poder described the first one (ORA-01013) in detail here: https://tanelpoder.com/2010/02/17/how-to-cancel-a-query-running-in-another-session/, where he shows that this error is based on the SIGURG signal (kill -URG):
-- 1013 will not be caught:
declare
e exception;
pragma exception_init(e,-1013);
begin
raise e;
exception when others then dbms_output.put_line('caught');
end;
/
declare
*
ERROR at line 1:
ORA-01013: user requested cancel of current operation
ORA-06512: at line 5
Another interesting troubleshooting case
Got an interesting question today in RuOUG:
Some very simple PL/SQL procedures usually complete within ~50 ms, but occasionally take longer than a second. For example, the simplest of these procedures:
create or replace PROCEDURE XXXX (
P_ORG_NUM IN number,
p_result OUT varchar2,
p_seq OUT number
) AS
BEGIN
p_seq := P_ORG_NUM; p_result:='';
END;
sql_trace shows that it was executed for 1.001 sec and that all of that time was “ON CPU”.
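For reference, this kind of trace is usually captured with DBMS_MONITOR for the affected session (a sketch; the SID and SERIAL# values are placeholders):

-- enable extended SQL trace (waits + binds) for the problem session
begin
   dbms_monitor.session_trace_enable(
      session_id => 1234,     -- placeholder SID
      serial_num => 56789,    -- placeholder SERIAL#
      waits      => true,
      binds      => true);
end;
/
-- ... reproduce the slow calls, then:
begin
   dbms_monitor.session_trace_disable(session_id => 1234, serial_num => 56789);
end;
/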
Simple function returning Parallel slave info
You can also add any information from v$rtsm_sql_plan_monitor if needed.
create or replace function px_session_info return varchar2 parallel_enable as
   vSID int;
   res  varchar2(30);
begin
   vSID := userenv('sid');
   select
          to_char(s.server_group,'fm000')
    ||'-'||to_char(s.server_set,'fm0000')
    ||'-'||to_char(s.server#,'fm0000')
    ||'('||s.sid||','||s.degree||'/'||s.req_degree||')'
     into res
     from v$px_session s
    where s.sid = vSID;
   return res;
exception when no_data_found then
   return 'no_parallel';
end;
/
Simple example:
select--+ parallel
px_session_info, count(*)
from sys.obj$
group by px_session_info
/
PX_SESSION_INFO COUNT(*)
------------------------ --------
001-0002-0001(630,2/2) 38298
001-0002-0002(743,2/2) 34706
Smart Scan and Recursive queries
Since Christmas I have been asked to investigate two different “failures to use Smart Scan”. It turns out they both fell into the same little-known restriction on the use of Direct Read. Smart Scan critically depends on Direct Read in order to read the synthetic output blocks into private buffers in the PGA, so with Direct Read disabled, Smart Scan is also disabled. In these two cases the restriction is on using Direct Read for serial recursive queries.
Case 1: Materialized View Refresh
A customer asked me to investigate why his MView refresh was running slowly and was failing to use Smart Scan. He had used 'trace[NSMTIO] disk=highest', which showed the cause as:
Direct Read for serial qry: disabled(::recursive_call::kctfsage:::)
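For reference, that NSMTIO trace is typically enabled at session level with the usual UTS event syntax, after which the statement under investigation is re-run:

alter session set events 'trace[NSMTIO] disk=highest';
-- ... re-run the MView refresh / query being investigated, then switch it off:
alter session set events 'trace[NSMTIO] off';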
Reading and analyzing trace file contents using just SQL
Simple example: trace files from the last 5 minutes:
select fc.*
from v$diag_trace_file f
join v$diag_trace_file_contents fc
on f.adr_home=fc.adr_home
and f.trace_filename=fc.trace_filename
where f.modify_time >= systimestamp - interval'5' minute
and fc.timestamp >= systimestamp - interval'5' minute
and fc.component_name = 'SQL_Trace'
--and fc.section_name like 'kests%'
;
--or:
select tr.*
from v$diag_app_trace_file tf,
v$diag_sql_trace_records tr
where tf.sql_trace = 'Y'
and tf.modify_time > systimestamp - interval'5'minute
and tr.adr_home = tf.adr_home
and tr.trace_filename = tf.trace_filename
and tr.timestamp > systimestamp - interval'5'minute;
Workarounds for JPPD with view and table(kokbf$), xmltable or json_table functions
You may know that the table() (kokbf$ collection) functions, xmltable and json_table block Join Predicate Push-Down (JPPD).
Simple example:
create table xtest(a, b, c) as
  select mod(level,1000), level, rpad('x',100,'x')
  from dual
  connect by level<=1e4
/
create index itest on xtest(a)
/
create or replace view vtest as
  select a, count(b) cnt
  from xtest
  group by a
/
call dbms_stats.gather_table_stats(user,'xtest');
select distinct v.*
from table(sys.odcinumberlist(1,2,3)) c, vtest v
where v.a = c.column_value;

Plan hash value: 699667151

----------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |       |     1 |    19 |    80   (4)| 00:00:01 |
|   1 |  HASH UNIQUE                              |       |     1 |    19 |    80   (4)| 00:00:01 |
|*  2 |   HASH JOIN                               |       |     1 |    19 |    79   (3)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH  |       |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                   | VTEST |  1000 | 17000 |    49   (3)| 00:00:01 |
|   5 |     HASH GROUP BY                         |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   6 |      TABLE ACCESS FULL                    | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("V"."A"=VALUE(KOKBF$))
select/*+ cardinality(c 1) use_nl(v) push_pred(v) */
   *
from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) c
    ,vtest v
where c.a = v.a;

Plan hash value: 664523328

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS          |       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   JSONTABLE EVALUATION |       |       |       |            |          |
|*  3 |   VIEW                 | VTEST |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("V"."A"="P"."A")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$F534CA49 / V@SEL$1
           U -  push_pred(v)
select/*+ leading(c v) cardinality(c 1) use_nl(v) push_pred(v) */
   v.*
from xmltable('(1,3)' columns a int path '.') c
    ,vtest v
where c.a = v.a(+);

Plan hash value: 564839666

------------------------------------------------------------------------------------------------------------
| Id  | Operation                           | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                    |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER                 |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH | XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  3 |   VIEW                              | VTEST                 |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY                    |                       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL               | XTEST                 | 10000 | 80000 |    48   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - filter("V"."A"(+)=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),
       50,1,2)) AS int ))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------
   1 -  SEL$6722A2F6 / V@SEL$1
           U -  push_pred(v)
And compare with this:
create global temporary table temp_collection(a number);

insert into temp_collection
select * from table(sys.odcinumberlist(1,2,3));

select/*+ cardinality(c 1) no_merge(v) */
   distinct v.*
from temp_collection c, vtest v
where v.a = c.a;

Plan hash value: 3561835411

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                             |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    26 |    40   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                     | TEMP_COLLECTION |     1 |    13 |    29   (0)| 00:00:01 |
|   4 |    VIEW PUSHED PREDICATE                 | VTEST           |     1 |    13 |    11   (0)| 00:00:01 |
|*  5 |     FILTER                               |                 |       |       |            |          |
|   6 |      SORT AGGREGATE                      |                 |     1 |     8 |            |          |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   5 - filter(COUNT(*)>0)
   8 - access("A"="C"."A")
You can see that JPPD works fine with global temporary tables, so the obvious first workaround is to avoid such functions with complex views.
But in such simple queries you have two other simple options:
1. You can avoid JPPD and still get optimal plans through CVM (complex view merging) by simply rewriting the query using IN or EXISTS:
select *
from vtest v
where v.a in (select/*+ cardinality(c 1) */ c.*
              from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1474391442

---------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                     |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                            |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                           |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                           |       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                      | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID            | XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("A"=VALUE(KOKBF$))
select *
from vtest t
where t.a in (select/*+ cardinality(v 1) */ v.a
              from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) v);

Plan hash value: 2910004067

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT         |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS               |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE               |       |       |       |            |          |
|   5 |      JSONTABLE EVALUATION     |       |       |       |            |          |
|*  6 |     INDEX RANGE SCAN          | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID| XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("A"="P"."A")

select v.*
from vtest v
where exists(select/*+ cardinality(c 1) */ 1
             from xmltable('(1,3)' columns a int path '.') c
             where c.a = v.a);

Plan hash value: 1646016183

----------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                 |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                        |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                       |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                       |                       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | ITEST                 |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | XTEST                 |    10 |    80 |    11   (0)| 00:00:01 |
----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("A"=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),50,1,2)) AS int ))
2. Avoid JPPD using lateral():
select/*+ cardinality(c 1) no_merge(lat) */
   distinct lat.*
from table(sys.odcinumberlist(1,2,3)) c
    ,lateral(select * from vtest v where v.a = c.column_value) lat;

Plan hash value: 18036714

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                            |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                          |                 |    10 |   190 |    40   (0)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VW_LAT_4DB60E85 |    10 |   170 |    11   (0)| 00:00:01 |
|   5 |     SORT GROUP BY                       |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   7 - access("A"=VALUE(KOKBF$))
Let’s look at a slightly more complex query:
create table xtest1(id primary key, a) as
  select level, level
  from dual
  connect by level<=1000;

create table xtest2(a, b, c) as
  select mod(level,1000), level, rpad('x',100,'x')
  from dual
  connect by level<=1e4
/
create index itest2 on xtest2(a)
/
create or replace view vtest2 as
  select a, count(b) cnt
  from xtest2
  group by a
/
select v.*
from xtest1 t1, vtest2 v
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 4293766070

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                   | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                            |              |     1 |    36 |    80   (3)| 00:00:01 |
|*  1 |  HASH JOIN                                  |              |     1 |    36 |    80   (3)| 00:00:01 |
|   2 |   JOIN FILTER CREATE                        | :BF0000      |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    NESTED LOOPS                             |              |     1 |    10 |    31   (4)| 00:00:01 |
|   4 |     NESTED LOOPS                            |              |     1 |    10 |    31   (4)| 00:00:01 |
|   5 |      SORT UNIQUE                            |              |     1 |     2 |    29   (0)| 00:00:01 |
|   6 |       COLLECTION ITERATOR CONSTRUCTOR FETCH |              |     1 |     2 |    29   (0)| 00:00:01 |
|*  7 |      INDEX UNIQUE SCAN                      | SYS_C0026365 |     1 |       |     0   (0)| 00:00:01 |
|   8 |     TABLE ACCESS BY INDEX ROWID             | XTEST1       |     1 |     8 |     1   (0)| 00:00:01 |
|   9 |   VIEW                                      | VTEST2       |  1000 | 26000 |    49   (3)| 00:00:01 |
|  10 |    HASH GROUP BY                            |              |  1000 |  8000 |    49   (3)| 00:00:01 |
|  11 |     JOIN FILTER USE                         | :BF0000      | 10000 | 80000 |    48   (0)| 00:00:01 |
|* 12 |      TABLE ACCESS FULL                      | XTEST2       | 10000 | 80000 |    48   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("V"."A"="T1"."A")
   7 - access("T1"."ID"=VALUE(KOKBF$))
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"))
As you can see, CVM can’t help in this case, but we can use lateral():
select/*+ no_merge(lat) */
   lat.*
from xtest1 t1
    ,lateral(select * from vtest2 v where v.a = t1.a) lat
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1798023704

-------------------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   1 |  NESTED LOOPS                             |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                            |                 |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    SORT UNIQUE                            |                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH |                 |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID            | XTEST1          |     1 |     8 |     1   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN                     | SYS_C0026365    |     1 |       |     0   (0)| 00:00:01 |
|   7 |   VIEW                                    | VW_LAT_A18161FF |    10 |   260 |    11   (0)| 00:00:01 |
|   8 |    SORT GROUP BY                          |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED   | XTEST2          |    10 |    80 |    11   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                     | ITEST2          |    10 |       |     1   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("T1"."ID"=VALUE(KOKBF$))
  10 - access("A"="T1"."A")
There is also another workaround using the undocumented “precompute_subquery” hint:
select v.*
from xtest1 t1, vtest2 v
where t1.id in (select/*+ precompute_subquery */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 1964829099

---------------------------------------------------------------------------------------------------
| Id  | Operation                         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                  |              |    30 |   480 |    37   (3)| 00:00:01 |
|   1 |  HASH GROUP BY                    |              |    30 |   480 |    37   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                    |              |    30 |   480 |    36   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                   |              |    30 |   480 |    36   (0)| 00:00:01 |
|   4 |     INLIST ITERATOR               |              |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID  | XTEST1       |     3 |    24 |     3   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN           | SYS_C0026365 |     3 |       |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN              | ITEST2       |    10 |       |     1   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY INDEX ROWID    | XTEST2       |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   6 - access("T1"."ID"=1 OR "T1"."ID"=2 OR "T1"."ID"=3)
   7 - access("A"="T1"."A")
It can help even in the most difficult cases, for example when you can’t rewrite the query (in that case you can create a SQL patch or SQL profile with “precompute_subquery”), but I wouldn’t suggest it, since “precompute_subquery” is undocumented, works only with simple collections and is limited to 1000 values.
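For reference, a SQL patch with such a hint can be created roughly like this (a sketch using the dbms_sqldiag.create_sql_patch function available in 12.2+; the sql_id and patch name are placeholders, and in practice the hint text may need a query-block qualifier to reach the right subquery):

-- hypothetical sketch: attach the hint to a statement by sql_id (12.2+ API)
declare
   l_patch varchar2(128);
begin
   l_patch := dbms_sqldiag.create_sql_patch(
                 sql_id    => '&sql_id',                     -- placeholder sql_id
                 hint_text => 'precompute_subquery',
                 name      => 'patch_precompute_subquery');
   dbms_output.put_line('created patch: '||l_patch);
end;
/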
I’d suggest using the lateral() workaround, since it’s the most reliable and very simple.
Top time-consuming predicates from ASH
Sometimes it might be useful to analyze the top time-consuming filter and access predicates from ASH, especially when the database load is spread fairly evenly across different queries and the top segments do not show anything interesting beyond the usual things like “some tables are requested more often than others”.
Of course, we can start with an analysis of SYS.COL_USAGE$: col_usage.sql
col owner format a30
col oname format a30 heading "Object name"
col cname format a30 heading "Column name"
accept owner_mask prompt "Enter owner mask: ";
accept tab_name   prompt "Enter tab_name mask: ";
accept col_name   prompt "Enter col_name mask: ";

SELECT a.username            as owner
      ,o.name                as oname
      ,c.name                as cname
      ,u.equality_preds      as equality_preds
      ,u.equijoin_preds      as equijoin_preds
      ,u.nonequijoin_preds   as nonequijoin_preds
      ,u.range_preds         as range_preds
      ,u.like_preds          as like_preds
      ,u.null_preds          as null_preds
      ,to_char(u.timestamp, 'yyyy-mm-dd hh24:mi:ss') when
FROM   sys.col_usage$ u
      ,sys.obj$       o
      ,sys.col$       c
      ,all_users      a
WHERE  a.user_id = o.owner#
AND    u.obj#    = o.obj#
AND    u.obj#    = c.obj#
AND    u.intcol# = c.col#
AND    a.username like upper('&owner_mask')
AND    o.name     like upper('&tab_name')
AND    c.name     like upper('&col_name')
ORDER BY a.username, o.name, c.name
;
col owner clear;
col oname clear;
col cname clear;
undef tab_name col_name owner_mask;
But that’s not enough; for example, it doesn’t show combinations of predicates. In this case we can use v$active_session_history and v$sql_plan:
with ash as (
   select sql_id
         ,plan_hash_value
         ,table_name
         ,alias
         ,ACCESS_PREDICATES
         ,FILTER_PREDICATES
         ,count(*) cnt
   from (
         select h.sql_id
               ,h.SQL_PLAN_HASH_VALUE plan_hash_value
               ,decode(p.OPERATION
                      ,'TABLE ACCESS',p.OBJECT_OWNER||'.'||p.OBJECT_NAME
                      ,(select i.TABLE_OWNER||'.'||i.TABLE_NAME
                        from dba_indexes i
                        where i.OWNER=p.OBJECT_OWNER
                          and i.index_name=p.OBJECT_NAME)
                      ) table_name
               ,OBJECT_ALIAS ALIAS
               ,p.ACCESS_PREDICATES
               ,p.FILTER_PREDICATES
               -- columns that may be useful for analysis by other dimensions:
               -- ,h.sql_plan_operation
               -- ,h.sql_plan_options
               -- ,decode(h.session_state,'ON CPU','ON CPU',h.event) event
               -- ,h.current_obj#
         from v$active_session_history h
             ,v$sql_plan p
         where h.sql_opname='SELECT'
           and h.IN_SQL_EXECUTION='Y'
           and h.sql_plan_operation in ('INDEX','TABLE ACCESS')
           and p.SQL_ID = h.sql_id
           and p.CHILD_NUMBER = h.SQL_CHILD_NUMBER
           and p.ID = h.SQL_PLAN_LINE_ID
           -- if we want only the last 3 hours:
           -- and h.sample_time >= systimestamp - interval '3' hour
        )
   -- if we want to analyze the predicates of one table only:
   -- where table_name='&OWNER.&TABNAME'
   group by sql_id
           ,plan_hash_value
           ,table_name
           ,alias
           ,ACCESS_PREDICATES
           ,FILTER_PREDICATES
)
,agg_by_alias as (
   select table_name
         ,regexp_substr(ALIAS,'^[^@]+') ALIAS
         ,listagg(ACCESS_PREDICATES,' ') within group(order by ACCESS_PREDICATES) ACCESS_PREDICATES
         ,listagg(FILTER_PREDICATES,' ') within group(order by FILTER_PREDICATES) FILTER_PREDICATES
         ,sum(cnt) cnt
   from ash
   group by sql_id
           ,plan_hash_value
           ,table_name
           ,alias
)
,agg as (
   select table_name
         ,'ALIAS' alias
         ,replace(access_predicates,'"'||alias||'".','"ALIAS".') access_predicates
         ,replace(filter_predicates,'"'||alias||'".','"ALIAS".') filter_predicates
         ,sum(cnt) cnt
   from agg_by_alias
   group by table_name
           ,replace(access_predicates,'"'||alias||'".','"ALIAS".')
           ,replace(filter_predicates,'"'||alias||'".','"ALIAS".')
)
,cols as (
   select table_name
         ,cols
         ,access_predicates
         ,filter_predicates
         ,sum(cnt)over(partition by table_name,cols) total_by_cols
         ,cnt
   from agg
       ,xmltable(
          'string-join(for $c in /ROWSET/ROW/COL order by $c return $c,",")'
          passing
             xmltype(
                cursor(
                   (select distinct
                           nvl(
                              regexp_substr(
                                 access_predicates||' '||filter_predicates
                                ,'("'||alias||'"\.|[^.]|^)"([A-Z0-9#_$]+)"([^.]|$)'
                                ,1
                                ,level
                                ,'i',2
                              ),' ') col
                    from dual
                    connect by level<=regexp_count(
                                 access_predicates||' '||filter_predicates
                                ,'("'||alias||'"\.|[^.]|^)"([A-Z0-9#_$]+)"([^.]|$)')
                   )
                ))
          columns cols varchar2(400) path '.'
        )(+)
   order by total_by_cols desc, table_name, cnt desc
)
select table_name
      ,cols
      ,sum(cnt)over(partition by table_name,cols) total_by_cols
      ,access_predicates
      ,filter_predicates
      ,cnt
from cols
where rownum<=50
order by total_by_cols desc, table_name, cnt desc;
As you can see, it shows the top 50 predicates and their columns (optionally restricted to the last 3 hours via the commented-out sample_time filter). Although ASH stores only sampled data, its results are representative enough on high-load databases.
Just a few details:
- The “COLS” column shows the “search columns”, and total_by_cols shows their number of occurrences.
- Obviously, this information is not an unambiguous marker of a problem, because, for example, a few full table scans can skew the statistics, so sometimes you will need to analyze such queries in more detail (v$sqlstats, dba_hist_sqlstat).
- We need to group the data by OBJECT_ALIAS within SQL_ID and plan_hash_value, because in the case of index access with a lookup to the table (“table access by rowid”) some predicates are on the index-access row and others are on the table-access row.
Depending on your needs, you can modify this query to analyze ASH data by other dimensions, for example adding partitioning or wait events (a simplified wait-event variant is sketched below).
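For instance, a simplified variant of the same idea (my own sketch, not part of the original script) that breaks the top tables and indexes down by wait event instead of by predicate columns:

-- top tables/indexes from ASH broken down by wait event (simplified sketch)
select table_name
      ,event
      ,count(*) cnt
from (
      select decode(p.operation
                   ,'TABLE ACCESS', p.object_owner||'.'||p.object_name
                   ,(select i.table_owner||'.'||i.table_name
                     from dba_indexes i
                     where i.owner = p.object_owner
                       and i.index_name = p.object_name)
                   )                                           table_name
            ,decode(h.session_state,'ON CPU','ON CPU',h.event) event
      from v$active_session_history h
          ,v$sql_plan p
      where h.sql_plan_operation in ('INDEX','TABLE ACCESS')
        and p.sql_id       = h.sql_id
        and p.child_number = h.sql_child_number
        and p.id           = h.sql_plan_line_id
     )
group by table_name, event
order by cnt desc
fetch first 20 rows only;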