Oracle SQL

Tag Archives: troubleshooting

Smart Scan and Recursive queries

Posted on March 5, 2020 by Roger MacNicol Posted in adaptive serial direct path reads, CBO, hints, oracle, parallel, SmartScan, trace, troubleshooting

Since Christmas I have been asked to investigate two different "failures to use Smart Scan". It turns out they both fell foul of the same little-known restriction on the use of Direct Read. Smart Scan critically depends on Direct Read in order to read the synthetic output blocks into private buffers in the PGA, so with Direct Read disabled, Smart Scan is also disabled. In these two cases the restriction is on using Direct Read for serial recursive queries.

Case 1: Materialized View Refresh

A customer asked me to investigate why his MView refresh was running slowly and was failing to use Smart Scan. He had used 'trace[NSMTIO] disk=highest' which showed the cause as:

Direct Read for serial qry: disabled(::recursive_call::kctfsage:::)
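
For reference, that message comes from the unified tracing event interface; a minimal sketch of enabling the same tracing in a test session before re-running the statement (the component name and level are exactly those quoted above) is:

alter session set events 'trace[NSMTIO] disk=highest';
-- re-run the slow statement, then check the session trace file for the NSMTIO decisions
alter session set events 'trace[NSMTIO] off';
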
Continue reading→
direct path reads hints oracle Roger MacNicol SmartScan troubleshooting

Reading and analyzing trace file contents using just SQL

Posted on July 11, 2019 by Sayan Malakshinov Posted in 12.2, oracle, trace, troubleshooting

Simple example: trace files from the last 5 minutes:

select fc.* 
from v$diag_trace_file f
     join v$diag_trace_file_contents fc
          on f.adr_home=fc.adr_home
          and f.trace_filename=fc.trace_filename
where f.modify_time >= systimestamp - interval'5' minute
  and fc.timestamp  >= systimestamp - interval'5' minute
  and fc.component_name = 'SQL_Trace'
  --and fc.section_name like 'kests%'
  ;
--or:
select tr.*
  from v$diag_app_trace_file tf,
       v$diag_sql_trace_records tr
 where tf.sql_trace = 'Y'
   and tf.modify_time > systimestamp - interval'5'minute
   and tr.adr_home = tf.adr_home
   and tr.trace_filename = tf.trace_filename
   and tr.timestamp > systimestamp - interval'5'minute;
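
A hedged variant of the same idea: dumping the full text of one specific trace file, assuming the PAYLOAD and LINE_NUMBER columns of v$diag_trace_file_contents (the substitution variable is just a placeholder):

select fc.payload
  from v$diag_trace_file_contents fc
 where fc.trace_filename = '&tracefile_name'
 order by fc.line_number;
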
10046 10053 diag traces troubleshooting

Workarounds for JPPD with view and table(kokbf$), xmltable or json_table functions

Posted on May 30, 2019 by Sayan Malakshinov Posted in CBO, oracle, query optimizing, SQL, troubleshooting

You may know that the table() (kokbf$ collection) functions, xmltable and json_table block Join Predicate Push-Down (JPPD).

Simple example:

DDL

create table xtest(a, b, c) as
select mod(level,1000),level,rpad('x',100,'x')
from dual
connect by level<=1e4
/
create index itest on xtest(a)
/
create or replace view vtest as
select a,count(b) cnt
from xtest
group by a
/
call dbms_stats.gather_table_stats(user,'xtest');
/

select distinct v.* 
from table(sys.odcinumberlist(1,2,3)) c, vtest v
where v.a = c.column_value;

Plan hash value: 699667151

-------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |       |     1 |    19 |    80   (4)| 00:00:01 |
|   1 |  HASH UNIQUE                            |       |     1 |    19 |    80   (4)| 00:00:01 |
|*  2 |   HASH JOIN                             |       |     1 |    19 |    79   (3)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VTEST |  1000 | 17000 |    49   (3)| 00:00:01 |
|   5 |     HASH GROUP BY                       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   6 |      TABLE ACCESS FULL                  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V"."A"=VALUE(KOKBF$))

The same happens with json_table:

select/*+ cardinality(c 1) use_nl(v) push_pred(v) */ *
from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) c
    ,vtest v
where c.a = v.a;

Plan hash value: 664523328

--------------------------------------------------------------------------------
| Id  | Operation              | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------
|   0 | SELECT STATEMENT       |       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS          |       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   JSONTABLE EVALUATION |       |       |       |            |          |
|*  3 |   VIEW                 | VTEST |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY       |       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL  | XTEST | 10000 | 80000 |    48   (0)| 00:00:01 |
--------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"="P"."A")

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$F534CA49 / V@SEL$1
           U -  push_pred(v)
The same happens with xmltable:

select/*+ leading(c v) cardinality(c 1) use_nl(v) push_pred(v) */ v.*
from xmltable('(1,3)' columns a int path '.') c, vtest v
where c.a = v.a(+);

Plan hash value: 564839666

------------------------------------------------------------------------------------------------------------
| Id  | Operation                          | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                   |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   1 |  NESTED LOOPS OUTER                |                       |     1 |    28 |    78   (2)| 00:00:01 |
|   2 |   COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  3 |   VIEW                             | VTEST                 |     1 |    26 |    49   (3)| 00:00:01 |
|   4 |    SORT GROUP BY                   |                       |  1000 |  8000 |    49   (3)| 00:00:01 |
|   5 |     TABLE ACCESS FULL              | XTEST                 | 10000 | 80000 |    48   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - filter("V"."A"(+)=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),
              50,1,2)) AS int ))

Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   1 -  SEL$6722A2F6 / V@SEL$1
           U -  push_pred(v)

And compare with this:

create global temporary table temp_collection(a number);

insert into temp_collection select * from table(sys.odcinumberlist(1,2,3));

select/*+ cardinality(c 1) no_merge(v) */
   distinct v.* 
from temp_collection c, vtest v
where v.a = c.a;

Plan hash value: 3561835411

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                             |                 |     1 |    26 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    26 |    40   (0)| 00:00:01 |
|   3 |    TABLE ACCESS FULL                     | TEMP_COLLECTION |     1 |    13 |    29   (0)| 00:00:01 |
|   4 |    VIEW PUSHED PREDICATE                 | VTEST           |     1 |    13 |    11   (0)| 00:00:01 |
|*  5 |     FILTER                               |                 |       |       |            |          |
|   6 |      SORT AGGREGATE                      |                 |     1 |     8 |            |          |
|   7 |       TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  8 |        INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   5 - filter(COUNT(*)>0)
   8 - access("A"="C"."A")

You can see that JPPD works fine with global temporary tables, so the obvious first workaround is to avoid such functions in joins with complex views.
But for simple queries like these you have two other easy options:
1. Avoid JPPD altogether and still get an optimal plan through CVM (complex view merging) by simply rewriting the query with IN or EXISTS:

select * 
from vtest v
where v.a in (select/*+ cardinality(c 1) */ c.* from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1474391442

---------------------------------------------------------------------------------------------------
| Id  | Operation                                 | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                          |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                     |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                            |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                           |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                           |       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR CONSTRUCTOR FETCH|       |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                      | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID            | XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=VALUE(KOKBF$))

The same rewrite works with json_table and xmltable:

select *
from vtest t
where t.a in (select/*+ cardinality(v 1) */ v.a
              from json_table('{"a":[1,2,3]}', '$.a[*]' COLUMNS (a int PATH '$')) v);

Plan hash value: 2910004067

---------------------------------------------------------------------------------------
| Id  | Operation                     | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT         |       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                |       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS               |       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE               |       |       |       |            |          |
|   5 |      JSONTABLE EVALUATION     |       |       |       |            |          |
|*  6 |     INDEX RANGE SCAN          | ITEST |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID| XTEST |    10 |    80 |    11   (0)| 00:00:01 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"="P"."A")

select v.*
from vtest v
where exists(select/*+ cardinality(c 1) */ 1
             from xmltable('(1,3)' columns a int path '.') c
             where c.a = v.a);

Plan hash value: 1646016183

-----------------------------------------------------------------------------------------------------------------
| Id  | Operation                             | Name                  | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                      |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   1 |  SORT GROUP BY NOSORT                 |                       |    10 |   100 |    42   (5)| 00:00:01 |
|   2 |   NESTED LOOPS                        |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   3 |    NESTED LOOPS                       |                       |    10 |   100 |    41   (3)| 00:00:01 |
|   4 |     SORT UNIQUE                       |                       |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |      COLLECTION ITERATOR PICKLER FETCH| XQSEQUENCEFROMXMLTYPE |     1 |     2 |    29   (0)| 00:00:01 |
|*  6 |     INDEX RANGE SCAN                  | ITEST                 |    10 |       |     1   (0)| 00:00:01 |
|   7 |    TABLE ACCESS BY INDEX ROWID        | XTEST                 |    10 |    80 |    11   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A"=CAST(TO_NUMBER(SYS_XQ_UPKXML2SQL(SYS_XQEXVAL(VALUE(KOKBF$),0,0,54525952,0),50,1,2)) AS int ))

2. Avoid JPPD using lateral():

select/*+ cardinality(c 1) no_merge(lat) */
   distinct lat.* 
from table(sys.odcinumberlist(1,2,3)) c, 
     lateral(select * from vtest v where v.a = c.column_value) lat;

Plan hash value: 18036714

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                               | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                        |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   1 |  HASH UNIQUE                            |                 |    10 |   190 |    41   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                          |                 |    10 |   190 |    40   (0)| 00:00:01 |
|   3 |    COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |    VIEW                                 | VW_LAT_4DB60E85 |    10 |   170 |    11   (0)| 00:00:01 |
|   5 |     SORT GROUP BY                       |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   6 |      TABLE ACCESS BY INDEX ROWID BATCHED| XTEST           |    10 |    80 |    11   (0)| 00:00:01 |
|*  7 |       INDEX RANGE SCAN                  | ITEST           |    10 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   7 - access("A"=VALUE(KOKBF$))

Let's look at a slightly more complex query:

Test tables 2

create table xtest1(id primary key, a) as
select level,level from dual connect by level<=1000;

create table xtest2(a, b, c) as
select mod(level,1000),level,rpad('x',100,'x')
from dual
connect by level<=1e4
/
create index itest2 on xtest2(a)
/
create or replace view vtest2 as
select a,count(b) cnt
from xtest2
group by a
/

select v.* 
from xtest1 t1,
     vtest2 v
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c)
  and v.a = t1.a;

Plan hash value: 4293766070

-----------------------------------------------------------------------------------------------------------
| Id  | Operation                                  | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                           |              |     1 |    36 |    80   (3)| 00:00:01 |
|*  1 |  HASH JOIN                                 |              |     1 |    36 |    80   (3)| 00:00:01 |
|   2 |   JOIN FILTER CREATE                       | :BF0000      |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    NESTED LOOPS                            |              |     1 |    10 |    31   (4)| 00:00:01 |
|   4 |     NESTED LOOPS                           |              |     1 |    10 |    31   (4)| 00:00:01 |
|   5 |      SORT UNIQUE                           |              |     1 |     2 |    29   (0)| 00:00:01 |
|   6 |       COLLECTION ITERATOR CONSTRUCTOR FETCH|              |     1 |     2 |    29   (0)| 00:00:01 |
|*  7 |      INDEX UNIQUE SCAN                     | SYS_C0026365 |     1 |       |     0   (0)| 00:00:01 |
|   8 |     TABLE ACCESS BY INDEX ROWID            | XTEST1       |     1 |     8 |     1   (0)| 00:00:01 |
|   9 |   VIEW                                     | VTEST2       |  1000 | 26000 |    49   (3)| 00:00:01 |
|  10 |    HASH GROUP BY                           |              |  1000 |  8000 |    49   (3)| 00:00:01 |
|  11 |     JOIN FILTER USE                        | :BF0000      | 10000 | 80000 |    48   (0)| 00:00:01 |
|* 12 |      TABLE ACCESS FULL                     | XTEST2       | 10000 | 80000 |    48   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("V"."A"="T1"."A")
   7 - access("T1"."ID"=VALUE(KOKBF$))
  12 - filter(SYS_OP_BLOOM_FILTER(:BF0000,"A"))

As you can see, CVM can't help in this case, but we can use lateral():

select/*+ no_merge(lat) */ lat.* 
from xtest1 t1,
     lateral(select * from vtest2 v where v.a = t1.a) lat
where t1.id in (select/*+ cardinality(c 1) */ * from table(sys.odcinumberlist(1,2,3)) c);

Plan hash value: 1798023704

------------------------------------------------------------------------------------------------------------
| Id  | Operation                                | Name            | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                         |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   1 |  NESTED LOOPS                            |                 |    10 |   360 |    42   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                           |                 |     1 |    10 |    31   (4)| 00:00:01 |
|   3 |    SORT UNIQUE                           |                 |     1 |     2 |    29   (0)| 00:00:01 |
|   4 |     COLLECTION ITERATOR CONSTRUCTOR FETCH|                 |     1 |     2 |    29   (0)| 00:00:01 |
|   5 |    TABLE ACCESS BY INDEX ROWID           | XTEST1          |     1 |     8 |     1   (0)| 00:00:01 |
|*  6 |     INDEX UNIQUE SCAN                    | SYS_C0026365    |     1 |       |     0   (0)| 00:00:01 |
|   7 |   VIEW                                   | VW_LAT_A18161FF |    10 |   260 |    11   (0)| 00:00:01 |
|   8 |    SORT GROUP BY                         |                 |    10 |    80 |    11   (0)| 00:00:01 |
|   9 |     TABLE ACCESS BY INDEX ROWID BATCHED  | XTEST2          |    10 |    80 |    11   (0)| 00:00:01 |
|* 10 |      INDEX RANGE SCAN                    | ITEST2          |    10 |       |     1   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=VALUE(KOKBF$))
  10 - access("A"="T1"."A")

There is also another workaround using the undocumented "precompute_subquery" hint:

select v.* 
from xtest1 t1,
     vtest2 v 
where t1.id in (select/*+ precompute_subquery */ * from table(sys.odcinumberlist(1,2,3)) c)
and v.a = t1.a;

Plan hash value: 1964829099

------------------------------------------------------------------------------------------------
| Id  | Operation                       | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                |              |    30 |   480 |    37   (3)| 00:00:01 |
|   1 |  HASH GROUP BY                  |              |    30 |   480 |    37   (3)| 00:00:01 |
|   2 |   NESTED LOOPS                  |              |    30 |   480 |    36   (0)| 00:00:01 |
|   3 |    NESTED LOOPS                 |              |    30 |   480 |    36   (0)| 00:00:01 |
|   4 |     INLIST ITERATOR             |              |       |       |            |          |
|   5 |      TABLE ACCESS BY INDEX ROWID| XTEST1       |     3 |    24 |     3   (0)| 00:00:01 |
|*  6 |       INDEX UNIQUE SCAN         | SYS_C0026365 |     3 |       |     2   (0)| 00:00:01 |
|*  7 |     INDEX RANGE SCAN            | ITEST2       |    10 |       |     1   (0)| 00:00:01 |
|   8 |    TABLE ACCESS BY INDEX ROWID  | XTEST2       |    10 |    80 |    11   (0)| 00:00:01 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("T1"."ID"=1 OR "T1"."ID"=2 OR "T1"."ID"=3)
   7 - access("A"="T1"."A")

It can help even in the most difficult cases, for example when you can't rewrite the query (you can then create a SQL patch or SQL profile containing "precompute_subquery"), but I wouldn't recommend it: the hint is undocumented, it works only with simple collections, and it is limited to 1000 values.
I'd suggest using the lateral() workaround instead, since it is the most reliable and very simple.
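
For the SQL patch route, here is a sketch assuming 12.2+, where dbms_sqldiag.create_sql_patch is available as a function (the statement text and patch name are illustrative, and the hint may need a query-block qualifier to reach the subquery):

declare
  v_patch varchar2(128);
begin
  v_patch := dbms_sqldiag.create_sql_patch(
               sql_text  => q'[select v.* from xtest1 t1, vtest2 v
                               where t1.id in (select * from table(sys.odcinumberlist(1,2,3)) c)
                                 and v.a = t1.a]',
               hint_text => 'precompute_subquery',
               name      => 'patch_precompute_subquery');
end;
/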

cbo CVM JPPD kokbf$ oracle undocumented behaviour pipelined functions precompute_subquery query optimization troubleshooting undocumented oracle

Top time-consuming predicates from ASH

Posted on May 13, 2019 by Sayan Malakshinov Posted in oracle, query optimizing, SQL, statistics, troubleshooting

Sometimes it can be useful to analyze the top time-consuming filter and access predicates from ASH, especially when the database load is spread fairly evenly across different queries and the top segments don't show anything interesting beyond the usual "some tables are queried more often than others".
Of course, we can start from an analysis of SYS.COL_USAGE$: col_usage.sql

col_usage.sql

col owner format a30
col oname format a30 heading "Object name"
col cname format a30 heading "Column name"
accept owner_mask prompt "Enter owner mask: ";
accept tab_name prompt "Enter tab_name mask: ";
accept col_name prompt "Enter col_name mask: ";

SELECT a.username              as owner
      ,o.name                  as oname
      ,c.name                  as cname
      ,u.equality_preds        as equality_preds
      ,u.equijoin_preds        as equijoin_preds
      ,u.nonequijoin_preds     as nonequijoin_preds
      ,u.range_preds           as range_preds
      ,u.like_preds            as like_preds
      ,u.null_preds            as null_preds
      ,to_char(u.timestamp, 'yyyy-mm-dd hh24:mi:ss') when
FROM  sys.col_usage$ u
     ,sys.obj$       o
     ,sys.col$       c
     ,all_users      a
WHERE a.user_id = o.owner#
  AND u.obj#    = o.obj#
  AND u.obj#    = c.obj#
  AND u.intcol# = c.col#
  AND a.username like upper('&owner_mask')
  AND o.name     like upper('&tab_name')
  AND c.name     like upper('&col_name')
ORDER BY a.username, o.name, c.name
;
col owner clear;
col oname clear;
col cname clear;
undef tab_name col_name owner_mask;


But that is not enough: for example, it doesn't show combinations of predicates. In that case we can use v$active_session_history and v$sql_plan:

Top 50 predicates

with
 ash as (
   select
      sql_id
     ,plan_hash_value
     ,table_name
     ,alias
     ,ACCESS_PREDICATES
     ,FILTER_PREDICATES
     ,count(*) cnt
   from (
      select
         h.sql_id
        ,h.SQL_PLAN_HASH_VALUE plan_hash_value
        ,decode(p.OPERATION
               ,'TABLE ACCESS',p.OBJECT_OWNER||'.'||p.OBJECT_NAME
               ,(select i.TABLE_OWNER||'.'||i.TABLE_NAME from dba_indexes i where i.OWNER=p.OBJECT_OWNER and i.index_name=p.OBJECT_NAME)
         ) table_name
        ,OBJECT_ALIAS ALIAS
        ,p.ACCESS_PREDICATES
        ,p.FILTER_PREDICATES
        -- columns that may be useful for analysis in other dimensions:
        -- ,h.sql_plan_operation
        -- ,h.sql_plan_options
        -- ,decode(h.session_state,'ON CPU','ON CPU',h.event) event
        -- ,h.current_obj#
      from v$active_session_history h
          ,v$sql_plan p
      where h.sql_opname='SELECT'
        and h.IN_SQL_EXECUTION='Y'
        and h.sql_plan_operation in ('INDEX','TABLE ACCESS')
        and p.SQL_ID = h.sql_id
        and p.CHILD_NUMBER = h.SQL_CHILD_NUMBER
        and p.ID = h.SQL_PLAN_LINE_ID
        -- if we want only the last 3 hours:
        -- and h.sample_time >= systimestamp - interval '3' hour
   )
   -- if we want to analyze the predicates of a single table only:
   -- where table_name='&OWNER.&TABNAME'
   group by
      sql_id
     ,plan_hash_value
     ,table_name
     ,alias
     ,ACCESS_PREDICATES
     ,FILTER_PREDICATES
 )
,agg_by_alias as (
   select
      table_name
     ,regexp_substr(ALIAS,'^[^@]+') ALIAS
     ,listagg(ACCESS_PREDICATES,' ') within group(order by ACCESS_PREDICATES) ACCESS_PREDICATES
     ,listagg(FILTER_PREDICATES,' ') within group(order by FILTER_PREDICATES) FILTER_PREDICATES
     ,sum(cnt) cnt
   from ash
   group by
      sql_id
     ,plan_hash_value
     ,table_name
     ,alias
 )
,agg as (
   select
      table_name
     ,'ALIAS' alias
     ,replace(access_predicates,'"'||alias||'".','"ALIAS".') access_predicates
     ,replace(filter_predicates,'"'||alias||'".','"ALIAS".') filter_predicates
     ,sum(cnt) cnt
   from agg_by_alias
   group by
      table_name
     ,replace(access_predicates,'"'||alias||'".','"ALIAS".')
     ,replace(filter_predicates,'"'||alias||'".','"ALIAS".')
 )
,cols as (
   select
      table_name
     ,cols
     ,access_predicates
     ,filter_predicates
     ,sum(cnt)over(partition by table_name,cols) total_by_cols
     ,cnt
   from agg
       ,xmltable(
          'string-join(for $c in /ROWSET/ROW/COL order by $c return $c,",")'
          passing
            xmltype(
               cursor(
                  (select distinct
                      nvl(
                         regexp_substr(
                            access_predicates||' '||filter_predicates
                           ,'("'||alias||'"\.|[^.]|^)"([A-Z0-9#_$]+)"([^.]|$)'
                           ,1
                           ,level
                           ,'i',2
                         ),' ')
                      col
                   from dual
                   connect by
                      level<=regexp_count(
                                access_predicates||' '||filter_predicates
                               ,'("'||alias||'"\.|[^.]|^)"([A-Z0-9#_$]+)"([^.]|$)'
                             )
                  )
            ))
          columns cols varchar2(400) path '.'
       )(+)
   order by total_by_cols desc, table_name, cnt desc
 )
select
   table_name
  ,cols
  ,sum(cnt)over(partition by table_name,cols) total_by_cols
  ,access_predicates
  ,filter_predicates
  ,cnt
from cols
where rownum<=50
order by total_by_cols desc, table_name, cnt desc;


As you can see, it shows the top 50 predicates and their columns, optionally restricted to the last 3 hours via the commented-out sample_time filter. Although ASH stores only sampled data, its results are representative enough for high-load databases.
Just a few details:

  • The "COLS" column shows the "search columns", and total_by_cols is the number of their occurrences.
  • Obviously, this information is not an unambiguous marker of a problem: a few full table scans, for example, can skew the statistics, so sometimes you will need to analyze such queries more deeply (v$sqlstats, dba_hist_sqlstat).
  • We need to group the data by OBJECT_ALIAS within SQL_ID and plan_hash_value, because in the case of index access with a table lookup ("table access by rowid") some predicates are in the plan row with the index access and others are in the row with the table access.

Depending on your needs, you can modify this query to analyze the ASH data along other dimensions, for example with an additional breakdown by partition or by wait event:
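
For example, a simplified sketch of slicing the same ASH data by wait event instead of by predicate (standard v$active_session_history columns; adjust the time window as needed):

select nvl(h.event,'ON CPU') event
      ,h.sql_plan_operation
      ,count(*) cnt
  from v$active_session_history h
 where h.sql_opname = 'SELECT'
   and h.sql_plan_operation in ('INDEX','TABLE ACCESS')
   and h.sample_time >= systimestamp - interval '3' hour
 group by nvl(h.event,'ON CPU'), h.sql_plan_operation
 order by cnt desc;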

oracle query optimization SQL*Plus troubleshooting

Correct syntax for the table_stats hint

Posted on April 16, 2019 by Roger MacNicol Posted in adaptive serial direct path reads, CBO, hints, oracle, SmartScan, trace, troubleshooting, undocumented

A friend contacted me to ask why they were having problems using the table_stats hint to influence optimizer decision making, and also to influence the decision to use direct read or a buffer cache scan, so this is a quick blog post to clarify the syntax, as it is not well documented.

table_stats(<table_name> <method> {<keyword>=<value>} )

Method is one of: DEFAULT, SET, SCALE, SAMPLE

Keyword is one of: BLOCKS, ROWS, ROW_LENGTH
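
A minimal sketch of the SET method following that template (the table and the values here are purely illustrative, not from the original post):

select/*+ table_stats(scott.emp set rows=1000000 blocks=10000 row_length=100) */ count(*)
  from scott.emp;
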
Continue reading→
oracle query optimization Roger MacNicol SmartScan troubleshooting

Lateral view decorrelation(VW_DCL) causes wrong results with rownum

Posted on February 16, 2019 by Sayan Malakshinov Posted in 12c, bug, CBO, oracle, query optimizing, rownum, troubleshooting

Everyone knows that rownum in inline views blocks many query transformations, for example predicate pushing/pulling, scalar subquery unnesting, etc., and many people use it precisely for that purpose, as a workaround to avoid unwanted transformations (or even CBO bugs).

Obviously, the main reason for that is the different calculation of rownum:

If we pull the predicate "column_value = 3" from the following query up to a higher level,
select *
from (select * from table(odcinumberlist(1,1,1,2,2,2,3,3,3)) order by 1)
where rownum <= 2
  and column_value = 3;

COLUMN_VALUE
------------
           3
           3
we will get different results:
select *
from (select *
      from (select * from table(odcinumberlist(1,1,1,2,2,2,3,3,3)) order by 1)
      where rownum <= 2
     )
where column_value = 3;

no rows selected

Doc ID 62340.1

But we recently encountered a bug related to it: a lateral view with ROWNUM returns wrong results when lateral view decorrelation kicks in.
Compare the results of this query with and without the no_decorrelate hint:

with 
 t1(a) as (select * from table(odcinumberlist(1,3)))
,t2(b) as (select * from table(odcinumberlist(1,1,3,3)))
,t(id) as (select * from table(odcinumberlist(1,2,3)))
select
  *
from t,
     lateral(select/*+ no_decorrelate */ rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id
            )(+)
order by 1,2;

        ID         RN
---------- ----------
         1          1
         1          2
         2
         3          1
         3          2
with 
 t1(a) as (select * from table(odcinumberlist(1,3)))
,t2(b) as (select * from table(odcinumberlist(1,1,3,3)))
,t(id) as (select * from table(odcinumberlist(1,2,3)))
select
  *
from t,
     lateral(select rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id
            )(+)
order by 1,2;

        ID         RN
---------- ----------
         1          1
         1          2
         2
         3          3
         3          4

Of course, we can draw conclusions even from these results: we can see that in the decorrelated case (the query without the hint) rownum was calculated before the join. But to be sure, we can check the optimizer (10053) trace:
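
A quick sketch of capturing that trace for a test session (classic event syntax; the unified 'trace[SQL_Optimizer.*]' form works too):

alter session set events '10053 trace name context forever, level 1';
-- re-run the query with the decorrelated lateral view here
alter session set events '10053 trace name context off';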

Final query after transformations:

******* UNPARSED QUERY IS *******
SELECT VALUE(KOKBF$2) "ID", "VW_DCL_76980902"."RN" "RN"
  FROM TABLE("ODCINUMBERLIST"(1, 2, 3)) "KOKBF$2",
       (SELECT ROWNUM "RN_0", VALUE(KOKBF$0) "ITEM_3"
          FROM TABLE("ODCINUMBERLIST"(1, 3)) "KOKBF$0",
               TABLE("ODCINUMBERLIST"(1, 1, 3, 3)) "KOKBF$1"
         WHERE VALUE(KOKBF$0) = VALUE(KOKBF$1)
       ) "VW_DCL_76980902"
 WHERE "VW_DCL_76980902"."ITEM_3"(+) = VALUE(KOKBF$2)
 ORDER BY VALUE(KOKBF$2), "VW_DCL_76980902"."RN"

*************************

To make it more readable, I'll rewrite it slightly: we can see that

select
  *
from t,
     lateral(select rownum rn 
             from t1,t2 
             where t1.a=t2.b and t1.a = t.id)(+)
order by 1,2;

was transformed to

select
  t.id, dcl.rn
from t,
     (select rownum rn 
      from t1,t2 
      where t1.a=t2.b) dcl
where dcl.a(+) = t.id
order by 1,2;

And it confirms that rownum was calculated over a different data set (the t1-t2 join) without the join filter on table t.
I created an SR with Severity 1 (SR #3-19117219271) more than a month ago, but unfortunately Oracle development doesn't want to fix this bug, and moreover they say it is not a bug. I think this is a dangerous precedent: soon we may no longer be able to rely on how rownum is calculated, or on old fixes that depend on it...

bug cbo lateral query optimization troubleshooting

Create Quarantine

Posted on August 16, 2018 by Roger MacNicol Posted in cell_offload, oracle, SmartScan

First, if you don't know what an Exadata Quarantine is, read this.

Someone asked whether you can create your own Exadata Cell quarantine and, if you can, why you might ever want to do it? 

The first step when you don't know how to do something is to try HELP in CellCLI:

CellCLI> HELP
...
ALTER QUARANTINE
...
CREATE QUARANTINE
...
DROP QUARANTINE
...
LIST QUARANTINE

So we see that we can create a quarantine; let's use HELP again:

Continue reading→
Cell Offloading cellcli Offload Quarantine oracle Roger MacNicol SmartScan troubleshooting

Shining some light on Database In-Memory vs the Exadata Columnar Cache in 12.1.0.2

Posted on August 3, 2018 by Roger MacNicol Posted in cell_offload, inmemory, oracle, SmartScan, trace

I posted a while back on tracing Hybrid Columnar Compression in an offload server, so this is a quick follow-up.

  1. I have trouble remembering the syntax for setting a regular parameter in an offload server without bouncing it. Since I need to keep this written down somewhere, I thought it might be of use to support folks and DBAs.
  2. I forgot to show how to specify which offload group to set the trace event in.

So this example should do both: 

CellCLI> alter cell offloadGroupEvents = "immediate cellsrv.cellsrv_setparam('my_parameter', 'TRUE')", offloadGroupName = "SYS_122110_160621"

This will, of course, set the parameter only temporarily, until the next time the offload server is bounced; adding it to the offload group's init.ora as well will make it stick.
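
To verify what is currently set on the cell, something like this should work (hedged: offloadGroupEvents is the same cell attribute used in the ALTER above):

CellCLI> LIST CELL ATTRIBUTES name, offloadGroupEvents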

Cell Offloading Cellmemory inmemory oracle Roger MacNicol SmartScan traces troubleshooting

Oracle issues after upgrade to 12.2

Posted on November 24, 2017 by Sayan Malakshinov Posted in 12.2, bug, oracle

Sometimes it's really hard even to create a reproducible test case to send to Oracle Support, especially in the case of intermittent errors.
In such cases, I think it would be really great to have access to similar service requests or bugs of other Oracle clients.
So while my poll about knowledge sharing is still active, I want to share a couple of bugs we have faced after upgrading to 12.2 (and one bug from Eric van Roon). I'm going to remove bugs from this list once they become "public" or "fixed".
If you want to add your own findings to this list, you can post them in the comments. To make this easier, you can provide just the symptoms, a short description and a link to your own post with the details, and I'll add it as a link.
Continue reading→

12.2.0.1 bind variable bug deterministic functions oracle troubleshooting undocumented oracle

Easy(lazy) way to check which programs have properly configured FetchSize

Posted on November 16, 2017 by Sayan Malakshinov Posted in oracle, SQL, statistics
select 
   s.module
  ,ceil(max(s.rows_processed/s.fetches)) rows_per_fetch
from v$sql s
where 
    s.rows_processed>100
and s.executions    >1
and s.fetches       >1
and s.module is not null
and s.command_type  = 3    -- SELECTs only
and s.program_id    = 0    -- do not account recursive queries from stored procs
and s.parsing_schema_id!=0 -- <> SYS
group by s.module
order by rows_per_fetch desc nulls last
/
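
For SQL*Plus-based modules that show a low rows_per_fetch, the client-side fix is simply the arraysize setting (500 below is an arbitrary example); JDBC clients would use Statement.setFetchSize() for the same effect:

set arraysize 500
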
arraysize fetchsize troubleshooting