Oracle SQL

About unnecessary work with predicate “field=:bind_variable” where bind_variable is null

Posted on February 14, 2013 by Sayan Malakshinov. Posted in CBO, oracle, query optimizing

Although we know that when we select from a table with a predicate like “field = :bind_variable” and :bind_variable is null, no rows can be returned, Oracle does not always “think” the same way and does not add an upper filter predicate “:bind_variable is not null”, so it can do unnecessary work. It depends entirely on which plan is chosen. Of course, this also applies to other predicates such as >, < or !=. For example, there will be no reads at all only if it is an access predicate on an index range/unique scan, or a full table scan on 11.2.0.3 with gathered stats. If it is a FTS on earlier versions, only the segment header is read. In other cases Oracle will do useless scans. So if a bind variable can be null and you want to be sure that Oracle will not do futile work in such cases, just add the predicate ":bind_variable is not null". This note is just aggregated info from a recent question on our forum in which I participated.
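
For instance, a minimal sketch of such a rewrite (the table, column and bind names here are only illustrative):

[sourcecode language="sql"]
-- A minimal sketch of the guard predicate suggested above (names are illustrative).
-- Without the guard, e.g. a FULL or INDEX FAST FULL SCAN may still read blocks when :v is null;
-- with it, the whole scan can be filtered out before any blocks are read.
select count(*) cnt
from   xt_nulls_test
where  a > :v
and    :v is not null;
[/sourcecode]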

Update: You can listen to Tanel Poder's explanation of the segment header reads on Enkitec.tv.

You can download the full test code as a file: test.sql.

Little example #1

[sourcecode language="sql"]
set timing off feed off

create table XT_NULLS_TEST(a not null, b not null, c)
as
select
  level a
, mod(level,100) b
, lpad(1,100,1) c
from dual
connect by level<=1e5;

create index IX_NULLS_TEST on XT_NULLS_TEST(a);

exec dbms_stats.gather_table_stats('','XT_NULLS_TEST');

------------------- Main test with statistics: ----------------------------------------------
-- Variable with null:
var v_null number;

exec xt_runstats.init(p_latches => false);

-- INDEX RANGE SCAN:
select/*+ INDEX(XT_NULLS_TEST IX_NULLS_TEST) */ count(*) cnt from XT_NULLS_TEST where a > :v_null;
exec xt_runstats.snap('IRS');

-- FULL TABLE SCAN:
select/*+ FULL(XT_NULLS_TEST) */ count(*) cnt from XT_NULLS_TEST where a > :v_null;
exec xt_runstats.snap('FTS');

-- INDEX FAST FULL SCAN:
select/*+ INDEX_FFS(XT_NULLS_TEST IX_NULLS_TEST) */ count(*) cnt from XT_NULLS_TEST where a > :v_null;
exec xt_runstats.snap('IFFS');

-- Results
set serveroutput on
exec xt_runstats.print(p_stats_mask => 'reads|buff.*gets|consistent gets', p_sta_diff_pct => 1);

drop table xt_nulls_test purge;
[/sourcecode]
Results:
[sourcecode language="sql"]
SQL> @test.sql.txt

       CNT
----------
         0

       CNT
----------
         0

       CNT
----------
         0
################     Results:      ##################
Run #  01 ran in 0 hsecs
Run #  02 ran in 0 hsecs
Run #  03 ran in 1 hsecs
############################################################################################
Statistics                                | IRS            | FTS            | IFFS
############################################################################################
session logical reads.................... |              0 |              1 |            230
consistent gets.......................... |              0 |              1 |            230
consistent gets from cache............... |              0 |              1 |            230
consistent gets from cache (fastpath).... |              0 |              1 |            230
############################################################################################
[/sourcecode]


Example #2
DDL

[sourcecode language="sql"]
SQL> alter session set optimizer_dynamic_sampling=0;
SQL> alter session set statistics_level=all;
SQL> create table xt_test as select 1 i from dual;

Table created.
[/sourcecode]

NULL 1

[sourcecode language="sql"]
SQL> -- NULL 1:
SQL> var a number;

SQL> select * from xt_test where i=:a;

no rows selected

SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  4rjbsjvwbq5m0, child number 0
-------------------------------------
select * from xt_test where i=:a

Plan hash value: 3713359643

---------------------------------------------------------------------------------------
| Id  | Operation         | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |      1 |        |      0 |00:00:00.01 |       1 |
|*  1 |  TABLE ACCESS FULL| XT_TEST |      1 |      3 |      0 |00:00:00.01 |       1 |
---------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("I"=:A)
[/sourcecode]

Not NULL 1

[sourcecode language="sql"]
SQL> -- NOT NULL 1:
SQL> exec :a := 0;

PL/SQL procedure successfully completed.

SQL> select * from xt_test where i=:a;

no rows selected

SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  4rjbsjvwbq5m0, child number 0
-------------------------------------
select * from xt_test where i=:a

Plan hash value: 3713359643

------------------------------------------------------------------------------------------------
| Id  | Operation         | Name    | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |      1 |        |      0 |00:00:00.01 |       3 |      1 |
|*  1 |  TABLE ACCESS FULL| XT_TEST |      1 |      3 |      0 |00:00:00.01 |       3 |      1 |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter("I"=:A)
[/sourcecode]

NULL 2

[sourcecode language="sql"]
SQL> alter table xt_test add primary key(i);

Table altered.

SQL> exec :a := null;

PL/SQL procedure successfully completed.

SQL> -- NULL 2:
SQL> select * from xt_test where i=:a;

no rows selected

SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  4rjbsjvwbq5m0, child number 0
-------------------------------------
select * from xt_test where i=:a

Plan hash value: 136758570

-----------------------------------------------------------------------------------
| Id  | Operation         | Name          | Starts | E-Rows | A-Rows |   A-Time   |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |               |      1 |        |      0 |00:00:00.01 |
|*  1 |  INDEX UNIQUE SCAN| SYS_C00161305 |      1 |      1 |      0 |00:00:00.01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("I"=:A)
[/sourcecode]

Not NULL 2

[sourcecode language="sql"]
SQL> exec :a := 0;

PL/SQL procedure successfully completed.

SQL> -- NOT NULL 2:
SQL> select * from xt_test where i=:a;

no rows selected

SQL> select * from table(dbms_xplan.display_cursor('','','ALLSTATS LAST'));

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------
SQL_ID  4rjbsjvwbq5m0, child number 0
-------------------------------------
select * from xt_test where i=:a

Plan hash value: 136758570

---------------------------------------------------------------------------------------------
| Id  | Operation         | Name          | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |               |      1 |        |      0 |00:00:00.01 |       1 |
|*  1 |  INDEX UNIQUE SCAN| SYS_C00161305 |      1 |      1 |      0 |00:00:00.01 |       1 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("I"=:A)
[/sourcecode]


query optimization

Deterministic function vs scalar subquery caching. Part 2

Posted on February 11, 2013 by Sayan Malakshinov. Posted in deterministic functions, oracle, query optimizing, scalar subquery caching

In the previous part I already pointed out that:

  1. Both mechanisms are based on hash functions.
  2. Deterministic function caching depends on the fetch size (arraysize): results are cached only within one fetch call; SSC has no such limitation.
  3. Hash collisions depend on the single parameter “_query_execution_cache_max_size” for both mechanisms, but they are more frequent in SSC.

Today’s topics:
4. Deterministic function caching does not keep the last result the way scalar subquery caching does.
5. Caching of deterministic function results turns off after a certain number of attempts to get the value from the cache, while SSC always returns results from the cache once the values have been cached.
Continue reading→

deterministic functions oracle scalar subquery caching

Deterministic function vs scalar subquery caching. Part 1

Posted on February 10, 2013 by Sayan Malakshinov. Posted in deterministic functions, oracle, scalar subquery caching

I recently compared the caching mechanisms of scalar subquery caching (SSC) and deterministic functions in 11.2. Unfortunately, I do not have enough time to do a full analysis, so I will post it in parts.

Today’s topics:
1. Both mechanisms are based on hash functions. (You can read great articles about hash tables and hash collisions for scalar subquery caching by Tom Kyte and Jonathan Lewis (“Cost-Based Oracle Fundamentals”, chapter 9).)
2. Deterministic function caching depends on the fetch size (arraysize): results are cached only within one fetch call; SSC has no such limitation (see the sketch below).
3. Hash collisions depend on the single parameter “_query_execution_cache_max_size” for both mechanisms, but they are more frequent in SSC.
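
As a reminder of what is being compared, a minimal sketch of the two call styles (the function f_slow and the table t_cache_demo are only illustrative, not the ones from my tests):

[sourcecode language="sql"]
-- Illustrative setup: a small table and an "expensive" deterministic function
create table t_cache_demo as select mod(level,10) x from dual connect by level<=1e4;

create or replace function f_slow(p number) return number deterministic is
begin
  return p*p;  -- imagine something expensive here
end;
/
-- 1) deterministic function caching: results are reused only within one fetch (arraysize) call
select f_slow(x) from t_cache_demo;

-- 2) scalar subquery caching: cached (x -> result) pairs survive across fetch calls
select (select f_slow(x) from dual) from t_cache_demo;
[/sourcecode]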

UPD: Part 2
Continue reading→

deterministic functions scalar subquery caching

Materialization in subquery factoring without the “materialize” hint can be considered only when at least one predicate exists

Posted on February 9, 2013 by Sayan Malakshinov. Posted in CBO, oracle, query optimizing, undocumented

I have just found that materialization is not considered by the optimizer if there are no predicates in the subquery factoring clause. Of course, I mean cases where materialization is not forced through the “materialize” hint.
Simple example:


[sourcecode language="sql"]

SQL> create table tt1 as select 1 id from dual;

Table created.

SQL> exec dbms_stats.gather_table_stats('','TT1');

PL/SQL procedure successfully completed.

SQL> explain plan for
  2  with gen as (select * from tt1)
  3  select * from gen,gen g2;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------
Plan hash value: 486748850

-----------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |     6 |     6   (0)| 00:00:01 |
|   1 |  MERGE JOIN CARTESIAN|      |     1 |     6 |     6   (0)| 00:00:01 |
|   2 |   TABLE ACCESS FULL  | TT1  |     1 |     3 |     3   (0)| 00:00:01 |
|   3 |   BUFFER SORT        |      |     1 |     3 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL | TT1  |     1 |     3 |     3   (0)| 00:00:01 |
-----------------------------------------------------------------------------

SQL> explain plan for
  2  with gen as (select * from tt1 where 1=1)
  3  select * from gen,gen g2;

Explained.

SQL> @xplan

PLAN_TABLE_OUTPUT
----------------------------------------------------------------------------------------------------------
Plan hash value: 2673059801

---------------------------------------------------------------------------------------------------------
| Id  | Operation                  | Name                       | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT           |                            |     1 |    26 |     7   (0)| 00:00:01 |
|   1 |  TEMP TABLE TRANSFORMATION |                            |       |       |            |          |
|   2 |   LOAD AS SELECT           | SYS_TEMP_0FD9D6610_6641830 |       |       |            |          |
|   3 |    TABLE ACCESS FULL       | TT1                        |     1 |     3 |     3   (0)| 00:00:01 |
|   4 |   MERGE JOIN CARTESIAN     |                            |     1 |    26 |     4   (0)| 00:00:01 |
|   5 |    VIEW                    |                            |     1 |    13 |     2   (0)| 00:00:01 |
|   6 |     TABLE ACCESS FULL      | SYS_TEMP_0FD9D6610_6641830 |     1 |     3 |     2   (0)| 00:00:01 |
|   7 |    BUFFER SORT             |                            |     1 |    13 |     4   (0)| 00:00:01 |
|   8 |     VIEW                   |                            |     1 |    13 |     2   (0)| 00:00:01 |
|   9 |      TABLE ACCESS FULL     | SYS_TEMP_0FD9D6610_6641830 |     1 |     3 |     2   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------------
[/sourcecode]
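
For comparison, forcing the temp-table transformation with the hint works even without predicates; a sketch against the same table:

[sourcecode language="sql"]
-- Forcing materialization regardless of predicates (same tt1 as above):
explain plan for
with gen as (select /*+ materialize */ * from tt1)
select * from gen, gen g2;
-- the plan now contains TEMP TABLE TRANSFORMATION / LOAD AS SELECT even without "where 1=1"
[/sourcecode]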


Update: I did some additional tests and found:

  1. with “table()” but without “xmltable”, materialization always occurs, regardless of the existence of predicates or other tables in the subquery factoring clause
  2. with “xmltable” the behavior is very strange: the decision about materialization depends on the ‘columns …’ clause; when it exists, materialization occurs, and when it does not, it doesn’t
  3. when the query “selects a subset of table columns”, as David Aldridge said, the decision still depends on the existence of predicates

Tests with table() were like this:

[sourcecode language="sql"]
with t as (select * from table(cast(:a as sys.ku$_vcnt)) /*where 1=0*/ )
select count(*) from t,t t2;
with t as (select * from t10,table(cast(:a as sys.ku$_vcnt)) /*where 1=0*/ )
select count(*) from t,t t2;
[/sourcecode]


Tests with xmltable() were like this:

[sourcecode language="sql"]
with t as (select * from t10,xmltable(:a ) ttt where 1=1)
select count(*)
from t, t t1;
with t as (select * from t10,xmltable(:a columns n int) ttt where 1=0)
select count(*)
from t, t t1;
with t as (select/*+ no_merge */ * from table(cast(:a as sys.ku$_vcnt)),xmltable(:a) where 1=0 )
select count(*) from t,t t2;
[/sourcecode]


Test with 2 out of 1000 columns

[sourcecode language="sql"]
declare
  c  varchar2(32767) := 'create table t_1000_cols as select ';
  c2 varchar2(32767);
begin
  for i in 1..1000 loop
    c2 := c2||',lpad(1,4000,1) c'||i;
  end loop;
  c := c||ltrim(c2,',')||' from dual connect by level<=100';
  execute immediate c;
end;
/
exec dbms_stats.gather_table_stats('','T_1000_COLS');
alter session set tracefile_identifier = mat1000;
alter session set events='10053 trace name context forever, level 1';

with t as (select c1,c2 from t_1000_cols)
select count(*)
from t, t t2;

with t as (select c1,c2 from t_1000_cols where 1=1)
select count(*)
from t, t t2;
[/sourcecode]


materialization materialize

Bug in the documentation about dbms_stats.gather_schema_stats, or in dbms_stats itself

Posted on January 26, 2013 by Sayan Malakshinov. Posted in bug, CBO, documentation, oracle, statistics, undocumented

Recently I had to gather pending stats for several objects using an Object Filter List, so I copied the example from the documentation and was surprised: instead of gathering stats for the specified filter list, Oracle started gathering stats for all tables in the current_schema! The filter list is applied only with dbms_stats.GATHER_DATABASE_STATS.
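
So, as a workaround, the same filter list can be passed to GATHER_DATABASE_STATS, which does honor obj_filter_list; a sketch using the same objects as in the example below:

[sourcecode language="sql"]
DECLARE
  filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
BEGIN
  filter_lst.extend(2);
  filter_lst(1).ownname := 'XTENDER';
  filter_lst(1).objname := 'TMP%';
  filter_lst(2).ownname := 'HR';
  -- unlike GATHER_SCHEMA_STATS, the filter list is applied here
  DBMS_STATS.GATHER_DATABASE_STATS(obj_filter_list => filter_lst, options => 'gather');
END;
/
[/sourcecode]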

UPD: Jared Still gave a link to the registered bug, which I could not find in MOS before:

Bug 12754926 – DBMS_STATS.gather_schema_stats does not work with an obj_filter_list parameter specified [ID 12754926.8]
Modified: 04-Jan-2012  Type: PATCH  Status: PUBLISHED
This issue is fixed in 11.2.0.4 (Future Patch Set)

Little example

[sourcecode language="sql"]
SQL> exec dbms_stats.delete_schema_stats('HR');

PL/SQL procedure successfully completed.

Elapsed: 00:00:01.31
SQL> begin
  2    for r in (select table_name
  3              from dba_tables t
  4              where t.owner='XTENDER'
  5                and table_name like 'TMP%'
  6             )loop
  7      dbms_stats.delete_table_stats('XTENDER',r.TABLE_NAME);
  8    end loop;
  9  end;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:00.31
SQL> select
  2         owner
  3        ,table_name
  4        ,num_rows
  5        ,last_analyzed
  6  from dba_tab_statistics t
  7  where owner='XTENDER' and table_name like 'TMP%'
  8     or owner='HR';

OWNER                          TABLE_NAME                       NUM_ROWS LAST_ANALYZED
------------------------------ ------------------------------ ---------- -------------------
HR                             COUNTRIES
HR                             DEPARTMENTS
HR                             EMPLOYEES
HR                             JOBS
HR                             JOB_HISTORY
HR                             LOCATIONS
HR                             REGIONS
XTENDER                        TMP
XTENDER                        TMP1
XTENDER                        TMP_CONTENT
XTENDER                        TMP_DATA
XTENDER                        TMP_DOCUMENT
XTENDER                        TMP_EXCH

13 rows selected.

Elapsed: 00:00:00.11
SQL> col dt new_value dt
SQL> select sysdate dt from dual;

DT
-------------------
2013-01-27 00:30:21

1 row selected.

Elapsed: 00:00:00.00
SQL> DECLARE
  2    filter_lst DBMS_STATS.OBJECTTAB := DBMS_STATS.OBJECTTAB();
  3  BEGIN
  4    filter_lst.extend(2);
  5    filter_lst(1).ownname := 'XTENDER';
  6    filter_lst(1).objname := 'TMP%';
  7    filter_lst(2).ownname := 'HR';
  8    DBMS_STATS.GATHER_SCHEMA_STATS(NULL, obj_filter_list => filter_lst,
  9                                   options => 'gather');
 10  END;
 11  /

PL/SQL procedure successfully completed.

Elapsed: 00:03:16.89
SQL> select owner,table_name
  2  from dba_tables t
  3  where t.LAST_ANALYZED>timestamp'&dt'+0;

OWNER                          TABLE_NAME
------------------------------ ------------------------------
XTENDER                        TT
XTENDER                        TR_LOG
XTENDER                        IOT1
…[skipped 171 rows]…
XTENDER                        DEPARTMENTS

175 rows selected.

Elapsed: 00:00:01.04
[/sourcecode]


PS. There is also a typo in the example from the Oracle documentation: ‘gather_stale’ instead of ‘gather stale’ (an underscore instead of a space).
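
For reference, the valid value uses a space; a sketch:

[sourcecode language="sql"]
-- 'gather stale' (with a space) is the valid options value, e.g.:
exec dbms_stats.gather_schema_stats('HR', options => 'GATHER STALE');
[/sourcecode]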

oracle undocumented behaviour

Just another version of Tom Kyte’s runstats (runstats_pkg)

Posted on January 21, 2013 by Sayan Malakshinov. Posted in oracle, runstats, statistics

I want to share my modifications of Tom Kyte’s runstats package, which include:

  • Any number of run sets for analysis
  • Standalone: no need to create other objects
  • Ability to specify a session SID for statistics gathering
  • Ability to specify what to gather: latches, stats or both
  • Separate mask filters for output by statistic name and latch name
  • Ability to specify the difference percentage for output, separately for latches and statistics
  • More accurate for some statistics because it avoids using a global temporary table, but less accurate for several others because it uses collections

Link to package: https://github.com/xtender/xt_runstats

Output example:

SQL> begin
  2    -- little example which shows difference between "fast dual" and "full table scan dual":
  3    -- http://docs.oracle.com/cd/E11882_01/server.112/e17118/queries009.htm#SQLRF20036
  4    xt_runstats.init(p_latches => false);
  5    -- 1:
  6    for r in (select * from dual connect by level<=1e3) loop
  7      null;
  8    end loop;
  9    xt_runstats.snap;
 10
 11    -- 2:
 12    for r in (select 'X' dummy from dual connect by level<=1e3) loop
 13      null;
 14    end loop;
 15    xt_runstats.snap;
 16    xt_runstats.print(p_stats_mask => '%gets%');
 17  end;
 18  /
################     Results:      ##################
Run #  01 ran in 0 hsecs
Run #  02 ran in 0 hsecs
###########################################################################
Statistics                               | Run # 1        | Run # 2
###########################################################################
consistent gets......................... |              3 |              0
consistent gets from cache.............. |              3 |              0
consistent gets from cache (fastpath)... |              3 |              0
no work - consistent read gets.......... |              1 |              0
###########################################################################

Some usage examples:
1. For your own session:

      begin
        xt_runstats.init();
        [some_code_1] 
        xt_runstats.snap();
        [some_code_2]
        xt_runstats.snap();
        ... 
        [some_code_N]
        xt_runstats.snap();
        -- result output:
        xt_runstats.print();
      end;

2. For a session with SID = N:

      begin xt_runstats.init(N); end;
      ...[after a while]
      begin xt_runstats.snap; end;
      ...[one more if needed...]
      begin xt_runstats.snap; end;
        -- result output:
      begin xt_runstats.print(); end;

3. Latches only:

xt_runstats.init(p_stats=>false);

4. Print stats with names like '%gets%':

xt_runstats.print(p_stats_mask=>'%gets%');

5. Print latches which differ by 30% or more and stats differ by 15% or more:

xt_runstats.print( p_lat_diff_pct=>30, p_sta_diff_pct => 15);

Differences between integer(int) in SQL and PL/SQL

Posted on November 10, 2012 by Sayan Malakshinov. Posted in oracle, undocumented

Very simple example:

create table t_integer(i integer);
insert into t_integer values(1e125);
select * from t_integer;

declare
  i1 integer;
begin 
  select i into i1 from t_integer;
end;
/
declare
  i2 t_integer.i%type;
begin 
  select i into i2 from t_integer;
end;
/

Although both the variable and the column are specified as “integer”, in both blocks you will get the error ORA-06502: PL/SQL: numeric or value error: number precision too large.
The same error occurs with “int”.
Describe shows the wrong datatype:

> desc t_integer;
 Name                        Null?    Type
 --------------------------- -------- ----------
 I                                    NUMBER(38)

In reality it is a NUMBER without precision and with scale 0; you can see this in dba_tab_columns.
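
For example, a quick check of the real column metadata (data_precision is null and data_scale is 0):

-- check what the dictionary really stores for the INTEGER column
select owner, column_name, data_type, data_precision, data_scale
from   dba_tab_columns
where  table_name = 'T_INTEGER';
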
sys.standard:

  subtype INTEGER is NUMBER(38,0);
  subtype INT is INTEGER;

Also, a couple of simple, well-known but often forgotten things:
1. integer as parameter type or return type:

SQL> create function f_integer(i integer)
  2    return integer is
  3  begin
  4    return i;
  5  end;
  6  /

Function created.

SQL> select f_integer(1/3) from dual;

F_INTEGER(1/3)
--------------
    .333333333

1 row selected.

2. An old finding from sql.ru: there is no casting in assignments (from 10.2 through 11.2; found by Elic):

declare
  numberVar  number        := 1/3;
  numberVar2 number(20,10) := 1/3;
  procedure Test(Label varchar2, Value int)
  is
    LocalVar int := Value;
  begin
    dbms_output.put_line(Label || '         : ' || Value);
    dbms_output.put_line(Label || ' assigned: ' || LocalVar);
  end Test;
begin
  Test('|                 const', 1/3);
  Test('|            number var', numberVar);
  Test('|constrained number var', numberVar2);
end;
/
|                 const         : .3333333333333333333333333333333333333333
|                 const assigned: .3333333333333333333333333333333333333333
|            number var         : .3333333333333333333333333333333333333333
|            number var assigned: .3333333333333333333333333333333333333333
|constrained number var         : .3333333333
|constrained number var assigned: .3333333333

PL/SQL procedure successfully completed.
undocumented oracle

The change in “dump sort statistics” trace (event 10032)

Posted on October 30, 2012 by Sayan Malakshinov. Posted in CBO, oracle, trace

Earlier, “Event 10032 – Dump Sort Statistics” showed information about sorts only (up to and including 11.2.0.1), but in 11.2.0.3 it also shows an “Abridged” call stack trace. I have not tested it on 11.2.0.2, so I cannot tell from which version this behavior appears.

alter session set events '10032 trace name context forever, level 10';

From trace file:

*** 2012-10-30 23:14:34.627
*** SESSION ID:(41.15067) 2012-10-30 23:14:34.627
*** CLIENT ID:() 2012-10-30 23:14:34.627
*** SERVICE NAME:(orasql.org) 2012-10-30 23:14:34.627
*** MODULE NAME:(SQL*Plus) 2012-10-30 23:14:34.627
*** ACTION NAME:() 2012-10-30 23:14:34.627

soropn: opened (new) sort, sordef 0x7fc4679e2550, flags 0x802
        maxkey 25, nflds 12, nkflds 1

*** 2012-10-30 23:14:34.678
----- Current SQL Statement for this session (sql_id=3ktacv9r56b51) -----
select owner#,name,namespace,remoteowner,linkname,p_timestamp,p_obj#, nvl(property,0),subname,type#,d_attrs from dependency$ d, obj$ o where d_obj#=:1 and p_obj#=obj#(+) order by order#
        Abridged call stack trace:
ksedsts<-soropn<-qersoProcessULS<-qersoFetch<-opifch2<-opifch<-opiodr<-rpidrus<-skgmstack<-rpiswu2<-rpidrv<-rpifch<-kqllod<-kglobld<-kglobpn<-kglpim<-kglpin<-kglgob<-kgldpo0<-qcdlgpo<-qcsRslvPLSQLInvoc1<-qcsRslvPLSQLInvoc<-qcsRslvName<-qcsridn<-qcsraic<-qcspqbDescendents
<-qcspqb<-kkmdrv<-opiSem<-opiDeferredSem<-opitca<-kksFullTypeCheck<-rpiswu2<-kksLoadChild<-kxsGetRuntimeLock<-kksfbc<-kkspsc0<-kksParseCursor<-opiosq0<-kpooprx<-kpoal8<-opiodr<-ttcpip<-opitsk<-opiino<-opiodr<-opidrv<-sou2o<-opimai_real<-ssthrdmain<-main<-__libc_start_main
<-_start        End of abridged call stack trace.

*** 2012-10-30 23:14:35.328
soreod: sorp 0x7fc4679e2550
---- Sort Parameters ------------------------------
sort_area_size                    65536
sort_area_retained_size           65536
sort_multiblock_read_count        1
max intermediate merge width      3
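
To switch the event off again when finished:

alter session set events '10032 trace name context off';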

Deceptive commit after select from dblink.

Posted on October 13, 2012 by Sayan Malakshinov. Posted in commit, documentation, oracle, remote statements, trace, transactions

Recently on our Russian forum we discussed whether “insert /*+ append */ into x select * from t@dblink” is a distributed transaction or not.
According to the documentation, it is not:

A distributed transaction includes one or more statements that, individually or as a group, update data on two or more distinct nodes of a distributed database

And Tom Kyte said the same:

In the following, we do not have a distributed transaction when we just insert /*+ append */ into local_table select * from remote@table; – we just have a single site transaction.

But v$global_transaction (unlike v$transaction), another part of the documentation and v$lock (type=’DX’) assure us that it is.

Distributed statement: A statement that accesses data on two or more distinct nodes/instances of a distributed database.
A remote statement accesses data on one remote node of a distributed database.

Who is right?
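
A quick way to check it yourself is to query the views mentioned above in the same session, right after the select over the dblink; a sketch:

-- is there a distributed transaction from the local session's point of view?
select * from v$global_transaction;
select * from v$lock where type = 'DX';
-- and is there a local transaction at all?
select used_ublk, used_urec from v$transaction;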

But that is not as interesting as another finding of mine: a commit after a simple “select * from t@dblink” is a “read-only” commit on the local DB, but on the remote side it becomes a “read-only” rollback! Except in cases where there was any DML in that transaction (no matter whether local or remote). But how does Oracle determine whether it is read-only or not, and whether to execute a commit or a rollback?

Yet another interesting thing: if we issue “commit” a few times after a “select from dblink”, then, as expected, the rollback is executed only once on the remote side. But when we close our session, there is another commit on the remote side (a real read-only commit).

Tests below: Continue reading→

commit distributed transactions remote statements rollback

Easy way of tracing

Posted on October 12, 2012 by Sayan Malakshinov. Posted in diagnostic event 10046, oracle, trace

Recently I needed to trace connections incoming over a dblink, and I decided to make it more convenient, without using a trigger.
It takes just three simple steps:

  1. Create a new service
  2. Turn on tracing for this service
  3. Edit tnsnames.ora and add a dblink that connects via this service, if needed

So Oracle will trace all connections to this service.

Below is the code for SQL*Plus:

begin
  DBMS_SERVICE.CREATE_SERVICE('db11203trace','db11203trace');
end;
/
column new_value new_value new_value;
select p.value||',db11203trace' new_value from v$parameter p where p.name like 'service_names';
alter system set service_names='&new_value';
exec dbms_monitor.serv_mod_act_trace_enable(service_name => 'db11203trace',waits => true,binds => true);
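
And to switch the tracing off again when it is no longer needed (same service name as above):

exec dbms_monitor.serv_mod_act_trace_disable(service_name => 'db11203trace');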