Oracle SQL

When bloggers get it wrong – part 1

Posted on May 4, 2017 by Roger MacNicol Posted in adaptive serial direct path reads, oracle, SmartScan

I’ve read a number of blog entries from various people who’ve clearly put great diligence into trying to understand how the decision is made between scanning a table through the buffer cache and using direct path read for the scan. Since this is a critical decision from a performance perspective, I’ve decided to write a more definitive account. Part 1 deals with a few principles, and part 2 (which will follow very shortly) will show how this works with tracing and clear up a few misunderstandings. Part 3 will deal with PQ in more detail as soon as I have time.

Serial Scans

Small Table Threshold (STT) 

Serial scans use direct path reads when the object is not a small table and, since 11.2.0.2, the size used to determine whether it is “small” has come from stats rather than from the segment header.

_small_table_threshold defaults to 2% of the buffer cache or to 20 blocks, whichever is bigger (NB: this parameter ignores the effects of compression). An object is small enough for buffer cache reads to be more effective than direct path read if the number of blocks it contains is less than or equal to the value of _small_table_threshold. Such scans will bump the stat “table scans (short tables)”.
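
If you want to check the value in effect on your system, the usual diagnostic query against the x$ fixed tables can be used (this is a generic technique rather than anything from the post, and it requires connecting as SYS since hidden parameters are not visible in v$parameter):

select i.ksppinm  parameter,
       v.ksppstvl value
from   x$ksppi  i,
       x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_small_table_threshold';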

Medium Table Threshold (MTT)

So what happens when an object is bigger than STT? We fall into the MTT case: MTT is not directly configurable and defaults to 10% of the buffer cache blocks. But it also depends on a cost-based decision that includes how busy the cache is.
NB: PQ and Exadata have different rules.
NB: MTT also depends on _object_statistics being enabled.

For STT < #blocks < MTT, if the cost-based decision means a buffer cache scan is chosen, the stat “table scans (short tables)” will still get bumped.

So when is a medium table not worth caching?

First, when MTT < #blocks (ignore VLOT in blogs: it hasn’t been used for many years but persists in the trace and in blogs for historic reasons).

Second, if the scan is not a select for update: different rules apply to select for update because cache scans are more useful if the segment being updated has first been the subject of a query. Tables larger than MTT also factor in the storage reduction factor: the ratios used are 10X for HCC and 3.3X for OLTP.

Third, a cost-based analysis is done comprising the % of the object currently in cache, the % of buffers that are dirty (check-pointing cost), the current I/O bottleneck, and any RAC access costs. If the buffer cache is still used, the stat “table scans (long tables)” will get bumped; but if Direct Read is chosen, the stat “table scans (direct read)” will get bumped.
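
If you want to see which path a scan took, the three stats named above can be sampled before and after the scan; a minimal sketch against v$sysstat (instance-wide; join v$mystat to v$statname for just your session):

select name, value
from   v$sysstat
where  name in ('table scans (short tables)',
                'table scans (long tables)',
                'table scans (direct read)');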

Automatic Big Table Caching (ABTC)

So does this mean the buffer cache never helps with large tables? Not at all: large tables can still use the buffer cache for a portion of their scan, based on the public parameter DB_BIG_TABLE_CACHE_PERCENT_TARGET, which sets aside a percentage of the buffer cache for larger tables.

On a single instance, ABTC works with both Serial and PQ scans, BUT mixing Serial and PQ may lead to fewer cache hits. On RAC, it only works with PQ and requires that PARALLEL_DEGREE_POLICY be set to AUTO or ADAPTIVE.
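
As a quick sketch of how this is switched on (the 40% figure is just an example, and this assumes the parameter is dynamically settable on your version), you reserve part of the buffer cache and can then watch the reserved section through V$BT_SCAN_CACHE:

alter system set db_big_table_cache_percent_target = '40';

select * from v$bt_scan_cache;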

ABTC works by tracking the heat of medium and large tables and is designed to take advantage of any runs of buffers currently cached. Then, if the decision to use ABTC is made, it periodically checks the cache to see how many buffers in the next section are already in memory and whether Direct Read or a Buffer Cache scan would be better for the next chunk. Let’s see how the heat tracking works by looking at the v$bt_scan_obj_temps view.

SQL> select DATAOBJ#,SIZE_IN_BLKS,TEMPERATURE from V$BT_SCAN_OBJ_TEMPS; 

DATAOBJ#   SIZE_IN_BLKS TEMPERATURE
---------- ------------ -----------    
     79042         1689        3000

Each time we scan a medium or larger table, it will bump that object’s temperature by 1000. Then, the object with the highest temperature will get absolute priority for caching and, if needed, will evict any previously cached ABTC blocks of other objects.
If the hottest object only partially fits in the buffer cache (which is quite likely), the first section that fits will be read through the buffer cache and cached; then the rest of the scan will switch to Direct Read once the ABTC allocation in the cache is fully utilized.
NB: The object will still be check-pointed at the beginning so that it is safe to switch to Direct Read at any time.

Part 2 will include tracing to show how to make use of this feature.


Offloading row level security to Smart Scan

Posted on May 4, 2017 by Roger MacNicol Posted in cell_offload, oracle, SmartScan, SQL

The question came up this week about whether the predicates for row level security are offloaded to Smart Scan. The simple answer is yes, as long as the policy_function contains off-loadable predicates.

Let’s see this in action. Using the public TPC-H schema, we can set up the customer table so that customers can only see their own entries in that table. We need a row level security policy to check whether each row is one that the current user is permitted to see, and a function for the policy to call to make that determination:
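
The full worked example follows in the original post; as a hedged sketch of the general shape only (the schema, table, and function names below are illustrative, not the post’s), the policy function returns a predicate string and is then registered with DBMS_RLS:

create or replace function customer_vpd_fn (
    p_schema in varchar2,
    p_object in varchar2
) return varchar2
is
begin
    -- return an off-loadable predicate: each customer sees only rows
    -- whose C_NAME matches the session user
    return 'c_name = sys_context(''userenv'', ''session_user'')';
end;
/

begin
    dbms_rls.add_policy(
        object_schema   => 'TPCH',
        object_name     => 'CUSTOMER',
        policy_name     => 'CUSTOMER_RLS',
        function_schema => 'TPCH',
        policy_function => 'CUSTOMER_VPD_FN',
        statement_types => 'SELECT');
end;
/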

Continue reading→

More tricks with OPT_PARAM

Posted on May 4, 2017 by Roger MacNicol Posted in cell_offload, hints, oracle, SmartScan

Did you know you can set most parameters for the execution of a single statement, without using an Alter Session, by using an OPT_PARAM hint? For example, regular parameters (here forcing the storage clause in the query plan):

SQL> select /*+ OPT_PARAM('cell_offload_plan_display' 'always') */ col1 From table1;

and underscore parameters:

SQL>  select /*+ OPT_PARAM('_arch_comp_dbg_scan',2048) */ n_name from nation;
Continue reading→

Improvements to HCC with wide tables in 12.2

Posted on May 4, 2017 by Roger MacNicol Posted in oracle, SmartScan

HCC Compression Unit Sizing 

Since the beginning, Oracle has provided four compression levels to offer a trade-off between the compression ratio and various other factors, including table scan and single-row retrieval performance. I cannot emphasize enough that the various trade-offs mean that YMMV with the different levels, and you should always test what works best with your application and hardware when choosing the level. Historically, people have rarely used Query Low: its faster compression comes with a reduced compression ratio, and the extra disk I/O that entails costs more than is saved by cheaper decompression relative to Query High. The one time that Query Low makes sense on spinning disks is if you still have a significant number of row retrieval operations (including from index access joins).
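
For reference, the four levels are chosen in the segment’s compression clause (standard HCC DDL on Exadata-class storage, shown here with illustrative table names rather than anything from the post):

create table orders_ql compress for query low    as select * from orders;
create table orders_qh compress for query high   as select * from orders;
create table orders_al compress for archive low  as select * from orders;
create table orders_ah compress for archive high as select * from orders;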

NVMe Flash

X5 introduced NVMe technology, which means the extra I/O from Query Low is now faster than ZLIB decompression, making Query Low beneficial. So we needed to reassess the sizing of Compression Units. From 11.2.0.1 to 12.1.2.4 the sizing guidelines are as follows:

Continue reading→

Examining the new Columnar Cache with v$cell_state

Posted on May 4, 2017 by Roger MacNicol Posted in oracle, SmartScan

12.1.0.2 introduced the new Columnar Flash Cache where 1MB of blocks that are all in HCC format are rewritten so as to make each column from each CU contiguous. This works by first writing all the block headers to an array, then writing all the CU headers to an array, and finally writing all the Column 1 column-level-CUs, then all the Column 2 column-level-CUs, and so on.

The flash cache hash table maintains a simple mapping of column numbers to 64KB flash pages so, for any given query, we can simply do asynchronous disk I/O of the minimum set of 64KB pages required to cover the referenced columns.

Within the “flashcache” cell stats there is a new nested cell stat called “columnarcache” that enables you to track how it is working.

> set long 50000000
> set pagesize 10000
> select xmltype(STATISTICS_VALUE).getclobval(2,2) from v$cell_state;

XMLTYPE(STATISTICS_VALUE).GETCLOBVAL(2,2)
--------------------------------------------------------------------------------
....

  <stats type="flashcache">
    <stats type="columnarcache">
      <stat name="columnar_cache_size">0</stat>
      <stat name="columnar_cache_keep_size">0</stat>
      <stat name="columnar_cache_keep_read">0</stat>
      <stat name="columnar_cache_keep_read_bytes">0</stat>
      <stat name="columnar_attempted_read">0</stat>
      <stat name="columnar_attempted_read_bytes">0</stat>
      <stat name="columnar_cache_hits">0</stat>
      <stat name="columnar_cache_hits_read_bytes">0</stat>
      <stat name="columnar_cache_hits_saved_bytes">0</stat>
      <stat name="columnar_cache_pop">0</stat>
      <stat name="columnar_cache_pop_bytes">0</stat>
      <stat name="columnar_cache_pop_throttled">0</stat>
      <stat name="columnar_cache_pop_invalid_buf">0</stat>
      <stat name="columnar_cache_pop_size_align">0</stat>
      <stat name="columnar_cache_pop_findchdrfailure_mainchdr">0</stat>
      <stat name="columnar_cache_pop_findchdrfailure_childchdr">0</stat>
    </stats>
  </stats>

I typically spool the output of this and wade through it in an editor, but if we want to monitor how it is working under some workload, we need to extract individual stats. For example, I bounced the cells and verified the cache was empty:

> select xmlcast(xmlquery('/cell_stats/stats/stats/stat[@name="columnar_cache_size"]' passing xmltype(STATISTICS_VALUE) returning content) as varchar2(200) ) "COLUMNAR CACHE SIZE"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

COLUMNAR CACHE SIZE
--------------------------------------------------------------------------------
0

I am using the 1GB TPC-H schema, which takes a little more than 400MB on disk when compressed with Query Low:

SQL> select sum(bytes) from user_segments where SEGMENT_NAME in ('SUPPLIER','PARTSUPP','LINEITEM','ORDERS','PART','CUSTOMER');

SUM(BYTES)
----------
 420675584

and checking the columnar cache again shows about half of the data has been rewritten into columnar cache format instead of caching raw blocks:

SQL> select xmlcast(xmlquery('/cell_stats/stats/stats/stat[@name="columnar_cache_size"]' passing xmltype(STATISTICS_VALUE) returning content) as varchar2(200) ) "COLUMNAR CACHE SIZE"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

COLUMNAR CACHE SIZE
--------------------------------------------------------------------------------
179306496

So let’s look at how the cache helped:

select
   xmlcast(
      xmlquery(
         '/cell_stats/stats/stats/stat[@name="columnar_cache_hits_read_bytes"]' 
         passing xmltype(STATISTICS_VALUE) 
         returning content
      ) as varchar2(200) 
   ) "HIT READ BYTES"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

 
HIT READ BYTES
--------------------------------------------------------------------------------
1909456896

select
   xmlcast(
      xmlquery(
         '/cell_stats/stats/stats/stat[@name="columnar_cache_hits_saved_bytes"]'
         passing xmltype(STATISTICS_VALUE)
         returning content
      ) as varchar2(200) 
   ) "HIT SAVED BYTES"
from v$cell_state
where xmlexists('/cell_stats/stats/stats[@type="columnarcache"]' passing xmltype(STATISTICS_VALUE));

 
HIT SAVED BYTES
--------------------------------------------------------------------------------
1128267776

which shows we were able to eliminate about 35% of the disk I/O for this query run!

We could, of course, have gotten that information more simply with the regular stat “cell physical IO bytes saved by columnar cache” but I wanted to show how to pull values from v$cell_state for use in scripts.
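
For completeness, pulling that regular stat for the current session is a one-line sketch (the stat name is the one quoted above):

select n.name, m.value
from   v$mystat m, v$statname n
where  m.statistic# = n.statistic#
and    n.name = 'cell physical IO bytes saved by columnar cache';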


Many people only use Query High compression as they find the increased disk I/O from Query Low more than offsets the savings from cheaper decompression. However, with the columnar cache in place, those trade-offs have changed; it may be worth re-evaluating when to use Query Low vs. Query High, particularly on CPU-bound cells.

Addendum: performance bug 20525311 affecting the columnar cache with selective predicates is fixed in the next rpm.

Roger MacNicol
Oracle Data Storage Technology 

Controlling the offload of specific operators

Posted on May 4, 2017 by Roger MacNicol Posted in cell_offload, oracle, SmartScan

One of the joys of regexp is that you can write a pattern that is painfully expensive to match, and offloading these to the cell can cause a significant impact on other users and overall throughput (including heartbeat issues). If you have a user who is prone to writing bad regexp expressions, you as DBA can prevent regexp (or any other operator) from being offloaded to the cells.

Let’s take a very simple example using a cut down version of TPC-H Query 16 and a NOT LIKE predicate: 

SQL> explain plan for select p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%'
and p_size in (49, 14, 23, 45, 19, 3, 36, 9)
group by p_brand, p_type, p_size;

SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY);

  |*  3 |    TABLE ACCESS STORAGE FULL| PART | 29833 |  1048K|    |   217   (2)| 00:00:01 |  1 |  8 
------------------------------------------------------------------------------------------------------------
      3 - storage(("P_SIZE"=3 OR "P_SIZE"=9 OR "P_SIZE"=14 OR "P_SIZE"=19 
      OR "P_SIZE"=23 OR "P_SIZE"=36 OR "P_SIZE"=45 OR "P_SIZE"=49) 
      AND "P_BRAND"<>'Brand#45' AND "P_TYPE" NOT LIKE 'MEDIUM POLISHED%')

Here we see all the predicates get offloaded as expected. So, for example, to stop NOT LIKE being offloaded we would need to find the operator in v$sqlfn_metadata:

SQL> column descr format a18
SQL> select func_id, descr, offloadable from v$sqlfn_metadata where descr like '%LIKE%';

   FUNC_ID DESCR              OFF
---------- ------------------ ---
        26  LIKE              YES
        27  NOT LIKE          YES
        99  LIKE              NO
       120  LIKE              YES
       121  NOT LIKE          YES
       ...
       524  REGEXP_LIKE       YES
       525  NOT REGEXP_LIKE   YES
       537  REGEXP_LIKE       YES
       538  NOT REGEXP_LIKE   YES

We can ignore all but the two basic LIKE operators in this case, so to disable the offload of our LIKE predicates we use:

SQL> alter session set cell_offload_parameters="OPT_DISABLED={26,27};";

and we see this reflected in the offloadable column in v$sqlfn_metadata.

SQL> select func_id, descr, offloadable from v$sqlfn_metadata where descr like '%LIKE%';

   FUNC_ID DESCR              OFF
---------- ------------------ ---
        26  LIKE              NO
        27  NOT LIKE          NO
        99  LIKE              NO
       120  LIKE              YES
       121  NOT LIKE          YES

To re-enable them you would use:

SQL> alter session set cell_offload_parameters="OPT_DISABLED={};";

One thing to note about this parameter is that it doesn’t work like events (whose settings are additive): it replaces the previous value, so every operator you want disabled has to be included in the same alter session. Also, the parameter is limited to a maximum of 255 characters, which limits the number of operators that can be disabled. With the offload of LIKE and NOT LIKE disabled we can see the impact on the plan:

SQL> explain plan for select p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%'
and p_size in (49, 14, 23, 45, 19, 3, 36, 9)
group by p_brand, p_type, p_size;

SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY); 

 |*  3 |    TABLE ACCESS STORAGE FULL| PART | 29833 |  1048K|    |   217   (2)| 00:00:01 |  1 |  8 
------------------------------------------------------------------------------------------------------------

     3 - storage(("P_SIZE"=3 OR "P_SIZE"=9 OR "P_SIZE"=14 OR "P_SIZE"=19 OR "P_SIZE"=23
     OR "P_SIZE"=36 OR "P_SIZE"=45 OR "P_SIZE"=49) AND "P_BRAND"<>'Brand#45')

and the NOT LIKE is no longer in the storage filter. Now let’s say that you as DBA are faced with a more complex problem and want to halt all complex processing on the cells temporarily. There is a parameter that will disable everything except the simple comparison operators and NULL checks:

SQL> alter session set "_cell_offload_complex_processing"=FALSE;

Now let’s see what happens:

SQL> explain plan for select p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%'
and p_size in (49, 14, 23, 45, 19, 3, 36, 9)
group by p_brand, p_type, p_size;

SQL> select * FROM TABLE(DBMS_XPLAN.DISPLAY); 

 |*  3 |    TABLE ACCESS STORAGE FULL| PART | 29833 |  1048K|    |   217   (2)| 00:00:01 |  1 |  8 
------------------------------------------------------------------------------------------------------------

    3 - filter(("P_SIZE"=3 OR "P_SIZE"=9 OR "P_SIZE"=14 OR "P_SIZE"=19 OR "P_SIZE"=23
    OR "P_SIZE"=36 OR "P_SIZE"=45 OR "P_SIZE"=49) AND "P_BRAND"<>'Brand#45'
    AND "P_TYPE" NOT LIKE 'MEDIUM POLISHED%')

Well, we got no storage predicates at all, and we didn’t expect that because we had one simple predicate, namely p_brand != 'Brand#45', and the IN predicate had been rewritten to a series of OR’ed comparisons. So what happened? This parameter only permits simple predicates that are linked by ANDs and can be attached directly to one column. Disjuncts are not pushable, so they are normally evaluated by an eva tree or by pcode, neither of which is sent to the cell with this parameter set to FALSE. So why wasn’t our one simple predicate offloaded? Note where it is in the explain plan: it comes after the rewritten IN, and since the predicates are sorted by the optimizer on effectiveness, we stop looking as soon as we see one that can’t be offloaded. Let’s remove the IN and see what happens:

SQL> explain plan for select  p_brand, p_type, p_size
from part
where p_brand <> 'Brand#45'
and p_type not like 'MEDIUM POLISHED%';

|*  2 |   TABLE ACCESS STORAGE FULL| PART |   190K|  6686K|   217   (2)| 00:00:01 | 1 | 8 |
---------------------------------------------------------------------------------------------------

    2 - storage("P_BRAND"<>'Brand#45')
       filter("P_BRAND"<>'Brand#45' AND "P_TYPE" NOT LIKE 'MEDIUM POLISHED%')

As expected, the simple predicate is now offloaded. If you look at v$sqlfn_metadata you’ll see this parameter is reflected in the offloadable column:

SQL> select func_id, descr, offloadable from v$sqlfn_metadata where descr like '%LIKE%';

   FUNC_ID DESCR              OFF
---------- ------------------ ---
        26  LIKE              NO
        27  NOT LIKE          NO
        99  LIKE              NO
       120  LIKE              NO
       ...
       121  NOT LIKE          NO
       524  REGEXP_LIKE       NO
       525  NOT REGEXP_LIKE   NO
       537  REGEXP_LIKE       NO
       538  NOT REGEXP_LIKE   NO

I hope you never need any of this in real life but it’s good to have it in the toolbag.


Alter Table Shrink Space and SmartScan

Posted on May 4, 2017 by Roger MacNicol Posted in oracle, SmartScan

Over the years, updates can cause rows to become highly fragmented sapping performance on Exadata table scans.

The offload server, and hence SmartScan, gets data to process 1 MB at a time. Because of this, SmartScan can only process the row pieces available in the current 1 MB chunk it is working on. Unlike RDBMS table scans, SmartScan is not able to initiate disk I/Os to retrieve further row pieces and, even if it could, it is unlikely that they would be present on the same cell. When SmartScan finds it needs a row for a projected column that is not present in the blocks available to it, it will apply the predicates to the row pieces it does have; but if those predicates pass, it has to return the row unprocessed for the RDBMS to fetch the missing row pieces from the buffer cache.
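
The fix is covered in the full post; as a hedged reminder of the standard syntax (the table name is illustrative, and shrink space assumes an ASSM tablespace), defragmenting the rows looks like:

alter table lineitem enable row movement;
alter table lineitem shrink space cascade;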

Continue reading→

SQL validation during PL/SQL compilation

Posted on May 1, 2017 by Sayan Malakshinov Posted in oracle, PL/SQL, SQL

A recent posting on SQL.RU asked why Oracle doesn’t raise errors like “ORA-00979: not a group by expression” during PL/SQL compilation. Since I couldn’t find a link to the answer (though I read about it many years ago, I don’t remember where…), I’ve decided to post a short answer:

During PL/SQL compilation Oracle checks static SQL using only:

  1. Syntactic analysis – Oracle verifies that keywords, object names, operators, delimiters, and so on are placed correctly in your SQL statement, so queries like “select * foRm dual” will fail this validation. For example, we can get errors such as:
    ORA-00900: invalid SQL statement
    ORA-00923: FROM keyword not found where expected
    ORA-00924: missing BY keyword
    ORA-00933: SQL command not properly ended
    …
  2. Semantic analysis – it verifies that references to host variables and database objects are valid (including their grants) and that host-variable datatypes are correct. For example, “select * from nonexisting_table” will fail this validation.

And since Oracle doesn’t validate any other types of errors during syntactic and semantic analysis, we can detect them only during optimization* or execution*.
For example, Oracle detects “ORA-00979: not a group by expression” during the optimization phase.

* Read “Chapter 7: Parsing and Optimizing” in Oracle Core by Jonathan Lewis

NB. This does not apply to CREATE or ALTER VIEW, because Oracle executes the optimization step for them. You can check it using trace event 10053.

Simple example:

-- fORm instead of "from" - syntactic validation fails:
SQL> create table t(a int, b int, c int);
SQL> create or replace procedure p_syntactic is
  2     cursor c is select a,b,sum(c) sum_c fORm t group by a;
  3  begin
  4     null;
  5  end;
  6  /

Warning: Procedure created with compilation errors.

SQL> sho error;
Errors for PROCEDURE P_SYNTACTIC:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2/16     PL/SQL: SQL Statement ignored
2/40     PL/SQL: ORA-00923: FROM keyword not found where expected

-- semantic validation fails:
SQL> create or replace procedure p_semantic is
  2     cursor c is select a,b,sum(blabla) sum_c from t group by a;
  3  begin
  4     null;
  5  end;
  6  /

Warning: Procedure created with compilation errors.

SQL> sho error;
Errors for PROCEDURE P_SEMANTIC:

LINE/COL ERROR
-------- -----------------------------------------------------------------
2/16     PL/SQL: SQL Statement ignored
2/31     PL/SQL: ORA-00904: "BLABLA": invalid identifier

-- As you can see, this procedure passes both syntactic and semantic analysis successfully,
-- though the query is not valid: it should raise “ORA-00979: not a GROUP BY expression”
SQL> create or replace procedure p_valid is
  2     cursor c is select a,b,sum(c) sum_c from t group by a;
  3  begin
  4     null;
  5  end;
  6  /

SQL> sho error;
No errors.

-- Oracle checks such errors for "CREATE VIEW", because it runs optimization for the query text:
SQL> create view v_cursor as select a,b,sum(c) sum_c from t group by a order by a;
create view v_cursor as select a,b,sum(c) sum_c from t group by a order by a
                                 *
ERROR at line 1:
ORA-00979: not a GROUP BY expression

How to speed up slow unicode migration of a table with xmltype columns

Posted on February 20, 2017 by Sayan Malakshinov Posted in oracle, parallel, query optimizing, troubleshooting

Recently I had an issue with a slow unicode migration of a database upgraded from 10g to 12.1.0.2. The main problem was a table with an xmltype column: we spent about 4 hours on this table (~17GB) during the test migration, though all other tables (~190GB) migrated in just about 20 minutes.
We used DMU (Database Migration Assistant for Unicode), and the root cause of the problem was the update statement generated by DMU for this table:

update  /*+ PARALLEL(A,16)*/ "RRR"."T_XMLDATA" A  set A."SYS_NC00011$" = SYS_OP_CSCONV(A."SYS_NC00011$", 'AL16UTF16')

“SYS_NC00011$” was the internal hidden CLOB column used to store the XMLTYPE. As you can see, DMU added a PARALLEL hint, but although Oracle can use parallel DML for xmltype since 12.1.0.1, we couldn’t use it because of its limitations:

Changes in Oracle Database 12c Release 1 (12.1.0.1) for Oracle XML DB

Parallel DML Support for XMLType
Support for parallel DML has been improved for XMLType storage model binary XML using SecureFiles LOBs. The performance and scalability have been improved for both CREATE TABLE AS SELECT and INSERT AS SELECT.

Restrictions on Parallel DML

Parallel DML can be done on tables with LOB columns provided the table is partitioned. However, intra-partition parallelism is not supported.

For non-partitioned tables with LOB columns, parallel INSERT operations are supported provided that the LOB columns are declared as SecureFiles LOBs. Parallel UPDATE, DELETE, and MERGE operations on such tables are not supported.

Btw, Oracle didn’t support parallel dml for xmltype on previous releases:

No Parallel DML for XMLType – DML operations on XMLType data are always performed in serial. Parallel DML is not supported for XMLType. (Parallel query and DDL are supported for XMLType.)

So I had to use manual parallelization:
1. Monitor the “Convert application tables” step through “View Table Conversion progress” and press the “Stop” button during the conversion of this table.
2. Create a table with the ROWIDs of this table and split them into 16 groups:

create table tmp_rids as 
select rowid rid, ntile(16)over(order by rowid) grp 
from t_xmldata;

3. Execute

ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT LEVEL 1,FOREVER'; 

to avoid “ORA-22839: Direct updates on SYS_NC columns are disallowed”.
4. Start 16 sessions, each of which updates its own part:

update t_xmldata A 
set A."SYS_NC00011$" = SYS_OP_CSCONV(A."SYS_NC00011$", 'AL16UTF16') 
where rowid in (select rid from tmp_rids where grp=&grp);
commit;

5. Disable event 22838:

ALTER SYSTEM SET EVENTS '22838 TRACE NAME CONTEXT OFF'; 

6. Open the “View Table Conversion progress” window, click on this table and change the “Retry” option to “Skip” for the update step.

This simple method allowed us to make the unicode migration about 16 times faster.


row pieces, 255 columns, intra-block row chaining in details

Posted on February 12, 2017 by Sayan Malakshinov Posted in oracle, trace, undocumented

You might be familiar with intra-block row chaining, which can occur when a table has more than 255 columns. However, did you know that intra-block chaining only works with inserts, not updates? (Upd: this is no longer the case since version 12.2.)

Documentation says:

When a table has more than 255 columns, rows that have data after the 255th column are likely to be chained within the same block. This is called intra-block chaining. A chained row’s pieces are chained together using the rowids of the pieces. With intra-block chaining, users receive all the data in the same block. If the row fits in the block, users do not see an effect in I/O performance, because no extra I/O operation is required to retrieve the rest of the row.

A bit more details:
1. A single row piece can store up to 255 columns.
2. Oracle splits fields into row pieces in reverse order.
3. Oracle doesn’t store trailing NULLs in a row (not in a row piece).
4. The next row piece can be stored in the same block only with inserts. When you run an update, Oracle will place the new row piece into a different block (not valid since 12.2).

I’ll show this in examples with dumps:
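
As a hedged sketch of a test setup (the table name, column count, and file/block numbers below are mine, not the post’s), you can generate a wide table that forces multiple row pieces and then dump the block that holds the row:

-- build a 300-column table so every row needs at least two row pieces
declare
    l_sql varchar2(32767) := 'create table t_wide (c1 number';
begin
    for i in 2 .. 300 loop
        l_sql := l_sql || ', c' || i || ' number';
    end loop;
    execute immediate l_sql || ')';
end;
/

insert into t_wide (c1, c300) values (1, 300);
commit;

-- placeholder file/block numbers: find the real ones via dbms_rowid
alter system dump datafile 5 block 200;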

Continue reading→