One of the main performance technologies underlying ADW is the In-Memory format columnar cache, and a question that has come up several times is how the columnar cache handles DML and cache invalidations. This blog post attempts to answer that question.
The Two Columnar Cache Formats
The original columnar cache was an idempotent rearrangement of Hybrid Columnar Compressed (HCC) blocks into pure columnar form, still in the original HCC encoding. This format is known internally as “CC1” and is not applicable to row-format blocks. Because the transformation is idempotent, we are able to reconstitute the original raw blocks from a CC1 cache if needed.
We then introduced the In-Memory format columnar cache, known internally as “CC2”. If you have a Database In-Memory licence, we run the 1 MB chunks processed by Smart Scan through the In-Memory loader and write new clean columns encoded in In-Memory formats, which means we can then use the SIMD-based predicate evaluation and other performance improvements developed for Database In-Memory. If you do not have an In-Memory licence, the original CC1 columnar cache is used. Another advantage of the CC2 format is that we can load row-format blocks as well as HCC blocks into pure columnar In-Memory format.
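To make the two formats concrete, here is a minimal Python sketch of the format-selection logic just described. The enum, function, and parameter names are purely illustrative, not actual cell server code:

```python
from enum import Enum

class BlockFormat(Enum):
    ROW_MAJOR = "row"    # conventional row-format blocks
    HCC = "hcc"          # Hybrid Columnar Compressed blocks

def columnar_cache_format(block_format: BlockFormat, has_inmemory_licence: bool):
    """Illustrative only: which columnar cache format a 1 MB chunk can use."""
    if has_inmemory_licence:
        # CC2: the In-Memory loader handles both row-format and HCC blocks.
        return "CC2"
    if block_format is BlockFormat.HCC:
        # CC1: an idempotent rearrangement of HCC blocks into pure columnar
        # form, so the original raw blocks can be reconstituted if needed.
        return "CC1"
    # CC1 is not applicable to row-format blocks.
    return None
```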
The decision to rewrite 1 MB chunks into columnar format follows the normal cache AUTOKEEP pool policy of needing to see, on average, at least two scans per hour of a given segment (i.e. table, partition, or sub-partition). 1 MB chunks are checked for eligibility, including transactional state, all chained-row pieces being available, and the absence of any Oracle 8 format blocks; chunks that pass are placed on a background queue that is throttled to prevent it from interfering with the main workload. This queue processes the 1 MB chunks and writes them to the cache in their new format, and the rewritten chunks replace the space used by the raw blocks in the cache.
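The eligibility check and background rewrite can be pictured roughly as follows. This is a conceptual sketch only; the attribute names and the simple queue are stand-ins for the real cell-side implementation:

```python
import queue

rewrite_queue = queue.Queue()  # background queue; throttled in the real system

def chunk_is_eligible(chunk) -> bool:
    """Conceptual eligibility test for rewriting a 1 MB chunk into columnar form."""
    return (chunk.transaction_state_is_clean          # no problematic transactional state
            and chunk.all_chained_row_pieces_present  # every chained-row piece available
            and not chunk.contains_oracle8_blocks)    # no Oracle 8 format blocks

def maybe_enqueue_for_rewrite(segment, chunk):
    # AUTOKEEP pool policy: the segment must average at least two scans
    # per hour before its chunks are considered for rewrite.
    if segment.scans_per_hour >= 2 and chunk_is_eligible(chunk):
        # Processed later by a background task; the rewritten chunk
        # replaces the raw blocks in the cache.
        rewrite_queue.put(chunk)
```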
Please see:
- CELLMEMORY Part 1
- CELLMEMORY Part 2
- CELLMEMORY Part 3
- How to tell if the Exadata column cache is fully loaded?
ADW
When ADW was introduced, the decision was made to include CC2 as one of the bundled performance technologies and to have it enabled automatically for all tables, whether they are stored in row-major or column-major format. Initial population on ADW is also different in that a single scan will trigger the segment to be loaded into IM formats. For example, if your load script commits and then executes a simple select that returns no rows, the CC2 image will be completed within a minute or so of loading.
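Expressed as a sketch, the initial-population trigger differs only in the threshold applied (again, the names here are illustrative):

```python
def should_trigger_initial_population(is_adw: bool, total_scans: int,
                                      scans_per_hour: float) -> bool:
    """Conceptual: when initial CC2 population of a segment is triggered."""
    if is_adw:
        # On ADW a single scan is enough to trigger population.
        return total_scans >= 1
    # Elsewhere the normal AUTOKEEP policy applies: an average of at
    # least two scans per hour of the segment.
    return scans_per_hour >= 2
```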
So how does the columnar cache handle DMLs and cache invalidations? As one might expect on ADW, there is an element of learned behaviour involved. The behaviour is very different to how DBIM handles invalidations on RDBMS nodes, so it is worth explaining.
When a block is written by a DML, the entire 1 MB chunk in CC2 format is invalidated, allowing the individual blocks to be cached in raw form in case further DMLs to those blocks follow and the blocks need to be read quickly. We then need to decide when the chunk is cool enough to be worth running through the In-Memory loader again. The current heuristic is that we need to see 10 scans after the last write to that 1 MB chunk; the 11th scan will then trigger the CC2 load. This heuristic is controlled by the cell init parameter “_cell_region_heuristics_read_count_threshold”. On ADW it is not possible to change this parameter, but on non-Autonomous systems you can change it to better fit your workload.
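The per-chunk heuristic can be modelled as a simple counter that a write resets, as in this sketch (the class and method names are mine, not the cell server’s):

```python
READ_COUNT_THRESHOLD = 10  # mirrors _cell_region_heuristics_read_count_threshold

class ChunkHeuristic:
    """Conceptual per-1 MB-chunk tracking of the repopulation heuristic."""

    def __init__(self):
        self.scans_since_last_write = 0

    def on_write(self):
        # A DML to any block invalidates the whole 1 MB CC2 chunk and
        # restarts the count of scans since the last write.
        self.scans_since_last_write = 0

    def on_scan(self) -> bool:
        # After 10 scans with no intervening write, the 11th scan
        # triggers the CC2 load.
        if self.scans_since_last_write >= READ_COUNT_THRESHOLD:
            return True  # trigger the In-Memory (CC2) load
        self.scans_since_last_write += 1
        return False
```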
Please note that there is no equivalent to DBIM’s row-level invalidation, and there is no equivalent to the full rebuild of the column store that the DBIM background process initiates after reaching a threshold number of invalidations.
Another aspect of the per-1 MB tracking is recording whether reformatting into In-Memory format failed and, if so, why. For example, IM loading can fail because of the various memory limits on the size of the IM rewrite, the number of CUs in a 1 MB chunk, transaction-state issues, or one or more chained-row pieces being unavailable in that 1 MB chunk.
Please note that, for blocks stored in row-major format, the need to see all the chained-row pieces in order to process a row or load it in IM format means that setting the PCTFREE of a table to account for the row’s typical lifecycle is especially important! Please see the comments on PCTFREE in the Exadata section at the top of: Thoughts on Big Data SQL SmartScan. One of the many advantages of Hybrid Columnar Compression is that it has no chained-row pieces and thus no possibility of DML causing Smart Scan to be unable to process rows. The wider the table, the larger the benefit you get from the HCC format.
If we have marked a 1 MB chunk as failing IM population, we will not try to load it again until we see a write to that chunk; the write clears the failure status and the chunk then follows the normal decision-making path.
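Building on the ChunkHeuristic class from the earlier sketch, the failure status can be modelled as a flag that only a write clears (again, purely illustrative):

```python
class ChunkHeuristicWithFailure(ChunkHeuristic):
    """Extends the earlier sketch with the population-failure flag."""

    def __init__(self):
        super().__init__()
        self.failed_reason = None  # e.g. "memory limit", "chained-row piece missing"

    def on_population_failure(self, reason: str):
        self.failed_reason = reason  # no retries until a write is seen

    def on_write(self):
        self.failed_reason = None  # a write clears the failure status
        super().on_write()         # and restarts the scan count as before

    def on_scan(self) -> bool:
        if self.failed_reason is not None:
            return False  # do not try to load again while marked as failed
        return super().on_scan()
```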
Another caveat is when small loads have resulted in blocks from different tables being interleaved in a single 1 MB chunk. We track the segment owner of the last block written, and if we see a write to that 1 MB chunk from a different segment owner we will immediately try to populate it. This is the one case that does not wait for 10 scans after a DML before populating.
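This case can be added to the same sketch by tracking the owner of the last block written; a write from a different owner bypasses the 10-scan wait (the owner values here are placeholders):

```python
class ChunkHeuristicWithOwner(ChunkHeuristicWithFailure):
    """Adds the segment-owner check for interleaved 1 MB chunks."""

    def __init__(self):
        super().__init__()
        self.last_writer_segment = None

    def on_write_from(self, segment_owner) -> bool:
        """Returns True if the chunk should be populated immediately."""
        changed = (self.last_writer_segment is not None
                   and segment_owner != self.last_writer_segment)
        self.last_writer_segment = segment_owner
        self.on_write()  # invalidate, clear any failure status, reset scan count
        # A write from a different segment owner triggers immediate
        # population instead of waiting for 10 scans after the DML.
        return changed
```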
Please let me know if you have questions.
—
Roger