Many people know that in 12.1.0.2 we introduced a ground-breaking columnar cache that rewrote 1 MB chunks of HCC format blocks in the flash cache into pure columnar form, in a way that allowed us to do I/O only for the columns needed while still being able to recreate the original blocks when that was required.
This showed up in stats as “cell physical IO bytes saved by columnar cache”.
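A quick way to see this stat for your own session is a join of v$mystat against v$statname (a sketch; the stat name must match exactly):

```sql
-- Show how many bytes of I/O the columnar cache saved for this session
SELECT s.name, m.value
FROM   v$statname s, v$mystat m
WHERE  s.statistic# = m.statistic#
AND    s.name = 'cell physical IO bytes saved by columnar cache';
```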
But in 12.1.0.2 we had also introduced Database In-Memory (DBIM), which rewrote heap blocks into pure columnar form in memory. That project introduced:
- new columnar formats optimized for query performance
- a new way of compiling predicates that supported better columnar execution
- the ability to run predicates against columns using SIMD instructions which could execute the predicate against multiple rows simultaneously
so it made perfect sense to rework the columnar cache in 12.2 to take advantage of the new In-Memory optimizations.
Quick reminder of DBIM essentials
In 12.2, tables have to be marked manually for DBIM using the INMEMORY keyword:
SQL> ALTER TABLE <table_name> INMEMORY;
When a table tagged as INMEMORY is first scanned, a background process is notified to start loading it into the In-Memory area. This design was adopted so that the user’s scans are not impeded by the loading. Subsequent scans will check what percentage of the segment is currently loaded in memory and make a rule-based decision:
For Exadata
- Greater than 80% populated, use In-Memory and the Buffer Cache for the rest
- Less than 80% populated, use Smart Scan
- The 80% cutoff is configurable with an undocumented underscore parameter
- Note: if an In-Memory scan is selected, even for a partially populated segment, Smart Scan is not used
For non-Exadata
- Greater than 80% populated, use In-Memory and the Buffer Cache for the rest
- Less than 80% populated, use In-Memory and Direct Read for the rest (note: this requires the segment to be checkpointed first)
- The 80% cutoff between Buffer Cache and Direct Read is configurable with an undocumented underscore parameter
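The population percentage that the rules above refer to can be checked from V$IM_SEGMENTS (a sketch; MYTABLE is a placeholder segment name):

```sql
-- Percentage of each INMEMORY segment currently populated
SELECT segment_name,
       populate_status,
       ROUND(100 * (bytes - bytes_not_populated) / bytes, 1) AS pct_populated
FROM   v$im_segments
WHERE  segment_name = 'MYTABLE';
```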
While DBIM can provide dramatic performance improvements, it is limited by the amount of usable SGA on the system that can be set aside for the In-Memory area. Beyond that, performance falls back to that of disk access to heap blocks from the flash cache or disk groups. What was needed was a way to increase the In-Memory area so that cooler segments could still benefit from the In-Memory formats without using valuable RAM, which is often a highly constrained resource.
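To see how much memory is set aside for the In-Memory area and how much of it is actually in use, you can check the INMEMORY_SIZE parameter and V$INMEMORY_AREA (a sketch):

```sql
SQL> SHOW PARAMETER inmemory_size

-- Allocation and usage of the In-Memory area pools
SELECT pool, alloc_bytes, used_bytes, populate_status
FROM   v$inmemory_area;
```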
Cellmemory
Cellmemory works in a similar way to the 12.1.0.2 columnar cache in that 1 MB chunks of HCC formatted blocks are cached in columnar form automatically, without the DBA needing to be involved or do anything. This means eligible scans are cached after Smart Scan has processed the blocks, rather than before processing as happens with ineligible scans. The 12.1.0.2 columnar cache format simply takes all the column-1 compression units (CUs) and stores them contiguously, then all the column-2 CUs stored contiguously, and so on, so that each column can be read directly from the cache without reading unneeded columns. This happens during Smart Scan, so the reformatted blocks are returned to the cell server along with the query results for that 1 MB chunk.
In 12.2, eligible scans continue to be cached in the 12.1.0.2 columnar cache format after Smart Scan has processed the blocks, so that columnar disk I/O is available immediately. The difference is that, if and only if the In-Memory area is in use on the RDBMS node (i.e. the In-Memory feature is already in use in that database), the address of the beginning of the columnar cache entry is added to a queue so that a background process running at a lower priority can read those cache entries and invoke the DBIM loader to rewrite each cache entry into In-Memory formatted data.
Unlike the columnar cache, which has multiple column-1 CUs, the In-Memory format creates a single CU per column using the new formats, which use semantic compression and support running SIMD predicates directly against the compressed data. By default, a second round of compression using LZO is then applied. When 1 MB of HCC ZLIB compressed blocks is rewritten in this way, it typically takes around 1.2 MB (YMMV obviously).
Coming up
Upcoming blog entries will cover:
- Overriding the default behaviour with DDL
- New RDBMS stats for Cellmemory
- New cellsrv stats for Cellmemory
- Flash cache pool changes
- Tracing Cellmemory