Exadata is designed to present a well-balanced system between disk scan rates, cell CPU processing rates, network bandwidth, and RDBMS CPU, such that averaged across a wide range of workloads and applications there should be no one obvious bottleneck. Good performance engineering means avoiding a design like this:

because that would be nuts. If you change one component in a car, e.g. the engine, you need to rework the other components, such as the chassis and the brakes. When we rev the Exadata hardware we go through the same exercise to make sure the components match and bottlenecks are avoided.
One component of a well-balanced system that people often leave off this list is the compression algorithm. With compression you are balancing storage costs, as well as disk scan rates, against the CPUs' decompression rate. In a disk-bound system you can alleviate the bottleneck by compressing the data down to the point that it is being scanned off disk at the same rate that the CPUs can decompress and process it. Squeeze it too far, say with HCC "Compress For Archive High", which uses BZ2, and the CPUs become a major bottleneck because the decompression costs unbalance the system (though it's wonderful if you think you're unlikely to ever need the data again).
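To make that trade-off concrete, here is a minimal back-of-envelope sketch. This is not Oracle's model; all rates and compression ratios below are invented illustrative numbers chosen only to show the shape of the curve:

```python
# Back-of-envelope model (illustrative numbers, not Oracle specs) of how
# compression shifts the bottleneck between disk scan rate and CPU
# decompression rate. Both rates are in GB/sec of *compressed* bytes.

def effective_scan_rate(disk_rate, decomp_rate, compression_ratio):
    """Uncompressed GB/sec delivered to the query engine.

    Whichever of the disk or the CPUs is slower at moving compressed
    bytes caps the pipeline; the compression ratio then scales the
    result back up to uncompressed bytes.
    """
    return min(disk_rate, decomp_rate) * compression_ratio

# Hypothetical figures showing the three regimes:
light    = effective_scan_rate(disk_rate=25, decomp_rate=100, compression_ratio=2)  # disk bound
balanced = effective_scan_rate(disk_rate=25, decomp_rate=25,  compression_ratio=4)  # balanced
heavy    = effective_scan_rate(disk_rate=25, decomp_rate=5,   compression_ratio=8)  # CPU bound, BZ2-like

print(light, balanced, heavy)  # 50 100 40
```

Note that the over-compressed case delivers *less* uncompressed data per second than the lightly compressed one, even though it reads far fewer disk blocks: the decompression cost has unbalanced the system.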
Since HCC was released with Exadata 11.2, DBAs have almost always used Query High (ZLIB), since it balances spinning disks against the Xeon CPUs quite nicely while keeping the storage size small enough to retrieve in a single disk I/O on 256 KB formatted extents. With Query High we tend to see a roughly equal chance of a given query being disk bound or CPU decompression bound.
Prior to X3, HCC Query Low (LZO) was never a performance option for scans: the reduced CPU load couldn't compensate for the 2X disk reads required, so even on flash, Query Low scans were going to be disk bound. The only utility for Query Low was when there were a lot of single-row reads, where the 32 KB storage size combined with cheap decompression made row retrieval much faster than Query High, which decompresses twice as much data with a more expensive algorithm. With the X3 flash cache we saw something new: the Xeons would decompress LZO at a speed that very roughly matched the SSD scan speeds of the flash cache. While some queries could now run faster using Query Low than Query High, that didn't provide a compelling reason to use Query Low, since it consumes not only twice as much storage space but also twice as much space in the flash cache, and you had to be sure the data was in the flash cache to see that speed.
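The single-row argument can be sketched in the same back-of-envelope style. Assume, purely for illustration, that fetching one row costs a fixed I/O latency plus decompressing one compression unit; the unit sizes follow the 32 KB versus twice-as-much shape described above, but the latency and decompression rates are invented, not measurements:

```python
# Rough single-row retrieval model (illustrative, not measured numbers):
# one I/O of the compression-unit size plus the CPU time to decompress
# that unit. Sizes in KB, decompression rates in MB/sec.

def row_fetch_us(cu_kb, io_us, decomp_mb_s):
    """Approximate microseconds to fetch and decompress one unit."""
    return io_us + (cu_kb / 1024.0) / decomp_mb_s * 1_000_000

# Hypothetical figures: LZO-style cheap decompression on a 32 KB unit
# versus ZLIB-style costlier decompression on a unit twice that size.
query_low  = row_fetch_us(cu_kb=32, io_us=200, decomp_mb_s=800)
query_high = row_fetch_us(cu_kb=64, io_us=200, decomp_mb_s=250)

print(f"Query Low ~{query_low:.0f} us, Query High ~{query_high:.0f} us")
```

With numbers in this ballpark the small, cheaply decompressed unit wins comfortably on per-row latency, which is exactly the niche Query Low occupied.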
With the introduction of X5 just over a year ago, the balance changed. I am a huge fan of the X5-EF with its flash-only configuration. While the X5-HC offers 10% more storage capacity and a flash cache that is as fast as flash tablespace, the X5-EF offers consistency: you never need to ask whether the data is in the cache or pay the overhead of maintaining the cache. It also offers the chance to leave the past, and the notion of spinning disks, behind. The X5-EF can scan at over 250 GB/sec and offers 288 fast cores in the storage layer for decompression. Clearly, if your primary concern is storage capacity you will want to stay with Query High, but if we are thinking purely in terms of performance, the time has come for a reassessment of the role of HCC Query Low on X5. This will be the topic of the next blog entry.
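As a quick sanity check on those figures, two lines of arithmetic using only the scan rate and core count quoted above show what each storage core would have to sustain for decompression to keep pace with the flash (which algorithm can actually do that is a question for the next entry):

```python
# From the X5-EF figures quoted above: a 250 GB/sec scan rate spread
# across 288 storage cores. For the CPUs not to become the bottleneck,
# each core must decompress its share of the stream.
scan_gb_per_sec = 250
storage_cores = 288

per_core_gb_per_sec = scan_gb_per_sec / storage_cores
print(f"each core must sustain ~{per_core_gb_per_sec:.2f} GB/sec")
```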
In the meantime, I'll leave you with my own mental image of the X5-EF: the sleek, well-balanced, very responsive BMW i8, which can do 0 to 60 in 4.4 seconds and 50 to 75 mph in 2.6 seconds (hmmm, though since the i8 is a hybrid, maybe it corresponds to the X5-HC … just a thought).
