If you turn on NSMTIO tracing, you will see references to VLOT:
qertbFetch:[MTT < OBJECT_SIZE < VLOT]: Checking cost to read from caches (local/remote) and checking storage reduction factors (OLTP/EHCC Comp)
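For reference, this trace can be enabled per session; a minimal sketch using the 12c UTS event syntax (the exact form may vary by version):

alter session set events 'trace[NSMTIO]';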
I had said you could ignore VLOT, but Frits Hoogland pointed out that tracing showed it had some impact, so let me clarify:
VLOT is the absolute upper bound above which cached reads are not even considered.
This defaults to 500% of the number of buffers in the cache, i.e.
_very_large_object_threshold = 500
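If you want to verify the value on your own system, hidden parameters can be read as SYS from the x$ fixed tables; a sketch, assuming the usual x$ksppi/x$ksppsv layout (version dependent):

select i.ksppinm, v.ksppstvl
from x$ksppi i, x$ksppsv v
where i.indx = v.indx
and i.ksppinm = '_very_large_object_threshold';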
While this number is not used in any of the calculations themselves, it is used in two places as a cutoff that decides whether those calculations happen at all:
1) Can we consider using Automatic Big Table Caching (a.k.a. DWSCAN) for this object?
2) Should we do a cost analysis for Buffer Cache scan vs Direct Read scan on tables larger than the MTT?
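On point 1), remember that ABTC only comes into play at all when part of the cache has been reserved for it; a sketch, with 40 as a purely illustrative percentage:

alter system set db_big_table_cache_percent_target = 40;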
The logic for tables above the calculated medium table threshold (MTT) that are NOT the target of searched DML and are NOT on Exadata with the statistics-based storage reduction factor enabled (_statistics_based_srf_enabled) is:
- If _serial_direct_read == ALWAYS, use Direct Read
- If _serial_direct_read == NEVER, use Buffer Cache
- If _serial_direct_read == AUTO and #blocks in table < VLOT, use cost model
- Else (i.e. #blocks >= VLOT) use Direct Read: "qertbFetch:DirectRead:[OBJECT_SIZE>VLOT]"
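If you want to walk each of these branches yourself, the decision can be steered per session; a sketch, noting that underscore parameters should only be changed for testing:

alter session set "_serial_direct_read" = always; -- force Direct Read
alter session set "_serial_direct_read" = never;  -- force Buffer Cache
alter session set "_serial_direct_read" = auto;   -- the default: cost model below VLOT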
In practice, 5X the buffer cache is so large that the cost-based decision will come to the same conclusion anyway; the default VLOT simply saves the time spent doing the analysis.
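To see where that default cutoff falls on your system, you can compute 5X the default cache in blocks; a sketch assuming an 8 KB block size (substitute your db_block_size):

select 5 * current_size / 8192 as vlot_cutoff_blocks
from v$sga_dynamic_components
where component = 'DEFAULT buffer cache';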
For example, I got a quick count of the number of blocks in the non-partitioned TPC-H scale 1 LINEITEM table:
select segment_name, sum(blocks), sum(bytes) from user_extents where segment_name='LINEITEM' group by segment_name;
and created my buffer cache to be exactly the same size. With this setup, setting _very_large_object_threshold=100 bypassed the cost model and went straight to a Direct Read scan (at 100% of the cache the table was no longer strictly smaller than the threshold), while setting it to 200 forced the use of the cost model.
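The experiment is easy to repeat; a sketch, assuming the parameter is session-settable on your version (re-run the scan and check the NSMTIO trace after each change):

alter session set "_very_large_object_threshold" = 100;
select /*+ full(l) */ count(*) from lineitem l; -- trace shows a Direct Read scan
alter session set "_very_large_object_threshold" = 200;
select /*+ full(l) */ count(*) from lineitem l; -- trace shows the cost model being run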
The moral is that the default value of VLOT rarely changes the decisions made. Only if you reduce VLOT to a much smaller multiple of the cache size will you start to see a few more of your larger buffer cache scans move to direct read, because they are no longer eligible for the cost analysis. If you wish to stop some of the largest buffer cache scans from happening, you would need to set _very_large_object_threshold to less than 200.