Years ago, I had to solve a problem of cluster-wide contention on the blocks of a log table caused by a massive number of inserts from numerous sessions.
Since the table was essentially insert-only and reads were extremely rare (only during issue investigations), the obvious solution was to have the sessions on each cluster node write to their own segments.
To implement this, we added an INSTANCE column to the table and list-partitioned the data on it:
create table t_part(ID int, instance int default sys_context('userenv','instance'))
partition by list(instance) (
   partition p1 values(1),
   partition p2 values(2),
   partition p3 values(3),
   partition p4 values(4),
   partition pX values(default)
);
Essentially, we created four partitions for instance numbers 1 to 4, along with a default partition (pX) to accommodate any unexpected increase in the number of nodes.
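With this default in place, the application code needs no changes: each session's rows are automatically routed to the partition of its own node. A quick sanity check (a sketch; which partition you hit depends on the instance your session is connected to):

insert into t_part(ID) values(1);
commit;
-- a session connected to instance 1 finds its row in partition p1:
select * from t_part partition (p1);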
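Should a fifth node ever join the cluster, its rows simply land in pX; they can later be moved to a dedicated partition with a standard list-partition split (shown here only as a possible maintenance step, not part of the original setup):

alter table t_part split partition pX values(5)
into (partition p5, partition pX);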
Since then, I have successfully applied this approach in numerous more complex scenarios, including ones with frequent reads, so I decided to document the technique for sharing and future reference.
While the example above is simplified, real-world implementations often require additional considerations. For instance, the table might also need to be range-partitioned by time, which calls for composite partitioning:
create table t_log(ID int, tmstmp timestamp, instance int default sys_context('userenv','instance'))
partition by range(tmstmp) interval(interval '1' day)
subpartition by list(instance)
subpartition template (
   subpartition sub_p1 values(1),
   subpartition sub_p2 values(2),
   subpartition sub_p3 values(3),
   subpartition sub_p4 values(4),
   subpartition sub_pX values(default)
)
(
   partition p1 values less than(timestamp'2024-12-01 00:00:00')
);
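Reads benefit as well: a time-bounded query prunes to a single daily partition, and an additional predicate on the instance column narrows the scan to a single subpartition. For example (the date range is purely illustrative):

select count(*)
from t_log
where tmstmp >= timestamp'2024-12-02 00:00:00'
and tmstmp < timestamp'2024-12-03 00:00:00';

-- with the instance predicate, only one subpartition is scanned:
select count(*)
from t_log
where tmstmp >= timestamp'2024-12-02 00:00:00'
and tmstmp < timestamp'2024-12-03 00:00:00'
and instance = 1;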