Cache-Efficient Data Structures for Modern Memory Hierarchies in C++

Optimizing algorithmic performance through cache-efficient techniques is crucial in modern computing due to the widening gap between processor speed and memory access latency. This paper investigates cache efficiency in sorting algorithms—specifically, selection sort, quick sort, and merge sort—and in matrix multiplication using loop tiling and blocking techniques. Additionally, linked list and queue traversal are examined to compare cache-aware and cache-oblivious strategies. By measuring both memory usage and execution time in nanoseconds, we demonstrate how cache optimization improves data access patterns, reduces latency, and increases overall efficiency. Our findings indicate that cache-efficient implementations yield significant performance gains, providing insights for optimized data processing in memory-intensive applications.
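As an illustration of the loop tiling (blocking) technique discussed above, the following is a minimal C++ sketch of blocked matrix multiplication. The function name `tiled_matmul` and the default tile size of 64 are illustrative assumptions, not the paper's implementation; in practice the tile size would be tuned to the target machine's cache sizes.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Multiply two N x N row-major matrices with loop tiling (blocking):
// the loops are restructured so each pass works on B x B sub-blocks
// of A, B, and C, keeping the touched data resident in cache.
// Note: tile size B = 64 is an assumed default; tune it per machine.
void tiled_matmul(const std::vector<double>& A,
                  const std::vector<double>& Bm,
                  std::vector<double>& C,
                  std::size_t N, std::size_t B = 64) {
    for (std::size_t ii = 0; ii < N; ii += B)
        for (std::size_t kk = 0; kk < N; kk += B)
            for (std::size_t jj = 0; jj < N; jj += B)
                // Inner loops stay inside one B x B tile of each matrix.
                for (std::size_t i = ii; i < std::min(ii + B, N); ++i)
                    for (std::size_t k = kk; k < std::min(kk + B, N); ++k) {
                        double a = A[i * N + k];  // reused across the j loop
                        for (std::size_t j = jj; j < std::min(jj + B, N); ++j)
                            C[i * N + j] += a * Bm[k * N + j];
                    }
}
```

The k loop is hoisted inside the i loop so that `A[i*N + k]` is loaded once and reused across a full row segment of C, while successive accesses to `Bm` and `C` remain unit-stride and cache-friendly.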

Index Terms–Cache efficiency, sorting algorithms, matrix multiplication, loop tiling, blocking, linked lists, queue traversal, memory optimization, C++