Weekly Shaarli

All links of one week in a single page.

Week 29 (July 15, 2024)

Beating the L1 cache with value speculation

Given a linked list whose nodes happen to sit in consecutive memory, traversing it already benefits from the L1 cache. However, it's possible to squeeze out more performance by guessing the next node's address and letting the branch predictor drive speculative execution, which breaks the dependency on the pointer load and gives better parallelism in the CPU pipeline. This is a simple and interesting trick, although I can't think of many practical uses beyond specific scenarios. A minimal sketch of the idea is shown below.
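
A minimal sketch of the trick in C, assuming nodes allocated from one contiguous array so that `node->next` usually points to `node + 1` (the types, names, and benchmark setup are illustrative, not the article's code):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative node layout: nodes come from one contiguous array,
 * so node->next usually equals node + 1. */
typedef struct Node {
    uint64_t value;
    struct Node *next;
} Node;

/* Baseline traversal: each iteration must wait for the load of
 * node->next before the next iteration can start (a serial chain). */
uint64_t sum_plain(Node *node) {
    uint64_t sum = 0;
    while (node) {
        sum += node->value;
        node = node->next;
    }
    return sum;
}

/* Value speculation: guess that the next node sits right after the
 * current one. When the guess holds (the common case here), the branch
 * predictor predicts the comparison correctly and the CPU can start the
 * next iteration from node + 1 without waiting for the load of
 * node->next, overlapping iterations in the pipeline.
 * Caveat: an optimizing compiler may fold both branches back into
 * node = next; the article keeps the guess alive with inline assembly. */
uint64_t sum_speculative(Node *node) {
    uint64_t sum = 0;
    while (node) {
        sum += node->value;
        Node *next = node->next;
        if (next == node + 1)
            node = node + 1;   /* guessed pointer: no dependency on the load */
        else
            node = next;       /* fallback when the guess is wrong */
    }
    return sum;
}

int main(void) {
    enum { N = 1 << 20 };
    Node *nodes = malloc(N * sizeof(Node));
    for (int i = 0; i < N; i++) {
        nodes[i].value = i;
        nodes[i].next = (i + 1 < N) ? &nodes[i + 1] : NULL;
    }
    printf("%llu %llu\n",
           (unsigned long long)sum_plain(nodes),
           (unsigned long long)sum_speculative(nodes));
    free(nodes);
    return 0;
}
```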