Given a linked list whose nodes happen to sit in consecutive memory, traversing it can take advantage of the L1 cache. It's possible to squeeze out more performance by hinting the branch predictor, which helps speculative execution and keeps the CPU pipeline fuller. A simple and interesting trick, though I can't think of many practical uses outside of specific scenarios (see the sketch below).
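A minimal sketch of the idea, assuming GCC/Clang's `__builtin_expect` as the branch hint; the node layout and names here are my own illustration, not taken from the linked article:

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical node type: the nodes are allocated back-to-back in one
 * array, so the "linked" list actually streams through the L1 cache. */
struct node {
    int value;
    struct node *next;
};

/* Branch hints via GCC/Clang's __builtin_expect. */
#define likely(x)   __builtin_expect(!!(x), 1)
#define unlikely(x) __builtin_expect(!!(x), 0)

static long sum_list(const struct node *head)
{
    long sum = 0;
    /* The "node exists" branch is taken on every iteration except the
     * last one, so mark it as likely: the compiler lays out the hot
     * path as straight-line code, which the predictor/speculation
     * handles cheaply. */
    while (likely(head != NULL)) {
        sum += head->value;
        head = head->next;
    }
    return sum;
}

int main(void)
{
    enum { N = 1000 };
    /* Allocate all nodes contiguously so the list sits on consecutive
     * memory, then link them in order. */
    struct node *nodes = malloc(N * sizeof *nodes);
    if (!nodes) return 1;
    for (int i = 0; i < N; i++) {
        nodes[i].value = i;
        nodes[i].next = (i + 1 < N) ? &nodes[i + 1] : NULL;
    }
    printf("%ld\n", sum_list(&nodes[0]));
    free(nodes);
    return 0;
}
```

Whether the hint actually measures as faster will depend on the compiler and CPU; on a loop this predictable the hardware predictor usually gets it right on its own.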