The CPU’s cache reduces memory latency by keeping recently used data close to the processor, so it doesn’t have to be fetched from the main system memory every time.

Developers can and should take advantage of the CPU cache to improve application performance.

How CPU caches work
Modern CPUs typically have three levels of cache, labeled L1, L2, and L3; the numbers reflect the order in which the CPU checks them.

CPUs often have a dedicated data cache, an instruction cache (for code), and a unified cache (which holds both data and instructions).

Accessing these caches is much faster than accessing RAM: typically, the L1 cache is about 100 times faster than RAM for data access, and the L2 cache is about 25 times faster than RAM.
When your software runs and needs to pull in data or instructions, the CPU caches are checked first, then the slower system RAM, and finally the much slower disk drives.
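
If you want to see what your own machine’s cache hierarchy looks like, one quick way on Linux with glibc is to ask sysconf() for the reported cache sizes. This is a minimal sketch assuming a glibc system; the _SC_LEVEL* constants are a GNU extension, and on platforms where the information isn’t exposed they may simply return 0 or -1.

    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* GNU extension: ask the C library what cache sizes it reports.
           Values may be 0 or -1 if the information is unavailable. */
        long l1d  = sysconf(_SC_LEVEL1_DCACHE_SIZE);
        long l1i  = sysconf(_SC_LEVEL1_ICACHE_SIZE);
        long l2   = sysconf(_SC_LEVEL2_CACHE_SIZE);
        long l3   = sysconf(_SC_LEVEL3_CACHE_SIZE);
        long line = sysconf(_SC_LEVEL1_DCACHE_LINESIZE);

        printf("L1 data:    %ld bytes\n", l1d);
        printf("L1 instr:   %ld bytes\n", l1i);
        printf("L2:         %ld bytes\n", l2);
        printf("L3:         %ld bytes\n", l3);
        printf("Cache line: %ld bytes\n", line);
        return 0;
    }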

That’s why you want to optimize your code and data layout so that what the program needs next is likely to be found in the CPU cache.
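
To make the idea concrete, here is a minimal sketch in C (an illustration, not code from the article): both functions sum the same two-dimensional array, but the row-by-row version walks memory sequentially, so every cache line fetched from RAM is fully used before it is evicted, while the column-by-column version strides across rows and keeps missing the cache once the array is larger than the last-level cache.

    #include <stddef.h>

    #define ROWS 2048
    #define COLS 2048

    static double grid[ROWS][COLS];

    /* Row-major traversal: consecutive elements share cache lines,
       so most accesses hit the L1/L2 cache. */
    double sum_row_major(void) {
        double sum = 0.0;
        for (size_t r = 0; r < ROWS; r++)
            for (size_t c = 0; c < COLS; c++)
                sum += grid[r][c];
        return sum;
    }

    /* Column-major traversal: each access lands COLS * sizeof(double)
       bytes away from the previous one, so cache lines are evicted
       before they are reused and many accesses fall through to RAM. */
    double sum_col_major(void) {
        double sum = 0.0;
        for (size_t c = 0; c < COLS; c++)
            for (size_t r = 0; r < ROWS; r++)
                sum += grid[r][c];
        return sum;
    }

On typical hardware the row-major version runs several times faster, even though both functions perform exactly the same arithmetic; the only difference is how well each one uses the cache.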
