locality of reference

description: principle describing the tendency of programs to repeatedly access the same or nearby memory locations within a short period of time

7 results

Write Great Code, Volume 2

by Randall Hyde  · 6 Aug 2012  · 828pp  · 205,338 words

determine which constants you access often and place the variables for those constants at adjacent memory locations. Because of the way most CPUs handle spatial locality of reference (that is, accessing nearby variables; see Write Great Code, Volume 1), when you access one of these constant objects, the cache line will be filled
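
The advice in this excerpt can be sketched as follows; the struct, names, and constant values are invented for illustration and are not taken from the book:

```cpp
#include <cassert>

// Hedged sketch: packing the most-frequently-read constants into one
// contiguous object means a single cache-line fill makes all of them
// cheap to access afterward.
struct HotConstants {
    double pi;
    double e;
    double sqrt2;
};

static const HotConstants kHot = {3.141592653589793, 2.718281828459045,
                                  1.4142135623730951};

double circleArea(double r) {
    // Reading kHot.pi pulls the whole struct's cache line in, so a
    // following read of kHot.e or kHot.sqrt2 is likely a cache hit.
    return kHot.pi * r * r;
}
```

Scattering the same constants across separate globals gives the linker freedom to place them far apart, defeating the single-line fill the excerpt describes.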

.5.1 Improving the Efficiency of Certain if/else Statements SPARC processor family, 4.1 Learning One Assembly Language Is Good; More Is Better Spatial locality of reference, 7.7 Floating-Point Constants Specialized source file formats, 5.2.1 Tokenized Source Files Speed optimization, 5.4.4.5 Controlling Compiler Optimization Speeding

Effective STL: 50 Specific Ways to Improve Your Use of the Standard Template Library

by Scott Meyers  · 16 Jun 2001  · 474pp  · 91,222 words

that objects in certain containers are typically used together, so you’d like to place them near one another in a special heap to maximize locality of reference. Or you’d like to set up a unique heap that corresponds to shared memory, then put one or more containers in that memory so

if you expect to have lots of short strings and either (1) your release environment has very little memory or (2) you are concerned about locality of reference and want to cluster strings on as few pages as possible. Clearly, string implementations have more degrees of freedom than are apparent at first glance

trite but true, and one of them is that size matters. Others are less trite but no less true, and one of those is that locality of reference matters, too. Consider first the size issue. Suppose we need a container to hold Widget objects, and, because lookup speed is important to us, we

top of the containers’ allocators—see Items 10 and 11) to achieve such clustering, but if your STL implementation fails to take steps to improve locality of reference among tree nodes, the nodes could end up scattered all over your address space. That would lead to even more page faults. Even with the
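
Meyers points to custom allocators (Items 10 and 11) as the mechanism for such clustering. The bump allocator below is a minimal sketch of that idea, assuming a single fixed-size arena and no per-node deallocation; the names and sizes are illustrative, not Meyers' code, and a production allocator would need growth, alignment, and exception handling:

```cpp
#include <cstddef>
#include <functional>
#include <set>
#include <vector>

// Minimal arena ("bump") allocator sketch: hands out consecutive bytes
// so tree nodes land next to each other instead of being scattered
// across the address space.
template <typename T>
struct ArenaAlloc {
    using value_type = T;
    static inline std::vector<char> arena;  // one shared backing buffer
    static inline std::size_t offset = 0;

    ArenaAlloc() = default;
    template <typename U> ArenaAlloc(const ArenaAlloc<U>&) {}

    T* allocate(std::size_t n) {
        if (arena.empty()) arena.resize(1 << 20);  // 1 MiB, arbitrary
        // Align the bump pointer, then advance it past this allocation.
        offset = (offset + alignof(T) - 1) & ~(alignof(T) - 1);
        T* p = reinterpret_cast<T*>(arena.data() + offset);
        offset += n * sizeof(T);
        return p;
    }
    void deallocate(T*, std::size_t) {}  // arena is reclaimed wholesale
};

template <typename T, typename U>
bool operator==(const ArenaAlloc<T>&, const ArenaAlloc<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const ArenaAlloc<T>&, const ArenaAlloc<U>&) { return false; }

// Demo: a set whose nodes all come out of the arena, back to back.
std::size_t fillSet(int n) {
    std::set<int, std::less<int>, ArenaAlloc<int>> s;
    for (int i = 0; i < n; ++i) s.insert(i);
    return s.size();
}
```

Because successive `allocate` calls return adjacent addresses, an in-order walk of the tree touches far fewer pages than it would with nodes sprinkled by a general-purpose heap.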

to 63 local classes definition of 189 type parameters and 189 locales 232–233 case-insensitive string comparisons and 229–237 facets and 234–235 locality of reference 103 improving via allocators 55 locking objects 60 logarithmic time complexity 6 linear complexity vs. 190 meaning for binary search algorithms 147 longest algorithm name

Write Great Code, Volume 1

by Randall Hyde  · 6 Aug 2012  · 894pp  · 190,485 words

memory locations repeatedly. Furthermore, you’ll also discover that a program often accesses adjacent memory locations. The technical names given to these phenomena are temporal locality of reference and spatial locality of reference. When exhibiting spatial locality, a program accesses neighboring memory locations within a short period after the initial memory access. When displaying temporal

locality of reference, a program accesses the same memory location repeatedly during a short time. Both forms of locality occur in the following Pascal code segment: for i :=

0 to 10 do A [i] := 0; There are two occurrences each of spatial and temporal locality of reference within this loop. Let’s consider the obvious ones first. In this Pascal code, the program references the variable i several times. The for loop

the loop. The assignment statement also uses i as an array index. This shows temporal locality of reference in action because the CPU accesses i at three points in a short time period. This program also exhibits spatial locality of reference. The loop itself zeros out the elements of array A by writing a zero to

stores the elements of A in consecutive memory locations, each loop iteration accesses adjacent memory locations. There is an additional example of temporal and spatial locality of reference in this Pascal example. Machine instructions also reside in memory, and the CPU fetches these instructions sequentially from memory and executes these instructions repeatedly, once
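
The Pascal fragment above can be transliterated to C++, with the book's locality analysis carried over as comments:

```cpp
// for i := 0 to 10 do A[i] := 0;  -- the original Pascal loop
int zeroArraySum() {
    int A[11];
    for (int i = 0; i <= 10; ++i) {  // temporal: i is read and written
                                     // several times each iteration
        A[i] = 0;                    // spatial: A[0], A[1], ... occupy
                                     // consecutive memory locations
    }
    // The loop's machine instructions are themselves fetched from
    // consecutive addresses (spatial locality) and re-executed on every
    // iteration (temporal locality).
    int sum = 0;
    for (int v : A) sum += v;
    return sum;  // 0: every element was zeroed
}
```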

in the cache. In that case, the CPU has to read the data from main memory, incurring a performance loss. To take advantage of temporal locality of reference, the CPU copies data into the cache whenever it accesses an address not present in the cache. Because it is likely the system will access

be quite large (usually several megabytes) and work well for large systems with gigabytes of main memory. For programs that manipulate considerable data, yet exhibit locality of reference, a third-level caching subsystem can be very effective. 6.5 CPU Memory Access Most CPUs have two or three different ways to access memory

use virtual memory for everything. The whole point of having a memory hierarchy is to enable us to take advantage of the principles of spatial and temporal locality of reference to move often-referenced data into fast memory and leave less-often-used data in slower memory. Unfortunately, during the course


is not a panacea for slow memory access. In order for a cache system to be effective, software must exhibit locality of reference (either spatial or temporal). Fortunately, real-world programs tend to exhibit locality of reference, so most programs will benefit from the presence of a cache in the memory subsystem. But while programs do

exhibit locality of reference, this is often accidental; programmers rarely consider the memory-access patterns of their software when designing and coding. Unfortunately, application programmers who work under the

programs can coexist in main memory, paging also provides a mechanism whereby the operating system can move infrequently used pages to secondary storage. Just as locality of reference applies to cache lines, it applies to pages in main memory as well. At any given time, a program will only access a small percentage

a given level in the memory hierarchy to properly contain the program working sets of cache lines or pages A program that does not exhibit locality of reference If there is insufficient memory to hold a working set of pages or cache lines, the memory system will constantly be replacing one block of

the disk, this slows the programs down by a tremendous factor. We have already seen in this chapter that if the program does not exhibit locality of reference and the lower memory subsystems are not fully associative, then thrashing can occur even if there is free memory at the current level in the

can try to run fewer processes concurrently or modify your program so that it references less memory over a given period. To reduce thrashing when locality of reference is causing the problem, you should restructure your program and its data structures to make its memory references physically near one another. 11.7 NUMA

programming language uses row-major ordering for two-dimensional arrays in memory. The second code sequence here, therefore, accesses sequential locations in memory, exhibiting spatial locality of reference. The first code sequence does not access sequential memory locations. It accesses array elements in the following order: array[0][0] array[1][0] array
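
The two code sequences this excerpt contrasts look like the following in C++ (a small 3×4 array is assumed purely for illustration; the effect only becomes measurable at cache-exceeding sizes):

```cpp
constexpr int ROWS = 3, COLS = 4;

// C and C++ use row-major order: a[r][c] lives at base + r*COLS + c.
// Iterating c in the inner loop therefore touches consecutive
// addresses -- spatial locality of reference.
long sumRowMajor(const int (&a)[ROWS][COLS]) {
    long s = 0;
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            s += a[r][c];  // sequential memory accesses
    return s;
}

// Swapping the loops visits a[0][0], a[1][0], a[2][0], ... : strides of
// COLS * sizeof(int) bytes, with no spatial locality.
long sumColMajor(const int (&a)[ROWS][COLS]) {
    long s = 0;
    for (int c = 0; c < COLS; ++c)
        for (int r = 0; r < ROWS; ++r)
            s += a[r][c];  // same sum, worse access pattern at scale
    return s;
}

// Both orders compute the same result; only the memory traffic differs.
bool demoAgrees() {
    int a[ROWS][COLS] = {};
    int v = 0;
    for (int r = 0; r < ROWS; ++r)
        for (int c = 0; c < COLS; ++c)
            a[r][c] = ++v;  // fill with 1..12; the sum is 78 either way
    return sumRowMajor(a) == 78 && sumColMajor(a) == 78;
}
```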

big and little, 6.3 Big Endian Versus Little Endian Organization local bus, 12.4 I/O Speed Hierarchy locality, 6.4.3 Cache Memory locality of reference, 6.4.3 Cache Memory, 6.4.3 Cache Memory, 6.4.3 Cache Memory, 11.2 How the Memory Hierarchy Operates, 11.2 How

, 6.5.3 The Indexed Addressing Mode secondary cache, 6.4.3 Cache Memory smallest unit of, 6.2.1 8-Bit Data Buses spatial locality of reference, 6.4.3 Cache Memory, 11.2 How the Memory Hierarchy Operates, 11.2 How the Memory Hierarchy Operates speed rating, 6.4.1 Memory

of packed data, 3.5 Shifts and Rotates sparse files, 12.21.2 File Allocation Tables, 12.21.3 List-of-Blocks File Organization spatial locality of reference, 6.4.3 Cache Memory, 11.2 How the Memory Hierarchy Operates, 11.2 How the Memory Hierarchy Operates special expansion opcode, 10.3.4

The Architecture of Open Source Applications

by Amy Brown and Greg Wilson  · 24 May 2011  · 834pp  · 180,700 words

Recno supports variable-length values and Queue supports only fixed-length values). The main difference between Btree and Hash access methods is that Btree offers locality of reference for keys, while Hash does not. This implies that Btree is the right access method for almost all data sets; however, the Hash access method

The Art of Computer Programming: Sorting and Searching

by Donald Ervin Knuth  · 15 Jan 1998

idea, due to R. Sedgewick, saves some of the overhead that would be necessary if we applied straight insertion directly to each small subfile, unless locality of reference is significant.) c) Records with equal keys are exchanged, although it is not strictly necessary to do so. (This idea, due to R. C. Singleton

Beautiful Architecture: Leading Thinkers Reveal the Hidden Beauty in Software Design

by Diomidis Spinellis and Georgios Gousios  · 30 Dec 2008  · 680pp  · 157,865 words

not perform well. The expected access patterns also do not favor a database; a mechanism that handles continuous streaming of data with a lot of locality of reference might well be preferable. Not using a database for the data items would mean that transactional semantics on the operations on the store would have

The C++ Programming Language

by Bjarne Stroustrup  · 2 Jan 1986  · 923pp  · 516,602 words

the dependencies between different parts of a program explicit and tractable. More important, though, it reduces the number of dependencies in a system by improving locality of reference to data. The C++ Programming Language, Third Edition by Bjarne Stroustrup. Copyright ©1997 by AT&T. Published by Addison Wesley Longman, Inc. ISBN 0-201