Comp 40:
von Neumann Architecture
by Mark Sheldon

Nearly all modern computers are built on a basic model of computation named after John von Neumann. Understanding this model will help you understand, write, and debug programs in any language. It is absolutely essential if you are programming close to the machine, and that includes programming in C. It qualifies as a big idea in computing.

To begin, we separate the computing device into two components: the processor (aka the CPU), where simple calculations are performed, and the memory, where information can be stored. One of the key insights in the evolution of computing was that programs and data can both be represented using the same alphabet ({0, 1}) and stored in the same memory.

Memory is structured as a linear array of fixed-size units, each of which has an address. Addresses start at 0 and go up through the number of addressable units minus 1. Nearly all modern machines are byte addressable, which means that an address refers to an 8-bit byte. However, there are typically alignment constraints, e.g., a 32-bit data value must be stored starting at an address divisible by 4 (there are 4 bytes in a 32-bit word), a so-called word boundary. Similarly, 64-bit quantities must be stored on a double-word boundary (an address divisible by 8).

Note that a memory address is just a number, which can easily be represented in binary. Therefore, memory addresses themselves can be stored in memory along with program data and instructions. A bit pattern in memory has no inherent meaning: the same pattern may be interpreted as a CPU instruction, a signed or unsigned integer value, a floating point value, a character, a graphical bit map, anything. You cannot look at a bit pattern in memory and tell what the intended meaning is.

To be executed, a program must first be loaded into memory. At boot time, the computer is configured to start at a known memory location containing a very small program (stored in ROM) that loads the boot loader into memory and then executes it. The boot loader then loads the operating system into memory and starts that. The operating system can then load other programs into memory and run them.

In addition to the main memory, a CPU typically has a small number of machine-language-visible storage locations called registers. Normally, to be operated upon, a data value must first be brought into a CPU register.

Computers work using a fetch-decode-execute cycle. In essence, the computer hardware continuously executes a very tight loop. There are (at least) two special registers in the CPU that govern its operation: the program counter (PC), which holds the address of the next instruction to be executed, and the instruction register (IR), which holds an instruction while it's being interpreted by the CPU.

  1. Fetch the word at the address given by the PC from memory and store it in the IR.
  2. Decode the instruction in the IR and figure out what it is. The PC is incremented by the size of the instruction, so that by the time the instruction executes, the PC holds the address of the next instruction.
  3. Execute the decoded instruction (which may involve various functional units inside the CPU). The instruction may add the contents of two regular registers and store the result in another register. It may fetch data from memory and put it in a register. It may store a register's contents at some location in memory. (In these cases some other register will contain the relevant memory address.) It may jump to some other location, in which case the PC will be updated with that address.
  4. Go to 1.

Are there computers that don't use this model? Yes, but they're not very common. The von Neumann architecture, while a huge advance that unleashed a tidal wave of progress, does have limitations. See John Backus's Turing Award lecture, "Can Programming Be Liberated from the von Neumann Style? A Functional Style and Its Algebra of Programs," for a rather explicit discussion of the model's limitations and one approach to moving beyond it.


Look at array_sum.c. This is dreadful C code: public global variables and labels/gotos are generally not recommended. The program was designed to map very directly to the machine code.


% gcc -Wall -ansi -pedantic -o array_sum.s -S array_sum.c
% gcc -g -Wall -ansi -pedantic -o array_sum-g.s -S array_sum.c
% gcc -g -Wall -ansi -pedantic -o array_sum array_sum.c

Explore the assembly code in each of the .s files. Where is the data? Where is the code? What assembly code labels correspond to the C source code labels? Can you find the address calculation that corresponds to the array indexing operation?

This code has a bug. Run it, and you will get the wrong answer (unless you are exceptionally lucky and running on a non-standard platform). Don't analyze the code yet.

Type gdb array_sum. Then type help. gdb is the GNU debugger. It's the reason we put -g in our gcc flags. You can also run gdb within Emacs by typing M-x gdb. You can specify the file to debug when Emacs lets you see the gdb command, or you can use the file command after gdb is running. There is an X Window System interface to gdb called ddd, which you may also use later.

Type list. Then type li. Then type li 1. If you combined multiple files to make the program, you can also specify a file name. Try li main and li alfred.

Take out a piece of paper and divide it into 4 sections labeled Text/Code, Static/Global, Heap, and Stack. Or you can use a pre-made memory template, but we can actually get the real addresses for the stack data. This form was created by a former colleague of mine named Jim Canning. Here are instructions from a former tutor.

Try print size, print, print &size, print array, print &array, print &array[0], print main, print &main, print print_array. What area of memory stores the values for these variables? Draw them on your diagram with correct relationships between them. Label them with their real addresses.

Try x size, x &size, x array, x &array, x/3xw &array[0], x/12xb array. What did you just learn about this computer's processor?

Verify that array is at the lowest address of any of the three global variables. If it isn't, then use the lowest address among the three rather than the address of the first element of array in the following. Type x/5xw array, x/1xw &array[-1], and x/1xw &array[11].

We haven't even run the program yet. Try printing out a local variable.

Now type break main (you can abbreviate this b main, and, in general, you can abbreviate any gdb command with its shortest distinguishing prefix). You're setting a break point, which is a place in the code where you'd like the debugger to stop whenever it gets there (presumably so you can explore what's happening). You can set a break point at a function definition, at a line number, or at a file name and line number (for programs built from multiple files). We've now arranged for the program to stop at the beginning of main as soon as we run it.

Now type run I love computing.

print argc
print &argc
print &argv
print argv
print *argv
print argv[0]

Fill in your memory template.

display array[size]
disp array
disp size
disp sum

Then try n (short for next), which executes the next source line and stops again, and check your understanding of what's happening. Continue stepping through the code one line at a time to the end (even if you figure out the bug along the way). If you haven't caught the bug yet, write down the contents of all the global variables exactly as they are laid out in memory. In fact, draw a picture of the state of memory showing the locations assigned to the variables and the contents of those locations. Then run the code again from the beginning and trace each variable reference, especially the array references, updating the memory contents in your picture as appropriate.

Noah Mendelsohn
Last Modified 4 August 2014