
Erasable Memory: Managing 2,048 Words of RAM on the Way to the Moon

How the AGC's programmers divided 4 kilobytes of magnetic core RAM among navigation, guidance, autopilot, and display—and why every word was a negotiation

Matt Dennis

The Apollo Guidance Computer had 2,048 words of erasable memory. Each word was 15 bits plus a parity bit. That’s 30,720 bits of writable storage—3,840 bytes. Not kilobytes. Bytes. The AGC’s entire working memory, the space where every variable was stored, every state vector maintained, every counter updated, every scratch calculation performed, was smaller than a single icon file on a modern desktop.
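The arithmetic behind those numbers is worth making explicit. A quick check (Python, purely for illustration):

```python
# 2,048 words of 15 data bits each; the 16th bit is parity, not storage.
WORDS = 2048
DATA_BITS = 15

total_data_bits = WORDS * DATA_BITS   # writable bits
total_data_bytes = total_data_bits // 8

assert total_data_bits == 30_720
assert total_data_bytes == 3_840      # under 4 KB of working memory
```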


This memory had to hold the spacecraft’s position and velocity in three-dimensional space (12 double-precision words for the two state vectors—vehicle and target). It had to hold the Digital Autopilot’s configuration parameters, the navigation filter’s uncertainty data, the current program’s working variables, the Executive’s job table, the Waitlist’s task table, the DSKY display buffers, the IMU compensation coefficients, and hundreds of other values that changed during flight. Every one of these allocations was a negotiation. Every word assigned to one function was a word unavailable to another. And the 2,048-word ceiling was fixed in hardware—no swap files, no virtual memory, no heap to grow into.


Magnetic Core: Memory You Can Touch

The AGC’s erasable memory used magnetic core technology—the same basic approach that dominated computer memory from the mid-1950s through the early 1970s. Each bit was stored in a tiny ferrite ring, or “core,” approximately 0.02 inches in outer diameter. The core could be magnetized in one of two directions—clockwise or counterclockwise—representing binary 0 or 1.


Reading a core was destructive. To determine a core’s state, a current pulse was sent through a wire threaded through it. If the pulse flipped the core’s magnetization, a voltage was induced in a sense wire—indicating the core had been in the opposite state. If the pulse didn’t flip the magnetization (because the core was already in the state the pulse was driving toward), no voltage was induced. Either way, the read operation set the core to a known state, destroying the original value. The memory controller had to immediately write the original value back—a “read-restore” cycle that took approximately 11.7 microseconds.


This read-restore cycle was the fundamental speed limit of the AGC’s memory system. Every memory access—read or write—consumed a full cycle. Instructions that accessed memory twice (read an operand, write a result) took two cycles. The 2.048 MHz clock couldn’t outrun the 11.7-microsecond core cycle time; the processor waited for memory on virtually every instruction. Effective throughput was limited not by logic speed but by the physics of tiny ferrite rings flipping their magnetic fields.
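The destructive read and its restore phase can be sketched as a toy model. This illustrates the principle only, not the actual drive and sense electronics:

```python
class CoreWord:
    """Toy model of one word of magnetic core memory."""

    def __init__(self, bits):
        self.bits = list(bits)            # 0/1 magnetization states

    def read_destructive(self):
        """Drive every core toward 0; cores that flip reveal a stored 1."""
        sensed = self.bits[:]             # the flips ARE the read-out
        self.bits = [0] * len(self.bits)  # original contents destroyed
        return sensed

    def read_restore(self):
        """Full cycle: destructive read, then immediate write-back."""
        value = self.read_destructive()
        self.bits = value[:]              # restore phase rewrites the word
        return value

word = CoreWord([1, 0, 1, 1, 0])
assert word.read_restore() == [1, 0, 1, 1, 0]
assert word.bits == [1, 0, 1, 1, 0]       # value survives a full cycle
```

Both phases together are what cost the 11.7 microseconds: the read alone would leave zeros behind.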


The physical memory was organized as a grid of cores woven onto wire planes. The 2,048 words occupied eight “banks” of 256 words each. Each bank was a separate physical module—a frame of wires with thousands of cores threaded onto them by hand during manufacture. The entire erasable memory assembly weighed roughly 4.5 pounds and consumed about 7 watts of power.


The core memory had one property that solid-state RAM lacks: it was non-volatile. Magnetic cores retained their state when power was removed. If the AGC lost power and was restarted, the erasable memory still contained whatever values had been stored before the power failure. This non-volatility was a mixed blessing—it meant the state vector survived power transients, but it also meant that corrupted data from a partial failure persisted across restarts unless the software explicitly cleared it.


The Memory Map: Who Got What

The 2,048 words of erasable memory were divided into a meticulously planned memory map. Every word had an assigned purpose, documented in the software specification and enforced by the assembly process. The major allocations included:


Unswitched erasable (addresses 0-767): The first three banks of erasable memory were permanently accessible regardless of the current bank-switching state. This “unswitched” region held the most critical and frequently accessed variables—the ones that every routine in the system needed to reach at any time.


The unswitched region contained:


  • Special registers (addresses 0-63): Hardware-interfacing registers including the accumulator (A), the lower accumulator (L), the return address register (Q), the program counter (Z), and the bank register (BB). These weren’t really “memory” in the conventional sense—they were processor registers mapped into the memory address space. Reading or writing address 0 read or wrote the A register directly.

  • Input/output channels: The AGC communicated with spacecraft hardware through numbered I/O channels. In the Block II machine these occupied their own small address space, reached by dedicated channel instructions rather than ordinary memory references (channels 1 and 2 were aliases of the L and Q registers). Writing a channel sent data to the DSKY, the engine gimbal actuators, the RCS jet valves, or the downlink telemetry formatter. Reading a channel retrieved data from the IMU, the optics encoders, the hand controllers, or the uplink receiver.

  • State vectors: The spacecraft’s own state vector (position and velocity, six double-precision words = 12 single-precision words) and the target vehicle’s state vector (another 12 words) were stored in unswitched erasable. These were the navigation foundation—every guidance computation started from these values.

  • Executive and Waitlist tables: The seven-entry job table and the seven-entry task table, each containing priority, restart data, and address information for active computations.

  • DSKY display buffers: The current VERB, NOUN, PROG, and R1/R2/R3 register values, plus the state machine data for PINBALL’s keystroke processing.

  • IMU compensation data: Coefficients for correcting known biases and scale factor errors in the inertial measurement unit’s gyroscopes and accelerometers. These were loaded before launch and remained constant during flight.

  • DAP configuration: The Digital Autopilot’s current parameters—deadband width, mass properties, jet configuration, rate limits.
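The cost of double precision in the list above is easy to see in miniature. Here is a hedged sketch of splitting one quantity across two 14-bit magnitude fields; the real machine used one's-complement words whose signs could disagree, so this simplification assumes unsigned halves:

```python
HALF = 1 << 14   # each AGC word carried 14 magnitude bits plus sign

def to_dp(value):
    """Split a non-negative integer into (upper, lower) 14-bit halves."""
    assert 0 <= value < HALF * HALF
    return value // HALF, value % HALF

def from_dp(upper, lower):
    """Recombine the halves: the upper word is weighted by 2**14."""
    return upper * HALF + lower

# One state-vector component costs two words; six components, twelve.
assert from_dp(*to_dp(123_456_789)) == 123_456_789
```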

Switched erasable (addresses 768-2047): The remaining five banks of erasable memory were “bank-switched”—only one bank was accessible at a time, selected by the erasable bank register (EBANK). The programmer set EBANK to reach the desired bank, performed the memory operations, and could then switch to a different bank. This was the AGC’s way of extending its address space beyond what the instruction word’s address field could directly specify.
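In decimal terms, the addressing scheme works out as follows. The function below is an illustrative model of the mapping, not the flight hardware's logic:

```python
BANK_SIZE = 256              # words per erasable bank
WINDOW_BASE = 3 * BANK_SIZE  # addresses 0-767: unswitched banks E0-E2

def resolve(addr, ebank):
    """Map an erasable address plus the EBANK setting to a word index.

    Addresses 768-1023 act as a movable window into banks E3-E7.
    """
    if addr < WINDOW_BASE:
        return addr                              # EBANK is ignored here
    return ebank * BANK_SIZE + (addr - WINDOW_BASE)

assert resolve(100, 7) == 100       # unswitched: same word for any EBANK
assert resolve(768, 3) == 768       # window start, bank E3
assert resolve(1023, 7) == 2047     # window end, bank E7: last word
```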


The switched banks held program-specific working variables:


  • Navigation filter data: The W-matrix components and covariance-related values for the Kalman filter, consuming dozens of words across multiple banks.

  • Guidance working storage: Intermediate values for the powered descent guidance equations, the entry guidance algorithm, and the rendezvous targeting computations. These values were needed only during the specific program that used them and could be overwritten when a different program ran—but the allocation had to be planned so that two programs sharing a bank didn’t collide.

  • Interpreter scratch area: The Interpreter maintained its own set of working registers in erasable memory—the MPAC (Multi-Purpose Accumulator), addressing registers, and temporary storage for intermediate results during interpretive execution.

  • VAC areas: The Executive allocated “VAC areas” (Vector Accumulator areas) from a pool of pre-assigned memory blocks when a job requested one via FINDVAC. Each VAC area was a contiguous block of 43 words that the job could use as scratch storage. The pool was limited—only five VAC areas existed—and exhausting them triggered the 1201 Executive overflow alarm (“no VAC areas”).

The VAC Area Economy

The five VAC areas were the AGC’s equivalent of a dynamic memory pool, and their management was one of the most constrained resource allocation problems in the system. When a job started via FINDVAC, the Executive assigned it one of the five available areas. When the job completed or was terminated, the area was returned to the pool. If all five areas were in use and a new FINDVAC request arrived, the Executive had no area to assign—and the 1201 program alarm resulted.


Five areas might sound generous for a system that could run only seven jobs, but the math was tight. During powered descent, the following jobs might be active simultaneously:


  • The descent guidance job (computing the next thrust direction)
  • The navigation integration job (updating the state vector from accelerometer data)
  • A display update job (formatting data for the DSKY)
  • A telemetry formatting job
  • A landing radar processing job

Five jobs, five VAC areas—zero margin. When the rendezvous radar’s spurious interrupts on Apollo 11 caused the system to schedule work faster than existing jobs completed, the VAC area and core set pools were exhausted and the 1201 and 1202 overflow alarms fired.


The programmers managed this scarcity through careful scheduling discipline. Jobs released their VAC areas as soon as they no longer needed them, not when they finally completed. Some jobs were written to use NOVAC (no VAC area) if they could manage with only the unswitched erasable space and the Interpreter’s scratch registers. Every FINDVAC call in the code was a conscious resource commitment, and the team tracked the maximum simultaneous VAC area usage for every mission phase to ensure the five-area ceiling was never breached under nominal conditions.
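The FINDVAC discipline described above amounts to a fixed-pool allocator. A minimal sketch with invented class and method names (only FINDVAC, the pool size, and the alarm number come from the AGC):

```python
VAC_AREAS = 5          # the entire dynamic pool

class Alarm1201(Exception):
    """Executive overflow: no VAC areas available."""

class Executive:
    def __init__(self):
        self.free_vacs = list(range(VAC_AREAS))   # indices of free areas

    def findvac(self):
        """Grant a VAC area to a new job, or raise the overflow alarm."""
        if not self.free_vacs:
            raise Alarm1201()
        return self.free_vacs.pop()

    def release(self, vac):
        """Return an area to the pool as soon as the job is done with it."""
        self.free_vacs.append(vac)

ex = Executive()
held = [ex.findvac() for _ in range(5)]   # five jobs: zero margin
try:
    ex.findvac()                          # a sixth request overflows
except Alarm1201:
    pass
```

Releasing early, as the text notes, is what keeps the sixth request from ever arriving while the pool is empty.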


Core Sets: The Other Scarce Pool

Separate from the VAC areas, the Executive maintained a pool of “core sets”—small blocks of erasable memory, one per active job, that held the job’s bookkeeping: its priority, its restart information, and its register state, including the Interpreter’s MPAC (Multi-Purpose Accumulator) and addressing registers. Every job needed a core set for as long as it existed, and when a job was preempted by a higher-priority job, the core set preserved the context it needed to resume later.


The AGC had a limited number of core sets—seven, one for each possible concurrent job. If all core sets were in use when a new job needed to be scheduled, the 1202 alarm—“Executive overflow, no core sets”—fired. This was the sibling of the 1201 alarm, representing exhaustion of a different resource pool but with the same consequence: the restart system shed low-priority work to free up core sets for critical jobs.


The core set and VAC area pools were independent but related constraints. A job might need one of each, or just one, or neither, depending on how it was created and whether it used the Interpreter. The maximum simultaneous demand on both pools had to stay within bounds for every conceivable combination of active programs and mission events. The analysis that verified this was one of the most painstaking parts of the software verification effort.
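That verification effort can be caricatured in a few lines. The job mix and resource needs below are entirely invented; only the pool sizes echo the text:

```python
CORE_SETS, VAC_AREAS = 7, 5   # pool sizes; every job needs a core set

# (job, needs_core_set, needs_vac_area) -- a hypothetical descent mix
powered_descent = [
    ("descent guidance",     True, True),
    ("state-vector update",  True, True),
    ("DSKY display update",  True, True),
    ("telemetry formatting", True, True),
    ("landing radar",        True, True),
]

def demand(jobs):
    """Simultaneous demand on the two pools for a given job mix."""
    cores = sum(1 for _, needs_core, _ in jobs if needs_core)
    vacs = sum(1 for _, _, needs_vac in jobs if needs_vac)
    return cores, vacs

cores, vacs = demand(powered_descent)
assert cores <= CORE_SETS and vacs <= VAC_AREAS   # 5 of 5 VACs: no margin
```

The real analysis did this for every conceivable combination of programs and mission events, by hand.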


Memory Overlays: Sharing Without Colliding

With only five banks of switched erasable memory to hold program-specific variables, the AGC couldn’t dedicate separate memory regions to every program. Programs that never ran simultaneously could share the same erasable memory locations—a technique called overlaying.


Within Luminary, the descent guidance variables (P63/P64/P66) and the ascent guidance variables (P12) never needed to be in memory at the same time—the LM could not be descending and ascending simultaneously. (Colossus’s entry programs, P61-P67, posed no conflict at all: they ran in a different computer aboard the CM.) The descent and ascent working variables could occupy the same erasable addresses. When the active program changed from descent to ascent, the variables were overwritten. The old values were gone, but they were no longer needed.


The overlay plan was documented in the memory map and enforced by the assembly process. The assembler assigned symbolic names to erasable memory addresses, and the overlay definitions specified which symbolic groups could share physical addresses. A mistake in the overlay plan—assigning two simultaneously active routines to the same memory—would result in one routine silently corrupting the other’s data. These mistakes were caught by analysis and testing, not by hardware protection, because the AGC had no memory protection hardware.
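A modern reader can picture the team's analysis as a pairwise check over the overlay plan. A sketch with entirely invented data:

```python
# Overlay rule: two routines may share addresses only if they can never
# be active in the same mission phase.
overlays = {
    "descent_guidance": {"addrs": range(900, 960), "phases": {"descent"}},
    "ascent_guidance":  {"addrs": range(900, 960), "phases": {"ascent"}},
    "display_update":   {"addrs": range(960, 980), "phases": {"descent", "ascent"}},
}

def conflicts(plan):
    """Return every pair that shares both a phase and an address."""
    found = []
    names = list(plan)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            same_phase = plan[a]["phases"] & plan[b]["phases"]
            same_addrs = set(plan[a]["addrs"]) & set(plan[b]["addrs"])
            if same_phase and same_addrs:
                found.append((a, b))
    return found

assert conflicts(overlays) == []   # descent/ascent share addresses safely
```

The silent-corruption failure mode described next is exactly what a pair flagged by such a check would produce.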


Overlay conflicts were among the most dangerous bugs in the AGC software, precisely because they were silent. A corrupted variable didn’t trigger an alarm. The affected routine simply computed with wrong data and produced wrong results. If the corrupted variable was a state vector component, the navigation solution would drift. If it was a guidance parameter, the thrust direction would be wrong. The only defense was meticulous analysis of the overlay plan and exhaustive simulation testing of every program transition.


Erasable Memory Initialization: What’s There at Power-Up

When the AGC was first powered on—typically during the pre-launch checkout on the pad—the erasable memory contained whatever values the core happened to have retained from the last power cycle (or random values if the cores had never been written). The initialization sequence—part of the “fresh start” code in the Executive—cleared critical areas of erasable memory to known values and loaded the initial parameters from tables in rope memory.


The state vectors were initialized with values uplinked from the ground before launch. The IMU compensation coefficients were loaded from a pre-computed table. The DAP configuration was set to the launch defaults. Program variables were cleared or set to their initial states.


During flight, the erasable memory was the only dynamic record of the mission. The rope memory contained the program—the instructions and constant data—but the erasable memory contained the state: where the spacecraft was, what program was running, what the crew had entered, what the guidance had computed. Losing the erasable memory contents (through a power interruption or a hardware fault) meant losing the spacecraft’s knowledge of its own state. The non-volatility of core memory was a partial safeguard—a brief power glitch wouldn’t erase the data—but a sustained power loss followed by a cold restart would require re-initialization from the ground.


Apollo 13’s power-down procedure highlighted this vulnerability. When the CM was shut down to conserve battery power, the AGC lost its running state. The erasable memory contents were preserved by the cores’ non-volatility, but the Executive, the Waitlist, and the running programs were no longer active. When the CM was powered up for reentry, the crew had to re-initialize the AGC, reload the state vectors and alignment carried through the coast by the LM’s systems, and reconfigure the DAP—all within tight power budgets and time constraints. The procedure worked, but it was one of the most complex power-up sequences ever performed in flight.


The Art of Fitting

Programming the AGC was, in a very real sense, an exercise in fitting. Fitting the guidance algorithms into 36,864 words of rope memory was hard. Fitting the working data into 2,048 words of erasable memory was harder. The rope memory had to hold the right code; the erasable memory had to hold the right data at the right time, and the “right time” changed as the mission progressed from phase to phase.


The erasable memory map was revised constantly throughout the development of Colossus and Luminary. As features were added, algorithms refined, and new requirements discovered, the memory allocations shifted. A guidance routine that grew by three working variables might displace a display routine that had been using those addresses. The displacement cascaded—the display routine moved to another bank, bumping a telemetry routine, which moved to overlap with a navigation routine that was thought to be inactive during that mission phase but turned out to run concurrently in one edge case that nobody had considered.


The memory analysis team—a dedicated group within the MIT software organization—maintained the master erasable memory allocation and verified that no overlay conflicts existed. Their tools were listings, spreadsheets (paper spreadsheets, in the 1960s), and careful reasoning about every possible program state and transition. They were, in effect, doing by hand what a modern operating system’s memory manager does automatically—and they were doing it with zero margin for error, because a mistake wouldn’t cause a segmentation fault or a blue screen. It would cause a guidance error over the Moon.


Every Apollo mission that flew did so with an erasable memory map that had been verified word by word, overlay by overlay, transition by transition. Not a single erasable memory conflict was ever discovered in flight. The navigation state vectors were never corrupted by an overlay collision. The DAP parameters were never overwritten by a guidance routine. The Executive’s job table was never trampled by a display update.


Two thousand forty-eight words. Four kilobytes. Enough to navigate to the Moon and back, if you were careful enough about how you used every single one.