The Landing Point Designator: Analog Crosshairs for a Digital Moon
How a set of lines scribed on a window gave Apollo commanders the power to see where the computer was taking them—and override it
The Lunar Module is falling toward the Moon at hundreds of feet per second. The commander stands at his station, tilted slightly forward against his restraint harness, and looks down through the triangular window at the gray landscape rushing up to meet him. Somewhere in that expanse of craters and boulders, the guidance computer has selected a landing point. But where? The DSKY displays numbers—Noun 64, an angle in degrees—but numbers don’t tell you whether you’re about to set down in a boulder field. For that, you need to look outside. And to connect what you see outside with what the computer is calculating inside, you need the Landing Point Designator: a set of lines etched into glass.
The Problem of Knowing Where You’ll Land
During the final phase of lunar descent, the LM pitched over from its engine-forward braking attitude to a more upright position, giving the crew their first visual contact with the landing site. This pitchover moment, occurring at roughly 7,000 feet altitude at the start of the P64 approach phase of the guidance program, was one of the most critical transitions in the entire mission. Before pitchover, the crew was flying on instruments alone, trusting the Apollo Guidance Computer and its landing radar to manage the trajectory. After pitchover, the commander could see the surface and had to quickly assess whether the computer’s chosen landing point was acceptable.
The problem was bridging two fundamentally different kinds of information. The AGC knew, with considerable precision, the LM’s position, velocity, and attitude. It had computed a landing target based on navigation data, tracking updates from Houston, and its own landing radar measurements. It expressed this target as a pair of angles relative to the LM’s body axes—how far forward and how far left or right of the spacecraft’s current position the landing point lay.
The commander, on the other hand, had a pair of eyes and a window. He could see rocks, craters, slopes, and shadows. He could assess terrain suitability in ways no 1960s sensor could match. But he needed a way to translate the computer’s angular data into a specific point on the landscape visible through his window. That translation was the Landing Point Designator.
Scribed Lines and DSKY Angles
The LPD was, in its physical essence, a set of angle-graduated lines etched directly onto the commander’s forward window. The markings formed a scale graduated in degrees, duplicated on the window’s inner and outer panes: when the commander lined up the two sets of marks, his eye was at the designed reference position and parallax was eliminated. The principle was identical to an artillery sight or a bomber’s crosshairs: if you positioned your eye at the reference point and looked through the correct angle marking, you were looking at the spot where the computer intended to land.
The procedure worked like this: During P64, the DSKY displayed Noun 64, which provided the LPD angle—the number corresponding to one of the scribed lines on the window. The commander positioned his eye at the designed eye point (a specific location defined by his standing position in the restraint harness), found the appropriate angle line on the window, and looked through it toward the surface. Whatever terrain feature lay at the intersection of that line of sight and the lunar surface was where the guidance computer was steering them.
The system updated continuously. As the LM descended and its attitude changed, the DSKY’s LPD angle updated every two seconds. The landing point would appear to move along the scribed lines as the geometry shifted, giving the commander a dynamic, real-time view of the computer’s targeting solution.
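The sight-line geometry reduces to simple trigonometry under a flat-terrain simplification. The sketch below is not the AGC's actual formulation (which worked with the full vehicle attitude and a landing-site position vector); it only illustrates how a look angle and an altitude pin down a point on the ground:

```python
import math

def lpd_ground_distance(altitude_ft, depression_deg):
    """Horizontal distance to the point where a line of sight,
    depressed `depression_deg` below local horizontal, intersects
    flat terrain at the given altitude.

    A flat-Moon sketch only -- the real geometry involved the LM's
    full attitude and the guidance computer's landing-site vector.
    """
    return altitude_ft / math.tan(math.radians(depression_deg))

# At 5,000 ft, a sight line 45 degrees below horizontal marks a point
# 5,000 ft downrange; a shallower 35-degree line marks one ~7,100 ft out.
print(round(lpd_ground_distance(5000, 45)))  # 5000
print(round(lpd_ground_distance(5000, 35)))  # 7141
```

This also shows why the displayed angle drifted as the LM descended: with altitude and attitude changing every guidance cycle, the angle to a fixed ground point had to be recomputed continuously.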
There was an elegant simplicity to the optics. The etched lines added zero weight to the spacecraft (the window was already there), required zero electrical power, consumed no telemetry bandwidth, and had precisely zero failure modes. A line scribed in glass cannot break, cannot lose calibration, and cannot suffer a software glitch. It either existed or it didn’t, and it always existed.
Redesignation: Overriding the Computer
Seeing where the computer wanted to land was only half the system’s value. The other half was the ability to change that landing point. If the commander looked through the LPD and saw boulders, a crater rim, or any terrain he judged unsuitable, he could redesignate—commanding the guidance computer to shift the landing target.
Redesignation was accomplished through the commander’s hand controller. During P64, clicking the hand controller left, right, forward, or aft would increment the landing target in the corresponding direction. Each click shifted the aim point by a fixed amount—approximately 2 degrees in the downrange direction and about half a degree laterally. The AGC would recompute the descent trajectory to reach the new target, and the updated LPD angle would appear on the DSKY.
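The click-accumulation logic can be modeled in a few lines. This is a toy sketch using the per-click increments quoted above; the function name, sign conventions, and representation are illustrative, not Luminary's actual code:

```python
# Per-click increments as described in the text (~2 degrees per
# downrange click, ~0.5 degree per lateral click). Everything else
# here is an illustrative model, not the AGC's redesignation logic.
DOWNRANGE_STEP_DEG = 2.0
LATERAL_STEP_DEG = 0.5

def redesignate(offset, clicks_forward=0, clicks_right=0):
    """Accumulate hand-controller clicks into an angular offset of
    the landing target, expressed as (downrange_deg, lateral_deg).
    Negative click counts model aft/left inputs."""
    downrange, lateral = offset
    return (downrange + clicks_forward * DOWNRANGE_STEP_DEG,
            lateral + clicks_right * LATERAL_STEP_DEG)

# Two forward clicks and one right click shift the target 4 degrees
# downrange and half a degree right of track.
print(redesignate((0.0, 0.0), clicks_forward=2, clicks_right=1))
# (4.0, 0.5)
```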
This was human-machine collaboration at its most refined. The computer handled the physics—trajectory calculation, throttle commands, attitude control. The human handled the judgment—is this a good place to land? The LPD was the interface between these two domains, allowing the commander to exercise the one capability no computer of that era possessed: looking at the Moon and understanding what he saw.
As the LM descended below approximately 500 feet, the guidance program transitioned from P64 to P66—the Rate of Descent mode. In P66, the commander controlled descent rate directly with the hand controller, clicking up or down to change the rate by 1 foot per second per click, while the computer maintained attitude and horizontal velocity. The LPD was no longer needed at this altitude; the commander was close enough to assess the terrain directly and maneuver visually to his chosen spot.
LPD in Action: Apollo 11 and Beyond
The Landing Point Designator’s most famous moment came during Apollo 11. As Neil Armstrong looked through the LPD markings after pitchover, he saw that the guidance computer was steering Eagle toward a boulder field at the western edge of West Crater—a field of rocks the size of automobiles scattered across the approach path. The terrain was clearly unsuitable for landing.
Armstrong redesignated, clicking the hand controller to shift the landing point further downrange, beyond the boulder field. He continued to monitor through the LPD as the computer adjusted the trajectory. But the safe terrain he needed kept retreating ahead of them, and fuel was running low. Eventually Armstrong transitioned to P66 and essentially flew the LM manually to a clear spot, with Charlie Duke in Mission Control calling out the dwindling propellant: “Sixty seconds.” Then: “Thirty seconds.”
Eagle landed with approximately 25 seconds of fuel remaining. The LPD hadn’t prevented the drama—Armstrong’s flying skill did that—but it had provided the critical first piece of information: the computer’s target was bad, and he needed to do something about it. Without the LPD, Armstrong would have had no way to correlate the DSKY’s numbers with what he could see out the window. He might have recognized the boulder field too late or not at all.
Apollo 12 demonstrated the LPD’s value from the opposite direction. Pete Conrad, whose mission objective was a pinpoint landing near the Surveyor III probe, used the LPD to confirm that the guidance computer’s target was exactly where he wanted it. He made minor redesignations to fine-tune the approach, then flew a confident P66 descent to land within 600 feet of the derelict probe—a pinpoint landing that validated the entire targeting and redesignation system.
Later missions refined the technique further. By Apollo 15, 16, and 17, commanders were landing in increasingly challenging terrain—mountain valleys, highland plateaus, canyon edges—using the LPD as one element in a well-practiced approach sequence. The tool never changed. The same scribed lines, the same Noun 64 display, the same hand controller redesignation logic served from the first landing to the last.
The Analog Bridge
In a spacecraft full of digital computers, inertial measurement units, radar altimeters, and telemetry systems, the Landing Point Designator stands apart as an artifact of an older engineering tradition. It was a sighting device, no different in concept from the graduated reticles that navigators, artillerists, and surveyors had used for centuries. Lines on glass, calibrated to known angles, interpreted by a trained human eye.
Its genius lay in what it didn’t need. Zero weight added. Zero power consumed. Zero failure modes introduced. Zero software to debug. In a vehicle where every pound was fought over and every circuit was a potential point of failure, the LPD accomplished its mission by requiring nothing at all beyond what was already present: a window, a pair of eyes, and a set of marks that would last as long as the glass itself.
It was also irreplaceable by any technology of the era. No television camera, no computer display, no automated terrain analysis system of the 1960s could have given a commander the same rapid, intuitive understanding of the landing site that looking through a calibrated window provided. The human visual system, with its ability to assess texture, shadow, slope, and scale at a glance, was the best terrain sensor available. The LPD simply connected it to the guidance computer’s targeting solution.
The Landing Point Designator reminds us that the most sophisticated system in a spacecraft was, and perhaps always will be, the crew. The AGC could compute trajectories with precision no human could match. But it took a pilot looking through scribed lines on glass to decide if those trajectories led somewhere worth going.