CLAVIUS TECHNOLOGY: computers

The computer technology didn't exist in the 1960s to make the Apollo guidance computer.

This goes along with the general discussion about the state of technology available to NASA in the 1960s. But since computer capability has compounded manyfold since Apollo, it is sometimes treated separately.

As with the general level of technology, conspiracists often try to compare the availability and sophistication of consumer computing equipment with that available to NASA. Computer companies of the 1950s and 1960s had to produce general purpose computers at a cost that would attract business and scientific customers. NASA had to solve only one problem -- guidance -- and could easily afford to have a custom system designed and built for them using cutting edge components and techniques.

We could today, if we wanted, produce very fuel-efficient automobiles that would go for hundreds of thousands of miles without any regular service or mechanical breakdown. Unfortunately that car would cost well over a million dollars a unit, and would therefore be out of reach of most consumers. And so automobile companies produce vehicles more tailored to the economy of their intended customer. As a result the level of technology lags behind what would be achievable if money were no object.

The question to ask is not what kinds of computers were available in IBM's color brochures, but what kind of computer was available to NASA with its essentially bottomless pockets.

The Apollo guidance computer had the computer power equivalent only to today's kitchen appliances, far less than what would be required to go to the moon.

It always amuses us to hear this from people who sit at multi-gigahertz computers and can't imagine that anything less was ever remotely usable for anything. This is a good example of a mental technology trap. People believe that because we use a particular technology to solve a particular problem today, that problem wasn't solvable before the technology was available.

As a matter of fact, John Glenn flew his spacecraft to earth orbit without any onboard computer whatsoever. Yet the trajectory was precisely controlled, and his capsule could have operated completely automatically if necessary. (In fact, the original design called for it to be completely automated, but the astronauts demanded the ability to pilot the capsule.)

No conspiracist has yet been able to accurately enumerate what computational tasks were required for going to the moon. It's one thing to say that a computer from the 1960s would be no match for a computer today. But it's another thing entirely to say that the computer built in the 1960s wasn't up to the task for which it had been designed. The conspiracists claim the latter, but provide evidence only for the former. To make the case that the guidance computer was not adequate to its task, one must first describe the task. Then one must show the specific deficiency of the computer with respect to that task.

Just to run a moon landing simulation requires dozens of megabytes. It would require more to accomplish the actual task.

This is the typical computer-illiterate attempt to compare the guidance computer to its task. Not being able to speak intelligently about the problem of guidance in space travel, the conspiracists select a problem they believe is similar (a lunar lander arcade game) whose requirements they believe they know.

There are of course a number of things wrong with this argument. First, moon landing simulations do not inherently require lots of computer resources. They do on today's personal computers, but only in the sense that any task on today's personal computers requires lots of resources. That's because those computers have heavyweight, general-purpose operating systems and are expected to provide lots of bells and whistles.

Fig. 1 - A screen shot from Ron Monsen's Eagle Lander 3D.

Some of the first programs on the small minicomputers of the 1960s and 1970s were rudimentary one-axis lunar lander games, including one for the DEC PDP-8 (Fig. 7), a computer with capabilities similar to those of the Apollo guidance computer. Of course they lacked the fancy three-dimensional graphics and realistic sound effects (Fig. 1), but they captured the essence of the physical behavior, as the sketch below shows. See further below for a description of the difference between a special-purpose computer and a general-purpose computer.
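To see just how little computation the core of such a game needs, here is a minimal one-axis lander loop, written in Python for readability. This is an illustrative sketch, not a reconstruction of any historical program; the constants and the touchdown threshold are assumed values.

    # Minimal one-axis lunar lander: the player picks a burn rate each
    # second; thrust and gravity update the velocity and altitude.
    G_MOON = 1.62        # lunar surface gravity, m/s^2
    EXHAUST_V = 3000.0   # effective exhaust velocity, m/s (assumed)
    DT = 1.0             # time step, seconds

    altitude, velocity = 1000.0, 0.0   # m, m/s (negative = descending)
    fuel, dry_mass = 200.0, 2000.0     # kg

    while altitude > 0.0:
        burn = min(float(input("burn rate (kg/s): ").strip() or 0), fuel / DT)
        thrust_accel = burn * EXHAUST_V / (dry_mass + fuel)  # F = mdot * ve
        velocity += (thrust_accel - G_MOON) * DT
        altitude += velocity * DT
        fuel -= burn * DT
        print(f"alt {altitude:7.1f} m  vel {velocity:6.1f} m/s  fuel {fuel:5.1f} kg")

    print("Landed." if velocity > -3.0 else "Crashed.")

The physics amounts to a handful of additions and multiplications per time step -- exactly the sort of workload the one-axis games on machines like the PDP-8 performed.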

The notion that the real thing must be more involved than a simulation is intuitive, but wrong. The simulation not only has to embody the behavior of the simulated object, but it also has to programmatically create the environment -- the external effects like gravity. The Apollo guidance computer didn't have to create the lunar environment as part of the program; it was in the lunar environment.

In a flashy lander simulation, throwing a switch means performing a mouse gesture over its icon on the screen. The lander simulation must contain program code to create the icon, animate it, interpret the mouse motion, and translate that into a change in the operating state of the program. In the real guidance computer the guidance program does none of that; the pilot flips a switch and the corresponding computer "bit" is set or cleared in the computer's memory by the switch electronics.
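A toy comparison makes the point. In an embedded guidance program, reading a panel switch reduces to testing a single bit of an input word that the switch electronics set directly. The channel word and bit assignment below are hypothetical, purely for illustration:

    ENGINE_ARM_BIT = 0x0004   # hypothetical bit assignment in an input word

    def engine_armed(input_channel: int) -> bool:
        # The switch electronics set or clear this bit; the guidance
        # program merely tests it. No icons, no animation, no mouse.
        return (input_channel & ENGINE_ARM_BIT) != 0

    print(engine_armed(0x0004))  # True: switch thrown
    print(engine_armed(0x0000))  # False: switch off

One bit test replaces the entire user-interface layer a desktop simulation must carry.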

Computer chips weren't invented in 1969, so there's no way NASA could have built the Apollo computers.

Fig. 2 - Jack Kilby's first integrated circuit, 1958. (Courtesy CNN)

It all depends on what you mean by "computer chips". Today a modestly priced Intel Pentium series microprocessor has registers, cache memory, a floating-point unit, and multimedia acceleration built right into the chip. Not so long ago those added functions had to be provided by additional chips. Putting a complete CPU on a single chip was indeed a breakthrough, but microchips performing simple tasks were available in the early 1960s, and these could be built up into processors.

Electronics hobby stores carry project kits that use simple integrated circuits. With patience, even these very simple chips (e.g., a three-input NOR gate) can be combined to make a simple computer. In fact this is often an assignment for advanced digital design classes in college. These simple chips are not "computer chips" in the sense that they contain a computer on a single chip, but they are computer chips in that they can be used to build a computer.
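In fact the Apollo guidance computer itself was built almost entirely from a single chip type: the three-input NOR gate (packaged two to a chip in the Block II machines). NOR is logically universal -- NOT, OR, AND, and even memory elements can all be expressed with it. A short Python sketch (illustrative only) makes this concrete:

    def NOR(*inputs):
        return int(not any(inputs))

    def NOT(a):    return NOR(a)
    def OR(a, b):  return NOR(NOR(a, b))
    def AND(a, b): return NOR(NOR(a), NOR(b))

    def sr_latch_step(s, r, q):
        # One settling step of a cross-coupled NOR latch -- a 1-bit memory.
        q_bar = NOR(s, q)
        return NOR(r, q_bar)

    for a in (0, 1):                      # verify against the truth tables
        for b in (0, 1):
            assert OR(a, b) == (a | b) and AND(a, b) == (a & b)

    q = sr_latch_step(1, 0, 0)   # set: q becomes 1
    q = sr_latch_step(0, 0, q)   # hold: q stays 1
    print(q)                     # 1

Given enough NOR gates and enough patience, adders, registers, and sequencing logic all follow in the same way.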

Jack Kilby of Texas Instruments is generally credited with the first miniaturized circuit built as an integrated unit; he demonstrated it in 1958 and filed for a patent in 1959 (Fig. 2). Robert Noyce at Fairchild Semiconductor filed for a patent on the silicon-based integrated circuit that same year. After some initial legal battles the companies decided to cross-license each other's inventions.

Fig. 3 - One of the IBM System/360 integrated circuits, ca. 1964.

Fairchild released a commercially available integrated circuit, an SR flip-flop, in 1961. The rest of the RTL integrated circuit product line appeared later that year. By 1963 Fairchild had doubled the density of its chips. Philco produced the Apollo ICs to the same density and had perfected them by 1966. Beginning in the early 1960s, virtually all new computer designs were developed using these integrated circuits.[Hall96]

RCA introduced the Spectra 70 computer in 1965 using Fairchild-type integrated circuits. IBM introduced the System/360 at about the same time using minuscule diodes and transistors potted on small ceramic modules -- its own hybrid version of integrated circuits (Fig. 3). The System/360 (see below) was the workhorse of the commercial computing industry for more than a decade.

Computers in the 1960s were huge, heavy machines that took up entire rooms in air-conditioned buildings.

Some were and some weren't. The most powerful computers of the time were bulky and took up entire rooms (Figs. 4, 5). But that's still true of the most powerful computers today (Fig. 6). The Apollo guidance computer did not have to be extremely powerful, just reliable and adequate to the task. Several models of small computers were developed in the 1960s (Fig. 7). These were not very different from the Apollo guidance computer in terms of size and power.

Fig. 4 - The console and a few peripheral units from an IBM System/360 Model 30.

Fig. 5 - The Whirlwind computer, the large-scale computer developed at MIT in the 1950s that pioneered automated strategic air defense applications. It could display real-time video. The last of its SAGE-era descendants was shut down in 1983.

Fig. 6 - A modern supercomputer; one of four racks of equipment making up the MCR supercomputing cluster built for the U.S. Department of Energy in 2001. (Lawrence Livermore National Laboratory)

The development of the IBM System/360, the powerhouse mainframe of the 1960s, was also the second most expensive engineering development project in that decade -- the first being the Apollo project itself. The point is that the world's most powerful computers at any given time always take up entire rooms and use vast amounts of electricity. The existence of these behemoths does not mean smaller computers are not possible, either in 2001 or in 1965.

Fig. 7 - The DEC PDP-8, a popular minicomputer of the late 1960s and early 1970s. The yellow box identifies the processor; the components above it are hard disk drives, which wouldn't be necessary in an embedded system.

Recall that the Apollo computer was not a general purpose computer. It didn't have to run games or spreadsheets, or do payrolls, or store inventory databases. It only had to navigate the spacecraft to the moon. There were no printers or disk drives required. No tape drives, no card readers or card punches. And so it was a pretty lean computer.

Calling the Apollo guidance system a computer is probably a bit of an exaggeration. It's more closely related to what we would call a microcontroller today, or perhaps a digital autopilot. Most of the number-crunching was done at Mission Control on several mainframe computers. The results were transmitted to the Apollo guidance computer, which acted on them. The onboard computer could compute only a small number of navigational problems itself.
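The tasks it did perform onboard were mostly of this flavor: integrating a position-and-velocity state vector forward in time. A toy two-body propagation in Python gives the idea -- point-mass gravity only, and an illustration of the kind of task rather than the AGC's actual algorithm:

    import math

    MU_MOON = 4.9048695e12  # lunar gravitational parameter, m^3/s^2

    def step(pos, vel, dt):
        # Advance the state vector one time step under point-mass gravity.
        r = math.sqrt(sum(x * x for x in pos))
        acc = [-MU_MOON * x / r**3 for x in pos]
        vel = [v + a * dt for v, a in zip(vel, acc)]
        pos = [p + v * dt for p, v in zip(pos, vel)]  # semi-implicit Euler
        return pos, vel

    # Circular orbit roughly 100 km above the lunar surface.
    pos = [1.838e6, 0.0, 0.0]
    vel = [0.0, math.sqrt(MU_MOON / 1.838e6), 0.0]   # about 1.63 km/s
    for _ in range(600):
        pos, vel = step(pos, vel, 1.0)
    print(pos, vel)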

There is a big difference between computers intended for general use and digital guidance systems such as those built for aerospace. General purpose computers have to be reasonably priced so that enough of them can be sold to make it worthwhile as a product. This means they can be bulkier and consume more electricity if that makes them cheaper to produce. Aerospace computers need to be light and small, even if that makes them very expensive to produce.

The maker of a general purpose computer doesn't know or care what the customer will use it for. This requires him to equip the computer mostly with RAM, which can store whatever program the customer chooses to run. And since many, like the IBM System/360, were designed as time-sharing systems, they had to be able to change programs easily and rapidly. But a guidance system only has to run one program, so it's best to put that program in some kind of ROM and provide only enough RAM to hold the temporary results of guidance calculations.
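The Apollo guidance computer's actual numbers drive the point home: the Block II machine carried 36,864 words of fixed (core rope) memory for the program and its constants, and only 2,048 words of erasable memory for intermediate results. A sketch of that split -- the word counts are the real figures, the data structures merely illustrative:

    FIXED_WORDS    = 36_864   # core rope "ROM": program and constants
    ERASABLE_WORDS = 2_048    # magnetic core "RAM": registers and scratch

    rom = tuple(0 for _ in range(FIXED_WORDS))   # immutable, like core rope
    ram = [0] * ERASABLE_WORDS                   # writable scratchpad

    ram[0] = 0o37777   # a guidance calculation stores a temporary result
    # rom[0] = 1       # would raise TypeError: the program can't be altered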
