NASA did not provide a
direct video feed of Apollo 11's lunar surface activities to
television stations. The broadcasters had to film it from a
television screen in Mission Control. This was so that it couldn't be
too closely examined for flaws. [Ralph René]
Mr. René is factually incorrect. Eric Jones, who was there
(René was not), verifies that electronic television feeds were
provided for news programs. There is a nugget of truth behind this
accusation, explained below.
First, a television primer. A television signal under the U.S. broadcast
standard carries 29.97 frames per second. (Let's call it 30.)
Each frame is composed of 525 horizontal lines of information. The
signal describes the color and intensity of different points along
each horizontal line. As you can imagine, it takes a lot of radio
bandwidth (data-carrying capacity) to transmit a high-quality
television signal.
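To get a feel for the numbers, here is a rough back-of-the-envelope calculation. The 640 samples per line is purely an illustrative assumption; an analog signal has no fixed pixel count.

```python
# Rough data rate of a standard U.S. color broadcast signal.
frames_per_second = 30   # nominal NTSC rate (actually 29.97)
lines_per_frame = 525    # NTSC scan lines
samples_per_line = 640   # assumed for illustration only
color_channels = 3       # red, green, blue

rate = frames_per_second * lines_per_frame * samples_per_line * color_channels
print(f"{rate:,} samples per second")  # about 30 million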
Fig. 1 - Apollo 12's free-standing S-band antenna. The increased capacity of this antenna over the LM's smaller antenna allowed for sending full color television pictures. (NASA: AS12-47-6988)
The size of the transmitting and receiving antennas has a lot to do
with how complex a signal you can transmit over such a long distance.
The LM's built-in antenna was only one meter in diameter and could not
send as strong and as clear a signal as the free-standing S-band
antenna (Fig. 1) used on Apollos 12 and 14. Apollo 11 mission
planners didn't want the crew to spend 30 minutes of their two-hour
moonwalk setting up this antenna. Besides, the world wanted to see
Armstrong take his first steps on television, which wouldn't be
possible if they had to wait for the larger antenna.
The original timing of Apollo 11's moonwalk put the big radio
telescopes at Goldstone and Honeysuckle Creek on the wrong side of
Earth to receive the signal. So in the original flight plan the
television signal would be received by smaller antennas elsewhere on
Earth. These smaller antennas could not pick up a weak, complex
signal, so engineers decided to reduce the complexity of the signal
enough that any Manned Spaceflight Network (MSFN) station could
receive a usable television picture.
Transmission power also affects signal quality.
Unlike the command and service modules, the lunar module was powered
by batteries. Engineers had to keep the power requirements of the
radio transmitters to the bare minimum so as not to exhaust the LM's
batteries too quickly.
Fig. 2 - Cumulative strategies used on Apollo 11 to limit the bandwidth required to transmit television images, and the degree of cumulative signal reduction over a full color signal. Top: Black and white versus color. Middle: Fewer scan lines. Bottom: Slower frame rate. (NASA: AS11-40-5903)
How was the signal reduced?
There are a number of ways engineers made that signal smaller.
First they dropped the color information. If you only have to send
intensity information (i.e., a black and white picture) you can cut
the bandwidth by two-thirds (Fig. 2, top). Then they reduced the
number of horizontal lines (Fig. 2, middle). This of course reduced
the clarity of the image, but it cut the signal size by a third. Then
they reduced the number of frames per second (Fig. 2, bottom), which
cut the signal by a further two-thirds. The result was a signal that could be
squeezed into the limited data-carrying capacity of the small antenna,
requiring only about 5% of the information of a standard full color
signal.
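Taking those fractions at face value, the cumulative reduction multiplies out as in the sketch below. It lands near 7%, in the same few-percent ballpark as the "about 5%" figure, which presumably counts the savings somewhat differently.

```python
# Cumulative signal reduction, using the fractions given above.
signal = 1.0   # full color broadcast signal
signal *= 1/3  # drop color: keep one-third of the bandwidth
signal *= 2/3  # fewer scan lines: cut the signal by a third
signal *= 1/3  # slower frame rate: cut it by two-thirds
print(f"{signal:.1%} of a full color signal")  # -> 7.4%
```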
But it was no longer compatible with the standard equipment on the
ground. This meant the ground stations had to have elaborate
machinery to convert the signal back into a standard format for the
benefit of television journalists. The equipment for the more
elegant, high-bandwidth solutions was being developed for later
missions when television was deemed more useful. But for Apollo 11's
relatively ad hoc television apparatus there was no time (and
little incentive) to develop, for just one mission, the specialized
frame buffers needed for the frame-rate conversion and
re-rasterization (see below).
Fig. 3 - Ed von Renouard, Honeysuckle Creek video technician, at the slow-scan television console. (Honeysuckle Creek)
But a low-tech solution was available. It is relatively easy to
modify a standard television tube to display the slow, coarse signal
from Apollo 11. Then you simply aim a standard television camera at
that screen and out of it comes a television signal at the correct
frame rate and resolution, suitable for broadcast. This was done at
the ground stations, according to Apollo tracking expert Mike Dinn.
This setup is easy to build, but it doesn't give the same quality
image as the purely electronic equipment. It's likely this is the
source of the claim that NASA did not provide electronic feeds to
broadcasters.
If you can build a machine to display this custom signal, surely you can build a machine to convert from one signal format to the other.
Conversion is a much more difficult task. A television picture is
displayed on a cathode ray tube by sweeping a beam across the screen
at a precisely controlled speed, left-to-right and top-to-bottom. You
vary the intensity of the beam as it sweeps to paint either a light or
a dark point on the screen. The electronics that control the sweep
and intensity of this beam are fairly simple; you just substitute
different electronic timing components to create a slower frame rate
and fewer lines. The picture comes out on the screen according to the
same principles as regular television, just slower.
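To see how modest a change that is, compare the horizontal sweep rates of the two formats, assuming the commonly cited Apollo 11 slow-scan parameters of 320 lines at 10 frames per second:

```python
# Horizontal sweep rates: how many lines the beam draws per second.
std_sweeps = 525 * 30    # standard broadcast: 15,750 lines per second
slow_sweeps = 320 * 10   # Apollo slow-scan:    3,200 lines per second
print(std_sweeps, slow_sweeps)
```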
But to convert a signal from one format to another requires a
means of storing the picture information and interpolating the missing
lines. As explained above, the frames came every 1/10 second, but a
standard signal produces a frame every 1/30 second. Each frame from
the moon would have to stick around somehow and produce two additional
frames in the final signal. And a standard signal requires 525 lines
of information, while the signal from the moon was coarser. You would
need some means of filling in the gaps. It is certainly possible to
build such a device, but it would be very complicated and expensive,
and would not be guaranteed to produce any better picture than
pointing a camera at the slow-scan screen.
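As a minimal sketch of what such a converter would have to do, consider the following. It again assumes the commonly cited 320-line, 10-frame slow-scan format, and it crudely duplicates lines where a real device would interpolate; it illustrates the concept only, not the design of any actual Apollo-era hardware.

```python
def upconvert(slow_frames, src_fps=10, dst_fps=30,
              src_lines=320, dst_lines=525):
    """Sketch of slow-scan-to-broadcast conversion.

    Each input frame is a list of scan lines. Every frame is held
    and repeated to raise the frame rate, and the missing lines are
    filled by duplicating the nearest source line -- a crude stand-in
    for real interpolation.
    """
    repeat = dst_fps // src_fps  # each slow frame yields 3 output frames
    out = []
    for frame in slow_frames:
        # Re-rasterize: map each of the 525 output lines onto one of
        # the 320 source lines.
        tall = [frame[i * src_lines // dst_lines] for i in range(dst_lines)]
        out.extend([tall] * repeat)  # the frame "sticks around"
    return out

# Example: one second of slow-scan video (10 frames of 320 lines)
# becomes 30 standard-rate frames of 525 lines each.
frames = [[[0] * 64 for _ in range(320)] for _ in range(10)]
print(len(upconvert(frames)))  # -> 30
```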
MAGICIANS NEVER REPEAT A TRICK
The question of deliberately reduced quality cuts both ways.
Magicians don't repeat their tricks because the more they repeat them,
the more chances you have to see through their misdirection and
discover how the trick is done. And magicians certainly don't turn up
the lights, wave away all the fog, and do the trick more slowly to
help you detect the swindle.
Mr. René's argument is just that: NASA supplied sharper,
clearer, lengthier telecasts as time went on, inviting closer and
closer scrutiny. If the purpose of the grainy Apollo 11 television
was to "hide the wires," why risk exposing those wires later -- to
anybody?
But nobody paid attention to the later missions. After Apollo 11 there was much less public scrutiny.
That may have been the case at the time, but that is not the case
now. To make this argument is to believe NASA took a short-sighted
approach to falsifying the landings. Today you can buy detailed,
high-quality digital copies of all the Apollo visual material. How
long did NASA plan to keep the hoax secret? Until 1972? 1980? 1999?
Forever? Conspiracists argue that NASA is still trying to keep the
secret, still trying to intimidate whistle-blowers and mitigate
unpleasant discoveries.
If that is true, why would they adopt a policy that presumes they
had to withstand intense scrutiny in 1969, but could get away with
more in 1972 after everyone had supposedly lost interest? It's
inconsistent; it doesn't make sense.
It's hard to believe that the most important historical event of the century would be purposely recorded with improvised techniques and cobbled-together equipment.
It's simply a trade-off. NASA was trying to meet Kennedy's
challenge of landing a man on the moon before the end of the decade.
Kennedy had said nothing about sending back television pictures,
collecting rocks, and setting up experiments. Those were goals NASA
itself added to the Apollo project. Apollo 11's mission objective
was simply to land and return safely, and anything that would delay
that simple mission was given lower priority.
Color television did not contribute to the probability of
mission success, and so it was omitted on Apollo 11.
NASA did not plan for Apollo 11 to be the only mission. Therefore
they didn't intend Apollo 11 to single-handedly accomplish the
objectives of the entire program. It was understood that subsequent
missions would build on Apollo 11's success and take advantage of the
lessons learned, and therefore have greater expectations. And so to
fulfill the primary goal, certain secondary goals were allowed to be
delayed: extensive experimentation, astronomy, high-quality
photography, the lunar rover. Apollo 11 was sent to the moon on time
to meet Kennedy's challenge, even though the technology not directly
related to the landing was still being developed and tested.
Conspiracy theorists like to impose their motives and strategies
upon NASA. And the reader may even agree with those motives. But we
cannot simply assume NASA had, or should have had, those motives and
thus judge their actions according to them. In one sense Apollo 11
was indeed a historic occurrence, but in another sense -- and more
likely to be the sense in which NASA was thinking -- Apollo 11 was
only one part of an entire program which needed to be carefully managed
as a program.
It's a dilemma many people face: Do I go now when I'm only partly
ready, or do I wait until I'm fully ready and risk missing the deadline?
But a perfectly good color signal was sent from the Apollo 11 spacecraft en route and in lunar orbit. Therefore it can't be a bandwidth or equipment issue. [Bart Sibrel]
Fig. 4 - The high-gain S-band antenna array on the Apollo 15 service module. (NASA: AS15-88-11974)
Two different spacecraft were involved. The Apollo command and
service modules (CSM) had a large S-band antenna array capable of
sending the required color television bandwidth. It was from this
spacecraft that the color signals were sent. The Apollo 11 television
picture from the lunar surface was sent over the lunar module's much
smaller antenna and picked up on Honeysuckle Creek's large dish
antenna. The lunar module was powered by batteries, meaning that it
was best to make do with the minimum electrical power necessary. The
command module was powered by fuel cells and had much more
transmitting power available.
The command module's larger antenna could not be used to relay a
more robust signal to Earth because the CM was orbiting the moon; it
was out of radio contact with Earth and with the lunar module for a
large segment of each two-hour orbit.
The color television signal from later missions was supposed to be "live", but in fact NASA had equipment on the ground that would delay the signal before passing it on to news agencies. [David Percy]
NASA did, for a time, maintain a seven-second delay in the audio
downlink. Because of the public relations problem that might
accompany the slip of an astronaut's tongue, NASA elected to delay
live transmission of the audio so that a ground controller could, if
necessary, "bleep" inappropriate vocabulary. This is the only
legitimate example of censorship by NASA of a "live" transmission. In
the wake of the Apollo 1 fire NASA realized it was more important to
be fully candid with the public, and so the audio delay was
discontinued.
But wasn't the video delayed too?
In all of the missions there was some unavoidable delay caused by
the conversion of the downlink signal into standard format. It
required some of the video frames to be retained in frame buffers
until they could be combined with later frames. This would obviously
introduce an interval between when a frame was received by the ground
station and when it would be sent along the MSFN to Mission Control.
Normal color television cameras at that time used three image
sensor tubes called vidicons, each
fitted with a filter for one of the primary red/green/blue colors.
The signals from each vidicon were electronically combined into a
single color signal inside the camera. This three-tube system was
deemed too heavy and bulky for use in space.
Engineers at Westinghouse developed an ingenious way to produce a
color signal using only one vidicon. They placed a spinning color
wheel in front of the vidicon with red, green, and blue filters in it.
Thus the vidicon would capture the red part of the image, followed by
the green, followed by the blue. It could send them sequentially as a
series of intensity-only frames that the ground equipment would
reassemble into color. Not only would this result in a very light camera, but it
would transmit a color image in the smaller bandwidth normally taken
up by a black-and-white signal.
But there was a drawback. If the color component frames were sent
one after the other instead of in a combined signal, then it would
take three frames from the camera (one red, one blue, and one green)
to produce just one color frame on the television screen. And that
meant that the effective color frame rate was one-third that of the
actual frame rate. Not only that, it would require equipment on the
ground to very quickly assemble the color components into a single
frame.
But the transmission rate of 30 fps would equate to only 10 fps
after combining. That's too slow for viewing. To the viewer the
motion would have a jerky, stop-motion quality to it. Not to mention
a noticeable flicker. The frames were therefore recombined according
to a rolling "window". The red, green, and blue frames of one set were
combined to form a finished frame. Then the red frame was dropped from
that set, and the blue and green frames from that set were combined
with the red frame from the next set, and so forth. This meant each
color component frame could be used three times, and the decoded frame
rate would be back up to 30 frames per second.
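Here is a sketch of that rolling window, assuming the downlinked fields simply cycle red, green, blue; each window of three consecutive fields yields one full color frame.

```python
def recombine(fields):
    """Rolling-window recombination of sequential color fields.

    `fields` is the downlinked sequence of intensity-only images,
    assumed to cycle red, green, blue. Each window of three
    consecutive fields yields one full color frame, so the decoded
    frame rate matches the transmission rate instead of a third of it.
    """
    colors = ("red", "green", "blue")
    frames = []
    for i in range(len(fields) - 2):
        window = fields[i:i + 3]  # three consecutive fields
        # Label each field in the window with its color in the cycle.
        frame = {colors[(i + k) % 3]: window[k] for k in range(3)}
        frames.append(frame)  # one output frame per incoming field
    return frames

# Example: nine incoming fields yield seven color frames, and each
# field is reused in up to three of them.
print(len(recombine(list(range(9)))))  # -> 7
```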
If you freeze a frame of the color television coverage during an abrupt
movement of the camera or subject, you can see a sort of rainbow
effect where each color component captures the object in a slightly
different position. The object has moved in the time it takes the
color wheel to move to the next position.