TELEVISION QUALITY
The original timing of Apollo 11's moonwalk put the big radio telescopes at Goldstone and Honeysuckle Creek on the wrong side of Earth to receive the signal. So in the original flight plan the television signal would be received by smaller antennas elsewhere on Earth. These smaller antennas were not capable of picking up a weak, complex signal, so the solution engineers settled on was to reduce the complexity of the signal until any of the Manned Spaceflight Network (MSFN) stations could pick up a usable television picture. The power of the transmission also affects signal quality. Unlike the command and service modules, the lunar module was powered by batteries. Engineers had to keep the power requirements of the radio transmitters to the bare minimum so as not to exhaust the LM's batteries too quickly.
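To see why a simpler, narrower signal is easier for a small antenna to pick up at a fixed transmitter power, recall that the noise a receiver must contend with grows with the bandwidth it has to accept, so shrinking the bandwidth raises the signal-to-noise ratio. The short sketch below only illustrates that relationship; the power and temperature figures in it are hypothetical placeholders, not actual Apollo link-budget numbers.

```python
# Illustrative only: why a narrower (less complex) signal is easier to receive
# at a fixed transmitter power. Thermal noise power scales with bandwidth
# (N = k*T*B), so cutting the bandwidth raises the signal-to-noise ratio for
# the same received power. The numbers below are made-up placeholders, not
# actual Apollo link-budget figures.
import math

k = 1.380649e-23          # Boltzmann constant, J/K

def snr_db(received_power_w, system_temp_k, bandwidth_hz):
    """Signal-to-noise ratio in dB for a given received power and bandwidth."""
    noise_power = k * system_temp_k * bandwidth_hz
    return 10 * math.log10(received_power_w / noise_power)

p_rx = 1e-18              # hypothetical received power, watts
t_sys = 100               # hypothetical system noise temperature, kelvin

print(snr_db(p_rx, t_sys, 4.5e6))   # full broadcast-style bandwidth
print(snr_db(p_rx, t_sys, 0.5e6))   # reduced bandwidth: roughly 9.5 dB better
```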
A television picture is displayed on a cathode ray tube by sweeping a beam across the screen at a precisely controlled speed, left-to-right and top-to-bottom. The intensity of the beam is varied as it sweeps to paint either a light or a dark point on the screen. The electronics that control the sweep and intensity of this beam are fairly simple; you just substitute different timing components to create a slower frame rate and fewer lines. The picture comes out on the screen according to the same principles as regular television, just slower. Displaying the slow-scan signal is therefore easy.

Converting it to the standard broadcast format is a much more difficult task. To convert a signal from one format to another requires a means of storing the picture information and interpolating the missing lines. As explained above, the frames came every 1/10 second, but a standard signal produces a frame every 1/30 second. Each frame from the moon would have to be held somehow and used to produce two additional frames in the final signal. And a standard signal requires 525 lines of information, while the signal from the moon was coarser; some means of filling in the gaps would be needed. It is certainly possible to build such a device, but it would be very complicated and expensive, and it would not be guaranteed to produce any better picture than simply pointing a camera at the slow-scan screen.

That low-tech solution was available. It is relatively easy to modify a standard television tube to display the slow, coarse signal from Apollo 11. Then you aim a standard television camera at that screen, and out of it comes a television signal at the correct frame rate and resolution, suitable for broadcast. According to Apollo tracking expert Mike Dinn, this was done at the ground stations. The setup is easy to build, but it doesn't give the same quality image as purely electronic equipment would. It's likely this is the source of the claim that NASA did not provide electronic feeds to broadcasters.
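To make the electronic conversion concrete, here is a minimal sketch of what such a converter would have to do: hold each slow-scan frame long enough to fill three slots of the 30-frame-per-second output, and interpolate extra scan lines up to the 525-line standard. The 320-line input format and the use of simple linear interpolation are assumptions made for this illustration; the real equipment of the era was analog hardware, not software.

```python
# A sketch of the format conversion described above: hold each slow-scan frame
# so it fills three slots of the 30 fps output, and interpolate extra scan
# lines up to the 525-line standard. Frames are modeled as lists of scan lines
# (each line a list of brightness values). The 320-line input format and the
# simple linear interpolation are assumptions made for this illustration.

SLOW_LINES, STD_LINES = 320, 525      # lines per frame (input assumed, output standard)
SLOW_FPS, STD_FPS = 10, 30            # frames per second

def interpolate_lines(frame, out_lines=STD_LINES):
    """Stretch a frame vertically by linearly blending neighboring scan lines."""
    in_lines = len(frame)
    result = []
    for i in range(out_lines):
        pos = i * (in_lines - 1) / (out_lines - 1)
        lo = int(pos)
        hi = min(lo + 1, in_lines - 1)
        frac = pos - lo
        result.append([a * (1 - frac) + b * frac
                       for a, b in zip(frame[lo], frame[hi])])
    return result

def convert(slow_scan_frames):
    """Turn a 10 fps slow-scan sequence into a 30 fps standard-format sequence."""
    repeat = STD_FPS // SLOW_FPS      # each input frame yields three output frames
    output = []
    for frame in slow_scan_frames:
        upscaled = interpolate_lines(frame)
        output.extend([upscaled] * repeat)
    return output
```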
MAGICIANS NEVER REPEAT A TRICK
The question of whether the quality was deliberately reduced deserves a closer look. Magicians don't repeat their tricks, because the more they repeat them, the more chances the audience has to see through the misdirection and discover how the trick is done. And magicians certainly don't turn up the lights, wave away all the fog, and do the trick more slowly to help you detect the swindle.
The answer to Mr. René's argument is just that: NASA supplied sharper, clearer, lengthier telecasts as time went on, inviting closer and closer scrutiny. If the purpose of the grainy Apollo 11 television was to "hide the wires," why risk exposing those wires later -- to anybody?
Perhaps public attention was less intense by the later missions, but it is not less intense now. To make this argument is to believe NASA took a short-sighted approach to falsifying the landings. Today you can buy detailed, high-quality digital copies of all the Apollo visual material. How long did NASA plan to keep the hoax secret? Until 1972? 1980? 1999? Forever? Conspiracists argue that NASA is still trying to keep the secret, still trying to intimidate whistle-blowers and mitigate unpleasant discoveries.
If that is true, why would they adopt a policy that presumes they had to withstand intense scrutiny in 1969, but could get away with more in 1972 after everyone had supposedly lost interest? It's inconsistent; it doesn't make sense.
The quality of Apollo 11's television was simply a trade-off. NASA was trying to meet Kennedy's challenge of landing a man on the moon before the end of the decade. Kennedy had said nothing about sending back television pictures, collecting rocks, or setting up experiments. Those were goals NASA itself added to the Apollo project. Apollo 11's mission objective was simply to land and return safely, and anything that would delay that simple mission was given lower priority.
Conspiracy theorists like to impose their own motives and strategies upon NASA. And the reader may even agree with those motives. But we cannot simply assume NASA had, or should have had, those motives and then judge its actions according to them. In one sense Apollo 11 was indeed a historic occurrence, but in another sense -- and more likely the sense in which NASA was thinking -- Apollo 11 was only one part of an entire program which needed to be carefully managed as a program.
It's a dilemma many people face: Do I go now when I'm only partly ready, or do I wait until I'm fully ready and risk missing the deadline?
The command module's larger antenna could not be used to relay a more robust signal to Earth because the CM was orbiting the moon: it was out of radio contact with Earth and with the lunar module for a large segment of each two-hour orbit.
NASA did, for a time, maintain a seven-second delay in the audio downlink. Because of the public relations problem that might accompany the slip of an astronaut's tongue, NASA elected to delay live transmission of the audio so that a ground controller could, if necessary, "bleep" inappropriate vocabulary. This is the only legitimate example of censorship by NASA of a "live" transmission. In the wake of the Apollo 1 fire NASA realized it was more important to be fully candid with the public, and so the audio delay was discontinued.
In all of the missions there was some unavoidable delay caused by the conversion of the downlink signal into standard format. It required some of the video frames to be retained in frame buffers until they could be combined with later frames. This would obviously introduce an interval between when a frame was received by the ground station and when it would be sent along the MSFN to Mission Control.
Normal color television cameras at that time used three image sensor tubes called vidicons, each fitted with a filter for one of the primary red/green/blue colors. The signals from each vidicon were electronically combined into a single color signal inside the camera. This three-tube system was deemed too heavy and bulky for use in space.
Engineers at Westinghouse developed an ingenious way to produce a color signal using only one vidicon. They placed a spinning color wheel with red, green, and blue filters in front of the vidicon. Thus the vidicon would capture the red part of the image, followed by the green, followed by the blue. The camera sent these sequentially as a series of intensity-only frames, and the ground equipment assigned each frame to its color. Not only did this result in a very light camera, but it transmitted a color image in the smaller bandwidth normally taken up by a black-and-white signal.
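As a rough illustration of the camera side of this scheme, the sketch below filters a stream of scene snapshots through red, green, and blue in turn, so each wheel position yields a single intensity-only field. The data structures are invented for the example and are not how the actual analog camera represented anything.

```python
# Sketch of the single-vidicon, spinning-color-wheel idea described above.
# Each wheel position passes only one primary color to the sensor, so the
# camera sends a sequence of intensity-only fields in red/green/blue order,
# each needing no more bandwidth than a black-and-white picture. The scene is
# modeled as rows of {"red": .., "green": .., "blue": ..} pixel dictionaries
# purely for illustration.

FILTER_ORDER = ("red", "green", "blue")

def field_sequential_stream(scene_snapshots):
    """Yield (filter_color, monochrome_field) pairs, one per wheel position."""
    for i, snapshot in enumerate(scene_snapshots):
        color = FILTER_ORDER[i % 3]          # which filter is in front of the vidicon
        field = [[pixel[color] for pixel in row] for row in snapshot]
        yield color, field
```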
But there was a drawback. If the color component frames were sent one after the other instead of in a combined signal, then it would take three frames from the camera (one red, one green, and one blue) to produce just one color frame on the television screen. That meant the effective color frame rate was one-third of the actual frame rate. It also required equipment on the ground to assemble the color components into a single frame very quickly.
At the transmission rate of 30 frames per second, that would equate to only 10 color frames per second after combining, which is too slow for viewing: to the viewer the motion would have a jerky, stop-motion quality, not to mention a noticeable flicker. The frames were therefore recombined according to a rolling "window". The red, green, and blue frames of one set were combined to form a finished frame. Then the red frame was dropped from that set, the green and blue frames were combined with the red frame from the next set, and so forth. This way each color component frame could be used three times, and the decoded frame rate was back up to 30 frames per second.
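A minimal sketch of that rolling-window recombination, under the same invented data structures as the camera-side sketch above: each incoming field replaces the stored field of its own color, and once one field of each color has arrived, a complete color frame is emitted for every new field, so the output rate matches the incoming field rate and each field contributes to three successive output frames.

```python
# Sketch of the ground-side rolling "window": keep the most recent field of
# each color, and emit a full color frame every time a new field arrives
# (once all three colors have been seen at least once).

def recombine(field_stream):
    """Turn (color, field) pairs back into full color frames."""
    latest = {}                              # most recent field of each color
    color_frames = []
    for color, field in field_stream:
        latest[color] = field
        if len(latest) == 3:                 # red, green, and blue all present
            color_frames.append(dict(latest))
    return color_frames
```

Feeding the output of field_sequential_stream into recombine produces one color frame per incoming field after the first complete set, which is the behavior described above; and because the three fields in any frame were captured at slightly different instants, fast motion produces the colored fringing described next.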
If you freeze a frame of the color television coverage during an abrupt movement of the camera or subject, you can see a sort of rainbow effect where each color component captures the object in a slightly different position. The object has moved in the time it takes the color wheel to move to the next position.