If I may add and update a few things:
Film is 24 frames per second.
That's the prevailing standard, but hardly the only one. And it's only the capture rate. Movie theaters display at 48 or 72 Hz: each frame is flashed two or three times (not unlike how 120, 240 etc. Hz TV sets work), but that is the real display rate. If film were actually shown at a 24 Hz flash rate, the flicker would be unpleasant to watch, and people with photosensitive epilepsy could have seizures at that rate.
Video is typically 30 frames per second...
Most TV standards use a field rate of 50 or 59.94 Hz (60 Hz for monochrome), and a frame rate of 25 or 29.97 Hz (30 Hz for monochrome). The exception in TV broadcasting is 720p, which has 50 or 59.94 full frames per second available to it. Streaming video and some satellite broadcasts can be 1080p, but that is still rare. No matter what, there's a new and different image every 1/50th to 1/60th of a second.
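To put numbers to the odd-looking 29.97/59.94 figures, here's a minimal Python sketch of the arithmetic; the constants are the standard ones, but the variable names and comments are mine.

    # Rough arithmetic behind the common broadcast rates.
    MONO_FIELD_RATE_HZ = 60.0                 # original black-and-white NTSC field rate
    COLOR_FIELD_RATE_HZ = 60000 / 1001        # ~59.94 Hz; the color standard nudged the
                                              # rate down by 0.1% to keep the chroma and
                                              # sound carriers from beating visibly
    PAL_FIELD_RATE_HZ = 50.0

    # Two interlaced fields make one frame.
    print(MONO_FIELD_RATE_HZ / 2)             # 30.0
    print(round(COLOR_FIELD_RATE_HZ / 2, 3))  # 29.97
    print(PAL_FIELD_RATE_HZ / 2)              # 25.0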
Video tape, in the old NTSC TV system the U.S. used in the '70s anyway, resolved an image from three primary colors, red, green and blue.
No, the tape did not do color processing. Various methods of encoding color information were used: direct color (quadrature encoding, used by Quadruplex, Type B and Type C) was the best quality and the most costly; color-under (used by U-matic, Betamax, VHS and Hi8) was the cheapest and the worst. So-called "component analog tape" formats (MII and Betacam) were not component in the traditional sense; they offered only nearly-direct-color quality in smaller packages at lower initial prices, but were fussy and required constant maintenance.
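For anyone wondering what "quadrature encoding" actually means in practice, here's a toy Python sketch of the principle: two color-difference signals share one subcarrier, 90 degrees apart. The sample rate, subcarrier value and signal levels here are illustrative, not a format spec.

    import numpy as np

    fs = 10_000_000                    # demo sample rate, 10 MHz
    f_sc = 3_579_545                   # NTSC color subcarrier, roughly 3.58 MHz
    t = np.arange(0, 0.0001, 1 / fs)

    i_signal = 0.3 * np.ones_like(t)   # "I" color-difference component (held constant)
    q_signal = 0.1 * np.ones_like(t)   # "Q" color-difference component

    # Quadrature modulation: both components ride the same subcarrier, 90 degrees apart.
    chroma = (i_signal * np.cos(2 * np.pi * f_sc * t)
              + q_signal * np.sin(2 * np.pi * f_sc * t))

    # A direct-color deck records this chroma at the full subcarrier frequency;
    # a color-under deck heterodynes it down to a few hundred kHz first, which is
    # exactly where the quality goes.
    print(chroma[:4])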
videotape formats like Betacam were introduced, which provided much greater color fidelity
See above. Direct color was king, easily meeting or exceeding NTSC standards. The myth that component is inherently better quality than composite is completely false and misleading. The SMPTE Type C 1" helical scan videotape format (introduced 1976) was king, but not because it offered any better color fidelity than properly maintained Quadruplex decks. "Quad" was the broadcast standard because, unlike helical scan formats, it put out a video signal that was stable enough to broadcast directly, without any time base correction. The microcomputer boom and the falling RAM prices that it brought made digital TBCs more cost-effective by 1980, and the superior tape handling features of Type C (fast wind, slo-mo, still frame etc.) made Type C the gold standard well into the '90s.
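For readers who never had to rack one up, a digital TBC is conceptually just a RAM line store sitting between the jittery tape and the stable station reference. A minimal Python sketch of that idea, with made-up jitter figures and an arbitrary buffer depth:

    import random

    LINE_PERIOD_US = 63.556      # nominal NTSC line period in microseconds
    playback_clock = 0.0         # when lines actually come off the helical-scan tape
    reference_clock = 0.0        # the rock-steady house reference
    line_store = []              # the RAM that got cheap around 1980

    for line in range(12):
        # Input side: each line arrives a little early or late off the tape.
        playback_clock += LINE_PERIOD_US + random.uniform(-2.0, 2.0)
        line_store.append(line)

        # Output side: once a few lines are buffered, read them out on the
        # stable reference clock, regardless of how they arrived.
        if len(line_store) > 3:
            out = line_store.pop(0)
            reference_clock += LINE_PERIOD_US
            print(f"line {out}: broadcast at {reference_clock:8.2f} us on the house clock")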
When it comes to home recording, VHS HQ and S-video, with pseudo-component cables that carried the baseband Y signal separately from the color-under signal, were at best a kludge (comparable to "dub cables" for U-matic), and simply didn't compare to video standards used for television production. Betacam and MII made possible the first self-contained ENG camcorders, but they were never really ready for prime time. They were used for ENG and commercials in the '90s, and were quickly replaced by D-2, which used composite video encoding.
TVs and transmission technology simply weren't as good back then as they are today.
That's half true. TV transmission was always first class. It was the TV sets themselves that lagged behind. By 1990 or so, the average new TV set was able to resolve the full NTSC signal that it received, thanks to many improvements in components and manufacturing in the '80s.
Saticon tubes and relatively fuzzy lenses of the video cameras of that day compared to film
Straw man. The '70s was when the old image orthicon cameras made by RCA were giving way to the new vidicon cameras. But it was the Philips Plumbicon camera that was by far the most popular studio upgrade in the '70s. In the ENG and cable markets, Ikegami was the desired brand of compact video camera. (One chief engineer who I had the pleasure of working under was on the team that invented the "minicam" while working for NBC/RCA, but used Ikegami cameras.) In the late '80s Sony and its Saticon began to gain share, primarily in the lower end broadcast markets. By that time CCD sensors were ready for the commercial market, and the vacuum tube camera's days were numbered. I still maintain that you haven't really worked in TV broadcasting if you haven't registered a 3-tube camera. ;)
As for lenses, Canon and Fujinon lenses were hardly "fuzzy"! Most broadcast cameras were fitted with zoom lenses by then (supplanting the old turret system of prime lenses), which aren't as picky-purist "good" as prime lenses. But they were more than good enough for NTSC, PAL and SECAM broadcasting. Film school sophomores can argue the finer points of lens effects, but the average studio camera lens then was little different from the ones used today for HD and 4K broadcasting.
Video, due to its narrow contrast range in the '70s, required completely flat lighting
The dynamic range of NTSC has always been from 7.5 to 100 IRE. That has never changed. Film snobs who behave in a willfully ignorant manner can succeed in failing at TV lighting, but that's their personal problem. Those who have learned how to use gamma to their advantage are doing quite well, be it in film transfer or stage lighting.
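As a rough illustration of working inside that fixed window, here's a small Python sketch of gamma pre-correction mapping linear scene levels into the 7.5-100 IRE range; the gamma exponent and scene values are example numbers, not anyone's spec.

    BLACK_IRE = 7.5     # setup / pedestal
    WHITE_IRE = 100.0
    GAMMA = 1 / 2.2     # example camera pre-correction exponent

    def scene_to_ire(linear_level):
        """Map a linear scene luminance (0.0-1.0) into the 7.5-100 IRE window."""
        corrected = max(0.0, min(1.0, linear_level)) ** GAMMA
        return BLACK_IRE + corrected * (WHITE_IRE - BLACK_IRE)

    # Note how the curve lifts the shadows: that's the headroom a good lighting
    # director (or film-transfer colorist) learns to exploit.
    for scene in (0.02, 0.10, 0.18, 0.50, 1.00):
        print(f"scene {scene:4.2f} -> {scene_to_ire(scene):6.1f} IRE")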
Stop Motion is filmed.
An archaic notion. Plenty of stop motion is done using still cameras. Some of the best stop motion work that I've seen was done with a DSLR shooting stills rather than video, edited in raw format at full resolution, and then downscaled to HD for release.
The "big three" networks, ABC, NBC and CBS of the '70s, probably used all different types of cameras and recording equipment.
Yes, NBC was owned by RCA, and so it had a monopoly relationship for certain equipment. There are still a fair number of RCA antennas on top of tall towers and buildings, broadcasting DTV just fine. CBS also OEMed some of its gear; I've seen the "Columbia Broadcasting System" name and logo on rackmount gear in TV stations, though not even close to the range of gear that RCA made. Ampex and later Sony were the big names in broadcast VTRs, with their VPR and BVH lines, respectively. Sony broadcast machines always started with a "B".
Film lasts longer than videotape.
Both have their own manners of degradation. Both have their own methods of restoration. Neither has magical properties, like everlasting life.
So why were filmed show opens and closes "fuzzy"? Generational tape loss, of course. That, and loss of oxide with use. Before nonlinear editing, TV shows were edited with real tape. Guess which part of the tape never changed from episode to episode? The open and close. Yes, it's that simple.
How can you tell if the originating material on a videotape was shot on film / at 24 fps? If you have a deck with incremental frame advance, slowly turn the jog dial to step through the fields. If the motion stops for two, then three, then back to two etc., the material was at 24 fps at one time or another. That's no guarantee that it wasn't shot on video and kinescoped, or shot on film at rates other than 24, but it's a good rule of thumb.
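The same jog-dial test can be written down in code. Here's a minimal Python sketch that counts how long the picture holds still from field to field; the field labels are fabricated for the demo, and a repeating 3-2 run pattern is the classic 3:2 pulldown signature of film-originated material.

    def repeat_pattern(fields):
        """Return run lengths of consecutive fields that show the same picture."""
        runs, count = [], 1
        for prev, cur in zip(fields, fields[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append(count)
                count = 1
        runs.append(count)
        return runs

    # Four film frames A, B, C, D spread over ten video fields by 3:2 pulldown.
    fields = ["A", "A", "A", "B", "B", "C", "C", "C", "D", "D"]
    print(repeat_pattern(fields))   # [3, 2, 3, 2] -> 24 fps material at some point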