Vacuum Tube Memory

Back in the 1940s, ENIAC, the first general-purpose electronic digital computer, used a very small amount of memory made from vacuum tubes. In fact, the entire computer was built from tubes. Its accumulators could hold twenty 10-digit decimal numbers.

ENIAC (public domain photo)

Vacuum tubes have several drawbacks (though keep in mind they were revolutionary at the time), primarily the amount of power they consume and their reliability, or lack thereof. Colossus was a significant vacuum tube computer that became operational in 1943 and was used at Bletchley Park in the British WWII codebreaking effort. When operating it consumed 8.5 kW (15 kW for the Mk2) and contained 1,600 tubes (2,400 in the Mk2). Typically a tube would fail every couple of days, and the failed tube then had to be found and replaced, which took about 15 minutes.

Colossus (public domain photo)

Mercury Delay Line Memory

Later, but still in the 1940s, J. Presper Eckert developed acoustic delay line memory. This consisted of a glass tube filled with mercury, with a crystal transducer at each end. The transducer at one end was vibrated with a signal derived from the serialized data to be stored, causing a sound wave to ripple down the mercury in the tube to the transducer at the other end. That transducer converted the wave back into the electrical signal corresponding to the data. The output then had to be fed back into the transmitting transducer, continuously refreshing the stored data.
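That recirculating refresh scheme can be sketched as a toy Python model (the class and its names are purely illustrative, not any real hardware interface):

```python
from collections import deque

class DelayLine:
    """Toy model of a mercury delay line: bits circulate as acoustic
    pulses and are re-injected at the far end to refresh the data."""

    def __init__(self, bits):
        # The line holds a fixed-length train of pulses in transit.
        self.line = deque(bits)

    def tick(self):
        # One pulse arrives at the receiving transducer...
        bit = self.line.popleft()
        # ...is reshaped electronically, and is fed back into the
        # transmitting transducer to keep the data circulating.
        self.line.append(bit)
        return bit

    def read_word(self):
        # Reading means waiting one full circulation and tapping the
        # pulses as they pass; the data stays in the line afterward.
        return [self.tick() for _ in range(len(self.line))]

dl = DelayLine([1, 0, 1, 1, 0])
print(dl.read_word())  # [1, 0, 1, 1, 0]
print(dl.read_word())  # same again: the refresh loop preserved it
```

Note the key property: the only way to get a bit out is to wait for it to come around, which is why delay line memory was sequential rather than random access.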

Photo by Ed Thelen CC BY-SA 3.0

Cathode Ray Tube Storage

Remember when TVs and monitors weren't flat? I remember having a 21" screen that was deeper front to back than it was wide. But I digress. Somewhat. The next evolution of storage used tubes very much like those CRT screens, with the data stored as a pattern of charge on the face of the tube. The charge that made the phosphor glow was read back by other circuits. The data persisted for only a fraction of a second, so, like delay line memory, it had to be continuously refreshed. The photo below shows a Williams tube, one of the designs in use.

Magnetic Core Memory

In the late 40s several researchers, notably Jay Forrester of MIT, developed magnetic core memory. This memory was made up of tiny magnetic rings called cores (hence the name of the technology). Wires threaded through the cores allow each one to be magnetized in either a clockwise or counterclockwise direction, and the direction of magnetization represents one bit of data: a 0 or a 1. Interestingly, reading a bit always leaves the core holding a 0 afterward, so if a 1 was read, it had to be rewritten to preserve the information. Wikipedia gives a far more detailed description of the technology. Magnetic core did have the great advantage that information written into it stayed there, unlike previous technologies that required continuous refreshing.
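The destructive-read-plus-rewrite cycle can be sketched in a few lines of Python (a toy model; the names are mine, not any real controller's API):

```python
class CoreMemory:
    """Toy model of core memory: reading a core always drives it to 0,
    so the controller must rewrite a 1 after sensing it."""

    def __init__(self, size):
        self.cores = [0] * size  # one bit per core

    def write(self, addr, bit):
        self.cores[addr] = bit

    def read(self, addr):
        # Driving the core toward 0 produces a sense pulse only if it
        # flipped, i.e. only if it previously held a 1.
        bit = self.cores[addr]
        self.cores[addr] = 0      # the read itself destroys the bit
        if bit:
            self.write(addr, 1)   # rewrite cycle restores the data
        return bit

mem = CoreMemory(8)
mem.write(3, 1)
print(mem.read(3))  # 1
print(mem.read(3))  # still 1, thanks to the rewrite
```

Real core memory did this rewrite in hardware as part of every read cycle, which is why core read cycles were quoted as read/restore times.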

Photo by Konstantin Lanzet CC BY-SA 3.0

Magnetic core was the standard memory technology from the 50s into the 70s. It lost popularity only when semiconductor memory, initially developed by Intel, arrived; semiconductor memory was much smaller and cheaper. Even today the technology has a lasting impact. Where do you think the term "core dump" comes from?

It's interesting to look back at the early computers and their technologies.  Remember those photos of ENIAC and Colossus the next time you grumble about soldering an SMT MCU :)

This guide was first published on May 02, 2018. It was last updated on Mar 08, 2024.

