Transcript
Before there was a glowing screen on every desk and in every pocket, calculation was drudgery. In 1613 Richard Brathwait used the word computer for a person, not a machine, and by 1943 most of those human computers were women, hired to grind through figures by hand because they could be paid less.
The first great leap came in London in 1822. Charles Babbage, an English mathematician and inventor, proposed his Difference Engine to the Royal Astronomical Society, then in 1833 imagined something far bolder: the Analytical Engine. It would read punched cards adapted from the Jacquard loom, store numbers in memory, branch, loop, print results, even ring a bell. It was the first design for a general-purpose programmable computer.
Babbage never finished it. The British government withdrew its funding, every part had to be made by hand, and the machine demanded thousands of them. His son Henry Babbage completed only a simplified mill in 1888 and demonstrated it in 1906. The idea was sound. The age was not ready.
The modern theory arrived before the modern machine. In 1936 Alan Turing described a universal computing machine, and in 1937 Claude Shannon showed that Boolean logic could drive switching circuits. Then war forced theory into hardware. In Berlin, Konrad Zuse built the Z3 in 1941, a working electromechanical programmable digital computer. At Bletchley Park, Tommy Flowers delivered Colossus in 1944 to attack German Lorenz messages. In Pennsylvania, John Mauchly and J. Presper Eckert brought ENIAC into full operation in 1945, vast, hungry for power, and far faster than anything before it.
Yet these giants were awkward beasts. To change a task, engineers had to rewire plugs and switches. The real turning point came in Manchester on 21 June 1948, when Frederic C. Williams, Tom Kilburn and Geoff Tootill ran the first program on the Manchester Baby. It stored instructions in memory. A machine could now keep its own method as well as its numbers.
From there, the spread was swift. Ferranti delivered the Mark 1 in 1951 as the first commercially available general-purpose computer, while J. Lyons & Company used LEO I that same year for routine office work. Back in 1947, John Bardeen, Walter Brattain and William Shockley had invented the transistor at Bell Labs, and by the mid-1950s transistors were replacing valves. Then Jack Kilby and Robert Noyce squeezed many components onto a single integrated circuit. In the early 1970s, Federico Faggin, Ted Hoff, Masatoshi Shima and Stanley Mazor put an entire processor onto one chip: the Intel 4004.
That shrinking changed the world. Computers moved from laboratories to offices, then homes, then pockets. They became the control systems in factories and microwave ovens, the engines of personal computers and smartphones, and the hidden machinery of the Internet linking billions of users. A modern system on a chip, about the size of a coin, can hold billions of transistors and still sip only a few watts.
Image: M. Weik, Public domain · AI-narrated · Drawn from Wikipedia · CC BY-SA 4.0






