How the chase changed

Computers and π

When machines entered the π hunt in 1949, they didn't invent a new way to find π. They ran the same paper-era formulas, just faster. The revolution came later — and its roots still reach back two thousand years.

The record, decade by decade

Note the scale of the jumps below: a linear chart would crush everything before 2000 into invisible dust.
  • ~250 BC
    3 digits · Archimedes · Polygons, 96 sides, by hand
  • 1596
    35 digits · Van Ceulen · Polygons, 2⁶² sides, by hand
  • 1706
    100 digits · Machin · arctan series (by hand)
  • 1873
    707 digits · Shanks (W.) · Machin-like (by hand; 527 correct)
  • 1949
    2,037 digits · ENIAC team · Machin's formula on ENIAC
  • 1961
    100,265 digits · Shanks & Wrench · IBM 7090, Störmer + Gauss arctan
  • 1973
    1 million digits · Guilloud & Bouyer · CDC 7600, Machin-like
  • 1989
    1 billion digits · Chudnovsky bros. · Home-built supercomputer, Chudnovsky series
  • 2002
    1.2 trillion digits · Kanada team · HITACHI SR8000, arctan pair
  • 2019
    31.4 trillion digits · Emma Haruka Iwao · Google Cloud, y-cruncher (Chudnovsky)
  • 2022
    100 trillion digits · Iwao et al. · Google Cloud, y-cruncher
  • 2024
    105 trillion digits · StorageReview (Ranous) · y-cruncher on a single server

Selected records. The full chronology has dozens more entries between these.

The pencil era (~250 BC – 1945)

For two thousand years, π was a test of human stamina. Archimedes pinned it down to three correct digits with polygons around 250 BC. Ludolph van Ceulen spent much of his life grinding out 35 digits the same way in the 1590s. Nearly three centuries after that, William Shanks spent 15 years by hand for 707 digits — and 180 of those turned out to be wrong. Progress was linear, slow, and fragile.
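In modern notation, Archimedes' polygon doubling fits in a few lines. The Python sketch below is an illustration rather than his actual procedure (he worked with rational bounds on the square roots); starting from hexagons, each doubling updates the circumscribed bound by a harmonic mean and the inscribed bound by a geometric mean:

```python
from math import sqrt

# Bounds on pi from regular polygons around/inside a unit circle.
# a = semi-perimeter of the circumscribed polygon (upper bound on pi)
# b = semi-perimeter of the inscribed polygon    (lower bound on pi)
a, b = 2 * sqrt(3), 3.0          # hexagons: 6 sides
sides = 6
for _ in range(4):               # 6 -> 12 -> 24 -> 48 -> 96 sides
    a = 2 * a * b / (a + b)      # circumscribed 2n-gon (harmonic mean)
    b = sqrt(a * b)              # inscribed 2n-gon (geometric mean)
    sides *= 2

print(sides, b, a)               # 96-gon: 3.1410... < pi < 3.1427...
```

Four doublings land on Archimedes' 96-sided bounds, the source of his famous 3 10/71 < π < 3 1/7.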

The first digital run (1949)

ENIAC — a room-sized vacuum-tube computer at the Ballistic Research Laboratory — ran a π calculation for 70 hours in September 1949, a run suggested by John von Neumann and carried out by George Reitwiesner's team. It produced 2,037 digits. The remarkable part: the program used John Machin's 1706 formula, unchanged. A 243-year-old pencil-and-paper trick was the first real program ever written for π.
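Machin's formula, π/4 = 4·arctan(1/5) − arctan(1/239), is simple enough to sketch. Here is a toy Python version using plain integers scaled by a power of ten; the helper names are mine, and the actual ENIAC program of course looked nothing like this:

```python
def arctan_recip(x, one):
    """arctan(1/x), scaled by `one`, via the Gregory series in pure integers."""
    total = term = one // x
    x_sq, n, sign = x * x, 3, -1
    while term:
        term //= x_sq                  # next odd power of 1/x
        total += sign * (term // n)
        n, sign = n + 2, -sign
    return total

def machin_pi(digits):
    """First `digits` decimals of pi via Machin's 1706 formula."""
    one = 10 ** (digits + 10)          # 10 guard digits against truncation
    pi = 4 * (4 * arctan_recip(5, one) - arctan_recip(239, one))
    return pi // 10 ** 10              # drop the guard digits

print(machin_pi(50))                   # 31415926535897932384...
```

The reason Machin's version beat a naive arctan series: both 1/5 and 1/239 shrink fast under squaring, so each term buys more than a digit.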

Paper formulas on silicon (1950s–1970s)

Each new machine ran the same basic plan: pick a Machin-like arctan formula, code it in whatever the new iron understood, run it longer. 1961: 100,000 digits on an IBM 7090. 1973: one million digits on a CDC 7600. Different hardware, same underlying idea Machin had in 1706.

A new kind of math (1976–1990)

In 1976, Eugene Salamin and Richard Brent independently published an algorithm (based on 19th-century work by Gauss and Legendre) that doubled the number of correct digits every iteration. Then the Borwein brothers pushed that to quartic and nonic convergence. The hand-era formulas were good for computers; these were built for computers from the start.
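A minimal sketch of the Gauss–Legendre (Salamin–Brent) iteration, using Python's decimal module for illustration; real implementations manage precision far more carefully, but the digit-doubling behavior is already visible here:

```python
from decimal import Decimal, getcontext

def gauss_legendre_pi(digits, iterations=8):
    """Salamin-Brent / Gauss-Legendre: correct digits roughly double per step."""
    getcontext().prec = digits + 10            # working precision + guard
    a, b = Decimal(1), Decimal(2).sqrt() / 2   # a0 = 1, b0 = 1/sqrt(2)
    t, p = Decimal(1) / 4, Decimal(1)
    for _ in range(iterations):
        a_next = (a + b) / 2                   # arithmetic mean
        b = (a * b).sqrt()                     # geometric mean
        t -= p * (a - a_next) ** 2
        a, p = a_next, 2 * p
    return (a + b) ** 2 / (4 * t)

print(gauss_legendre_pi(50))                   # 3.14159265358979...
```

Eight iterations already overshoot 50 digits by a wide margin; a Machin-style series would need dozens of terms for the same accuracy.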

Ramanujan's ghost (1987–today)

In 1914, the self-taught Indian mathematician Srinivasa Ramanujan published a family of π series that converge astonishingly fast. Nobody could use them practically — until the Chudnovsky brothers, working from a Manhattan apartment with a homebuilt supercomputer made of mail-order parts, turned one of Ramanujan's ideas into the Chudnovsky algorithm in 1987. It adds roughly 14 correct digits per term, and it is still the formula behind every π record set today, nearly 40 years later.
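The series itself fits in a few lines. Below is a naive Python sketch; record software evaluates the same series by binary splitting rather than term-by-term division, so treat this purely as an illustration of the ~14 digits per term:

```python
from decimal import Decimal, getcontext
from math import factorial

def chudnovsky_pi(digits):
    """pi via the Chudnovsky series: roughly 14.18 correct digits per term."""
    getcontext().prec = digits + 10
    total = Decimal(0)
    for k in range(digits // 14 + 2):
        num = factorial(6 * k) * (13591409 + 545140134 * k)
        den = (factorial(3 * k) * factorial(k) ** 3
               * (-262537412640768000) ** k)       # (-640320**3)**k
        total += Decimal(num) / Decimal(den)
    return 426880 * Decimal(10005).sqrt() / total

print(chudnovsky_pi(50))                           # 3.14159265358979...
```

Five terms of this sum are enough for 50 digits — the same convergence rate that, scaled up with binary splitting and FFT multiplication, drives every modern record.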

A gift from nowhere (1995)

Bailey, Borwein, and Plouffe stunned everyone with a formula that can compute the n-th hexadecimal digit of π without computing any of the digits before it. No one had thought such a thing was possible. It hasn't replaced Chudnovsky for bulk records, but it lets researchers spot-check digits deep into the trillions without redoing the whole calculation.
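A straightforward Python sketch of BBP digit extraction: modular exponentiation keeps only the fractional parts, so nothing before position n is ever computed. Floating-point precision limits this toy version to modest n; serious spot-checks use more careful arithmetic:

```python
def pi_hex_digit(n):
    """Hex digit n of pi's fractional part (n=0 gives the first, '2'), via BBP."""
    def partial(j):
        # fractional part of the sum over k of 16^(n-k) / (8k + j)
        s = 0.0
        for k in range(n + 1):
            s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
        k, term = n + 1, 1.0
        while term > 1e-17:                    # a few tail terms with k > n
            term = 16.0 ** (n - k) / (8 * k + j)
            s = (s + term) % 1.0
            k += 1
        return s
    frac = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
    return int(frac * 16)

# pi = 3.243F6A88... in hexadecimal
print([pi_hex_digit(i) for i in range(8)])     # [2, 4, 3, 15, 6, 10, 8, 8]
```

The three-argument `pow` is the whole trick: 16^(n−k) mod (8k+j) stays small no matter how deep n is, which is why the digits before position n never need to exist.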

Today, and the ceiling (2010s–now)

Modern records aren't really about math anymore — they're about storage, RAM, disk throughput, and FFT-based multiplication of enormous integers. Emma Haruka Iwao computed 31.4 trillion digits on Google Cloud in 2019, 100 trillion in 2022, and Jordan Ranous at StorageReview pushed the record past 105 trillion in 2024 on a single server. The engine is the open-source y-cruncher; under the hood it's still the Chudnovsky series.

The through-line

It's tempting to think of the computer era as a clean break from the Archimedes-and-Machin past. It isn't. Every single modern record stands on a paper-era shoulder:

  • ENIAC (1949) ran Machin's 1706 arctan formula.
  • Every record through 1973 used some Machin-like arctan combination — descendants of the same idea Machin had in 1706.
  • Salamin–Brent (1976) is built from the arithmetic–geometric mean, a construction Gauss worked out around 1799.
  • The Chudnovsky algorithm (1987), which every current record uses, is a descendant of Ramanujan's series from 1914.
  • Even the multiplication underneath all of this — the FFT-based arithmetic used to multiply numbers with trillions of digits — is built from Fourier analysis, and Fourier analysis is, at heart, the math of circles. π calculating itself with its own geometry.
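To make that last point concrete, here is a toy illustration of FFT-based multiplication: treat each number's decimal digits as polynomial coefficients, convolve them with forward and inverse transforms, then carry. Production code like y-cruncher uses heavily optimized number-theoretic and floating-point transforms, nothing like this sketch:

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even, odd = fft(a[0::2], invert), fft(a[1::2], invert)
    sign = 2j if invert else -2j
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + w, even[k] - w
    return out

def multiply(x, y):
    """Multiply nonnegative ints by convolving their decimal digits with FFTs."""
    a = [complex(int(d)) for d in str(x)[::-1]]    # least significant digit first
    b = [complex(int(d)) for d in str(y)[::-1]]
    n = 1
    while n < len(a) + len(b):
        n *= 2                                     # pad to a power of two
    fa = fft(a + [0j] * (n - len(a)))
    fb = fft(b + [0j] * (n - len(b)))
    conv = fft([u * v for u, v in zip(fa, fb)], invert=True)
    total = 0
    for c in reversed(conv):                       # Horner form; carries fall out
        total = total * 10 + round(c.real / n)
    return total

print(multiply(31415926, 53589793) == 31415926 * 53589793)
```

The payoff is asymptotic: schoolbook multiplication of two d-digit numbers costs on the order of d² digit operations, while FFT convolution costs roughly d·log d — the difference between impossible and 75 days at trillions of digits.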

Archimedes' insight — that a hard curved problem could be pinned down by easier straight pieces, and that the error could be squeezed from both sides — is still there in how modern records are verified: a second, unrelated formula (typically a BBP-style digit-extraction formula) is run to confirm the answer. Two different methods converging on the same digits. A 2,300-year-old pattern.

What stops us from going further?

Not math. The Chudnovsky series keeps working. What stops further records is physical: you need enough RAM to hold the partial results, enough disk to swap them, enough bandwidth to keep the FFT fed, and enough patience — the 105-trillion-digit run took 75 days of continuous computation.

There is no known scientific application that needs any of these digits. We calculate them because we can, because they stress-test hardware in ways nothing else quite does, and because chasing π further into the infinite is a fun thing for humans to keep doing.

Sources & further reading