|Jeffrey O. Shallit
professor of computer science, University of Waterloo
In theoretical computer science, the greatest innovation is the realisation that algorithms are mathematical objects, and can be rigorously analysed in terms of their consumption of scarce resources, including space, time, and randomness.
One of the first to analyse an algorithm was the French mathematician Pierre-Joseph-Étienne Finck (1797–1870). In an 1841 book, he showed that the Euclidean algorithm for computing the greatest common divisor of two integers uses a number of division steps that is linearly bounded in the number of digits of the inputs. Finck’s work is all but forgotten today, but I discussed it in a paper in Historia Mathematica in 1994.
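Finck’s bound is easy to observe empirically. The sketch below (the function name `gcd_steps` is mine, not from Finck or the source) runs the Euclidean algorithm while counting division steps; the count grows only linearly with the number of digits of the inputs, with Fibonacci-number pairs giving the worst case.

```python
def gcd_steps(a, b):
    """Euclidean algorithm: return (gcd, number of division steps).

    Each loop iteration performs one division with remainder,
    the operation whose count Finck bounded linearly in the
    number of digits of the inputs.
    """
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return a, steps
```

For example, `gcd_steps(252, 105)` returns `(21, 3)`, while the consecutive Fibonacci numbers 13 and 8 force the most steps for their size: `gcd_steps(13, 8)` returns `(1, 5)`.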
In more recent times, much of the credit for the development of algorithm analysis belongs to Donald Ervin Knuth (b. 1938), who in a series of books entitled The Art of Computer Programming popularised many of the tools now used routinely to analyse algorithms. Almost overnight, algorithm analysis changed from a purely engineering approach involving coding and testing into a rigorous branch of mathematics in which the challenge is proving theorems.