Alan Turing would have been 102 years old on June 23, 2014.
A British mathematician, logician, cryptanalyst, philosopher, computer scientist, mathematical biologist, and marathon runner, Turing had brilliant insights into the relationship between logic and computation that reveal the stark, abstract essence of what a computing machine — and indeed, mathematics itself — can and cannot do.
In 1900, mathematician David Hilbert posed 23 then-unsolved problems to the world’s mathematical community. The second of these was: “prove that the axioms of arithmetic are consistent.” That is, Hilbert asked whether there is a set of axioms — basic, fundamental “nuggets” of truth — from which all mathematical statements can be proved, without ever yielding a contradiction such as 1 = 0. In other words, is the world of mathematics truly (1) complete, (2) consistent and (3) decidable?
Answering the three aspects of this question can be approached from two directions: one based on pure logic, the other on the theory of computation.
In terms of logic, the Austrian logician, mathematician and philosopher Kurt Gödel demonstrated in 1931, with his two “incompleteness theorems,” that any consistent logical system powerful enough to express arithmetic contains true propositions that, surprisingly, cannot be proved from that system’s axioms. Thus, (1) and (2) are answered in the negative.
In terms of computation, (3) — whether mathematics is decidable — takes the form of the “Decision Problem,” or “Entscheidungsproblem” in Hilbert’s German. The answer came in the mid-1930s with the independent work of Alonzo Church and Alan Turing, which led to the so-called Church-Turing thesis.
Turing imagined an abstract machine he called the “automatic machine,” or “a-machine” — what we now call a Turing machine. You can envision it as an infinite tape divided into cells — picture a roll of toilet paper with one square per cell — serving as the machine’s memory. Each cell can hold a single symbol, in the simplest case a “0” or a “1,” or be left blank. The machine also has a “read-write head” that can read, write or erase the symbol in the cell beneath it, and that can move left or right along the tape.
The Turing machine’s actions are determined by (1) the machine’s current state, (2) the symbol in the cell currently scanned by the head, and (3) a table of transition rules — the machine’s “program.” Starting from an initial state, the machine applies one rule after another until it reaches a state and symbol for which no rule is defined, whereupon it halts.
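The state-plus-symbol lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not anything from Turing’s paper: the sparse-dictionary tape, the “_” blank symbol, and the function and rule names are all assumptions made for the example.

```python
def run_turing_machine(rules, tape, state="start", head=0, max_steps=1000):
    """Apply transition rules until no rule matches the current
    (state, symbol) pair, whereupon the machine halts."""
    tape = dict(enumerate(tape))          # sparse tape: cell index -> symbol
    for _ in range(max_steps):
        symbol = tape.get(head, "_")      # "_" stands for a blank cell
        if (state, symbol) not in rules:  # no applicable rule: halt
            break
        write, move, state = rules[(state, symbol)]
        tape[head] = write                # write the new symbol
        head += 1 if move == "R" else -1  # move the head one cell
    return "".join(tape[i] for i in sorted(tape))

# Example program: invert every bit, halting at the first blank cell.
invert = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
}
print(run_turing_machine(invert, "1011"))  # prints 0100
```

Note that the transition table is passed in as ordinary data — the same simulator runs any program you hand it, which is the germ of the universal-machine idea discussed next.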
Turing demonstrated that any “effective procedure” — what we would call a computer algorithm — can be executed by a Turing machine. Moreover, because a machine’s transition table is itself just data that can be written on a tape, a single Turing machine can simulate the workings of any other Turing machine (making it a Universal Turing Machine, or simply a “universal machine”), just as today’s computers can run one operating system, or one virtual computing environment, inside another.
However, in 1936 Turing proved that what became known as the “Halting Problem” has no solution: no algorithm can decide, for an arbitrary computer program, whether that program will ever finish running or will continue to run forever. Thus the Halting Problem — and with it the Decision Problem of mathematics, (3) above — is undecidable.
Just as Kurt Gödel’s theorems show that a logical system cannot prove certain true statements, so Alan Turing showed that any program purporting to decide whether other programs halt can be turned against itself: feed it a program built to do the opposite of whatever it predicts, and it is forced into a wrong answer.
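The self-defeating construction can be sketched concretely. In this hedged illustration — the names `make_paradox` and `pessimist` are inventions for the example, not standard functions — a candidate halting decider is handed a program built to contradict it:

```python
def make_paradox(halts):
    """Given a candidate decider `halts(program)`, build a program that
    does the opposite of whatever `halts` predicts about it."""
    def paradox():
        if halts(paradox):     # decider says "this program halts"...
            while True:        # ...so loop forever, proving it wrong
                pass
        # decider says "this program loops forever", so halt immediately
    return paradox

# Try a decider that always answers "it loops forever":
def pessimist(program):
    return False

p = make_paradox(pessimist)
p()  # returns at once -- the pessimist was wrong about p
```

A decider that answers “it halts” fares no better: its paradox program loops forever. Every candidate decider is defeated the same way, which is why no correct one can exist.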
During World War II, Turing worked at Bletchley Park, a point midway between Oxford and Cambridge where the British Government Code and Cypher School (GC&CS) was situated. Turing led the group that deciphered Germany’s naval Enigma messages, using early electromechanical, computer-like devices. Winston Churchill reportedly said that Turing made the single greatest contribution to Allied victory in the war against Nazi Germany.
Following the war, Turing worked at Britain’s National Physical Laboratory, where he designed an early programmable computer, the Automatic Computing Engine (ACE). The word “engine” was tacked onto the name of the device in honor of Charles Babbage, whose 1837 design for a mechanical, programmable computer, the Analytical Engine, had nearly been lost to history.
Unfortunately, Britain’s Official Secrets Act precluded Turing from demonstrating that his ideas could be implemented. This allowed others having knowledge of his work (such as mathematician John von Neumann) to become famous with their own designs.
Moreover, toward the end of his life Turing was persecuted for being gay, and, increasingly frustrated and thwarted, he committed suicide in 1954.
These days, Turing is best known for practically founding the field of artificial intelligence with his “Turing Test”: If a human interrogator cannot distinguish the output of a computer program from that of a live human being, then the computer program can be said to be intelligent.
On June 7, 2014, at a University of Reading test held at Britain’s Royal Society, a Russian “chat bot” named Eugene Goostman was hailed as the first program to “officially” pass the Turing Test.
Richard Grigonis is an internationally known technology editor and writer. He was executive editor of Technology Management Corporation’s IP Communications Group of magazines from 2006 to 2009. The author of five books on computers and telecom, including the highly influential Computer Telephony Encyclopedia (2000), he was the chief technical editor of Harry Newton's Computer Telephony magazine (later retitled Communications Convergence after its acquisition by Miller Freeman/CMP Media) from its first year of operation in 1994 until 2003.
© 2024 Newsmax. All rights reserved.