I Think, Therefore…

Originally published in the Informanté newspaper on Thursday, 26 January, 2017

On 23 June 1912, in Maida Vale, London, a young man was born to a member of the Indian Civil Service of British India and the daughter of the chief engineer of Madras Railways. Young Alan showed early signs of genius: even at age 16, when he encountered Albert Einstein’s work, he not only understood it, but also figured out that Einstein was questioning Isaac Newton’s laws of motion, even though that was not explicit in the text.

Alan later studied at King’s College, Cambridge, where he excelled at mathematics. By 1935 he had been elected a fellow of King’s, and in 1936 he published a seminal paper, "On Computable Numbers, with an Application to the Entscheidungsproblem." Here he reformulated the limits of proof and computation via a simple hypothetical device that would become known as a Turing Machine. He proved not only that this “universal computing machine” could carry out any conceivable mathematical computation, provided it could be represented as an algorithm, but also that any such machine could perform, or emulate, the task of any other such machine. Alan Turing had provided the mathematical basis for computers. But he was not done.
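To make the idea concrete, here is a minimal sketch of such a machine in Python (my own illustration, not Turing’s original notation): a tape, a read/write head, and a finite table of rules, where everything the machine “knows” lives in that rule table.

# A minimal, illustrative Turing machine: a tape of symbols, a head that
# reads and writes one cell at a time, and a table of rules. This example
# rule table simply inverts a string of binary digits, then halts.

def run_turing_machine(rules, tape, state="start", blank="_", max_steps=10_000):
    # Run `rules` on `tape` (a list of symbols) until the machine halts.
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            return "".join(tape).strip(blank)
        symbol = tape[head] if head < len(tape) else blank
        if head >= len(tape):
            tape.append(blank)          # grow the tape on demand
        # Each rule maps (state, symbol) -> (symbol to write, move, next state)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head = head + 1 if move == "R" else max(head - 1, 0)
    raise RuntimeError("machine did not halt")

# Rule table: scan right, flipping 0s and 1s, and halt at the first blank.
invert_bits = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine(invert_bits, list("10110")))  # prints 01001

A universal machine is then simply one whose own rules read another machine’s rule table from the tape and follow it, much as the Python function above interprets whatever rule table it is given.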

Unfortunately, during the next few years, there was a spot of trouble with the Germans. He worked for His Majesty’s Government at Bletchley Park, in cryptanalysis, and developed a codebreaking machine that enabled the Allies to crack the Enigma code. This played a key role in the eventual Allied victory, and it is estimated that his work shortened the war by two years and saved more than 14 million lives.

After that spot of bother, Alan turned his mind back to the mathematics of computational devices. In 1950, he published a paper in Mind, titled “Computing Machinery and Intelligence.” Based on his previous paper, where he proved that digital computers are ‘universal,’ in that they can in theory simulate the behaviour of any other digital machine, Alan Turing posed the seminal question that would drive the imaginations of computer scientists for years to come. Since any one computer could imitate the behaviour of any other, he asked: "Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game, the part of B being taken by a man?"

In other words, can a computer, hidden from a judge so that the judge cannot immediately tell whether they are communicating with a person or a machine, convince that judge that it is human? Can a computer think? Or, as the case may be, act indistinguishably from the way someone who can think acts? And thus, the field of Artificial General Intelligence was born.
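To make the shape of the test concrete, here is a toy sketch in Python of the setup described above (the structure and the placeholder respondents are my own illustration, not anything from Turing’s paper): a judge exchanges text with two hidden parties and must then guess which one is the machine. All of the real difficulty, of course, hides inside whatever stands in for respond_machine.

import random

def respond_human(question):
    # A real person at the keyboard stands in for the hidden human contestant.
    return input(f"(answer as the hidden human) {question}\n> ")

def respond_machine(question):
    # A placeholder for whatever program is being tested.
    return "That is an interesting question. Why do you ask?"

def imitation_game(questions):
    # Hide the two respondents behind the anonymous labels A and B.
    responders = [respond_human, respond_machine]
    random.shuffle(responders)
    contestants = dict(zip("AB", responders))
    for question in questions:
        print(f"Judge asks: {question}")
        for label, respond in contestants.items():
            print(f"  {label}: {respond(question)}")
    guess = input("Which respondent is the machine, A or B? ").strip().upper()
    return contestants.get(guess) is respond_machine  # True if the judge guessed right

if __name__ == "__main__":
    judged_correctly = imitation_game(["What is your favourite poem, and why?"])
    print("The judge saw through it." if judged_correctly else "The machine passed.")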

Many people have tried in the intervening 65-plus years to invalidate his original proposal with various arguments, the most common of which I alluded to in the previous paragraph. John Searle, for example, proposed the following analogy – suppose a computer program is written that passes this test and appears to understand Chinese. Then suppose Searle himself is in a closed room with a book containing an English version of the program, along with sufficient materials to run the program manually. He could receive Chinese characters through a slot in the door, follow the program as written, and produce Chinese characters as output through the door. If the program could pass Turing’s test, so could he – but he still wouldn’t understand Chinese! And neither, ipso facto, would the computer.

And yet, this is the same problem we as people face. Given that we can only observe the behaviour of others, how can we be certain that they have minds that can think? Behaviour, as shown above, thus does not guarantee that a thinking mind exists – we can only ever be certain that our own minds exist. The Turing test, it seems, would not prove that a computer truly thinks, but for all practical intents and purposes, a machine that passes it would be indistinguishable from one that does…

Of course, simply knowing it’s possible does not make it easy to achieve. The limited computing power of the time kept it a mostly theoretical science. Yet as computational power grew, so too did the attempts to realise what Alan Turing had theorised. Trying to reach that elusive goal of Artificial General Intelligence in one go was soon abandoned, but research into its specific sub-fields has borne fruit.

Fields like natural language processing and machine translation opened up. Expert systems were created that, when given data, would follow expert reasoning to propose solutions to the problem at hand. Games were amongst the first to gain artificial intelligence: when the first commercially available computer was released in 1951, programs to play both chess and checkers were written for it almost immediately. It took a while to mature, naturally, but by 1997 a computer was able to beat the reigning world chess champion for the first time, and today even chess engines running on mobile phones can beat most human players.
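Those early game programs, and many of their successors, were built around the same basic idea: search the tree of possible moves, assume the opponent replies as strongly as possible, and pick the move with the best guaranteed outcome, an approach known as minimax. The sketch below is my own illustration on a deliberately tiny take-away game, not a reconstruction of any historical chess or checkers program.

def minimax(stones, maximizing):
    # Score a position in a toy take-away game: players alternately remove
    # 1-3 stones, and whoever takes the last stone wins (+1 for the maximizer).
    if stones == 0:
        # The player to move faces an empty pile: the previous player just won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

def best_move(stones):
    # Choose the number of stones whose resulting position scores best for us.
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))

print(best_move(5))  # prints 1: leaving 4 stones loses for the opponent

Real chess engines add an evaluation function for positions they cannot search to the end, deeper search and aggressive pruning, but the skeleton is recognisably the same.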

By 2016, computer AI was able to beat the best humans at Go, considered one of the most computationally challenging games to win, and even now, artificial intelligence programs are inching ever closer to being able to beat humans at poker. But it is not only games – in the 1960s, Captain Kirk’s verbal querying of the USS Enterprise’s computer was considered science fiction, set over 200 years in the future, and yet today we carry mobile phones that can do the same thing. Apple’s Siri, Google’s Assistant (and its predecessor, Google Now), Amazon’s Alexa and Microsoft’s Cortana are all intelligent assistants, available on a variety of computing devices, that respond to your voice and perform actions.

Slowly the parts are coming together, and Turing’s test will become relevant as never before. Sadly, Turing was gay; he was convicted in 1952 of ‘gross indecency’ for homosexual acts and chemically castrated, which led to his suicide in 1954, shortly before his 42nd birthday. One of the greatest minds and pioneers in computer science had his life cut short by intolerance, and the world is worse off for it. It took the British government until 2009 to offer a public apology for its appalling treatment of a war hero, and in 2013 Queen Elizabeth II granted him a posthumous pardon.

Let us hope that this is not how we act when we finally meet these new children of humanity. It is our responsibility as people to make ethical decisions based on reason, empathy and a concern not only for ourselves, but also for other conscious, sentient beings, wherever they may come from. And perhaps, after welcoming them into our global community, we can offer them a glass of champagne.
