The Turing test, developed by Alan Turing in 1950, is a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human.
Although there have been numerous claims that Eugene Goostman passed the Turing test, that simply is not true. Let us just say the chatbot gamed the test in a lot of ways (further reading). So when will we pass the Turing test? Good question. I was reading the New York Times report on Google Brain from last December, which gives some insight into the answer.
The NYTimes article is also a great primer on how Google’s team is developing Artificial Intelligence (AI) systems, but it takes about 30 minutes to read the lengthy piece, so I thought I’d summarise the nub of it.
An average brain has something on the order of 100 billion neurons. Each neuron is connected to up to 10,000 other neurons, which means that the number of synapses is between 100 trillion and 1,000 trillion.
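The synapse estimate above is just multiplication, which a quick back-of-the-envelope check makes concrete (the per-neuron connection counts are the estimates quoted above, not precise measurements):

```python
# Rough synapse count from the figures in the text.
neurons = 100e9          # ~100 billion neurons

low = neurons * 1_000    # ~1,000 connections per neuron -> 100 trillion
high = neurons * 10_000  # up to 10,000 per neuron       -> 1,000 trillion

print(f"{low:.0e} to {high:.0e} synapses")  # 1e+14 to 1e+15
```

So the lower bound assumes around a thousand connections per neuron, the upper bound ten thousand.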
What Google and others are trying to do is recreate that using compute power, but it is challenging: a neural network with trillions of connections is still a long way off. Google has been working heavily on developing AI over the past decade, with a key milestone being The Cat Paper in 2012.
Imagine you want to program a cat-recognizer on the old symbolic-A.I. model. You stay up for days preloading the machine with an exhaustive, explicit definition of “cat.” You tell it that a cat has four legs and pointy ears and whiskers and a tail, and so on. All this information is stored in a special place in memory called Cat. Now you show it a picture. First, the machine has to separate out the various distinct elements of the image. Then it has to take these elements and apply the rules stored in its memory. If(legs=4) and if(ears=pointy) and if(whiskers=yes) and if(tail=yes) and if(expression=supercilious), then(cat=yes). But what if you showed this cat-recognizer a Scottish Fold, a heart-rending breed with a prized genetic defect that leads to droopy doubled-over ears? Our symbolic A.I. gets to (ears=pointy) and shakes its head solemnly, “Not cat.” It is hyperliteral, or “brittle.” Even the thickest toddler shows much greater inferential acuity.
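The brittleness described above is easy to see in code. Here is a minimal sketch of that hyperliteral rule-based recognizer; the attribute names and rules are illustrative, not from any real system:

```python
# A toy symbolic cat-recognizer: every hand-written rule must match exactly.
def is_cat(legs, ears, whiskers, tail):
    # One unusual feature and the whole chain of rules fails.
    return legs == 4 and ears == "pointy" and whiskers and tail

# A typical cat passes all the rules:
print(is_cat(legs=4, ears="pointy", whiskers=True, tail=True))  # True

# A Scottish Fold, with its droopy doubled-over ears, is rejected:
print(is_cat(legs=4, ears="folded", whiskers=True, tail=True))  # False
```

Nothing in the program can generalise past its explicit rules, which is exactly why the neural-network approach in the cat paper was such a departure.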
What the cat paper demonstrated was that a neural network with more than a billion “synaptic” connections — a hundred times larger than any publicized neural network to that point, yet still many orders of magnitude smaller than our brains — could observe raw, unlabelled data and pick out for itself a high-order human concept.
Why is Google so keen on AI? Because it’s a natural extension of today for tomorrow. For example, much of the work has been directed towards language translation, and Google has come some way on this, as illustrated by the opening paragraph of Ernest Hemingway’s short story “The Snows of Kilimanjaro”:
Kilimanjaro is a snow-covered mountain 19,710 feet high, and is said to be the highest mountain in Africa. Its western summit is called the Masai “Ngaje Ngai,” the House of God. Close to the western summit there is the dried and frozen carcass of a leopard. No one has explained what the leopard was seeking at that altitude.
The Google Translate system that had been running for over a decade, built on the older phrase-based statistical approach rather than neural networks, would have translated this paragraph as follows:
Kilimanjaro is 19,710 feet of the mountain covered with snow, and it is said that the highest mountain in Africa. Top of the west, “Ngaje Ngai” in the Maasai language, has been referred to as the house of God. The top close to the west, there is a dry, frozen carcass of a leopard. Whether the leopard had what the demand at that altitude, there is no that nobody explained.
The new neural-network-based Google Translate is far more accurate:
Kilimanjaro is a mountain of 19,710 feet covered with snow and is said to be the highest mountain in Africa. The summit of the west is called “Ngaje Ngai” in Masai, the house of God. Near the top of the west there is a dry and frozen dead body of leopard. No one has ever explained what leopard wanted at that altitude.
That is where AI is really making its mark. I would add more, but I think you should read the whole thing if you want to really understand how they did it; it’s a real feat of human engineering, and it doesn’t stop there. The kind of AI behind the old Translate is like Watson playing Jeopardy: basic, narrow AI that can do one thing really well.
There’s a second level of AI called General AI, where a machine can multi-task and handle several activities, not just play Jeopardy. We’re moving towards that level today, and neural-network AI is giving us the ability to develop these areas faster and better than ever before.
This means we may reach Super AI – the ultimate level of intelligence, where machines are as capable as humans at learning and developing – before the end of the next decade. Passing the Turing test, where a human cannot distinguish whether they are talking to another human or a machine, will probably be achieved within five years.
Exciting times indeed and, if you want to know more, here are a few more links:
- Computing Machinery and Intelligence by Alan M. Turing, the original paper covering the idea of the Turing test (.pdf version)
- An interactive Flash presentation explaining the Turing test
- Response to announcement of chatbot Eugene Goostman passing the Turing test by Ray Kurzweil
- The Singularity Is Near: When Humans Transcend Biology, a book by Ray Kurzweil (there’s also a documentary available)