AI learns new tricks to learn faster

Over the past few years, we have witnessed huge advances in artificial intelligence (AI).
2017 may see even more.


Signs of an imminent AI boom are all around us. Companies like Amazon, Baidu, Facebook and Google are buying start-ups, recruiting researchers and opening laboratories. Their focus is on taking tasks that used to be performed exclusively by humans and making them amenable to machines.
Much of current excitement centres on ‘deep learning’, a subfield of machine learning in which computers learn new tasks by crunching large data sets.
Algorithms created for this purpose are bridging a gap that has haunted all AI research: tasks that are difficult for humans are easy for computers and vice versa.

The simplest computer runs rings around the brightest person when it comes to mathematical calculations. But the most powerful computer struggles with things that people find trivial, such as recognising faces.
For humans, solving complex mathematical equations means applying a set of formal rules. Turning these rules into a computer program is then fairly easy. For tasks human beings find simple, though, there is no such set of explicit rules, and trying to formulate one can be daunting.
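To make the contrast concrete, here is a minimal, hypothetical Python sketch (not drawn from any particular system): the explicit rule for solving a quadratic equation turns into code in a few lines, whereas no comparable rule can be written down for recognising a face.

```python
import math

def solve_quadratic(a, b, c):
    """Solve a*x**2 + b*x + c = 0 with the explicit quadratic formula.

    The rule is fully known, so turning it into a program is easy.
    """
    disc = b * b - 4 * a * c
    if disc < 0:
        return ()                      # no real roots
    root = math.sqrt(disc)
    return ((-b + root) / (2 * a), (-b - root) / (2 * a))

print(solve_quadratic(1, -3, 2))       # (2.0, 1.0)

# By contrast, there is no explicit rule one could write for a task
# such as "does this image contain a face?", which is why such tasks
# are learned from data instead of being hand-coded.
```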


So, can a machine think?

The question ‘Can a machine think?’ is as old as computer science itself. In the 1950s, Alan Turing proposed that a machine could be taught like a young child. Shortly afterwards, in 1955, John McCarthy, inventor of the Lisp family of programming languages, coined the term ‘artificial intelligence’.
As AI researchers began to use computers to translate between languages and to understand instructions in everyday language rather than just code, the idea that computers could eventually develop the ability to speak and think crossed into mainstream culture.
Going well beyond the iconic HAL 9000 of Arthur C. Clarke’s Space Odyssey series, the tech genius of 2015’s Ex Machina put a breath-taking humanoid through her paces to determine whether her thinking was indistinguishable from a human’s. Her ability to learn how to interact made for a fascinating exploration of what makes us human.
Such synthetic intelligent beings may still be a long way off, but robots can be found on every assembly line, and smartphones can be found in every pocket. AI is far from science fiction.
Today, supported by AI, speech-recognition algorithms bring the internet to millions of computer-illiterate people around the world; doctors are better able to distinguish malignant from benign tumours in medical images; and authorities can pick out suspects from billions of conversations or video recordings by their voice and face.

What exactly is AI?

The way in which computers learn from their mistakes is based on the human nervous system, and specifically on how neurones connect with each other to interpret information.
Researchers have developed algorithms, known as artificial neural networks, to perform tasks like facial recognition without supervision. These machine-learning algorithms scan vast databases containing millions of images and, in doing so, train themselves.
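As a rough illustration only, and not the code behind any real face-recognition system, the Python sketch below builds the smallest possible artificial neural network and lets it train itself by gradient descent on a toy problem; production networks apply the same principle to databases of millions of images.

```python
import numpy as np

# A minimal artificial neural network: one hidden layer trained by
# gradient descent on a toy task (learning XOR). Face-recognition
# networks rest on the same principle, scaled up to millions of
# images and parameters.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
lr = 1.0                                        # learning rate

for _ in range(10000):
    hidden = sigmoid(X @ W1 + b1)               # forward pass
    output = sigmoid(hidden @ W2 + b2)
    err = output - y                            # prediction error
    # backward pass: nudge every weight to shrink the error
    d_out = err * output * (1 - output)
    d_hid = (d_out @ W2.T) * hidden * (1 - hidden)
    W2 -= lr * hidden.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid;      b1 -= lr * d_hid.sum(axis=0)

print(np.round(output, 2))   # should approach [[0], [1], [1], [0]]
```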
DeepFace, the face-verification algorithm unveiled by Facebook in 2014, is nearly as accurate as the human brain, recognising human faces 97 % of the time.
To date, the design of computer algorithms has been motivated by ideas originating in mathematics. The new approach, used in both software and hardware, is driven by the explosion of scientific knowledge of how the human brain functions. 
New processors comprise electronic components connected by wires mimicking synapses – the connections between neurones through which information flows from one neurone to another. These neuromorphic processors are not programmed. Each new signal transmitted through their components changes the neural network in much the same way that new information alters human thoughts.
One significant advantage of the new programming approach is its ability to tolerate glitches. Algorithms continuously adapt and work around failures to complete tasks.
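Neuromorphic chips do this directly in silicon, but a crude, purely hypothetical Python caricature can hint at the idea: a single leaky ‘integrate-and-fire’ element whose input connection strengthens slightly whenever an incoming signal helps it fire, so the circuit rewires itself as signals flow through it.

```python
import numpy as np

# A software caricature of one neuromorphic element: a leaky
# integrate-and-fire neuron whose input synapse is strengthened
# a little every time an incoming signal helps it fire (a crude
# Hebbian-style update). Real neuromorphic processors implement
# this behaviour in hardware, not in Python.
rng = np.random.default_rng(1)
weight, potential = 0.5, 0.0
THRESHOLD, LEAK, LEARN_RATE = 1.0, 0.9, 0.05

for step in range(50):
    spike_in = rng.random() < 0.4            # did an input signal arrive?
    potential = potential * LEAK + weight * spike_in
    if potential >= THRESHOLD:               # the element fires...
        potential = 0.0                      # ...and resets
        if spike_in:                         # the input helped, so the
            weight += LEARN_RATE             # synapse grows stronger

print(f"synapse weight after 50 time steps: {weight:.2f}")
```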

Pushing the boundaries of brain-like computers

In 2016, the AlphaGo algorithm created by Google’s London-based AI company DeepMind showcased the strength of AI when it stunned the Go world by defeating some of the very best players at the board game. It was an important victory for the technique known as reinforcement learning, which involves learning to solve problems differently – not through programming or specific examples, as conventional computers do, but through experimentation combined with positive reinforcement. Neural networks provide the support needed to make it work on really complex problems, like Go, which is regarded as one of the most complex board games ever invented.
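AlphaGo itself couples reinforcement learning with deep neural networks and tree search; the toy Python sketch below shows only the reinforcement-learning core (tabular Q-learning) on a made-up five-square corridor, where an agent discovers through trial, error and reward which way to move.

```python
import random

# Tabular Q-learning on a toy corridor: states 0..4, the agent starts
# in the middle and receives a reward of 1 only on reaching state 4.
# This is 'trial and error plus positive reinforcement' in miniature;
# AlphaGo combines the same idea with deep neural networks and search.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)                        # step left or step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.2     # learning rate, discount, exploration

for episode in range(200):
    s = 2                                 # start in the middle square
    while s != GOAL:
        if random.random() < epsilon:
            a = random.choice(ACTIONS)    # explore a random move
        else:                             # or exploit the best move so far
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s_next == GOAL else 0.0
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

# The learned policy should now point right (+1) from every square.
print([max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)])
```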
AlphaGo’s approach could have broad applications, for example using clinical data more efficiently to improve diagnosis, decision making and planning. And this is not the only way in which the boundaries of AI could be pushed.
Generative adversarial networks, or GANs for short, promise to make computers a lot more intelligent over the next few years. 
Relatively new, GANs pit two networks against each other: one generates new synthetic data, while the second tries to discriminate between real and generated data. Through this contest, both networks improve, helping computers learn from unlabelled data.
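As a hedged sketch of the idea rather than any published system, the Python example below (using the PyTorch library) trains a tiny generator to mimic a one-dimensional ‘real’ dataset while a discriminator tries to catch its fakes; each network improves by trying to outdo the other.

```python
import torch
import torch.nn as nn

# A bare-bones GAN. The generator G turns random noise into samples
# meant to resemble a "real" dataset (numbers drawn from a Gaussian
# centred on 3.0); the discriminator D learns to tell real from fake.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 0.5 + 3.0         # the "real" data
    fake = G(torch.randn(64, 8))                  # the generator's attempt

    # 1) train the discriminator: label real samples 1 and fakes 0
    loss_d = bce(D(real), torch.ones(64, 1)) + \
             bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) train the generator: try to make the discriminator say 1
    loss_g = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# GAN training can be unstable, but the generated mean should drift
# towards the real data's mean of 3.0.
print(round(G(torch.randn(1000, 8)).mean().item(), 2))
```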

Next big step

Despite recent advances in AI, including Siri – Apple’s voice-controlled digital assistant – the technology still has severe limitations. Not surprisingly, techniques that improve voice recognition and help computers parse language more effectively are high on AI researchers’ agendas.

This is a long-standing challenge, but the prospect of communicating and interacting with computers through ordinary language has long been a fascinating one. Better speech recognition would make computers a lot more useful. Still, do not expect to get into a meaningful conversation with your smartphone anytime soon!