
The Most Amazing Artificial Intelligence Milestones So Far


Artificial Intelligence (AI) is the hot topic of the moment in technology, and the driving force behind most of the big technological breakthroughs of recent years.

In fact, with all of the breathless hype we hear about it today, it’s easy to forget that AI isn’t anything all that new. Over the course of the last century it has moved out of the domain of science fiction and into the real world, and the theory and the fundamental computer science that make it possible have been around for decades.


Since the dawn of computing in the early 20th century, scientists and engineers have understood that the eventual aim is to build machines capable of thinking and learning in the way that the human brain – the most sophisticated decision-making system in the known universe – does.

Today’s deep learning, built on artificial neural networks, is the current state of the art, but there have been many milestones along the road that made it possible. Here’s my rundown of those that are generally considered to be the most significant.

1637 – Descartes breaks down the difference

Long before robots were even a feature of science fiction, the scientist and philosopher René Descartes pondered the possibility that machines would one day think and make decisions. While he erroneously concluded that they would never be able to talk like humans, he did identify a division between machines which might one day learn to perform one specific task and those which might be able to adapt to any job. Today, these two fields are known as specialized and general AI. In many ways, he set the stage for the challenge of creating AI.

1956 – The Dartmouth Conference

With the emergence of ideas such as neural networks and machine learning, Dartmouth College professor John McCarthy coined the term “artificial intelligence” and organized an intensive summer workshop bringing together leading experts in the field.

During the brainstorming session, attempts were made to lay down a framework to allow academic exploration and development of “thinking” machines to begin. Many fields which are fundamental to today’s cutting-edge AI, including natural language processing, computer vision, and neural networks, were part of the agenda.

1966 – ELIZA gives computers a voice

ELIZA, developed at MIT by Joseph Weizenbaum, was perhaps the world’s first chatbot – and a direct ancestor of the likes of Alexa and Siri. ELIZA represented an early implementation of natural language processing, which aims to teach computers to communicate with us in human language, rather than to require us to program them in computer code, or interact through a user interface. ELIZA couldn’t talk like Alexa – she communicated through text – and she wasn’t capable of learning from her conversations with humans. Nevertheless, she paved the way for later efforts to break down the communication barrier between people and machines.
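
To give a flavour of how simple ELIZA’s trick was, here is a toy Python sketch of the same idea: match typed input against a few patterns and reflect fragments of it back as a question. The rules and pronoun swaps below are illustrative examples of the technique, not Weizenbaum’s original DOCTOR script.

```python
import re

# Toy ELIZA-style responder: match the input against a pattern and reflect
# part of it back as a question. These rules are illustrative examples only.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)", "How long have you been {0}?"),
    (r"my (.+)", "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my' -> 'your')."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input: str) -> str:
    text = user_input.lower().strip(".!? ")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."  # fallback when nothing matches

print(respond("I need a holiday"))
# Why do you need a holiday?
print(respond("I am feeling anxious about my exam"))
# How long have you been feeling anxious about your exam?
```

As the second example shows, a handful of patterns and pronoun swaps can create a surprisingly convincing illusion of understanding, even though nothing is actually learned from the conversation.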

1980 – XCON and the rise of useful AI

Digital Equipment Corporation’s XCON expert system was deployed in 1980, and by 1986 it was credited with generating annual savings for the company of $40 million. This is significant because, until this point, AI systems were generally regarded as impressive technological feats with limited real-world usefulness. Now it was clear that the rollout of smart machines into business had begun – by 1985 corporations were spending $1 billion per year on AI systems.

1988 – A statistical approach

IBM researchers publish A Statistical Approach to Language Translation, introducing principles of probability into the until-then rule-driven field of machine translation. It tackled the challenge of automated translation between two human languages – French and English.

This marked a switch in emphasis towards designing programs that determine the probability of various outcomes based on the information (data) they are trained on, rather than being hand-coded with rules to follow. This is often considered a huge leap in terms of mimicking the cognitive processes of the human brain, and it forms the basis of machine learning as it is used today.
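
To illustrate the shift from rules to probabilities, here is a deliberately tiny Python sketch that chooses a translation by estimating conditional probabilities from observed word pairs. The counts are made up for the example; real systems such as the IBM models learn word alignments from large collections of translated sentence pairs rather than from pre-paired words.

```python
from collections import Counter, defaultdict

# Toy data-driven translation choice: instead of hand-coding a dictionary
# rule, estimate P(english_word | french_word) from observed word pairs and
# pick the most probable candidate. The pairs below are invented examples.
observed_pairs = [
    ("maison", "house"), ("maison", "house"), ("maison", "home"),
    ("chat", "cat"), ("chat", "cat"), ("chat", "chat"),
]

counts = defaultdict(Counter)
for fr, en in observed_pairs:
    counts[fr][en] += 1

def translate(french_word: str) -> str:
    """Return the English word with the highest estimated probability."""
    candidates = counts[french_word]
    total = sum(candidates.values())
    best_word, best_count = candidates.most_common(1)[0]
    print(f"P({best_word} | {french_word}) = {best_count / total:.2f}")
    return best_word

print(translate("maison"))  # P(house | maison) = 0.67 -> house
print(translate("chat"))    # P(cat | chat) = 0.67 -> cat
```

The key point is that nothing in the program encodes a translation rule; the behaviour comes entirely from the data it was “trained” on.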

1991 – The birth of the Internet

The importance of this one can’t be overstated. In 1991, CERN researcher Tim Berners-Lee put the world’s first website online and published the workings of the hypertext transfer protocol (HTTP). Computers had been connected to share data for decades, mainly at academic institutions and large businesses, but the arrival of the World Wide Web was the catalyst for society at large to plug itself into the online world. Within a few short years, millions of people from every part of the world would be connected, generating and sharing data – the fuel of AI – at a previously inconceivable rate.

1997 – Deep Blue defeats world chess champion Garry Kasparov

IBM’s chess supercomputer didn’t use techniques that would be considered true AI by today’s standards. Essentially it relied on “brute force” methods of calculating every possible option at high speed, rather than analyzing gameplay and learning about the game. However, it was important from a publicity point of view – drawing attention to the fact that computers were evolving very quickly and becoming increasingly competent at activities at which humans previously reigned unchallenged.
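
At its heart, that brute-force approach is exhaustive game-tree search: consider every legal line of play and pick the move that guarantees the best outcome. The Python sketch below shows the idea (minimax) on a toy take-the-last-stone game. Deep Blue’s actual search was vastly more sophisticated, pairing this kind of look-ahead with alpha-beta pruning, custom chess hardware and hand-tuned evaluation functions.

```python
# Brute-force minimax on a toy game: a pile of stones, each player removes
# 1-3 stones per turn, and whoever takes the last stone wins.

def minimax(stones: int, maximizing: bool) -> int:
    """Return +1 if the maximizing player can force a win, -1 otherwise."""
    if stones == 0:
        # The previous player took the last stone, so the player to move lost.
        return -1 if maximizing else 1
    outcomes = [minimax(stones - take, not maximizing)
                for take in (1, 2, 3) if take <= stones]
    return max(outcomes) if maximizing else min(outcomes)

def best_move(stones: int) -> int:
    """Exhaustively score every legal move and return the best one."""
    scored = [(minimax(stones - take, False), take)
              for take in (1, 2, 3) if take <= stones]
    return max(scored)[1]

print(best_move(10))  # 2 -> leaves 8 stones, a losing position for the opponent
```

For a small game this exhaustive search is trivial; the point of Deep Blue was doing something comparable at enormous speed on the vastly larger tree of chess positions.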

2005 – The DARPA Grand Challenge

2005 marked the second year that DARPA held its Grand Challenge – a race for autonomous vehicles over more than 100 kilometers of off-road terrain in the Mojave Desert. In 2004, none of the entrants managed to complete the course. The following year, however, five vehicles completed it, with the team from Stanford University taking the prize for the fastest time.

The race was designed to spur the development of autonomous driving technology, and it certainly did that. By 2007, a simulated urban environment had been constructed for vehicles to navigate, meaning they had to be able to deal with traffic regulations and other moving vehicles.

2011 – IBM Watson’s Jeopardy! Victory

Cognitive computing engine Watson faced off against champion players of the TV game show Jeopardy!, defeating them and claiming a $1 million prize. This was significant because, while Deep Blue had proven more than a decade earlier that a game whose moves can be described mathematically, such as chess, could be conquered through brute force, the idea of a computer beating humans at a language-based, creative-thinking game was unheard of.

2012 – The true power of deep learning is unveiled to the world – computers learn to identify cats

Researchers at Stanford and Google, including Jeff Dean and Andrew Ng, publish their paper Building High-Level Features Using Large Scale Unsupervised Learning, building on previous research into multilayer neural nets known as deep neural networks.

Their research explored unsupervised learning, which does away with the expensive and time-consuming task of manually labeling data before it can be used to train machine learning algorithms. It would accelerate the pace of AI development and open up a new world of possibilities when it came to building machines to do work which until then could only be done by humans.

Specifically, they singled out the fact that their system had become highly competent at recognizing pictures of cats.

The paper described a model which would enable an artificial network to be built containing around one billion connections. It also conceded that while this was a significant step towards building an “artificial brain,” there was still some way to go – the neurons in a human brain are thought to be joined by a network of around 10 trillion connections.
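
The principle behind the result is that useful features can emerge from unlabeled data alone. The numpy sketch below trains a tiny autoencoder to reconstruct unlabeled inputs – no labels are used anywhere – which captures the idea, if not the scale, of the paper’s billion-connection network trained on millions of unlabeled images.

```python
import numpy as np

# Toy unsupervised feature learning: a single-hidden-layer autoencoder learns
# to compress and reconstruct unlabeled data. The hidden layer's activations
# are the "features" it discovers; no labels are involved at any point.
rng = np.random.default_rng(0)

n_inputs, n_hidden = 64, 8            # e.g. 8x8 patches -> 8 learned features
X = rng.random((500, n_inputs))       # stand-in for unlabeled image patches

W_enc = rng.normal(0, 0.1, (n_inputs, n_hidden))
W_dec = rng.normal(0, 0.1, (n_hidden, n_inputs))
lr = 0.01

for step in range(2000):
    H = np.tanh(X @ W_enc)            # hidden features (learned representation)
    X_hat = H @ W_dec                 # reconstruction of the input
    error = X_hat - X                 # reconstruction error drives all learning

    # Backpropagate the squared reconstruction error through both layers.
    grad_dec = H.T @ error / len(X)
    grad_enc = X.T @ ((error @ W_dec.T) * (1 - H**2)) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

    if step % 500 == 0:
        print(f"step {step:4d}  reconstruction MSE = {np.mean(error**2):.4f}")
```

Watching the reconstruction error fall is the whole trick: the network gets better at describing its inputs without ever being told what any of them are.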

2015 – Machines “see” better than humans

Researchers running the annual ImageNet challenge – where algorithms compete to show their proficiency in recognizing and classifying images across 1,000 object categories – declare that machines are now outperforming humans.

Since the contest was launched in 2010, the accuracy rate of the winning algorithm increased from 71.8% to 97.3% – prompting researchers to declare that computers could identify objects in visual data more accurately than humans.
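
For context, ImageNet classification results are conventionally scored by top-5 accuracy: a prediction counts as correct if the true label is among the model’s five highest-scoring guesses. Here is a minimal sketch of how that metric is computed, with random scores standing in for a real classifier’s outputs.

```python
import numpy as np

# Top-5 accuracy: a prediction is correct if the true class appears among the
# five highest-scoring classes. Random scores stand in for real model outputs.
rng = np.random.default_rng(0)

n_images, n_classes = 1000, 1000
scores = rng.random((n_images, n_classes))         # model confidence per class
true_labels = rng.integers(0, n_classes, n_images)

top5 = np.argsort(scores, axis=1)[:, -5:]          # indices of the 5 best guesses
correct = (top5 == true_labels[:, None]).any(axis=1)
print(f"top-5 accuracy: {correct.mean():.1%}")     # ~0.5% for random guessing
```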

2016 – AlphaGo goes where no machine has gone before

Gameplay has long been a favored method for demonstrating the abilities of thinking machines, and the trend continued to make headlines in 2016 when AlphaGo, created by DeepMind (now a Google subsidiary), defeated world Go champion Lee Sedol over five matches. Although Go moves can be described mathematically, the sheer number of variations of the game that can be played – there are over 100,000 possible opening moves in Go, compared to 400 in chess – makes the brute-force approach impractical. AlphaGo used neural networks to study the game and learn as it played.
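
The scale problem becomes clear with some rough game-tree arithmetic. Using the commonly cited approximations of about 35 legal moves per position over roughly 80 moves for a chess game, versus about 250 moves per position over roughly 150 moves for a Go game, the number of positions a brute-force search would have to cover explodes far beyond anything Deep Blue-style methods could handle.

```python
# Rough game-tree arithmetic showing why brute force fails for Go.
# Branching factor b (average legal moves per turn) and game length d are
# commonly cited approximations, not exact values.
games = {
    "chess": {"branching": 35, "depth": 80},
    "go":    {"branching": 250, "depth": 150},
}

for name, g in games.items():
    positions = g["branching"] ** g["depth"]       # roughly b^d lines of play
    print(f"{name:5s}: ~{g['branching']}^{g['depth']} "
          f"= about 10^{len(str(positions)) - 1} positions")
```

Chess comes out at very roughly 10^123 positions and Go at roughly 10^360 – which is why AlphaGo had to learn promising moves rather than enumerate them.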

2018 – Self-driving cars hit the roads

The development of self-driving cars is a headline use case for today’s AI – the application which has captured the public imagination more than any other. Like the AI that powers them, they aren’t something that has emerged overnight, despite how it may appear to someone who hasn’t been following technology trends. General Motors predicted the eventual arrival of driverless vehicles at the 1939 World’s Fair, and the Stanford Cart – originally built to explore how lunar vehicles might function, then repurposed as an autonomous road vehicle – made its debut in 1961.

But there can be no doubt that 2018 marked a significant milestone, with the launch of Google spin-off Waymo’s self-driving taxi service in Phoenix, Arizona. The first commercial autonomous vehicle hire service, Waymo One is currently in use by 400 members of the public who pay to be driven to their schools and workplaces within a 100 square mile area.

While human operators currently ride with every vehicle, to monitor their performance and take the controls in case of emergency, this undoubtedly marks a significant step towards a future where self-driving cars will be a reality for all of us.


