Artificial intelligence as a discipline consists of hundreds of individual technologies, concepts, and applications. Its terminology has become increasingly important as STEM education expands and practical household and consumer-facing applications of the technology boom.
Despite that, there is a lack of consistency in how many AI concepts are discussed, not just at the STEM education level, but in popular entertainment, science writing, and even at times in scientific journals. To address this, we need to standardize how we describe AI and its many subsets, and accurately define these terms both in general and as they apply to individual technologies. Here we discuss some of the most commonly misused terms and what they really mean.
Through decades of alternating optimism and disappointment in the prospects of artificial intelligence, AI experts like Nick Bostrom, Demis Hassabis, Geoffrey Hinton, and Ray Kurzweil have pushed through challenges in funding and declining public interest to define what we know of the science today. From the basics of abductive reasoning to the use of deep learning algorithms, the language used to describe AI has developed over the course of six decades to represent a complex combination of concepts and ideas.
All that time has created a problem, though: many terms are now commonly misused in STEM education, popular entertainment, and even scientific journals, where parts of the vernacular have become interchangeable with one another. From the conflation of common terms like machine learning and AI to the misuse of more advanced concepts like behavioral analysis, it's likely that you are currently using some of these phrases incorrectly.
Let’s take a close look at what they really mean.
Artificial intelligence (AI) is a blanket term that often refers to a suite of technologies and concepts. When we talk about self-learning machines in any form, it's usually tagged with "AI." In practice, what "AI" refers to today is "weak AI" (or narrow AI): the simulation of human intelligence to complete a very narrow task. Chatbots, voice systems, and medical bots are all examples of weak AI. The broader ability to perform the full range of tasks within the realm of human intelligence is artificial general intelligence (AGI), which remains a goal rather than a reality.
The term "machine intelligence" is often used interchangeably with artificial intelligence, but it is nuanced in that it emphasizes that machine intelligence can be a distinct form of intelligence, separate from our own. It is therefore not artificial, but genuine in its own way.
The AI boom of the last eight years largely refers to breakthroughs in machine learning: the ability of a machine to process large volumes of data to develop or improve upon a specific skill. Machine learning is the process by which artificial systems learn to beat humans at Go, analyze large volumes of call center data, or personalize voice assistant responses.
Deep learning is often used in conjunction with machine learning to describe the process by which machines take in large data sets to "learn" new things. Specifically, deep learning refers to the use of an artificial neural network (ANN) to loosely simulate the structure and processes of the human brain. It is machine learning that utilizes layered algorithms to identify and learn from specific patterns in big data.
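To make "layered algorithms" concrete, here is a minimal sketch of a two-layer feedforward network in plain Python. The weights and inputs are hypothetical placeholders (a real network would learn them from data); the point is only to show data flowing through successive layers of weighted sums and nonlinear activations.

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: weighted sums squashed by a sigmoid."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# Hypothetical weights for illustration only -- not trained on real data.
hidden_w = [[0.5, -0.6], [0.8, 0.2]]
hidden_b = [0.1, -0.3]
out_w = [[1.2, -0.7]]
out_b = [0.05]

def forward(x):
    hidden = layer(x, hidden_w, hidden_b)  # first (hidden) layer
    return layer(hidden, out_w, out_b)[0]  # second (output) layer

score = forward([0.9, 0.4])
print(round(score, 3))  # a value between 0 and 1
```

Deep networks stack many such layers, which is what lets them pick out progressively more abstract patterns in the data.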
Supervised learning is used in instances where the system needs to develop a mapping that can accurately predict the output from a given input. To do this, both the input and the desired output are provided to the system. This form of learning is most often used in retrieval-based AI, as opposed to unsupervised learning, in which the system is only provided input data without any corresponding output to measure against.
There are some advantages to be gained from the supervised model, but also several limitations. Because the input and output expectations are limited (and often defined by humans), such systems don't always react well to unexpected input.
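A toy example makes both the idea and the limitation visible. Below is a minimal supervised learner, a one-nearest-neighbor classifier built from hypothetical labeled pairs (the data and labels are invented for illustration). It predicts by matching new input to the closest training input, and it illustrates the brittleness described above: an input unlike anything in training is still forced into one of the known labels.

```python
# Labeled training pairs: input features -> desired output label.
training = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.9), "cat"),
    ((5.0, 5.1), "dog"),
    ((4.8, 5.3), "dog"),
]

def predict(x):
    """1-nearest-neighbor: return the label of the closest training input."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(training, key=lambda pair: sq_dist(pair[0], x))[1]

print(predict((1.1, 1.0)))  # near the "cat" examples -> "cat"

# An input far from anything seen in training is still mapped onto one of
# the known labels -- the model has no way to say "I don't know."
print(predict((100.0, -50.0)))
```

Unsupervised methods, by contrast, would receive only the input points and have to discover the two clusters themselves, with no labels to measure against.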