This is the first post in the Primer Series, where I will cover emerging technologies
over the next couple of months and attempt to explain them as simply as possible.
At its most basic level, artificial intelligence is the ability of machines to make decisions from inputs, much as human beings do. Pop culture has depicted AI in many robot-themed feature films such as WALL-E, Transformers, and Chappie. However, the truth is that AI can take on many forms and is already pervasive in our everyday lives. The future of the field and the timeline for predicted developments are very much up for debate, but one can categorize the field into three buckets:
1. Artificial Narrow Intelligence (ANI) – “Weak AI”
AI that is skilled at one particular task
Examples: email spam filters, “recommended for you” type e-commerce shopping suggestions, Google Translate, internet search algorithms, virtual assistants (Siri), self-driving cars (involves multiple ANI systems)
ANI systems have been in place in various capacities for years now. It’s likely many of us use devices and services powered by ANI on a daily basis. These systems operate within a pre-defined range and leverage a specific data set to do so. These applications easily beat humans in terms of speed but have difficulty processing abstract concepts.
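The spam filter mentioned above is a good way to see what "operating within a pre-defined range" means in practice. Below is a deliberately simplified sketch in Python: a keyword-based filter with a hypothetical trigger-word list. Real filters learn their rules from data (e.g., naive Bayes), but the shape is the same: one narrow task, one fixed kind of input, no understanding beyond it.

```python
# A toy illustration of Artificial Narrow Intelligence: a keyword-based
# spam filter. The word list and threshold below are made up for the
# example; production filters learn their weights from labeled data.

SPAM_WORDS = {"winner", "free", "prize", "urgent"}  # hypothetical trigger words

def is_spam(subject: str, threshold: int = 2) -> bool:
    """Flag an email subject as spam if it contains enough trigger words."""
    hits = sum(word in subject.lower() for word in SPAM_WORDS)
    return hits >= threshold

print(is_spam("You are a WINNER - claim your FREE prize!"))  # True
print(is_spam("Meeting moved to 3pm"))                       # False
```

Note how narrow this is: it is fast and consistent at its one job, but ask it to summarize the email or judge its tone and it has nothing to offer, which is exactly the abstract-reasoning gap described above.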
2. Artificial General Intelligence (AGI) – “Strong AI”
AI that is considered “human-level”: sentient and capable of performing a wide variety of tasks
Examples: Westworld, Terminator
As the fictional examples suggest, AGI is not yet ready for prime time. We will need a significant increase in affordable, high-powered computing and a reliable method for improving AI to this level; which method will get us there is still up for debate. Some scientists are attempting to reverse-engineer the brain to draw inspiration from what makes it work efficiently (i.e., neural networks). Another approach mimics evolution through “genetic algorithms,” in which successful algorithms are bred together to produce new ones, generation after generation. A third potential method is creating a computer that researches AI and codes improvements into its own architecture. While these methods are being explored in parallel, the exponential nature of AI improvements means human-level intelligence may arrive sooner than most expect.
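The "genetic algorithm" idea can be made concrete with a small sketch. The setup below is entirely hypothetical: a population of random bit-strings evolves toward a target by keeping the fittest half each generation, breeding them (crossover), and occasionally mutating the offspring. This is a toy fitness problem, not a path to AGI, but it shows the breed-select-repeat loop the text describes.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

TARGET = [1] * 10  # hypothetical goal: evolve a bit-string of all 1s

def fitness(genome):
    """Count how many bits match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):
    """Breed two genomes: splice them at a random point."""
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def mutate(genome, rate=0.05):
    """Flip each bit with a small probability."""
    return [1 - g if random.random() < rate else g for g in genome]

# Evolve: keep the fittest half, breed them to refill the population.
population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))  # approaches the maximum of 10 over generations
```

The key design choice is that "success" is defined by the fitness function; the algorithm never understands the problem, it just keeps whatever happens to score well.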
3. Artificial Superintelligence (ASI)
AI that is smarter than humans in every domain
This is a topic that elicits a wide variety of reactions and speculation. Scenarios range from AI that exists solely to augment human capability to doomsday outcomes in which AI takes over the world. There isn’t enough information today to make an educated guess as to how this stage will play out.
An Accelerating Future
As the world progresses into an era of machines capable of human-level intelligence, things start getting very interesting. The jump from narrow to general AI could happen within the next 10-15 years, but the jump from general intelligence to superintelligence could be much quicker.
AI vs Machine Learning vs Deep Learning
Now that we’ve established an understanding of the state of AI and where it is likely to go, let’s dig into AI as it exists today (Weak AI). As the graphic below shows, AI is an umbrella term that includes Machine Learning (a way of achieving AI) and Deep Learning (a more efficient implementation of ML).
Within Artificial Intelligence, one of the first techniques to take off has been Machine Learning, which trains algorithms on data sets so that their predictive ability improves, or “learns,” over time. In this model, the key is to give the algorithm as much data as possible so that it becomes as useful as it can be.
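That improve-with-data loop can be shown in a few lines. The sketch below is a minimal, assumption-laden example: the "true" relationship (y = 2x), the learning rate, and the epoch count are all made up for illustration. The algorithm starts with a wrong guess for the weight and nudges it a little after every example it sees, which is the core of how ML models "learn."

```python
# A minimal sketch of learning from data: fit y ≈ w * x by gradient
# descent. The weight w starts wrong and improves with every example
# seen -- the same improve-with-data loop that powers real ML systems.

data = [(x, 2.0 * x) for x in range(1, 11)]  # hypothetical data: y = 2x

w = 0.0      # initial (wrong) guess for the weight
lr = 0.001   # learning rate: how big each correction is

for epoch in range(200):
    for x, y in data:
        error = w * x - y    # how far off the current prediction is
        w -= lr * error * x  # nudge w in the direction that reduces error

print(round(w, 3))  # converges toward the true weight, 2.0
```

More data and more passes mean a better estimate, which is why the text above stresses feeding the algorithm as much data as possible.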
Case Study: Branch
A popular implementation of ML has been developing alternative credit scores for consumers. Branch uses alternative data, such as GPS data, social media activity, and cell phone usage, to offer financial products to individuals with little to no credit history, as well as those underserved by traditional banks.
As a subset of ML, Deep Learning takes AI a step further by mimicking the information-processing patterns of the human brain. Just as our brains recognize patterns and strengthen neural links to operate more efficiently, Deep Learning algorithms achieve greater accuracy with less human input by stacking multiple layers of “neurons.” Each layer learns to detect a particular kind of feature, so the algorithm learns at multiple levels of abstraction instead of just one.
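The "multiple layers of neurons" idea is easier to see in code than in prose. Below is a bare-bones forward pass through a two-layer network. The weights and biases are hypothetical fixed numbers chosen for the example; a real deep network would learn them from data (via backpropagation). The point is the structure: each layer's output becomes the next layer's input.

```python
import math

def sigmoid(x):
    """Squash a number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One layer: each neuron takes a weighted sum of ALL inputs,
    adds its bias, and passes the result through the sigmoid."""
    return [sigmoid(sum(w * i for w, i in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

x = [0.5, -1.0]                                      # raw input features
h = layer(x, [[0.4, -0.6], [0.9, 0.1]], [0.0, 0.1])  # hidden layer (2 neurons)
y = layer(h, [[1.2, -0.8]], [-0.2])                  # output layer (1 neuron)

print(y)  # a single prediction between 0 and 1
```

Stacking more hidden layers is what puts the "deep" in Deep Learning: early layers pick up simple features, later layers combine them into more abstract ones.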
Case Study: Netradyne
Netradyne sells products that monitor driving and alert drivers and commercial fleet managers to maximize safety. Its flagship product, Driveri, provides fleet managers with insights into driving behavior and sends alerts if the system detects signs of danger, such as drowsiness.
The Rise of AI Continues
AI will increasingly play a role in many industries. While the initial phase of hype is mostly behind us, I would not be surprised if we start to see a significant increase in companies leveraging the technology in new and different ways in the short term. Stay tuned for Part 2, where I’ll dive into the various cognitive modes where AI is being applied today.