Artificial intelligence (AI) is the name for techniques that allow a computer to perform actions with some autonomy, imitating the behavior of human intelligence in the process — including the ability to make decisions and learn from experience.
AI is increasingly common in everyday life, present in services, devices, and a range of advanced research across different fields of science. How it works, however, is still unknown to most users.
Does an AI “think”? As impressive as the results and the potential for the future are, artificial intelligence applications are still subject to limitations. Below, you will understand these limitations, how the technology works in general terms, and the applications in which AI has been excelling.
What differentiates artificial intelligence from an ordinary computer program is pattern recognition. In software created to solve a task, the computer only follows the rules defined by the developer: there is no room for the machine to act autonomously and make decisions that deviate from the script.
In AI, however, the approach is different. The computer is trained to solve a specific problem by being exposed to thousands (or millions) of examples that define that problem. The computer starts to assimilate patterns, recognize rules, and become able to identify those same patterns in other data samples.
Because all of this occurs without explicit programming (the AI builds its own rules and its own “interpretation” of the data it has access to), this type of technology is defined as artificial intelligence.
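To make the contrast concrete, here is a minimal Python sketch; the spam-filter scenario, the feature names and the scikit-learn classifier are all illustrative assumptions, not taken from any real product. It places a hand-written rule next to a model that derives its own rules from labeled examples.

```python
# Illustrative sketch: a fixed rule written by a developer versus a model
# that learns its own decision rules from labeled examples (hypothetical data).
from sklearn.tree import DecisionTreeClassifier

# Rule-based program: the developer spells out the decision explicitly.
def rule_based_spam_check(num_links: int, num_exclamations: int) -> bool:
    return num_links > 3 and num_exclamations > 5  # fixed rule, never changes

# Learning approach: each example is [num_links, num_exclamations];
# labels mark which examples were spam (1) and which were not (0).
examples = [[0, 0], [1, 1], [5, 8], [7, 10], [2, 0], [6, 9]]
labels = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(random_state=0)
model.fit(examples, labels)  # the model infers its own rules from the data

# The trained model classifies an example it has never seen before.
print(model.predict([[4, 7]]))
```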
Does Artificial Intelligence Think?
Computers can’t think. Despite this limitation, they have the advantage of speed and can perform, in a few seconds, calculations that would keep a team of mathematicians busy for hours. This ability, combined with very sophisticated algorithms, gives a PC, cell phone, automobile, or other device the appearance of intelligence.
If it doesn’t think, then how does AI work? Generally speaking, actions that seem to be the result of intelligence are, in fact, the result of a learning process in which the computer is exposed to patterns. By being exposed to millions of examples of a particular type (photos, text, or even chess moves), the system gradually starts to categorize the variations among those examples, to the point of recognizing elements and situations.
For example, an AI trained to play chess can learn the game’s rules simply by observing how the moves it examines unfold. Beyond learning the rules, the computer can master enough patterns to build strategies complex enough to beat even the best human players.
Technologies of this type have also been applied to analyzing measurements of climate variation, as well as to agriculture, space research, and the stock market.
A practical example of this type of machine learning has been observed in medicine. A specialist needs to analyze an exam result carefully to diagnose the patient, and may occasionally miss a detail. A computer trained on millions upon millions of exams, however, can become better at identifying signs of disease than an expert.
This doesn’t mean that the computer “thought” to achieve the result; it just means that it drew on a repertoire of patterns, collected from reading countless images, to identify the same traits in new ones.
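As a purely illustrative sketch of this kind of training (the dataset is synthetic, and the feature count and scikit-learn model are assumptions made for demonstration, not a real medical system), the snippet below fits a simple classifier on fake “exam” data and then scores it on exams it never saw.

```python
# Illustrative only: a tiny classifier trained on synthetic "exam" data,
# standing in for the far larger models used on real medical images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake dataset: 1,000 exams, each reduced to 20 numeric features.
# Label 1 marks exams that contain the trait of interest.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # the hidden pattern to be found

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# The model is scored on exams it never saw during training,
# mirroring how a trained system is asked to read a new patient's exam.
print("accuracy on unseen exams:", model.score(X_test, y_test))
```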
Limits Of Artificial Intelligence
Current applications of artificial intelligence are statistical in nature. This makes them an excellent fit for problems where the answer is not binary, that is, not a simple yes or no. Artificial intelligence today stands out at evaluating complex problems in which patterns are decisive.
A hypothetical example: an AI may find it difficult to observe a traffic light and make the correct decision based on the difference between two colors: red to stop, green to go ahead. The reason is that any fluctuation in the data it collects, or any gap in the repertoire it had access to during training, can introduce variation and unexpected behavior in the decisions it produces.
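As a hedged sketch of that scenario (the color encoding, the numbers and the scikit-learn model are all assumptions made for illustration), the snippet below trains a toy classifier on clean red and green readings, then shows how a washed-out reading lands between the two learned patterns and turns the stop-or-go decision into something close to a coin flip.

```python
# Toy illustration: a classifier trained on clean color readings can become
# unreliable when the input drifts away from what it saw during training.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Training data: (red, green) channel intensities for clean red and green lights.
red_lights = rng.normal(loc=[0.9, 0.1], scale=0.05, size=(200, 2))
green_lights = rng.normal(loc=[0.1, 0.9], scale=0.05, size=(200, 2))
X = np.vstack([red_lights, green_lights])
y = np.array([0] * 200 + [1] * 200)  # 0 = stop, 1 = go

model = LogisticRegression().fit(X, y)

# A clean reading is classified with high confidence...
print(model.predict_proba([[0.92, 0.08]]))  # strongly "stop"

# ...but a washed-out reading (glare, fog) sits between the two patterns,
# and the stop-or-go probabilities end up close to a coin flip.
print(model.predict_proba([[0.50, 0.55]]))
```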
The computer may choose the wrong path if there are any inconsistencies. For example, manufacturers of cars with some degree of autonomy have to deal with edge cases all the time, not only in traffic light recognition but also in lane markings, curbs, signs and everything else (besides, of course, other vehicles, pedestrians and animals).
Moving from hypotheticals to actual cases, Tesla, famous for its autonomous driving technology, has already faced fatal accidents in which the cause of the collision can be traced to the AI’s inability to detect obstacles correctly. On at least two occasions, both with fatalities, the brand’s cars did not notice a truck crossing the road. The AI kept the cars on course and they collided broadside with the trucks.
Where Does AI Already Work?
Some application of AI is already common in many people’s routines: the virtual assistant on your cell phone uses the technology. Autonomous vehicles also apply sophisticated artificial intelligence to navigate traffic safely on highways and in large cities.
It has also become common to migrate to connected homes in which an AI platform can control routines and automate household tasks. Social networking apps that apply effects to images and offer “diagnostics” about the user’s personality also rely on some form of AI.