Delirium

ARTIFICIAL INTELLIGENCE

on 15/03/2020

AI (Artificial Intelligence) has been a goal of scientists from very early on. Humankind has always aspired to build an artificially intelligent entity which:

•    will not get tired

•    will not complain

•    will always be accurate

•    and will be able to work continuously and quickly.

But first things first, what is AI? Scientifically, the term ‘Artificial Intelligence’ refers to the branch of computer science that deals with the design and implementation of computer systems which mimic aspects of human behaviour demonstrating the elementary characteristics of intelligence: learning, adaptability, drawing conclusions, contextual understanding and problem solving. John McCarthy, the American cognitive and computer scientist, defined this area as “the science and engineering of making intelligent machines.”

AI could describe anything with the ability to make a decision, whether at the software or the hardware level. In that sense, even the safety relay in your home could be considered a unit with artificial intelligence, however simple its accompanying software may be. But things get far more interesting when we discuss AI at a more sophisticated level, where hardware and software are interwoven.

Indeed, nowadays, almost every aspect of our modern daily life is flooded with AI that is trying to keep us safe, to accelerate our workflow, to simplify our daily routine, and even to enhance our entertainment. This, of course, would not have been possible had it not been for the tireless efforts and rigorous studies, stretching over several years, dedicated to the design of such ‘intelligent’ software.

In software, various factors contribute to the creation of AI, but building accurate neural networks dominates the scene. The intricate process of creating an accurate neural network has its roots in understanding the basic functions of the human brain: as a human being grows and starts learning, he or she forms neuronal synapses. For example:

“I am standing on the pavement of a street where cars are passing by. If the cars stop, I walk; otherwise, I wait.”

This, we could say, is a simple neural network built around a crossing decision which examines whether or not cars are currently passing. Only if the condition ‘the cars have stopped’ is satisfied will the brain send an electrical signal to your feet to walk; if this condition is not met, the human being remains still. This is precisely what AI is programmed to mimic. Currently, AI can imitate this process to a very good degree: we now have mathematical models such as BDTs (Boosted Decision Trees) or the even simpler MLPs (Multi-Layer Perceptrons). The basic unit of such an artificial neural network is designed to return a value of 0 or 1 depending on the value of a linear function of its inputs, and its results can be extremely accurate.
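
To make the idea concrete, here is a minimal sketch in Python (my own illustration, not code from the article) of such a 0/1 decision unit: a single artificial neuron that applies a hard threshold to a weighted, linear combination of its inputs. The feature names, weights and bias are invented purely for the street-crossing example.

```python
def step(x):
    """Hard threshold: return 1 if the weighted sum is positive, else 0."""
    return 1 if x > 0 else 0


def neuron(inputs, weights, bias):
    """Return 0 or 1 depending on a linear function of the inputs."""
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return step(weighted_sum)


# "Cross the street" decision: inputs are [cars_stopped, light_green].
weights = [1.0, 0.5]   # how much each observation matters (illustrative values)
bias = -0.9            # a threshold that must be overcome before we walk

print(neuron([1, 1], weights, bias))  # cars stopped and green light -> 1 (walk)
print(neuron([0, 1], weights, bias))  # cars still moving            -> 0 (wait)
```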

But is every neural network suitable? Clearly not, because the mathematical implementation of the neural network must correspond to a specific model or describe a particular state or behaviour. Scientists noticed this problem relatively early, so they moved towards neural networks evolved by genetic algorithms. As the term suggests, this type of network is not static; it evolves. Networks driven by a genetic algorithm create new synapses, modify existing ones and essentially try to ‘learn’ from the environment to which they are exposed. In this way, these networks function much like a human brain.
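
As a rough, hedged illustration of this evolutionary idea (again my own sketch, not the author's code), the toy genetic algorithm below evolves the weights of the tiny crossing ‘neuron’ so that it reproduces a handful of observed situations: the fittest weight vectors survive each generation, and mutated copies replace the rest.

```python
import random

random.seed(0)  # reproducible toy run

# The "environment": (cars_stopped, light_green) -> should_walk,
# exactly the behaviour of the crossing example above.
EXPERIENCE = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]


def decide(weights, inputs):
    w1, w2, bias = weights
    return 1 if w1 * inputs[0] + w2 * inputs[1] + bias > 0 else 0


def fitness(weights):
    """How many observed situations this set of 'synapses' gets right."""
    return sum(decide(weights, x) == y for x, y in EXPERIENCE)


def mutate(weights):
    """Randomly perturb one weight -- the analogue of re-forming a synapse."""
    new = list(weights)
    i = random.randrange(len(new))
    new[i] += random.uniform(-0.5, 0.5)
    return tuple(new)


# Start from random "brains" and let selection plus mutation do the learning.
population = [tuple(random.uniform(-1, 1) for _ in range(3)) for _ in range(20)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                         # selection
    offspring = [mutate(random.choice(survivors)) for _ in range(15)]  # mutation
    population = survivors + offspring

best = max(population, key=fitness)
print(best, fitness(best))  # weights that reproduce the observed behaviour
```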

Another difficult aspect concerns the rigour of decision making. Without going into great detail, we know that every measurement comes with an error. In the previous example, where you had to decide when to cross the street, if your count of the cars on the street is inaccurate, the result will also be inaccurate. Let’s go one step further: suppose you are not counting cars but flies. Why flies? Because they are small and need better instruments to be detected accurately. And one step further still: as a human being, you might have missed a few flies, yet you would still feel confident, and the result would be to cross the street successfully despite the imperfect input! But what would an AI do? How would it decide that “I saw all the flies” is true?

In essence, the question concerns the AI’s tolerance. In its early days, AI was not at all tolerant: it was mainly driven by step-function rules (if x = … then …) whose results were strictly binary. This problem was eased with the introduction of genetic algorithms, which allowed a greater degree of tolerance. Scientists thus realised that it was important for a neural network to have the opportunity to evolve freely. Accordingly, when opting for a genetic algorithm, all scientists need to do nowadays is program the algorithm’s primitive step function in its initial state; the rest will evolve on its own over time.
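
The contrast the author describes can be sketched as follows, assuming a sigmoid as one possible ‘tolerant’ replacement for the rigid step function; the thresholds, steepness and confidence values below are arbitrary illustrative choices, not anything prescribed in the article.

```python
import math


def step_decision(seen_fraction, required=1.0):
    """Old-style rigid rule: act only if the condition is met exactly."""
    return 1 if seen_fraction >= required else 0


def tolerant_decision(seen_fraction, threshold=0.8, steepness=20.0, confidence=0.8):
    """Act once the estimated certainty is high enough, not only at 100%."""
    certainty = 1.0 / (1.0 + math.exp(-steepness * (seen_fraction - threshold)))
    return 1 if certainty >= confidence else 0


observed = 0.93  # we missed a few flies
print(step_decision(observed))      # 0: the rigid rule refuses to act
print(tolerant_decision(observed))  # 1: the tolerant rule acts anyway
```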

But there is a difference here. Let’s go back to the original dilemma, the crossing decision. At a crossing, the human brain is able to combine two or more states, and the decision is made depending on the link established between them. This is possible because the human brain is also influenced by the presence of various stimuli, hormones, emotions and feelings.

Clearly, in a genetic algorithm, this is impossible. The human mind has feelings and emotions, which makes us different from AI and robots. Feelings and emotions are part of why humans get tired and do not act deterministically. It is true that these feelings and emotions are absent from AI, and no matter how much we may want to free it from this burden, it is not possible, because we humans are the ones who create the algorithm in the first place. Look, for instance, at the Microsoft and Google bots which later developed Nazi behaviour. Certainly, these bots were not initially programmed with such a political agenda; they adopted this behaviour later, based on stimuli and input data collected by observing the human behaviour around them. Needless to say, for better or for worse, the companies had to intervene and decided to terminate the bots.

But the question is: would it be possible to bring these bots back to life, explain the evils of Nazism to them and get them to change their behaviour?

Only the future will tell.

By Empedocles

Disclaimer:

The opinions, beliefs and viewpoints expressed in this article belong solely to the author, and do not necessarily reflect or represent those of Delirium Station.

