
Is AI as scary as it sounds?

Feb 23, 2023

Artificial Intelligence (AI) is a buzzword that is hard to escape. It conjures up images of talking, humanoid robots and daunting futuristic scenarios, most likely inspired by the science-fiction film industry. But what is AI, really? And are machines actually going to take over the world?

In this article, Simeon Harrison, a machine learning expert from EuroCC Austria, explains what AI is, touches upon the topics of machine learning and deep learning, and gives some hints as to where we can find this technology around us.

So, let’s start.

To answer the questions from the very beginning, we need to step back in time: the first academic research on AI began in 1956 at Dartmouth College in New Hampshire, USA. Some of the leading scientists believed that AI would surpass human intelligence within a generation. Millions of dollars were poured into research until it became apparent that specialists had grossly underestimated the difficulty of realising such a project. Public funding eventually ebbed away in the early 1970s, a period that became known as the first “AI winter”. Ten years later, in the early 1980s, the Japanese government launched an initiative to fund industrial AI research. However, by the end of the decade, investors became disillusioned again.

It wasn’t until the 21st century that computational power was sufficient to deliver credible and useful results. Since then, AI has revolutionised areas such as supply chain management, image recognition, diagnostics, autonomous driving and, yes, playing computer games, to mention just a few. These are all examples of what experts call artificial narrow intelligence (ANI). ANI is where AI excels at tasks that are narrowly defined and very specific, and in these highly specialised areas AI can definitely outperform humans. In contrast, artificial general intelligence (AGI) is the kind of AI that appears in science fiction: the type of AI that can understand and learn any intellectual challenge, just like a human being. You can, however, rest assured that this form of AI is not going to be around any time soon, as data scientists do not even know how it could be brought about. Apologies to all science-fiction fans.

Now that we have cleared that up, we still do not really know what AI includes.

Generally, there is no official definition of the term. Usually, any behaviour of a technological device that mimics intelligent behaviour, at least to some degree, is labelled as AI. Knowledge-based systems, for example, gather and map pieces of information which they then use to solve a specific problem. Such systems draw their conclusions by applying previously programmed rules.
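
To make that idea concrete, here is a hypothetical, minimal rule-based sketch in Python. All the facts and rules are invented for illustration; nothing is learned from data, and the conclusions follow purely from hand-coded rules.

```python
# A made-up knowledge-based system: hand-coded rules map gathered facts
# to conclusions; nothing here is learned from data.
facts = {"has_fever": True, "has_cough": True, "has_rash": False}

rules = [
    (lambda f: f["has_fever"] and f["has_cough"], "suspect flu"),
    (lambda f: f["has_rash"], "suspect allergy"),
]

conclusions = [result for condition, result in rules if condition(facts)]
print(conclusions)  # -> ['suspect flu']
```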

Usually, when we speak of AI, we actually mean machine learning (ML). Here, a chosen model learns the inherent patterns and rules of a problem from data, without someone having to code them explicitly. Several statistical models can be used to achieve fast and efficient learning. These include k-nearest neighbours, k-means clustering, the naive Bayes classifier, regression methods, support vector machines, decision trees and neural networks.
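
As a small illustration of one of these methods, here is a sketch of k-nearest neighbours; the use of the scikit-learn library and the choice of example dataset are assumptions made purely for demonstration.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)                # small example dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = KNeighborsClassifier(n_neighbors=5)      # classify by the 5 closest training points
model.fit(X_train, y_train)                      # "learn" the patterns from the data
print("accuracy:", model.score(X_test, y_test))  # evaluate on data the model has not seen
```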

What all these methods have in common is that a loss function is constructed from the discrepancy between predicted and actual values. While the model trains, the loss function is minimised as far and as fast as possible. This is achieved by optimisation methods, usually variants of the gradient descent algorithm, which uses differential calculus to determine the slope of the loss function.
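
The principle can be shown in a few lines of plain Python/NumPy: a made-up linear model is fitted to synthetic data by repeatedly computing the slope (gradient) of a mean-squared-error loss and stepping downhill. This is only a sketch of the idea, not any particular library's implementation.

```python
import numpy as np

# Fit y = w*x + b by gradient descent on a mean-squared-error loss (synthetic data).
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x + 0.5 + rng.normal(0, 0.1, 100)   # data with a known slope and intercept

w, b, lr = 0.0, 0.0, 0.1                      # initial parameters and learning rate
for step in range(500):
    error = (w * x + b) - y                   # discrepancy between predicted and actual values
    loss = np.mean(error ** 2)                # the loss function
    grad_w = 2 * np.mean(error * x)           # slope of the loss with respect to w
    grad_b = 2 * np.mean(error)               # ... and with respect to b
    w -= lr * grad_w                          # step downhill
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}, final loss={loss:.4f}")
```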

Deep learning (DL), on the other hand, is a subdomain of ML. Here, neural networks with several intermediate layers are deployed. They consist not only of an input and an output layer, but also of at least two hidden layers, which increases the complexity of the model and makes it possible to tune it very finely. This is what enables a deep neural network to detect very subtle differences in the input data.
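
For illustration, a deep neural network of the kind described here could be defined as follows, assuming the PyTorch library; the layer sizes are arbitrary choices for the sketch.

```python
import torch.nn as nn

# A small deep neural network: 10 input features pass through two hidden
# layers before reaching a single output value.
model = nn.Sequential(
    nn.Linear(10, 64),  # input features -> first hidden layer (64 units)
    nn.ReLU(),
    nn.Linear(64, 32),  # first hidden layer -> second hidden layer (32 units)
    nn.ReLU(),
    nn.Linear(32, 1),   # second hidden layer -> output layer (1 prediction)
)
print(model)
```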

In principle, we distinguish between three different training methods in ML.

With supervised learning, the system gets to see not only the input data but also the correct output data, or labels. This makes it relatively easy to construct a loss function that captures the difference between the predicted and actual values. The loss function is then minimised by optimisation, and the error is fed back through the neural network to adapt its parameters, which we call backpropagation. This kind of training continues for as long as it takes to push the error expressed by the loss function below a previously defined tolerance.
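
A supervised training loop might look like the following sketch, again assuming PyTorch; the data, the network size and the tolerance are invented for the example. The call to loss.backward() is the backpropagation step, and the optimiser (a variant of gradient descent) adapts the parameters.

```python
import torch
import torch.nn as nn

X = torch.randn(200, 10)                    # input data
y = X.sum(dim=1, keepdim=True)              # the "correct" output data (labels)

net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
loss_fn = nn.MSELoss()                      # discrepancy between predicted and actual values
optimiser = torch.optim.Adam(net.parameters(), lr=0.01)

tolerance = 1e-3
for epoch in range(10_000):
    loss = loss_fn(net(X), y)
    optimiser.zero_grad()
    loss.backward()                         # backpropagation: feed the error back through the network
    optimiser.step()                        # adapt the parameters
    if loss.item() < tolerance:             # stop once the error falls below the tolerance
        break
```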

Unsupervised learning does not involve predefined output values. The algorithm needs to find similarities and patterns within the input data and group the values accordingly. In this way, previously unknown connections within the input data can be detected.
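
As a sketch, here is k-means clustering with scikit-learn on synthetic data: no labels are provided, and the algorithm groups the points by similarity on its own. The dataset and cluster count are assumptions for illustration only.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, random_state=0)  # labels are discarded
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

print(kmeans.labels_[:10])      # cluster assignments found for the first 10 points
print(kmeans.cluster_centers_)  # the discovered group centres
```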

The two aforementioned training methods can also be combined into semi-supervised learning.

Reinforcement learning is used when the underlying rules of a problem are too complex or opaque to be programmed manually. The algorithm learns whether the actions it has taken result in success or failure: the model is “punished” for failures and “rewarded” for successes. In this way, complex connections can be internalised quickly and the resulting insights can be applied successfully to previously unseen problems.
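
A toy example of this idea is tabular Q-learning on a made-up five-state corridor: the agent is rewarded only for reaching the right-hand end and, purely by trial and error, internalises that moving right is the successful action. Everything here (states, rewards, learning parameters) is invented for the sketch.

```python
import numpy as np

n_states, n_actions = 5, 2          # actions: 0 = move left, 1 = move right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # explore occasionally, otherwise take the best action known so far
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state = max(state - 1, 0) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0      # "reward" for success
        # update the estimate of how good this action was in this state
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

print(Q.argmax(axis=1))  # learned actions per state: 1 (move right) in every non-terminal state
```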

What are the fields of application for ML or DL, you might ask?

Well, they can literally be found all around you: from the object tracking that your camera’s autofocus does so well to the facial recognition software you might use to sort your favourite family shots.

  • Have you ever sent a handwritten postcard and wondered how the postal service deciphered your unreadable scrawl? Well, now you know. It’s called handwriting recognition.
  • When did you last speak with Alexa, Siri, Cortana or Google Home? Whenever it was, that was probably the last time you used ML for voice recognition.
  • Does your car have road sign recognition or lane assist built in? Then it uses Machine Learning.
  • Maybe you use autocomplete when texting on your phone, or play computer games against bots. In any case, it is hard to escape AI with all its subdomains.
  • Even if you don’t use any of the above, your pension fund is being traded by bots, and your energy supplier manages to keep you supplied with electricity thanks to forecasts from AI-based time series analysis.

AI is neither spooky nor incomprehensible. It’s just the next big technological leap. Better grab the chance to understand it and work with it sooner rather than later.

About Simeon Harrison:

Simeon Harrison taught high school mathematics for eight years. He is currently affiliated with EuroCC Austria and TU Wien, where he is responsible for organising and conducting training for industry users in the areas of high-performance computing, high-performance data analytics and artificial intelligence.
