# Why Artificial Intelligence (AI) Is Neither

Jun 20, 2024

Artificial Intelligence (AI) is the buzzword du jour of not just tech but the entire online world. We see it in the daily headlines of everything from industry stalwarts such as Wired (“There’s an AI Candidate Running for Parliament in the UK”) to the stiff-collared set at the Wall Street Journal (“What the Apple-OpenAI Deal Means for Four Tech Titans”). Everyone who is anyone is talking about it, training it, or trying to leverage it.

But very few people understand what “it” actually is. And, more importantly, how AI works.

While AI is a multifaceted subject, the core of any present-day AI system is some type of machine learning model trained on a data set (the larger, the better). Machine Learning (ML) is the practice of giving a computer an iterative set of tasks, along with a feedback mechanism that positively reinforces good/desired outcomes and negatively reinforces bad/undesired outcomes.
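
To make that definition concrete, here is a minimal sketch (plain Python, not tied to any particular library) of the feedback loop just described: the computer repeats the same task over the data, and each outcome nudges the model’s weights toward desired answers and away from undesired ones. The function, data shapes, and learning rate are invented for illustration.

```python
# A minimal sketch of the ML feedback loop described above (illustrative only).
def train(examples, weights, learning_rate=0.1, epochs=10):
    """examples: list of (features, desired_outcome) pairs; weights: list of floats."""
    for _ in range(epochs):                      # repeat the same task over and over
        for features, desired in examples:
            guess = sum(w * f for w, f in zip(weights, features))
            error = desired - guess              # the feedback signal
            # A positive error reinforces the contributing features;
            # a negative error suppresses them.
            weights = [w + learning_rate * error * f
                       for w, f in zip(weights, features)]
    return weights

# e.g. two numeric features, where the desired outcome is roughly their sum
data = [([1.0, 2.0], 3.0), ([0.5, 0.5], 1.0)]
print(train(data, weights=[0.0, 0.0]))
```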

Let’s break that down with a real-world example of image identification. Let’s say you want to build an application that will recognise Santa Claus in any image you run through it. You’ll start with a small sample of images, some of which match your criterion (“Is it Santa?”) and some of which don’t. We’ll simplify the steps of identifying attribute points (Is there a person in the image? Does the image contain a beard? Is the person in the image wearing half-moon spectacles? and so on) and then begin feeding the images through your application, adjusting the code after each result.
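
As a rough illustration of those attribute points, here is a hypothetical sketch of how hand-picked attributes might be combined into a single “Is it Santa?” score, with the weights adjusted whenever a labelled image is scored incorrectly. The attribute names, weights, and scoring function are all assumptions made up for this example; a production system would learn far subtler features directly from the pixels.

```python
# A hypothetical attribute-point classifier (names and weights are invented).
from dataclasses import dataclass

@dataclass
class ImageAttributes:
    has_person: bool
    has_white_beard: bool
    has_red_suit: bool
    has_half_moon_spectacles: bool

def santa_score(attrs: ImageAttributes, weights: dict) -> float:
    """Combine hand-picked attributes into a single 'Is it Santa?' score."""
    return (weights["person"] * attrs.has_person
            + weights["beard"] * attrs.has_white_beard
            + weights["suit"] * attrs.has_red_suit
            + weights["spectacles"] * attrs.has_half_moon_spectacles)

# Start from a human's guess at the weights, then adjust them every time a
# labelled image is scored incorrectly (the "modify the code for each result"
# step described above).
weights = {"person": 1.0, "beard": 2.0, "suit": 2.0, "spectacles": 0.5}
example = ImageAttributes(True, True, True, False)
print("Santa-ness:", santa_score(example, weights))
```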

Imagine doing this on as many parallel iterations as your budget would allow, and then scaling out to include as many different images as you could find—an image library of not just tens of thousands but hundreds of millions. Santas. Santas everywhere!

But it isn’t just the Santas that you’re training for. Of equal, or greater, import is the negative use case. An image of a tree. Of a pot plant. Of a car. And what about the edge case? For instance, someone hangs a long grey beard on a tree with a red cap above it. Is this Santa? Or ZZ Top at a Christmas party…a trio of rocking Santas?

Let’s recap where we’re at. We’ve built a base set of attributes that identify an image of Santa and loaded it into an application. We’ve passed millions of images through this application, altering the matching criteria with each image.

This then presents the issue. The rub, as it were. Or rubs, as there are several.

  1. The initial set of attributes will contain some sort of bias. A seemingly harmless determination of what Santa looks like has to be made by an individual, who is likely to base those attributes on their own society’s concept of “Santa”. Girth, beard length, and height are all inexact attributes, open to bias from the software engineer(s).
  2. The codification of these attributes, along with the algorithms used to decide between “Yes, this is a Santa” and “No, this is a figgy pudding”, requires a certain level of programming ability. This means that additional bias will be introduced by the socio-economic background and worldview of the software engineers (i.e., more male than female, more first world than developing, more young than old, etc.).
  3. Finally, the image base itself. What methodology ensures the veracity of the Santa images? Or, more simply put, what control do we have that all of the training images brought in are correctly labelled? At tens, hundreds, perhaps even thousands of images, individual/human review is possible. But when a program is written to consume all of the image data held within Meta, nothing more than the occasional spot check is possible (a rough sketch of such a spot check follows this list).
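
As mentioned in point 3, the only practical control at that scale is a spot check. Here is a minimal sketch of what that might look like: pull a small random sample from an enormous labelled image set and queue it for a human to verify. The function name, sample size, and data shape are assumptions for illustration only.

```python
# A minimal sketch of an "occasional spot check" over a huge labelled image set.
import random

def spot_check(labelled_images, sample_size=500, seed=42):
    """Return a reproducible random sample of (image_id, label) pairs
    for a human reviewer to verify by hand."""
    rng = random.Random(seed)
    return rng.sample(labelled_images, min(sample_size, len(labelled_images)))

# 500 images out of hundreds of millions is a vanishingly small slice;
# every other label is taken on trust.
sample = spot_check([(f"img_{i}", "santa") for i in range(10_000)])
```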

Artificial Intelligence isn’t artificial. We’ve designed the base characteristics and created a set of attributes. That’s not artificial. Someone built it. And it isn’t intelligence. Sure, we can provide enough data to the algorithms to generate a base level of quasi-conversation, but it isn’t capable of independent fact-checked output…especially when the “facts” are being derived from an uncontrolled source (i.e., the internet as a whole).

I’ve seen AI used to debug code, rewrite service layers, and advise on security/penetration testing. Well beyond the “Hey Google, help me plan my next trip to the Goldie.” And I’ve seen some interesting choices.

And it isn’t limited to my experience in the Services industry. A lawyer mate of mine expressed his concern at lunch last week. He’s seen cases in his firm where a junior associate searched for legal precedent using ChatGPT, only to find that ChatGPT had cited a law that didn’t exist.

Yes, there are definite uses for highly trained matching algorithms: fraud detection, license plate recognition, and predictive analysis of events. But there should always be a human element at the end of the decision tree to ensure the veracity of the recommended action.
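
One simple way to keep that human at the end of the decision tree is a confidence gate: the algorithm’s own confidence decides whether an action is taken automatically or handed to a person for the final call. The sketch below is illustrative only; the threshold, labels, and function are assumptions, not any particular product’s behaviour.

```python
# A sketch of a human-in-the-loop checkpoint (threshold and labels are assumed).
REVIEW_THRESHOLD = 0.90  # a real system would tune and justify this cut-off

def route_decision(label: str, confidence: float) -> str:
    """Act automatically only on high-confidence matches; everything else
    is queued for a human to make the final call."""
    if confidence >= REVIEW_THRESHOLD:
        return f"auto-flag: {label} (confidence {confidence:.2f})"
    return f"human review: {label} (confidence {confidence:.2f})"

print(route_decision("possible fraud", 0.97))  # actioned automatically
print(route_decision("possible fraud", 0.62))  # a person makes the call
```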

Security products, data services, legal casework, and anything “mission critical” are too important not to be scrutinised at the lowest possible level. If something is writing your solution and you’re not sure what it is based on, how can you put your hand on your heart and say it is correct?

For assistance in your data services journey (or just an amusing conversation about AI/ML), please reach out to the archTIS Services Team.
