The AI vs Human Debate

There has been a fascination with AI for as long as humans have had computers. Whether it's Chappie, Lucy, I, Robot, Short Circuit or even WALL-E, AI has been inspiring sci-fi films for years. Steven Spielberg even directed a film called A.I. Artificial Intelligence in 2001. The AI phenomenon isn't anything new. But in recent years, and certainly recent months, it has rocketed to the forefront of conversation in the business and tech worlds and is even entering mainstream conversation.

What is AI?

There are a lot of different descriptions of what AI is. AI stands for Artificial Intelligence, so in theory every AI model we talk about should be that, at least in some way. However, there are a lot of philosophical questions about AI and what makes an AI truly intelligent. There are debates about the nature of intelligence and whether it can be captured by a computer, about whether AI can be intelligent if it isn't conscious (which in turn leads to debates about what true consciousness is), and even about whether AI truly understands the information it is processing or is simply repeating symbols in an order. All these questions are for philosophers, and generally not what people mean when they discuss AI in the mainstream.

When I've been asked about AI in recent weeks and months, the AI people most often refer to is ChatGPT, and with over 100 million users as of July 1st 2023, and being free to sign up and play around with, it's easy to see why. It's at the forefront of the AI conversation right now.

So, what exactly are we talking about when we say AI and mean something like ChatGPT? Who better to ask than the AI itself? I asked ChatGPT and it told me that it is "a specific type of AI model known as a language model. . . GPT models are designed to process and generate human-like text based on the patterns they've learned from a vast amount of text data".

Put in layman's terms, ChatGPT is a computer programme that has been fed examples of text from all corners of the globe, covering a vast range of subjects. It then uses the information it has been fed, or "trained" on, to provide lifelike, or humanlike, answers to prompts. For the purposes of this article, it's the ChatGPTs of the world we're talking about; the broader philosophical questions mentioned earlier aren't the ones to be answered here.
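For the curious, here is a minimal sketch of what "giving it a prompt" looks like when a developer talks to a model like ChatGPT through OpenAI's Python library. The model name and the pre-1.0 version of the openai package are assumptions for illustration; the exact names and fields may have changed since, so treat this as a sketch rather than a recipe.

```python
# Minimal sketch of prompting a ChatGPT-style model via OpenAI's Python library.
# Assumes the pre-1.0 "openai" package and the "gpt-3.5-turbo" model (illustrative).
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder - never hard-code a real key

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "Explain what a language model is in one sentence."},
    ],
)

# The reply is the text the model judges most likely to follow the prompt.
print(response["choices"][0]["message"]["content"])
```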

Why Are There AI Concerns?

With the advancement of modern technology and computing power, we are seeing more and more powerful and realistic AI systems being introduced, from AI chatbots like ChatGPT to programmes like Midjourney, which generate art that can be hard to distinguish from work created by actual human artists.

This has reignited not only the broader philosophical conversations around AI and creativity, AI and ethics, and whether AI is truly intelligent, but also the conversation about AI's role in society. The AI phenomenon is the next wave in the debate about how technology will affect the future of industries and workforces across the country, and the world.

There are fears that AI represents the next industrial revolution, one that will see people lose their jobs to machines. After all, if we can learn how to make safe self-driving cars, why would anyone need an Uber driver? Or, going further, why would any company pay haulage or truck drivers when they can simply pay once for a self-driving vehicle and trust it will get where it's going, both reliably and safely?

As mentioned earlier, AI models work on "training", and it is from this training that they pull their frames of reference, so in theory the more they are trained on, the better they will be. If you're wondering how much information goes into this "training", the most recent version of ChatGPT is rumoured to have been trained on over 4 trillion pieces of information, though this hasn't been confirmed.
 
For a little extra context, since the numbers always sound so confusingly similar: 1 million seconds is roughly 11 days, 1 billion seconds is roughly 32 years, and 1 trillion seconds is roughly 31,688 years. If the newest version of ChatGPT has been trained on 4 trillion pieces of information, it's incredibly difficult to imagine anything it doesn't have a frame of reference for.
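If you want to sanity-check those comparisons yourself, a few lines of Python will do it:

```python
# Quick sense-check of the million / billion / trillion comparison, in seconds.
SECONDS_PER_DAY = 60 * 60 * 24               # 86,400 seconds in a day
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365.25  # ~31.6 million seconds in a year

print(f"1 million seconds  ≈ {1e6 / SECONDS_PER_DAY:.1f} days")      # ≈ 11.6 days
print(f"1 billion seconds  ≈ {1e9 / SECONDS_PER_YEAR:.0f} years")    # ≈ 32 years
print(f"1 trillion seconds ≈ {1e12 / SECONDS_PER_YEAR:,.0f} years")  # ≈ 31,688 years
```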
 
The point of an AI like ChatGPT is to hold a realistic-sounding text conversation, as if you were talking to a person. If you wanted to, you could ask ChatGPT how its day was going, what the weather was doing, or who it thought would win a particular sports game this weekend. Depending on how you worded your question, it would give you either a human-sounding answer or a more mechanical one.
 
(One thing to note about the current version of ChatGPT is that it was trained on information up to 2021 and has no internet access, so it won't tell you anything from after that point and can't update in real time, although I'm personally sure it won't be long before that changes.)

 

The Problem with AI

 
So what's the problem? You can ask the AI a question, and it can give you an answer based on the information it has. It's actually quite cool. You could ask it to rewrite the lyrics to Suspicious Minds in the style of Shakespeare, and it would probably do a good job. The problem isn't that you could ask an AI model to do something entertaining like rewrite lyrics in a different style. The problem is that you can ask it to write them from scratch, and it will.
 
Why is that a problem, I hear you ask? Well, if you can ask AI to write song lyrics, you could ask it to write a novel, or, as many universities found when ChatGPT first became publicly available, you could ask it to write an essay. This is a problem for any industry that requires creativity and for the people doing those creative roles. After all, why would someone pay a member of staff, or an entire team of them, to come up with ideas when they can ask AI to create them instead?
 
It's very easy to see a world where bosses use AI to replace their staff and cut costs. After all, on the face of it, it could make sense. AI can run tirelessly; it doesn't need to be paid or have a benefits package; it won't take holidays; it won't even need to go home to sleep. Not only that, but it can give you response after response to prompt after prompt.
 
Therein lies the problem with AI. It isn't the AI itself that's the threat to jobs across the globe, but rather its deployment. If AI is used by people who don't understand how it works and aren't willing to learn, then it will quickly become a disaster for a lot of people. Partly because there could be job losses across the globe, but also because, quite frankly, AI can't do what it's sold as being able to do.

 

What AI Can’t Do

 
When ChatGPT first came into the public forum, concerns were raised that it was able to pass medical school entrance exams and the bar exam to become a lawyer in the US. However, to anyone familiar with how AI works (as discussed earlier in this piece), this didn't come as a shock.
 
If you train an AI and part of the material it is trained on is legal or medical information, then it makes absolute sense that you could ask it a question and it would give you the answer. It would be no different from looking up the answer to the exam questions yourself in a textbook, or simply googling them. An AI model essentially does the same thing, but wraps the answer up in a well-presented bow.
 
When you understand AI in this context, its limitations suddenly become clearer.
 
I could very easily ask an AI model to come up with copy for a marketing campaign for a fast-food chain, or a shoe brand. The AI will look through the information available to it and use that information to try to do what I've asked. But the AI is only able to do what it has been programmed to do, not by me filling in a prompt, but by the people who programme it behind the scenes, at the very beginning of the journey from raw code to finished product.
 
This is where the biggest problem with AI lies. When asked to generate a snazzy tagline for a fast-food chain, it lacks the ability to make the kind of creative connection that comes out of nowhere and gives McDonald's "I'm Lovin' It". If I asked an AI to generate a marketing campaign for a type of basketball shoe, it could do so. But it couldn't come up with the lightning in a bottle that launched Nike to new heights by partnering with Michael Jordan.
 
Ultimately, the AI we have today is computer code. It can only do what you ask it to do, never any more than that. It might look fancy, but it can still only do what you ask it to do. I asked ChatGPT what drives the algorithms and patterns it's trained on, and its own answer was, I think, the most telling explanation of its limitations.
 
“The model learns patterns, language structures, grammar, and facts from the text data it’s been exposed to. It doesn’t have inherent understanding or consciousness but can generate text that appears coherent and contextually relevant based on the patterns it has learned. . . In summary, the patterns and algorithms I rely on are the result of the training process on vast amounts of text data, but I do not have true understanding or awareness. My responses are the outcomes of statistical probabilities and patterns in the data I’ve learned from.”
 
AI can't create anything. It gives you what it calculates to be the most likely outcome based on what it has been trained on, not anything necessarily new or original.
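To make "most likely outcome" concrete, here is a deliberately tiny, toy illustration of the idea: a word-counting model that always picks the most common next word it saw in its "training" text. Real language models are vastly more sophisticated than this, but the underlying principle of predicting from learned patterns, rather than inventing, is the same one ChatGPT describes above.

```python
from collections import Counter, defaultdict

# Toy "training data": the model only ever sees these few sentences.
training_text = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# Count which word follows which - pattern-counting at a massively simplified scale.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def most_likely_next(word):
    """Return the most common follower of `word` in the training text, if any."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

print(most_likely_next("the"))  # 'cat' - the most frequent pattern, not a new idea
print(most_likely_next("sat"))  # 'on'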

 

AI is a Tool

 
But this is where AI’s biggest weakness becomes its biggest strength. If it is deployed by someone who understands what AI can do, and more importantly what it can’t, then it becomes the ultimate reference tool, the ultimate sounding board, and the ultimate partner to a person in a creative role.
 
Imagine someone coming up with marketing slogans for a company: there is only so much creativity a person or a team can produce before they burn out.
 
When you get to that point of "we're out of ideas, let's take five" or "sleep on it, we'll come back fresh tomorrow", that's where AI comes into play. You can ask the AI to come up with suggestions for the problem you have, and if you're unsatisfied with its answers, you can prompt it ten different ways.
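As a rough sketch of what "prompt it ten different ways" might look like in practice, you could loop over several angles on the same brief, again using the illustrative OpenAI call from earlier (the model name, fields and the brief itself are assumptions, not a recommendation):

```python
# Sketch: same brief, several angles - the AI as a sounding board, not the decision-maker.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

brief = "a tagline for a new basketball shoe aimed at weekend players"
angles = ["playful", "nostalgic", "competitive", "minimalist", "pun-based"]

for angle in angles:
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Suggest three {angle} options for {brief}."}],
    )
    print(f"--- {angle} ---")
    print(response["choices"][0]["message"]["content"])
```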
 
It might give you nothing useful, but as a sounding board and a resource full of endless suggestions, it can be the tool that gives you that little extra spark, fires your creativity back into gear, and gets your team and their original ideas rolling again.
 
Ultimately, AI is a tool, and it should be deployed as such. There are industries that have embraced AI and deployed it to great effect; for example, there are AI programmes that have been trained to read MRI and CT scans for anomalies to an incredibly accurate degree, but those results are still checked by a trained medical professional.
 
However, the deployment of AI by people who misunderstand it, or wish to use it with ill intent, is absolutely something that we should all be worried about. If we all work together to understand the capabilities and restrictions of AI models as they are in their current iteration, then we can learn to embrace AI and move our industries forwards with such a powerful new tool at our disposal.