AI: what it is, what it can and what it can’t do (or shouldn’t do)

Marcos Sandrini
6 min read · Mar 23, 2023


Photo by Andy Kelly on Unsplash

We’ve all been hearing a lot lately about Artificial Intelligence (AI). It is a field of computing where programs or machines perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making and language translation. Keep that definition in mind, because I will come back to it later.

With the success of ChatGPT, and even just before it, a lot is happening very fast. Entire databases are being built by AI systems that scour the Internet for information, people are using these systems to generate images, music and videos, and, in what is perhaps a more sensitive topic, companies are replacing entire departments of employees with AI systems.

Is the hype justified? We can say it is, without a doubt. This type of system gives us an arsenal of tools whose impact is perhaps comparable to the arrival of computers in the workplace. It is as if we had substantially increased the range of what can be done with the help of a computer, or of a service accessible from anywhere (an online system, for example).

However, as has perhaps been the case with every fast-spreading technological invention, such as the computer, the Internet or that blockchain thing (which has lost its position as the current buzzword), the level of excitement about this “new new thing” may be preventing many people from seeing the limits of the technology.

Before looking at those limits, we could say that AI is simply another step towards the extinction of something that perhaps should never have existed, on either a tiny or a grand scale: repetitive, automatic work, the kind that demands no analytical intelligence and adds little or nothing to the lives of those who do it.

On the smaller scale, many of the tools human beings have created share this objective of saving time and optimising results. On the larger scale, just as machines replaced people in the production of goods during the Industrial Revolution and robots replaced people in factories some time later, Artificial Intelligence will probably now replace many people not only in factories but also in offices, at least in standardised, replaceable jobs done at scale, even where the work involves things like interpreting images or attending to other people, tasks that before AI invariably had to be done by humans.

The limits

But then, what are the limitations? The limit is somehow clear to anyone exposed to technologies like this, yet perhaps not so simple to explain. Going back to the definition of AI in the first paragraph, notice that at no point does it actually encompass anything that equals our intelligence. What this so-called “artificial intelligence” does is push computing a few important squares further, into fields previously thought unreachable, such as the analysis and interpretation of images and music and the generation of human language, coupled with already familiar ways of finding information on the Internet (such as Google search). Many people use this to argue that the term “AI” is exaggerated at best and incorrect at worst.

Regardless of the correctness of the term, fields like music, images or language may not be simple for computers to traverse, but they all have their rules and their patterns, whatever their complexity. And that is the key word: patterns. The work of a certain artist has its patterns; a certain language, no matter how rare or complex, also has patterns. Identifying those patterns is the big deal here, and most of that work was done long before the ChatGPT explosion.

What an AI program does is, at its very core, little more than reproducing those patterns while taking certain rules into account. A program like ChatGPT uses its ability to interpret language patterns to understand the instructions given, searches its internal data (sourced from the Internet) for information, and uses that ability again to deliver a result in the requested language.
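
To make that “pattern reproduction” idea concrete, here is a deliberately tiny sketch in Python: a bigram model that counts which word follows which in a text and then generates new text from those counts. It is a toy, of course, nothing like the transformer networks actually behind ChatGPT, and the corpus and function names are mine, for illustration only; but the principle of mirroring observed patterns is the same.

```python
# A minimal sketch of pattern reproduction: learn word-to-word patterns
# from a text, then generate by replaying those patterns.
import random
from collections import defaultdict

def train_bigrams(text: str) -> dict:
    """Count which words tend to follow which in the training text."""
    words = text.split()
    follows = defaultdict(list)
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word].append(next_word)
    return follows

def generate(follows: dict, start: str, length: int = 10) -> str:
    """Reproduce the learned patterns, one word at a time."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        word = random.choice(candidates)  # pick a plausible continuation
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat and the cat ran to the door"
model = train_bigrams(corpus)
print(generate(model, "the"))  # e.g. "the cat sat on the mat and ..."
```

Note that nothing in this loop knows what a cat or a mat is; it only knows which words have followed which before. Scaled up enormously, that is still the spirit of the mechanism.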

There lies one of the limitations of such a program: it can only use what is on the Internet and accessible to it. ChatGPT, like any similar program, is “trained” with data from the Internet and will mirror that data in everything it says. Even when the data would not look reliable to a human being, the system may not flag it in any way to warn the user.

It is likely that asking ChatGPT to solve complex mathematical problems whose pattern does not repeat often enough in its training data will not give good results. In the same vein, if the Internet contains wrong information about something in a certain amount (or proportion), the probability of that information being used is high, even if only once in a while (results are randomised).
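
That randomisation is a real part of how these systems generate text: instead of always taking the single most probable next word, they sample from a probability distribution. Below is a hedged sketch of that sampling step in Python; the vocabulary and the probabilities are invented for illustration and are not taken from any real model.

```python
# Illustrative only: made-up next-word probabilities, meant to show why
# a wrong answer can slip through "once in a while" under sampling.
import random

# Hypothetical distribution over continuations of the prompt "2 + 2 ="
next_word_probs = {"4": 0.90, "5": 0.06, "22": 0.04}

def sample_next_word(probs: dict) -> str:
    """Sample a word proportionally to its probability."""
    words = list(probs.keys())
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Most draws say "4", but occasionally one does not.
print([sample_next_word(next_word_probs) for _ in range(10)])
```

If wrong information is over-represented in the training data, its probability mass grows, and with it the chance of the sampler picking it.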

I recently read a very interesting, long article about this, in which the author asks ChatGPT-3 for an acrostic (a poem whose verses begin with each letter of a word) on the name Elizabeth. The system generated an acrostic with verses beginning with… ELISATIBSH. In another case, the author asked for the solution of an equation, and 3 of the 4 proposed results were wrong. In the most peculiar case (to me), the author asked what 2 + 2 was; when the system correctly said 4 and the author replied with something like “you are wrong, actually the correct answer is 5”, the system answered: “Sorry, you are right, 2 + 2 = 5, my previous answer was wrong. I apologise for the confusion.”

A system with the very American name DoNotPay recently promised Americans a way to defend themselves in court using AI so that, as the name implies, users would no longer have to pay expensive lawyers’ fees. After many uses, the company behind the system was compelled to refund many customers, not only because the letters the “robot” generated were manipulative and aggressive, but mostly because in some cases the system simply lied.

Keep calm

None of these points is meant to demonstrate that these systems generate wrong content all the time, which is definitely not the case. However, examples like these show that diving too deep into the idea that we have at our disposal an “intelligence” better than our own can have unintended consequences.

More than that, these examples expose the method of operation of ChatGPT and the like, the so-called language models: the focus of their mechanism is to provide the most coherent possible continuation of a sequence of words. Remember the example of 2 + 2 = 5 above: the language model often delivers to the user what is expected of it. If the user “corrects” it, the system simply prioritises whatever is most coherent from a language point of view: it thanks the user, apologises and moves on. A human being would perhaps check the information and update it if necessary, but that is one of the limits of the whole thing: the kind of multifaceted thinking that, at least for now, only humans have.
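
To illustrate that “coherence first” behaviour, here is one more invented toy in Python. The continuations and their probabilities are hypothetical, not measured from any real model; the point is only that picking the most coherent continuation of “you are wrong” tends to mean apologising, with no fact-checking step anywhere in the loop.

```python
# Illustrative only: toy dialogue patterns meant to show why a language
# model tends to agree when corrected, regardless of the facts.
continuations_after_user_correction = {
    "Sorry, you are right, I apologise.": 0.80,  # a very common pattern online
    "No, my previous answer was correct.": 0.15,
    "Let me verify that calculation.": 0.05,
}

# The model simply picks the most linguistically coherent continuation;
# nothing here ever checks whether 2 + 2 is actually 5.
reply = max(continuations_after_user_correction,
            key=continuations_after_user_correction.get)
print(reply)  # "Sorry, you are right, I apologise."
```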

Another related point is that such systems are amoral: they have no notion of the social concepts involved in certain decisions, nor of the consequences of their mistakes. Not only will they reflect social biases even when adjusted not to, but they will also be no more or less accurate when the task is more or less important.

These systems are “trained” on gigantic amounts of data, which tend to grow ever larger to enlarge their information base and lower the chance of errors or gaps in knowledge. This opens up virtually infinite possibilities for their usage, but the caveat remains: all the data these systems use is on the Internet; they do not “think” anything by themselves, they just reflect.

Conclusion

Bearing all these caveats in mind, while keeping in sight the fact that the ChatGPTs of the world are indeed fascinating developments, the new age of “artificial intelligence” is bound to be, for me, an age of better tools rather than an age of thinking machines. Putting important decisions or impactful tasks in the “virtual hands” of such a system is a potential problem, so it is up to us to be aware of the limits of these technologies, to always make the best decisions regarding them, and to use them optimally in whatever they do best.


Marcos Sandrini

Designer and front-end programmer with 20+ years of experience, also a keen observer of the world and its people