
About AI, con artists and hysteria

Marcos Sandrini

--

I remember watching an episode of an American show about a supposed medium who, surrounded by an audience, would pick one person seemingly at random and start relaying messages from loved ones, deceased but supposedly present there in spirit.

Was that a true “vision” into the afterlife? Clearly not, but it was interesting to decode the technique. Blaming the fuzziness of spiritual communication, he would ask the subject of his “visions” several questions, with the claimed intention of getting a clearer picture.

While he never came up with any concrete new information, the messages from the “spirits” were always full of statements crafted to make the receiver feel good and, most importantly, to feel they had enough answers (“She is well”, “He wanted to say he didn’t suffer”).

Above all, more revealing than the naiveté of the people there (understandable in such emotionally charged circumstances) was how the “medium” basically said what people wanted him to say. He was a tool to “materialise” the expectations of the person being “read”, giving them immense comfort in return (so much so that those people sometimes end up feeling truly indebted to the “medium”).

In that regard, general-purpose AI language models are born similar. Obviously, the goal of these tools is completely different from that of a con artist who earns money by convincing people he sees things he doesn’t. So please don’t misunderstand me: I am not saying AI tools are a scam by any stretch, but it is amusing how many striking similarities the mechanisms share. The biggest one is that you often get what you expect to get, more than the answer to what you asked.

The mechanism

The basic cogs of this mechanism are predictions. Both the medium and the model try to predict what comes next based on data: the medium gathers his data from a pragmatic, empathetic reading of his subjects, probably helped by hints from the TV show’s production. AI language models rely on text taken from real websites and documents (there is, by the way, the problem of not compensating the creators of the data used to train the models, but I digress).

Language models like ChatGPT rely on a ridiculous amount of stored text. When a question comes in, candidate words are ranked by probability, with a randomiser thrown in to make things more “fun” (and perhaps its creators less accountable), leading to the next word in the sequence. The fact that the “chat” is written out word by word by the AI mechanism is not a cosmetic choice; it is how things are done internally: one word at a time.
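To make this concrete, here is a minimal sketch in Python of what “ranked by probability, with a randomiser” means. The probability table is entirely made up for illustration; a real model computes these numbers with a huge neural network, but the sampling step works roughly along these lines:

```python
import random

# Hypothetical next-word probabilities; a real model would compute
# these from billions of learned parameters.
next_word_probs = {
    "the sky is": {"blue": 0.7, "clear": 0.2, "falling": 0.1},
}

def pick_next_word(context: str, temperature: float = 1.0) -> str:
    # Rank the candidates by probability, then let a randomiser choose:
    # a higher "temperature" flattens the odds (more surprising words),
    # a lower one sharpens them (more predictable words).
    candidates = next_word_probs[context]
    weights = [p ** (1.0 / temperature) for p in candidates.values()]
    return random.choices(list(candidates), weights=weights)[0]

print(pick_next_word("the sky is", temperature=0.5))  # almost always "blue"
print(pick_next_word("the sky is", temperature=2.0))  # "falling" shows up more often
```

Run this in a loop, appending each picked word to the context, and you get text generated one word at a time, exactly as you see it typed out in the chat window.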

One of the true achievements of this kind of service is combining the capability to generate correct language with the capability to gather data and provide a coherent answer. However, these AI-driven systems are, by definition, built with the first above the second in that hierarchy. In other words: being language models, driven by language, they tend to prioritise the linguistic coherence of the generated answer over the coherence of its content.

The amazing engineers at the leading AI companies did an excellent job of tuning this inherently unreliable mechanism into one that is very reliable most of the time, but they can only do so much. As a rule, the less information the web holds about a given topic, the more the model may miss. Asking about very specific things without plentiful information online, or asking in languages other than English, for example, will decrease the success rate of the answer dramatically.

In short, ChatGPT (and other tools like Google Bard) are not liars by default, but we should always be cautious when asking these services about critical matters, and double-check the answers whenever possible.

We are already witnessing some mildly bad consequences (at the time of writing, at least) of trusting AI results too much, for example with AI services that attempt to replace actual human lawyers. By the way, if this type of model is by definition prone to imperfections, replacing human lawyers with AI seems to me quite, well, adventurous.

The fact that such a company exists, along with the whole wave of exaggerated optimism and hysteria that followed the “ChatGPT boom”, reveals three fundamental aspects of the recent trend around AI:

1 — Crafts and customs

To use the example of AI lawyers again: the craft of being a lawyer is a particular one. Without going too deep into it, much of a lawyer’s (normal) work is in fact automatable, because it is tied to the protocol of language customs I discussed in my previous article about AI and language.

Lawyers have to write in the right register (in the linguistic sense), keep in mind previous decisions on similar cases and make their cases. When their cases are not that unique, they surely can (and do) copy and paste, and there is nothing wrong with that in itself. The catch is knowing when to copy, and what to write when copying is not feasible. It can be a bit too risky to put this decision in the amoral hands of AI, which tends to create false statements, in what is widely known as “hallucinations”.

2 — False expectations

As an example, I am quite sure that most, if not all, of today’s major tech CEOs have envisioned the possibility of having tech departments at 10% of their current size through the power of AI, and all of them got extremely eager to seize this opportunity.

This is because tech workers are expensive, and their work takes time and team discipline, which is very hard to build. However, this mostly reveals that those big figures know very little about what AI can (and can’t) do, and perhaps even less about what development actually involves at a higher level. Even if AI can replace some programming work, streamlining everything that is done in a high-complexity development environment, from product design to DevOps, is another universe entirely.

This is just an example, though. Just as the AI app that aimed to replace lawyers had some unintended consequences, AI cannot do everything, and surely cannot achieve perfection, at least not yet. Because it emulates our language so well (or generates amazing pictures), people are lured into thinking it actually thinks like a human, or even better than one.

One could argue that this illusion is especially effective on people who rely heavily on human interaction (and therefore on language) and tend to know little about how things work (who lack what is sometimes called “the engineer’s mind”). Maybe this fits many speakers and journalists out there.

I am definitely not saying all this makes AI useless. For ChatGPT and many other types of AI-driven services, being right most of the time is almost always fine. Expecting them to be better than us at everything is too much, at least at this point in time (some people argue they will never be as good as us at certain things).

I happened to read a very good (and free) article that covers this in much more depth than I do here: https://blog.metamirror.io/mckinsey-are-wrong-that-genai-is-the-silver-bullet-fa79b4fe8ed7

3 — Media and tendencies

The media, since the days of the early newspapers, have always been driven by what makes news sell more. Because of our negativity bias, we have a tendency to fall for hysterical headlines, and even the most reputable media organisations are falling into easy-sell traps today. Recently, AI has become one of those traps.

Countless articles about AI popped up everywhere, often written by people who didn’t have a clue about it. Very frequently, I came across pieces full of hyperbole in which equally clueless “experts” (usually people from the tech industry but without technical expertise) prophesied about AI taking everyone’s jobs (or some specific kinds of jobs), or even taking over humankind. Never before have journalists contributed so prolifically to low-quality, dark, apocalyptic sci-fi with the sole objective of profiting from people’s fears.

Thankfully, after making a lot of people fear the future more than they should, the hysterical first wave of calamitous AI predictions has passed, and the result was… people did with AI what they had done several times before with other innovations: they started using it as a tool to improve their workflows. Not very exciting, I know.

Conclusion

More than a scam or a novelty, but much less than a digital panacea, AI is here to stay. It is an invaluable tool that joins so many others in our endless quest to keep making things better. People may vastly overestimate its potential because they don’t know how it really works but, in the end, we humans are clearly still in control.

More important, beyond any exaggeration, is to know that relying on such a tool requires a bit of caution and scepticism towards its results because, in the end, AI has inherent conceptual limits, even if we are led to believe otherwise.

--

Marcos Sandrini

Designer and front-end programmer with 20+ years of experience, also a keen observer of the world and its people