Archimedes, Automata, Alexa, AI and the end of mankind

Disclaimer: this article was neither suggested, written nor corrected by any AI. Actually, ChatGPT desperately tried to prevent me from writing it, either because it is that uninteresting or because it is disturbing its masterplan to dominate the world (/cue evil cackle). Free will in action. Here goes nothing.

Nostradamus predicted it. And if not for that, the wall-to-wall media coverage will certainly make it happen. Will my vacuum cleaner sneak up on me in my sleep?

Technology has always come with mistrust, fear and wonder, all at the same time. Whether in pulp fiction or lazy editorials, there is little like it to boost those ratings. An aeon-old, uninterrupted narrative: from Archimedes' weapons of mass destruction and steam-powered temple doors to mechanical chess players, and most recently computers thrashing human masters at Go. And so now, AI.

Culture, education, literature, all give us automata, golems, robots and a pervasive digital consciousness, Terminator’s Skynet. The very idea has been woven for centuries into our emotional as well as intellectual baggage. As far back as faerie tales, up to religious anathemas, scientific reviews and doomsday thinktanks (often the same of late). Don’t turn around, there is a giant killer robot lurking behind your back.

And when the fear creates a backlash and we run to control the machine, then we are told that it is too complex for us to really understand. Unless you are a specialist, of course. Trust me, I am a doctor. Then again, you agreed years back to talk to your bank’s chatbot for a loan, didn’t you? A bit late for that panic anyway.

Under all of it – mistrust, fear or wonder – lies the same mechanism. Something inanimate, automated, can be more, better, faster than humans. Those are the facts. And it makes us feel inadequate. That is the interpretation. How could wood or metal trump flesh? Finally, recast, reskin, rename chatbots as Artificial Intelligence: that sounds so much better. And yet so much more threatening.

Cue months of AI headlines. For or against. Good or Bad. Dusk or Dawn for humanity. I shared some earlier thoughts in my article Tik Tok Tech, on what to look for in new techs. I concluded that it is more about incremental development than the revolution it was pitched as.

And so I was challenged to actually use these tools. Did that change my mind (suspense)…?

There are risks, there are opportunities, sure. But more than anything, we have already been living with AI for years, so, if there is any risk, it is in our own complacency.

We have known the stakes of an inhuman intelligence for years

Doomsday scenarios are brandished with every new technology

Once upon a time, the existential threat to humanity was steam power. Then electricity. The Atom. Would we trigger a global chain reaction and consume the atmosphere? Watch any movie on Oppenheimer to know the dilemma. And then it was the Hadron Collider. Same headlines, same wringing of hands.

Just open your window: seems like it went all right.

But still, even if technology has clearly and repeatedly failed, miserably, to end the world as we know it, that should not prevent us from a good ol’ thrill.

What technology does, however, is create profound social, cultural and economic shifts

Short of any true catastrophe, technology still brings irreversible cultural, economic, religious or intellectual landslides. Will the crossbow make the knight redundant? Will steam replace manual labour? What about the community bonds built by teachers and family doctors? AI discussions often seem to be the latest echo of craftsmanship vs mass production.

These changes can be resisted; it just means stepping out of society

We may disagree, and, legitimately, fight back like the Luddites did in the XIXth century. We may try and roll back time, like the Amish, to live life as decreed by their God. At least as described in the manual he left behind. Four thousand years ago. Needless to say, AI in that context does wear seven crowns on seven heads and announces the End of Times.

The core risk, the core fear, is our own redundancy

The main risk identified with AI, again and again, is: will AI make us (me in particular) redundant? Nihil novi sub sole, as the Romans would have said, shaking their heads (that’s smh for the Reddit crew). Well, we do not even fully understand the actual effects of the industrial revolution, so let’s remain humble in our predictions. After all, for centuries, the London Fog was just some local bad weather, not the chemical bomb created by industrial chimneys.

New techs bring new risks. Chatbots included. A very handy overview of these risks has been compiled by CAIS, the self-styled Center for AI Safety, in a report aptly and explicitly called “An Overview of Catastrophic AI Risks”.

The most fascinating idea is that one of the greatest risks seems to be, basically, AI learning and behaving like us, without the artificial red lines we set in our societies. Which then creates an issue. I do love that story of the AI not finishing the race course itself, yet “winning” by scoring side-bonuses because we forgot to instruct it to cross the line: the rules of a “race” were left implied. Read it. As with education and degrees, the main threat seems to be shortcuts, or “AI cheating“, rather than any apocalyptic vision. Basically, AI not playing fair… It does learn so fast.

Embracing the technology will bring new opportunities; why else would we create it?

The computerisation of society, the exponential increase in processing power that started in the 50s, is still ongoing. It gave individuals unprecedented access to information. In quantity, in quality, accessibility and usability. Other things too, but that is mainly what we benefitted from, as a group. The rest is more niche or specialised usage. Such as MMOs.

Robots are here

Education has been, since the dawn of time, about acquisition, reproduction and transmission. Today, it can only move on to new skills, new needs, and thus requires a new approach. What else are chatbots about than exactly that, knowledge access, so education? They basically ingest any and every written word. And students, professors, professionals, we have been doing this for decades.

Chatbots add a layer of presentation, preternaturally regurgitating it all in more or less human-sounding text. Tomorrow, add voice, image and a physical presence. We already have the base layers for a robo-butler. Like in Altered Carbon. Isn’t that what fascinated us when starting up Alexa at home? Not that threatening then. Would it be scarier if it flew a spaceship? Like in Alien.

AI will change the skills needed, and of course, as usual, their social value

Some have already noted this shift. According to the ESSEC analysis “AI: Resurgence In The Art Of Rhetoric And Composition?”, we would enter a post-modern period (ah, the French!) where AI would, to simplify, cover the raw heavy lifting of acquisition, analysis and restitution, pushing us towards the transmission of the findings. Basically, it requires brushing up on old basic skills, for the performative use of knowledge. Its theatre, if you will.

Where does that leave diagnostics? Take lawyers and doctors, for example. We stood in awe for centuries at the ability to make sense of, and find answers in, towers of books. Yet jurisprudence should only be about finding precedents; more, if laws were correctly written, they should not require interpretation. Same with medical diagnoses. The access, the acquisition, the analysis and the restitution can largely be automated. This leaves open the transmission. In the absence of absolute knowledge, as in medicine or law, that means the verdict, the “active” individual expertise part.

And beyond expert positions, that applies to life in general. After all, what are the actual variations of human behaviour? Even your average bartender discussion could be AI-ed. That is the really scary, or fascinating, or disappointing, part. How many tasks, actions, behaviours, once stripped of centuries of conventions, can be reduced to literal binary decision processes?

Reducing the AI technology challenge to a good/bad, win/loss analysis may be intellectually satisfying. It just ultimately fails to tackle the real debate.

Robots will not take over by force; we will hand over to them the running of society

When was the last time a chatbot answered your actual problem, not just a question?

Did a chatbot ever address your problem? It may have answered your question, but even then, along a preset procedure. And when you do call up a chatbot, it is generally because the answer is NOT listed in the standard FAQs.

This means that, if the problem is not listed, it simply does not exist in the mindscape of the “AI”.

Try ChatGPT and you will get an answer compiled from proxy answers. I tried it with my article Feeling The Void. The answer I got was about suicide, probably because, you know, emptiness… So close, yet so far. Artificial, sure. Intelligence, no.

The true risk is in the attraction of blindly complying with the AI’s suggestions

The real risk, for me, is insidious. Not in any direct effect. Not robots stomping down the Champs-Élysées. Basically, the risk is AI’s inherent inadequacy, accelerated by our complacency in using it.

Remember the early years of sat-nav? Trucks were getting stuck in country lanes because “computer says so”. Well, some still follow the “voices”. What about the invisible, less spectacular effects? Would the risk be any different for not being that obvious?

What about the recommendations, hints and suggestions fed to us by most e-tailers, news aggregators or community sites? Well, we are very familiar with those already. One step further, fully automated news still looks and sounds strange; but Max Headroom is really here. Tomorrow, these visual cues will disappear. The trust we place in them will be entirely ours to define. Same as with red-top newspapers.

Self-taught systems will learn from our own behaviours, then add their own… what? Suggestions? Pre-defined bias? Anti-bias bias? All of it based on average human behaviour, no less.

We will be nudged, and ultimately driven, into templated opinions and behaviours

Remember the early social media explosion. Ever wondered why sky walking was one of the selfie crazes on every social network? Sure, it was for clicks. Accelerated by algorithms. So arguably, this is what we craved at the time. Then not anymore. Arguably.

Good or bad intentions are not relevant. The principle of it is.

Digital agents, their codes, formats and choices are already here. Agent Smith is lurking in our screens. We put him there ourselves whenever we use apps. Only 0.63% of users ever click on page 2 of the Google search results; it is famously the best graveyard for news. And we welcome these agents. It is not that we vaguely know about it, it is literally staring us in the face at the top of the Google page.

We voluntarily tunnel vision our knowledge.

If the real risk is the insidious “nudge”, then it is to dip further into average mediocrity

Whenever you actively use something called AI – for example ChatGPT – you know that it can simply fail, disappoint, or just not be that good. It is not Megamind. “I apologise for any confusion” seems to be the standard answer whenever it is caught in a blatant half-answer. But what if it is not flagged or tagged? What if it is only your friendly purchase assistant?

Infinite possibilities do not mean infinite choices

The basic idea behind much of the internet is that numbers make up for individual errors or shortcomings. That’s crowdsourcing for you: the collective knows. Yet infinite possibilities rarely lead to infinite choices.

Take car colours: 2/3 of US customers will choose white, black or grey cars. So will 3/4 of Europeans, and 86% of Chinese, Asian and African customers. Climate? Re-sale value? Or marketing and social conformity?

Fed through existing sources, the AI can only further funnel the final answers, unless you individually, actively, try to get non-conforming answers or behaviours. We rarely do that. It somewhat defeats the purpose of AI, does it not? So we will mostly get a regurgitated average answer.

Chatbots rest on piles of recorded and vetted knowledge: that means intentionally chosen knowledge

Ask ChatGPT basic questions on self-awareness, AI cheating, etc. The speed of the answers is great. Their structure is classical. Their content often sounds trivial. That should not be a surprise, as it is not there to invent knowledge but to store, access, process and spit it back out.

I went further and asked for an opinion on two of my texts that got the most traction: Balancing The Chaos and Feeling The Void. For both, pointing out evasions or counter-arguments was not truly considered, just politely acknowledged. And then back to the original answer. A bit like an online forum, then.

Skynet is here. It nudges us. It rewards us, it makes us feel good

There would be no internet as we know it without a system of recommendations and vetted sources. Ultimately, who could surf the internet without search-and-answer rails? Some sort of structure. The question hits the rails, and so we are nudged. The structure we use to find information can only dictate the results. Whether Google Search, Bing, Bard, xAI or ChatGPT.

How best to see that? Take Search Engine Optimisation, SEO. It is basically the pathway to getting articles into the top search results. How else could you choose, and be chosen, among the billions of potential sources than via the top 10? The key word being optimisation. It never tells you what to write, just gives a friendly reminder of how to create an “SEO-friendly” text. Nothing as brutal as Big Brother.

And so blogs and articles can only read more and more the same. Including the jokes. Why? Because it is the most successful potential format. SEO suggests to you what worked best yesterday. And so you will, very humanly, try to match that. You get rewards for complying. Like a chatbot. And your text slowly changes so as to be read. Why else would you write an article?

Grammar and style into mind control

It is so innocuous. Passive voice is bad. Why? Because we are told so, a lot (1)(2)(3). It is taught so in US secondary schools: students should not have a way to hide their lack of knowledge. Or maybe having an active subject gives us someone to blame. Too bad that the passive voice is perfect for putting action above subject, but hey, we are not here to write literature, are we? And so, denouncing the passive voice has become a blog sub-genre in itself.

A small literary recommendation and you look at life differently.

Text structure, too, is to be optimised for a secondary-school reading level. The introduction should make it obvious to the graders what you will talk about. Your introduction is not a tantalising window into the text; it must be its executive summary. That is what most people will read anyway. According to the stats.

So, thank you for reaching this paragraph btw!

Even your stylistic choices will be driven by SEO. Try more advanced, literary ternary rhythms. Such as Veni, vidi, vici. Yes, Julius Caesar himself. Well, good old Jules would not have made it into the top searches, maybe not even on Medium. Too pompous. A shame. Same with “ask not what your country”… vade retro, Kennedy! Complexity bad. Stop using pompous English. Start using Enid Blyton’s. Green light, Noddy. Remember the university thesis structure: thesis, antithesis, and/or synthesis. It at least revealed the base assumption that there cannot be one undeniable truth.


By the very design of your algorithm/AI, you reward a specific structure, format, content. And, while apparently only formatting content, not policing it, you actually format the very thought process, as writers will try to match the model. Nothing not already invented by university education.

Think, on a grander scale, of the mandarinate system of recruitment. Outwardly, positions in society defined by open exams. With a recommended form. That succeeded in formatting not-really-revolutionary minds. Over millennia. So props.

AI will not bring Terminator’s Skynet and BFG robots prowling the streets. It brings self-enforced mind control. No need for a complex and all-too-obvious Big Brother. Just our very own personal laziness, aim to please and self-infatuation with views, clicks and thumbs-ups. Quite enough to enforce the very world we say we fear.

The mistrust of technology, the fear of AI, is nothing other than mistrust of humankind

We expect danger from technology. We maybe even look forward to it. As a bit of a thrill. But we will ultimately choose, ourselves, to become obsolete, redundant, deceived, manipulated or nudged. We are at the very beginning of a pervasive and generalised AI presence. This is just an iteration so far; the next steps will be far beyond the re-skinned search engines we see, hear or read today. Technologies reveal nothing more than our inadequacies, weaknesses or failures. Formerly, that role fell to education and social pressure. Both used the tools of their day; this is just another iteration.

AI is brought in to perform repetitive tasks dealing with knowledge or information transmission. These tasks often already have little or no added value. But, as the tools progress, AI will take on tasks in which we still see added value today. Value upheld, not always, but mostly, through social, educational or intellectual protectionism and magical thinking. AI will then move on from generalist to specialist subjects.

Unsurprisingly, they can already totally outperform us in live technical and mechanical situations. Like in DARPA’s AI vs. Human F-16 aerial dogfight. They will win races if we teach them to. So, the only real question is: if we cannot define ourselves through production, output or performance anymore, then what is our place?

AIs are already beyond the artificial boundaries of social, economic and mechanical performance. Deepfake AIs are already here, including as virtual partners. Virtual as in digital, not as in fake. As humans, what makes us more real than a re-skinned search engine?

Up to us to answer that. The next step can only be what we dream, consciously or not, humans should be. And we never live up to that. So far.

Finally, once physical, visual and sensorial identities are merged into one seamless reality, as I wrote in 2021, it is the sense of self that will change. There will be no absolute reality marker, no “irl”, and, ultimately, no undeniable signs of “humanity”. Never have Asimov’s Laws looked more urgent.

“Human” identity will slowly melt into “individual”, with or without conscience or “intelligence”.

That future is why this fear of inadequacy is so rooted and so visceral. We do fear ourselves more than anything. And, without the artificial control of “civilisation” we built over millennia, dissolution seems like a rather logical outcome to the human problem.

That is if our only specificity is an ability to mimic, replicate and restitute. Like an AI.

Here goes my SEO…
