Artificial intelligence: Is it really easy to mimic humanness?

By ELSIE EYAKUZE

So, where does Tanzania fit into this AI revolution? Are we going to be part of it, and if so, how? The few times I played with ChatGPT while it was free and interesting, it did some impressive things that no program I have ever used before could have done.

It made me pay attention to LLMs and their capabilities, and grew my interest in the whole AI thing, which I am now trying to learn about within the limitations of my tired little human brain.

ChatGPT did something that comforted me early on: It got things completely wrong. When testing a search tool, best to search about a subject you are an expert in, right? So I asked it for my biography. The first time round, it was almost accurate. By the second time, it had fudged my academic credentials and public profile, giving me the great honour of becoming Tanzania’s ambassador to the United States of America. 

I was flattered and amused, since real live human beings had made this very understandable mistake a couple of times before ChatGPT was launched. But I had just experienced my first Large Language Model “hallucination.” Hallucination is the term we use when AI gets something completely wrong, and I like the word because it implies that computer programs can indeed be psychotic.

“It is surprisingly easy to mimic humanness, to mimic the way that human beings talk about the world without actually being a human being.” Sean Carroll, physicist, philosopher, cat enthusiast, said this in a recent podcast about his thinking on AI and AGI, and I thought: Bingo! Sean, you nailed it, my man.

Other points of discussion were the way we anthropomorphise our technology and really shouldn’t, especially when it comes to AI, and that spoke to the malaise that I raised in last week’s column, when I referred to the limits of human understanding within one lifetime.

Though I am generations removed from my grandmothers, there is something visceral in me that feels connected to them, their way of life and their anxieties and suspicions about the modernity they saw me grow up in as a child.

So, about Tanzania and the AI opportunity/threat: The good news is that I agree with Sean’s assessment that we are nowhere near the Skynet type of AGI that we all seem to fear. The bad news is that his statement about how easy it is to mimic humanness is going to be a real problem, I think, for Tanzania.

We are a credulous people, for many reasons, some of which I have talked about when admitting that I suspect my state of actively suppressing our full human capabilities through atrocious public services, especially the education system, and other methods of control. As a result, our intellectual class is small and often beleaguered, not nearly a big enough cohort to help us navigate challenges like internet oligarchies and AI in a manner beneficial to the country.

And that’s what I want to dig into next week, because there is an undeniable phenomenon that takes place when the human intelligence resources of a collective encounter modern AI in this Age of Information.

Elsie Eyakuze is an independent consultant and blogger for The Mikocheni Report; Email [email protected]
