Column: AI and the end of words | Opinion

Much is being written now about artificial intelligence, or AI. Prompted by the public release of ChatGPT and other chatbots, coupled with profound leaps in AI development, machine intelligence is receiving enormous attention. Corporations, manufacturers, the financial sector, the health system, and the public at large are embracing and utilizing AI in all its rapidly developing forms.

Even schools and colleges, initially worried about and resistant to what AI would do to longstanding pedagogical practices, are “integrating” it into their teaching methods.

I put the word “integrating” in quotation marks because the existence of capable chatbots has flat-out eliminated many critical tools of teaching. For example, it is no longer possible to ask students to organize their thoughts into outlines for lengthy, researched, original term papers. Chatbots can do that for them at home.

Similarly, “integrating” is also a euphemism in the private sector. AI is being “integrated” into manufacturing processes, data analysis, the writing of newspaper articles, and scores of other blue- and white-collar jobs. But when AI is “integrated” into the assembly line or legal profession, I doubt that the workers laid off because of this automation refer to their new unemployment as having been “integrated” out of a job.

But the most serious consequence of AI will not be measured by the number of jobs lost or the number of student essays not written. Nor will the most serious consequence of AI be offset by the many positive contributions of it. Despite the drumbeat of current news articles that diligently explore the pros and cons of AI — as though every new technology is simply a neutral tool that can be used poorly or well — the negative effects of it will render the benefits moot.

The devastating threats from AI come in two forms. First, AI will be used to make disinformation, propaganda, and conspiracy theories ever more effective. This will enlarge the radicalism, delusion, alienation, polarization, and distrust of government that is already widespread in the country. On the political right especially, the disconnections from reality will only become more entrenched.

Second, more subtle but even more insidious, the advance and proliferation of AI will have the effect of undermining the nature and feeling of reality itself. Over the next five years (and beyond), it will produce a shift in our sense of reality’s baseline groundedness. Reality will feel less identifiable.

I’m not referring here to real physical objects that we directly experience. A tree, refrigerator, car, chair, or other object that we touch will not lose its solidity or real presence.

What I am referring to are the realities and understandings that exist in our minds, the cognitive understandings and relationships and definitions that we have learned over time and have come to rely on. I’m referring to ideas, concepts, principles, norms, and orientations that we rely on to navigate the world, make sense of the world, make sense of people, and make sense of how things relate to each other.

These cognitive, sense-making understandings inform our behavior, our practices, and our speech. They make it possible to talk meaningfully with each other, and they make it possible to understand each other. They literally make it possible for 335 million (or 8 billion) people to get along with each other.

Our current grasp — and use — of these cognitive realities facilitates reason itself. But with advancing AI — as words and images are increasingly used to promote the false and the duplicitous — words themselves will be undermined and devalued. Words themselves will become suspect, and thus the very act of reasoning will become harder and more suspect. The power of logic will be reduced.

The process of reasoning — through the use of words — is the highest faculty we have, and it permits the creation of groundedness, which itself is a product of commonly held conceptions of truth, honesty, trust, reliability, what constitutes “fact,” and what constitutes “knowing.”

Taking those touchstones away — which AI is doing — will steadily erode the current degree of groundedness that we feel. Additionally, in a sort of circular dynamic, remove the possibility of reasoning and the trust we feel in words, and we will no longer have the tools to reestablish groundedness.

Although the internet and social media have spent 20 years eroding this groundedness, AI is poised to finish the job.

Presently, we still pretty much believe text to represent something authentic, and perhaps unique — if written by a human. We still pretty much believe that the photos, fingerprints, videos, and art we see represent something authentic, and perhaps unique. Although all of these things can currently be machine-made, we humans are still sort of coasting — healthily and mostly grounded — on the benefits of civilization’s long history of being able to discern reality.

Beneficially, our mental states are lagging in internalizing the effects of increasingly “inauthentic” AI-generated texts, visuals, and other forms of mediation between us and anything (including people) in the world that we can’t directly touch, see, or experience.

But the disorienting effects are coming. AI — unwittingly — is poised to undermine words, and thus reasoning and groundedness. In today’s world, those three things are all that stand between us and disaster. The AI that is coming has never been experienced by humans before.

Brian T. Watson, of Swampscott, is author of “Headed Into the Abyss: The Story of Our Time, and the Future We’ll Face.” Contact him at btwatson20@gmail.com.


