AI may be news reporting’s future. So far, it’s been an embarrassment.

In their short life as machine generators of news stories, artificial intelligence programs have screwed up simple interest calculations, botched the chronology of Star Wars movies, and produced sports stories that appeared to contain little actual knowledge of sports.

The latest embarrassing bit of robots-gone-wild “reporting”: An obituary of a former NBA player described in the headline as “useless at 42.”

The article — published by an obscure news site, Race Track, but shared widely by MSN.com — appears to be a legitimate news story from TMZ about the death of Brandon Hunter that was run through a tool known as a “spinner,” which masks plagiarism by replacing certain words with synonyms.

But some synonyms don’t scan — hence, the bizarre description of Hunter as a former NBA “participant” who “performed” for the Boston Celtics and Orlando Magic and “achieved a career-high of 17 factors in a recreation.”
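To illustrate the mechanism (this sketch is ours, not the actual tool): a spinner can be as crude as a context-blind dictionary lookup. The Python below, with a hypothetical synonym table modeled on the swaps quoted above, shows how word-by-word substitution turns “dead” into “useless” and “points” into “factors” with no regard for what the sentence means.

```python
# A context-blind "spinner": swap each word for a listed synonym,
# with no awareness of whether the synonym fits the sentence.
# The synonym table is hypothetical, chosen to echo the swaps
# apparently made in the Hunter obituary.
SYNONYMS = {
    "dead": "useless",        # fine for a battery, grim for a person
    "player": "participant",
    "played": "performed",
    "points": "factors",
    "game": "recreation",
}

def spin(text: str) -> str:
    """Replace each known word with its synonym, ignoring context."""
    out = []
    for word in text.split():
        core = word.rstrip(".,;:")   # keep trailing punctuation intact
        tail = word[len(core):]
        out.append(SYNONYMS.get(core.lower(), core) + tail)
    return " ".join(out)

print(spin("Hunter, a former NBA player, scored 17 points in a game."))
# -> Hunter, a former NBA participant, scored 17 factors in a recreation.
```

Because the lookup never considers the surrounding words, a synonym that is valid in one sense produces nonsense in another, which is exactly how “dead at 42” becomes “useless at 42.”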

The belly flop, ridiculed across social media, wasn’t just an embarrassment for MSN — whose actual human editors took the story down — and its parent company, Microsoft, a leading AI developer, but for automated journalism generally.

Newsrooms have used simple AI tools for several years, mostly to produce corporate earnings reports, transcribe recordings and check spelling. The potentially revolutionary advance is generative AI, which remixes vast amounts of data to create new stories — raising fears that publishers could someday replace their ever-diminishing news staffs with armies of bot reporters.

But generative AI programs still have bugs and limitations that would get a rookie reporter fired. They can’t discern fact from fiction, which means they can pass off nonsense just as easily as the real goods. They can’t call up experts and sources to gather new information, which limits their effectiveness on breaking news stories. They also have trouble understanding context and cultural nuance — that is, what’s appropriate in the body of a news article.

And so a travel article generated by AI and published by Microsoft in August recommended that tourists in Ottawa pay a visit to the Ottawa Food Bank. “Consider going into it on an empty stomach,” the article suggested, rather cruelly. Microsoft removed the piece after it was mocked on Twitter.

Microsoft hasn’t explained how the AI content slipped past its human gatekeepers, if any were involved. The company issued a statement about the Hunter obituary saying that “we continue to enhance our systems to identify and prevent inaccurate information from appearing on our channels.” The original publisher, Race Track, could not be reached for comment, and its site appears to have been taken down.

While AI-written prose can be serviceable, it can also be painfully clunky. Readers of the Columbus Dispatch last month encountered an article about a high school football game described as a “close encounter of the athletic kind” and another reporting that a team “avoided the brakes and shifted into victory gear.” They were the product of Lede AI, a program deployed by Gannett, the nation’s largest newspaper chain, and suspended after the stories drew mockery.

“As with any new technological advance, some glitches can occur,” Jay Allred, chief executive of Lede AI, conceded in a statement to The Washington Post.

To be fair, the defects exposed in AI-written articles so far suggest the flaw is as much in the humans as in the robots. MSN and other red-faced publishers of AI-generated news articles all appear to have skipped a critical step in the journalistic assembly line — double-checking and editing copy before it’s published.

The root of the issue may be Microsoft’s decision in 2020 to lay off dozens of journalists who maintained MSN’s homepage and the news pages of its Edge browser, said Victor Tangermann, a senior editor at Futurism, which has closely covered AI’s march into journalism.

“Publishers are trying to cut costs and keep the content machine spinning,” he said. “But what we’re seeing over and over is that AI isn’t quite up to the job yet, so it’s backfiring embarrassingly for publications that try to use it … This has allowed a lot of bad material to slip through.”

He added, “It’s been hard to identify any cases of compelling or even acceptable journalism” produced by AI so far.

Human input also appears to have been lacking at Gizmodo, a tech site, after its io9 entertainment section published the flawed list of Star Wars movies and TV shows in early July. The site’s deputy editor, James Whitbrook, said on Twitter that his editorial team had no involvement with its publication and blasted it in an email to parent company G/O Media as “embarrassing, unpublishable, disrespectful of both the audience and the people who work here, and a blow to our authority and integrity.”

Some AI news developers worry that defective automated news stories will damage both AI and the news media before the technology reaches its full potential.

The viral AI stories “detract and distract from more thoughtful and useful applications of the technology that could help to sustain news organizations that are publishing unique, quality information for their readers,” said Matt MacVey, who is leading an AI news-development initiative at New York University.

“AI and automation are here to stay and will be integrated into all sorts of tech and software that we use daily,” he said. But “it just takes one unscrupulously published article going viral to raise a lot of scrutiny and concern.”

Among others, Google has been testing an AI-based system that can take in information and produce news stories; it has pitched its Genesis system to various publishers, including The Washington Post and the New York Times. Neither has implemented it.

In the meantime, publications should be more forthcoming about when a robot is the reporter, said Tangermann. He urged editors to clearly mark machine-generated copy so that readers know what they’re getting.

“In a few years, AI may have become astoundingly clever, and if its capabilities do surpass those of human journalists, it’ll be a whole different conversation,” he said.

But at the moment, he’s not impressed. So far, he said, generative AI has mostly generated “chaos.”

Daniel Wu contributed to this report.


