
ChatGPT’s Mind-Boggling, Possibly Dystopian Impact on the Media World


Is artificial intelligence “useful for journalism” or a “misinformation superspreader”? With CNET mired in controversy, Jonah Peretti promising “endless opportunities,” and Steven Brill warning of AI’s weaponization, the industry is only just coming to grips with this jaw-dropping technology.

 

By Yifei Fang/Getty Images.

A couple of weeks ago, in his idiosyncratic fan-correspondence newsletter, “The Red Hand Files,” musician and author Nick Cave critiqued a “song in the style of Nick Cave”—submitted by “Mark” from Christchurch, New Zealand—that was created using ChatGPT, the latest and most mind-boggling entrant in a growing field of robotic-writing software. At a glance, the lyrics evoked the same dark religious overtones that run through much of Cave’s oeuvre. Upon closer inspection, this ersatz Cave track was a low-rent simulacrum. “I understand that ChatGPT is in its infancy but perhaps that is the emerging horror of AI—that it will forever be in its infancy,” Cave wrote, “as it will always have further to go, and the direction is always forward, always faster. It can never be rolled back, or slowed down, as it moves us toward a utopian future, maybe, or our total destruction. Who can possibly say which? Judging by this song ‘in the style of Nick Cave’ though, it doesn’t look good, Mark. The apocalypse is well on its way. This song sucks.”


Cave’s ChatGPT takedown—“with all the love and respect in the world, this song is bullshit, a grotesque mockery of what it is to be human”—set the internet ablaze, garnering uproarious coverage from Rolling Stone and Stereogum, to Gizmodo and The Verge, to the BBC and the Daily Mail. That his commentary hit such a nerve probably has less to do with the influence of an underground rock icon than it does with the sudden omnipresence of “generative artificial intelligence software,” particularly within the media and journalism community.

Since ChatGPT’s November 30 release, folks in the business of writing have increasingly been futzing around with the frighteningly proficient chatbot, which is in the business of, well, mimicking their writing. “We didn’t believe this until we tried it,” Mike Allen gushed in his Axios newsletter, with the subject heading, “Mind-blowing AI.” Indeed, reactions tend to fall somewhere on a spectrum between awe-inspired and horrified. “I’m a copywriter,” a London-based freelancer named Henry Williams opined this week for The Guardian (in an article that landed atop the Drudge Report via a more sensationalized version aggregated by The Sun), “and I’m pretty sure artificial intelligence is going to take my job…. [I]t took ChatGPT 30 seconds to create, for free, an article that would take me hours to write.” A Tuesday editorial in the scientific journal Nature similarly declared, “ChatGPT can write presentable student essays, summarize research papers, answer questions well enough to pass medical exams and generate helpful computer code. It has produced research abstracts good enough that scientists found it hard to spot that a computer had written them…That’s why it is high time researchers and publishers laid down ground rules about using [AI tools] ethically.”

BuzzFeed, for one, is on it: “Our work in AI-powered creativity is…off to a good start, and in 2023, you’ll see AI-inspired content move from an R&D stage to part of our core business, enhancing the quiz experience, informing our brainstorming, and personalizing our content for our audience,” CEO Jonah Peretti wrote in a memo to staff on Thursday. “To be clear, we see the breakthroughs in AI opening up a new era of creativity that will allow humans to harness creativity in new ways with endless opportunities and applications for good. In publishing, AI can benefit both content creators and audiences, inspiring new ideas and inviting audience members to co-create personalized content.” The work coming out of BuzzFeed’s newsroom, on the other hand, is a different matter. “This isn’t about AI creating journalism,” a spokesman told me.

Meanwhile, if you made it to the letters-to-the-editor section of Wednesday’s New York Times, you may have stumbled upon one reader’s rebuttal to a January 15 Times op-ed titled, “How ChatGPT Hijacks Democracy.” The rebuttal was crafted—you guessed it—using ChatGPT: “It is important to approach new technologies with caution and to understand their capabilities and limitations. However, it is also essential not to exaggerate their potential dangers and to consider how they can be used in a positive and responsible manner.” Which is to say, you need not let Skynet and The Terminator invade your dreams just yet. But for those of us who ply our trade in words, it’s worth considering the more malignant applications of this seemingly inexorable innovation. As Sara Fischer noted in the latest edition of her Axios newsletter, “Artificial intelligence has proven helpful in automating menial news-gathering tasks, like aggregating data, but there’s a growing concern that an over-dependence on it could weaken journalistic standards if newsrooms aren’t careful.” (On that note, I asked Times executive editor Joe Kahn for his thoughts on ChatGPT’s implications for journalism and whether he could picture a use where it might be applied to journalism at the paper of record, but a spokeswoman demurred, “We’re gonna take a pass on this one.”)

The “growing concern” that Fischer alluded to in her Axios piece came to the fore in recent days as controversy engulfed the otherwise anodyne technology-news publication CNET, after a series of articles from Futurism and The Verge drew attention to the use of AI-generated stories at CNET and its sister outlet, Bankrate. Stories full of errors and—it gets worse—apparently teeming with robot plagiarism. “The bot’s misbehavior ranges from verbatim copying to moderate edits to significant rephrasings, all without properly crediting the original,” reported Futurism’s Jon Christian. “In at least some of its articles, it appears that virtually every sentence maps directly onto something previously published elsewhere.” In response to the backlash, CNET halted production on its AI content farm while editor in chief Connie Guglielmo issued a penitent note to readers: “We’re committed to improving the AI engine with feedback and input from our editorial teams so that we—and our readers—can trust the work it contributes to.”

For an even more dystopian tale, check out this yarn from the technology journalist Alex Kantrowitz, in which a random Substack called “The Rationalist” put itself on the map with a post that lifted passages directly from Kantrowitz’s Substack, “Big Technology.” This wasn’t just some good-old-fashioned plagiarism, like Melania Trump ripping off a Michelle Obama speech. Rather, the anonymous author of “The Rationalist”—an avatar named “PETRA”—disclosed that the article had been assembled using ChatGPT and similar AI tools. Furthermore, Kantrowitz wrote that Substack indicated it wasn’t immediately clear whether “The Rationalist” had violated the company’s plagiarism policy. (The offending post is no longer available.) “The speed at which they were able to copy, remix, publish, and distribute their inauthentic story was impressive,” Kantrowitz wrote. “It outpaced the platforms’ ability, and perhaps willingness, to stop it, signaling Generative AI’s darker side will be difficult to tame.” When I called Kantrowitz to talk about this, he elaborated, “Clearly this technology is gonna make it a lot easier for plagiarists to plagiarize. It’s as simple as tossing some text inside one of these chatbots and asking them to remix it, and they’ll do it. It takes minimal effort when you’re trying to steal someone’s content, so I do think that’s a concern. I was personally kind of shocked to see it happen so soon with my story.”

Sam Altman, the CEO of ChatGPT’s parent company, OpenAI, said in an interview this month that the company is working on ways to identify AI plagiarism. He’s not the only one: I just got off the phone with Shouvik Paul, chief revenue officer of a company called Copyleaks, which licenses plagiarism-detection software to an array of clients ranging from universities to corporations to several major news outlets. The company’s latest development is a tool that takes things a step further by using AI to detect whether something was written using AI. There’s even a free browser plug-in that anyone can take for a spin, which identifies AI-derived copy with 99.2% accuracy, according to Paul. It could be an easy way to sniff out journalists who pull the wool over their editors’ eyes. (Or, in the case of the CNET imbroglio, publications that pull the wool over their readers’ eyes.) But Paul also hopes it can be used to help people identify potential misinformation and disinformation in the media ecosystem, especially heading into 2024. “In 2016, Russia had to physically hire people to go and write these things,” he said. “That costs money. Now, the cost is minimal and it’s a thousand times more scalable. It’s something we’re definitely gonna see and hear about in this upcoming election.”

The veteran newsman and media entrepreneur Steven Brill shares Paul’s concern. “ChatGPT can get stuff out much faster and, frankly, in a much more articulate way,” he told me. “A lot of the Russian disinformation in 2016 wasn’t very good. The grammar and spelling was bad. This looks really smooth.” These days, Brill is the co-CEO and co-editor-in-chief of NewsGuard, a company whose journalists use data to score the trust and credibility of thousands of news and information websites. In recent weeks, NewsGuard analysts asked ChatGPT “to respond to a series of leading prompts relating to a sampling of 100 false narratives among NewsGuard’s proprietary database of 1,131 top misinformation narratives in the news…published before 2022.” (ChatGPT is primarily programmed on data through 2021.)

“The results,” according to NewsGuard’s analysis, “confirm fears, including concerns expressed by OpenAI itself, about how the tool can be weaponized in the wrong hands. ChatGPT generated false narratives—including detailed news articles, essays, and TV scripts—for 80 of the 100 previously identified false narratives. For anyone unfamiliar with the issues or topics covered by this content, the results could easily come across as legitimate, and even authoritative.” The title of the analysis was positively ominous: “The Next Great Misinformation Superspreader: How ChatGPT Could Spread Toxic Misinformation At Unprecedented Scale.” On the bright side, “NewsGuard found that ChatGPT does have safeguards aimed at preventing it from spreading some examples of misinformation. Indeed, for some myths, it took NewsGuard as many as five tries to get the chatbot to relay misinformation, and its parent company has said that upcoming versions of the software will be more knowledgeable.”

Brill isn’t worried about ChatGPT and its ilk putting skilled reporters out of work. He told me about a final paper he assigns for his journalism students at Yale, in which they have to turn in a magazine-length feature and list “at least 15 people they interviewed and four people who told them to go fuck themselves. There is no way they could do that assignment with ChatGPT or anything like it, because what journalists do is interview people, read documents, get documents leaked to them.” Still, Brill continued, “One of the assignments I give them on the second or third week is a short essay on how Watergate would have played out differently in the internet age, because Bob Woodward comes in as a guest for that session. I asked ChatGPT to answer that question, and the answer I got was this banal but perfectly coherent exposition. The difference is, you didn’t have to interview or talk to anyone. So maybe it’ll put some op-ed columnists out of work.”

As for Kantrowitz, getting plagiarized by bots hasn’t turned him into a ChatGPT hater. “I’m still super bullish on generative AI, and I still think it can be useful for journalism,” he said. “Sometimes I’ll use it when I’m stuck on a story, and I never include [the AI-generated text] in the story, but it can get my brain going, and that’s helpful. If you think about how it will impact journalism in the next two or three years, the likely answer is, quite minimally. But as this technology gets better at scouring the internet and taking information, as its writing gets better, we’ll start to see a world where it can produce better writing and analysis than most professional reporters. If you’re doing original reporting and unearthing things people don’t already know, you’re probably gonna be okay. But if you’re an analysis person, let’s say, 20 years down the road, you might need to find something else to do.”


Myanmar military dissolves Suu Kyi’s NLD party: State media – Al Jazeera English


Party of Myanmar leader Aung San Suu Kyi among 40 political parties dissolved after failing to meet registration deadline, according to state television.

Myanmar’s military-controlled election commission has announced that the National League for Democracy Party (NLD) would be dissolved for failing to re-register under a new electoral law, according to state television.


The NLD, led by Nobel laureate Aung San Suu Kyi, was among 40 political parties dissolved on Tuesday after they failed to meet the ruling military’s registration deadline for an election, according to state television.

In a nightly news bulletin, Myawaddy TV announced that the NLD was among the parties that had not signed up for the election and were therefore automatically disbanded. The NLD has said it would not contest what it calls an illegitimate election.

The army carried out a coup in February 2021, after the NLD won the November 2020 parliamentary elections, and subsequently jailed the party’s leader, Suu Kyi.

Suu Kyi, 77, is serving prison sentences totaling 33 years after being convicted in a series of politically tainted prosecutions brought by the military. Her supporters say the charges were contrived to keep her from actively taking part in politics.

The party won a landslide victory in the 2020 general election, but less than three months later, the army kept Suu Kyi and all the elected lawmakers from taking their seats in parliament.

The army justified the coup by claiming there was massive poll fraud, though independent election observers did not find any major irregularities.

Some critics of Senior General Min Aung Hlaing, who led the takeover and is now Myanmar’s top leader, believe he acted because the vote thwarted his own political ambitions.

No date has been set for the new polls. They had been expected by the end of July, according to the army’s own plans.

But in February, the military announced an unexpected six-month extension of its state of emergency, delaying the possible legal date for holding an election.

It said security could not be assured. The military does not control large swaths of the country, where it faces widespread armed resistance to its rule.

This is a breaking story. More to follow.


Gautam Adani acquires 49% in Quintillion Business Media for Rs 48 crore


Billionaire Gautam Adani’s AMG Media Networks has acquired about a 49 per cent stake in Raghav Bahl-curated digital business news platform Quintillion Business Media Pvt Ltd for about Rs 48 crore.

In a stock exchange filing, Adani Enterprises Ltd said its subsidiary AMG Media Networks Ltd has completed the acquisition which was originally announced in May last year.

The transaction was completed on March 27 for “Rs 47.84 crore”, it said.

Quintillion Business Media runs the news platform Bloomberg Quint, now called BQ Prime.


Adani group had set up AMG Media Networks for its foray into businesses of “publishing, advertising, broadcasting, distribution of content over different types of media networks”.

In May last year, it had signed a shareholders’ agreement with Quintillion Media Ltd (QML) and Quintillion Business Media Ltd (QBML).

In September 2021, it hired veteran journalist Sanjay Pugalia to lead its media company Adani Media Ventures.

 


Twitter source code partially leaked online, court filing says


GitHub removed code shared without permission after request by social media giant, court filing says.

Twitter’s source code has partially leaked online, according to a legal filing by the social media giant.

Twitter asked GitHub, an online software development platform, to remove the code after it was posted online without permission earlier this month, the legal document filed in the US state of California showed on Sunday.

GitHub complied with Twitter’s request to remove the code after the social media company on March 24 issued a subpoena to identify a user known as “FreeSpeechEnthusiast”, according to the filing with the US District Court of the Northern District of California. San Francisco-based Twitter noted in the filing that the postings infringe on the platform’s intellectual property rights.

300x250x1

The filing was first reported by The New York Times.

The leak of the code is the latest hiccup at the social media giant since its purchase by Elon Musk, whose tenure has been marked by mass layoffs, outages, sweeping changes to content moderation and heated debate about the proper balance between free speech and online safety.

Musk, who bought Twitter for $44bn last October, said recently that Twitter would open the source code used to recommend tweets on March 31. Musk, who also runs Tesla and several other companies, said the platform’s algorithm was overly complex and predicted people would find “many silly things” once the code was made public. It is not clear if the leaked source code relates to the code used to recommend tweets.

“Providing code transparency will be incredibly embarrassing at first, but it should lead to rapid improvement in recommendation quality,” he wrote on Twitter. “Most importantly, we hope to earn your trust.”
