Media
Google pitches media outlets on AI that could help produce news

Google is in discussions with news publishers about building and selling artificial intelligence tools that could help reporters and editors produce written journalism, potentially a major acceleration of the practice of using automated tools to produce news content.

Google has been presenting the tools to news outlets since early spring, according to news executives present for meetings or later briefed on them. The product was pitched as possibly being able to collect information as part of newsgathering, write an early draft of a news story, and handle postproduction elements like writing social media posts, according to one executive who sat in on a pitch, speaking on the condition of anonymity because they were not authorized to discuss the matter. Google suggested that the tool would be most appealing to local publishers.
News outlets are grappling with the latest “generative” AI tools like Bard from Google and ChatGPT from OpenAI that can write human-sounding text on any topic based on simple prompts and questions. Some news publishers have already employed the bots to speed up their ability to write lots of content quickly, spurring anxiety and anger from human writers. But the tools still make up false information and pass it off as factual, something AI experts say is an inherent part of how the technology works, raising doubts about whether they can ever be trusted to write news stories.
“We have seen large-language models like ChatGPT and Bard produce factually incorrect information. Unleashing these models in the critical, and often time-crunched, field of journalism seems premature,” said Hany Farid, a computer science professor at the University of California at Berkeley and a member of its Artificial Intelligence Lab.
Jenn Crider, a Google spokesperson, confirmed the company was in discussions with news outlets with a focus on small publishers. The tools could provide different options for headlines or writing styles, with the goal of speeding up and improving how journalists work, Crider said. She compared the tools to AI features the company is adding to Gmail and Google Docs that automatically write emails, résumés or memos based on short prompts and questions entered by a human.
“Our goal is to give journalists the choice of using these emerging technologies in a way that enhances their work and productivity,” Crider said. “Quite simply these tools are not intended to, and cannot, replace the essential role journalists have in reporting, creating, and fact-checking their articles.”
The New York Times earlier reported that Google was pitching its AI product to news outlets. The news tool is code-named Genesis, and Google has had discussions with representatives from the Times, the Wall Street Journal and The Washington Post, the Times reported.
Times spokesperson Charlie Stadtlander declined to comment on whether the Times has had discussions with Google, referring instead to a memo Times Deputy Managing Editor Sam Dolnick and Chief Product Officer Alex Hardiman sent to employees on June 7. “We recognize the power, the potential and, importantly, the risks of generative AI tools both for the public and for journalism,” they wrote. “We also intend to stay at the forefront of identifying creative ways to deploy generative AI to advance our journalistic mission.”
Caitlyn Reuss, a spokesperson for the Journal, declined to comment. Kathy Baird, the chief communications officer of The Post, said, “A meeting took place this spring with Google to showcase their new tech, Genesis, and it included mostly Post executives from the Engineering and Business teams.”
News outlets should be wary of Google, said Jason Kint, the chief executive of Digital Content Next, a lobbying group for online news organizations. “The various tools which can be enhanced by AI are exciting and should be explored with an eye on the future,” Kint said. “At the same time, publishers should have their other eye on Google’s long history of harvesting their copyrighted material and their users’ data in a manner that maximizes Google’s own profits and interests.”
The latest crop of generative AI products has sent a shock wave of anxiety through content-producing industries such as art, film, music, marketing and news publishing. The bots, which have been trained on billions of words of text scraped from the open internet, are able to create human-sounding text based on simple prompts.
The generative AI tools are trained on content taken from the news outlets themselves, without payment or permission. A Post analysis of a data set used to train an earlier version of ChatGPT showed that news stories from the New York Times, the Los Angeles Times and The Post were major sources of training data for the bot. News outlets are part of a growing movement of content creators who argue that AI companies need to compensate those whose data they use to train their bots.
Last week, the Associated Press agreed to license its news archive to OpenAI in a deal that also gave the news organization access to OpenAI technology. The AP has been among a group of news outlets that have experimented with writing automated articles for years.
Some news organizations have already put chatbots to work writing news articles. In January, internet sleuths revealed the tech news website CNET published dozens of articles written by AI. The stories were littered with errors. One article on compound interest claimed a $10,000 deposit with 3 percent interest would earn the holder $10,300 in the first year of their investment, rather than the actual $300.
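The arithmetic behind that CNET error is easy to check: interest earned is the growth on the principal, not the final balance. A minimal Python sketch (the function name is illustrative, not from any cited source):

```python
def compound_interest_earned(principal, rate, years):
    """Interest earned (not the final balance) after `years` of annual compounding."""
    return principal * ((1 + rate) ** years - 1)

# The figure the AI-written article reported, $10,300, is the year-one
# balance; the interest actually earned is only $300.
earned = compound_interest_earned(10_000, 0.03, 1)
print(f"${earned:,.2f}")  # $300.00
```

The bot's mistake, in other words, was conflating principal plus interest with interest alone, the kind of slip a human editor doing the arithmetic would catch.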
Google has a complicated relationship with the news industry. As the company rapidly grew through the first two decades of the 2000s, it gobbled up huge portions of the advertising industry, decimating the news business globally. Local and regional news outlets that relied on classified and local ads for decades saw their revenue crater, and thousands of them have shut down in the United States alone, leaving many towns without a news source beyond social media.
Larger news outlets pivoted toward online subscriptions, trying to avoid relying on an increasingly small share of the advertising market. In June, the largest newspaper chain in the country, Gannett, sued Google, claiming its dominance in digital advertising was further damaging the local news industry.
At the same time, Google search traffic is a lifeline for many news publishers, including ones who have subscription businesses. News outlets compete every day to have their stories show up higher in Google search results. Google has also been accused for years of cannibalizing traffic to news outlets by showing portions of articles directly in search results, a practice the company says helps its users.
For years, Google has tried to improve its reputation among news outlets by giving grants directly to local news and smaller publishers, as well as creating free tools like transcription software for news outlets to use. In some countries, governments are passing laws to require Google and Facebook to pay news producers directly for showing their content or portions of it on their platforms.
In Canada, a new law set to go into effect at the end of the year that forces the two tech giants to make payments to news outlets has become a major political flash point. Google and Facebook have said they would block Canadians from sharing links to news outlets, while the government of Prime Minister Justin Trudeau has accused the companies of “bullying tactics.”





Who is Lachlan Murdoch, heir apparent of Rupert Murdoch’s media empire?
For Lachlan Murdoch, this moment has been a long time coming. Assuming, of course, that his moment has actually arrived.
On Thursday, his father Rupert Murdoch announced that in November he’ll step down as the head of his two media companies: News Corp. and Fox Corp. Lachlan will become the chair of News Corp. while remaining chief executive and chair at Fox Corp., the parent of Fox News Channel.
The changes make Rupert’s eldest son the undisputed leader of the media empire his father built over decades. There’s no real sign that his siblings and former rivals James and Elisabeth contested him for the top job; James in particular has distanced himself from the company and his father’s politics for several years. But Rupert, now 92, has long had a penchant for building up his oldest children only to later undermine them — and sometimes to set them against one another — often flipping the table without notice.
Given Rupert Murdoch’s advanced age, this might be his last power move. But there’s a reason the HBO drama “Succession” was often interpreted as a thinly disguised and dark satire of his family business. In Murdoch World, as in the fictional world of the Roy family, seemingly sure things can go sideways in an instant, particularly when unexpected opportunities arise.
Lachlan Murdoch has lived that firsthand. Born in London, he grew up in New York City and attended Princeton, where he focused not on business but philosophy. His bachelor’s thesis, titled “A Study of Freedom and Morality in Kant’s Practical Philosophy,” addressed those weighty topics alongside passages of Hindu scripture. The thesis closed on a line from the Bhagavad Gita referencing “the infinite spirit” and “the pure calm of infinity,” according to a 2019 article in The Intercept.
Béatrice Longuenesse, Lachlan’s thesis advisor at Princeton, confirmed the accuracy of that report via email.
After graduation, though, Lachlan plunged headlong into his father’s business, moving to Australia to work for the Murdoch newspapers that were once the core of News Corp.’s business. Many assumed he was being groomed for higher things at News Corp., and they were not wrong. Within just a few years, Lachlan was deputy CEO of the News Corp. holding company for its Australian properties; shortly thereafter, he took an executive position at News Corp. itself and was soon running the company’s television stations and print publishing operations.
Lachlan’s ascent came to an abrupt halt in 2005, when he resigned from News Corp. with no public explanation. According to Paddy Manning, an Australian journalist who last year published a biography of Lachlan Murdoch, the core problem involved two relatively minor issues on which Lachlan disagreed with Roger Ailes, who then ran Fox News.
“The real point was that Lachlan felt Rupert had backed his executives over his son,” Manning said in an interview. “So Lachlan felt, ‘If I’m not going to be supported, then what’s the point?’” Manning did not have direct access to Lachlan for his book “The Successor,” but said he spoke in depth with the people closest to his subject.
Lachlan returned to Australia, where he has often described feeling most at home, and founded an investment group that purchased a string of local radio stations among other properties.
While he was away, News Corp. entered choppy waters. The company was seriously damaged by the U.K. phone-hacking scandal, in which tabloid journalists at the News of the World and other Murdoch-owned publications hacked into the voicemails of the British royal family, journalistic competitors and even a missing schoolgirl. The fracas led to the resignations of several News Corp. officials, criminal charges against some, and the closure of the News of the World.
Manning said that the damage the scandal inflicted on News Corp. — and on both Lachlan Murdoch’s father and his brother James, chief executive of News’ British newspaper group at the time — helped pull Lachlan back to the company.
“He was watching the family tear itself apart over the phone-hacking scandal,” Manning said. Lachlan was “instrumental in trying to circle the wagons and turn the guns outwards, and stop Rupert from sacking James.”
While it took more convincing, Lachlan eventually returned to the company in 2014 as co-chairman of News Corp. alongside James.
Not long afterward, Ailes was forced out of his job at Fox News following numerous credible allegations of sexual harassment.
Lachlan Murdoch has drawn criticism from media watchdogs for what many called Fox News’ increasingly conspiratorial and misinformation-promoting broadcasts. The network hit a nadir following the 2020 election when voting machine company Dominion Voting Systems sued Fox News for $1.6 billion, alleging that Fox knowingly promoted false conspiracy theories about the security of its voting machines.
Fox settled that suit for $787.5 million in March of this year. A similar lawsuit filed by Smartmatic, another voting-machine maker, may go to trial in 2025, Fox has suggested.
In certain respects, though, Lachlan Murdoch’s behavior suggests some ambivalence about his role at News Corp. In 2021 he moved back to Sydney and has been splitting his time between commuting and remote work from Australia ever since. “I think there’s a legitimate question about whether you can continue to do that and for how long” while running companies based in the U.S., Manning said.





Ukraine war: US to give Kyiv long-range ATACMS missiles
US President Joe Biden plans to give Ukraine advanced long-range missiles to help Kyiv with its ongoing counter-offensive, US media report.
They quote US officials familiar with the issue as saying Ukraine will get some ATACMS missiles with a range of up to 190 miles (300km).
This would enable Kyiv to hit Russian targets deep behind the front line.
A Ukrainian military source told the BBC that the attack in the port of Sevastopol used Storm Shadow missiles, which are supplied by Britain and France.
Such missiles have a range of just over 150 miles.


NBC News and the Wall Street Journal quote unnamed US officials as saying Mr Biden told his Ukrainian counterpart Volodymyr Zelensky that Kyiv would get “a small number” of ATACMS (Army Tactical Missile System) missiles. The two leaders met at the White House on Thursday.
The WSJ adds that the weapons will be sent in the coming weeks.
Meanwhile, the Washington Post cited several people familiar with the discussions as saying Ukraine would get ATACMS armed with cluster bomblets rather than single warheads.
Neither the US nor Ukraine have officially confirmed the reports.
After the Biden-Zelensky talks Washington announced a new tranche of $325m (£265m) in military aid – including artillery and ammunition – for Ukraine. America’s Abrams tanks will be delivered to Kyiv next week.
However, both presidents have been evasive on the ATACMS issue.
“I believe that most of what we were discussing with President Biden yesterday… we will be able to reach an agreement,” Mr Zelensky said on Friday during a visit to Canada.
“Yes, [this is] a matter of time. Not everything depends on Ukraine,” he added.
Kyiv has for months been pushing for ATACMS to boost its tough and bloody counter-offensive in the south.
It says key Russian supply lines, command positions and other logistical hubs deep behind the front line would then be within striking distance, forcing Moscow to move them further away and thus making it harder to resupply troops and weaponry.
Russian positions in the occupied Ukrainian regions in the south – including Crimea – would be particularly vulnerable, Ukraine says.
President Vladimir Putin launched a full-scale Russian invasion of Ukraine in February 2022, and the Biden administration was initially hesitant to provide Ukraine with modern weaponry.
But its stance has since shifted dramatically, with Kyiv getting high-precision Himars long-range rocket systems and Patriot air defence missiles.
President Biden has been hesitant on ATACMS amid fears that such missiles could bring a direct clash with nuclear-armed Russia closer.





The Supreme Court showdown over social media “censorship” and free speech online
About a year ago, an especially right-wing panel of the far-right United States Court of Appeals for the Fifth Circuit held that Texas’s state government may effectively seize control of content moderation on social media websites such as Twitter, YouTube, and Facebook.
The Fifth Circuit’s opinion in NetChoice v. Paxton upheld an unconstitutional law that requires social media companies to publish content produced by their users that they do not wish to publish, but that the government of Texas insists that they must publish. That potentially includes content by Nazis, Ku Klux Klansmen, and other individuals calling for the outright extermination of minority groups.
Meanwhile, earlier this month the same Fifth Circuit handed down a decision that effectively prohibits the Biden administration from asking social media companies to pull down or otherwise moderate content. According to the Justice Department, the federal government often asks these platforms to remove content that seeks to recruit terrorists, that was produced by America’s foreign adversaries, or that spreads disinformation that could harm public health.
Again, the Fifth Circuit’s more recent decision, which is known as Murthy v. Missouri, would devastate a Democratic administration’s ability to ask media companies to voluntarily remove content. Meanwhile, the NetChoice decision holds that Texas’s Republican government may compel those same companies to adopt a government-mandated editorial policy.
These two decisions obviously cannot be reconciled, unless you believe that the First Amendment applies differently to Democrats and Republicans. And the Supreme Court has already signaled, albeit in a 5-4 decision, that a majority of the justices believe that the Fifth Circuit has gone off the rails. Soon after the Fifth Circuit first signaled that it would uphold Texas’s law, the Supreme Court stepped in with a brief order temporarily putting the law on ice.
Yet, while the Fifth Circuit’s approach to social media has been partisan and hackish, these cases raise genuinely difficult policy questions. Social media companies control powerful platforms that potentially allow virtually anyone to communicate their views to millions of people at a time. These same companies also have the power to exclude anyone they want from these platforms either for good reasons (because someone is a recruiter for the terrorist group ISIS, for example), or for arbitrary or malicious reasons (such as if the company’s CEO disagrees with an individual’s ordinary political views).
Worse, once a social media platform develops a broad user base, it is often difficult for other companies to build competing social networks. After Twitter, now known as X, implemented a number of unpopular new policies that favored trolls and hate speech, for example, at least eight other platforms tried to muscle into this space with Twitter-like apps of their own. Thus far, however, these new platforms have struggled to consolidate the kind of user base that can rival Twitter’s. And the one that most likely presents the greatest threat to Twitter, Threads, is owned by social media giant Meta.
It is entirely reasonable, in other words, for consumers to be uncomfortable with so few corporations wielding so much authority over public discourse. What is less clear is what role the government legitimately can play in dealing with this concentration of power.
What the First Amendment actually says about the government’s relationship with media companies
Before we dive into the details of the NetChoice and Murthy decisions, it’s helpful to understand a few basics about First Amendment doctrine, and just how much pressure the government may place on a private media company before that pressure crosses the line into illegal coercion.
First, the First Amendment protects against both government actions that censor speech and government actions that attempt to compel someone to speak against their will. As the Supreme Court explained in Rumsfeld v. Forum for Academic and Institutional Rights (2006), “freedom of speech prohibits the government from telling people what they must say.”
Second, the First Amendment also protects speech by corporations. This principle became controversial after the Supreme Court’s decision in Citizens United v. FEC (2010) held that corporations may spend unlimited sums of money to influence elections, but it also long predates Citizens United. Indeed, a world without First Amendment protections for corporations is incompatible with freedom of the press. Vox Media, the New York Times, the Washington Post, and numerous other media companies are all corporations. That doesn’t mean that the government can tell them what to print.
Third, the First Amendment specifically protects the right of traditional media companies to decide what content they carry and what content they reject. Thus, in Miami Herald v. Tornillo (1974), the Supreme Court held that a news outlet’s “choice of material to go into a newspaper” is subject only to the paper’s “editorial control and judgment,” and that “it has yet to be demonstrated how governmental regulation of this crucial process can be exercised consistent with First Amendment guarantees of a free press.”
Fourth, this regime applies equally to internet-based media. The Supreme Court’s decision in Reno v. ACLU (1997) acknowledged that the internet is distinct from other mediums because it “can hardly be considered a ‘scarce’ expressive commodity” — that is, unlike a newspaper, there is no physical limit on how much content can be published on a website. But Reno concluded that “our cases provide no basis for qualifying the level of First Amendment scrutiny that should be applied to this medium.”
Taken together, these four principles establish that neither Texas nor any other governmental body may require a media company, social or otherwise, to publish content that the company does not want to print. If Twitter announces tomorrow that it will delete all tweets written by someone named “Jake,” for example, the government may not pass a law requiring Twitter to publish tweets by Jake Tapper. Similarly, if a social media company announces that it will only publish content by Democrats, and not by Republicans, it may do so without government interference.
That said, while the government may neither censor a media platform’s speech nor demand that the platform publish speakers it does not want to publish, government officials are allowed to express the government’s view on any topic. Indeed, as the Supreme Court said in Pleasant Grove v. Summum (2009), “it is not easy to imagine how government could function if it lacked this freedom.”
The government’s freedom to express its own views extends both to statements made to the general public and to statements made in private communications with business leaders. Federal officials may, for example, tell YouTube that the United States government believes that the company should pull down every ISIS recruitment video on the site. And those officials may also ask a social media company to pull down other content that the government deems to be harmful, dangerous, or even merely annoying.
Of course, the general principle that the government can say what it wants can sometimes be in tension with the rule against censorship. While the First Amendment allows, say, Florida Gov. Ron DeSantis (R) to make a hypothetical statement saying that he opposes all books that present transgender people in a positive light, DeSantis would cross an impermissible line if he sends a police officer to a bookstore to make a thinly veiled threat — such as if the cop told the storeowner that “bad things happen to people who sell these kinds of books.”
But a government statement to a private business must be pretty egregious before it crosses the line into impermissible censorship. As the Court held in Blum v. Yaretsky (1982), the government may be held responsible for a private media company’s decision to alter its speech only when the government “has exercised coercive power or has provided such significant encouragement, either overt or covert, that the choice must in law be deemed to be that of the State.”
So how should these First Amendment principles apply to government-mandated content moderation?
In fairness, none of the Supreme Court decisions discussed in the previous section involve social media companies. So it’s at least possible that these longstanding First Amendment principles need to be tweaked to deal with a world where, say, a single billionaire can buy up a single website and fundamentally alter public discourse among important political and media figures.
But there are two powerful reasons to tread carefully before remaking the First Amendment to deal with The Problem of Elon Musk.
One is that no matter how powerful Musk or Mark Zuckerberg or any other media executive may become, they will always be categorically different from the government. If Facebook doesn’t like what you have to say, it can kick you off Facebook. But if the government doesn’t like what you say (and if there are no constitutional safeguards against government overreach), it can send armed police officers to haul you off to prison forever.
The other is that the actual law that Texas passed to deal with the Texas GOP’s concerns about social media companies is so poorly designed that it suggests that a world where the government can regulate social media speech would be much worse than one where important content moderation decisions are made by Musk.
That law, which Texas Gov. Greg Abbott (R) claims was enacted to stop a “dangerous movement by social media companies to silence conservative viewpoints and ideas,” prohibits the major social media companies from moderating content based on “the viewpoint of the user or another person” or on “the viewpoint represented in the user’s expression or another person’s expression.”
Such a sweeping ban on viewpoint discrimination is incompatible with any meaningful moderation of abusive content. Suppose, for example, that a literal Nazi posts videos on YouTube calling for the systematic extermination of all Jewish people. Texas’s law prohibits YouTube from banning this user or from pulling down his Nazi videos, unless it also takes the same action against users who express the opposite viewpoint — that is, the view that Jews should not be exterminated.
In any event, the Supreme Court already blocked the Texas law once, so it’s unlikely that it will reverse course when it hears the case a second time (the Court could announce that it will rehear the NetChoice case soon after its next conference, which will take place on Tuesday).
What should happen when the government merely asks a social media company to remove content?
But what about a case like Murthy? That case is currently before the Supreme Court on its shadow docket — a mix of emergency motions and other matters that the Court sometimes decides on an expedited basis — so the Court could decide any day now whether to leave the Fifth Circuit’s decision censoring the Biden administration in effect.
The Fifth Circuit’s Murthy decision spends about 14 pages describing cases where various federal officials, including some in the Biden White House, asked social media companies to remove content — often because federal officials thought the content was harmful to public health because it contained misinformation about Covid-19.
In many cases, these conversations happened because those companies proactively reached out to the government to solicit its views. As the Fifth Circuit admits, for example, platforms often “sought answers” from the Centers for Disease Control and Prevention about “whether certain controversial claims were ‘true or false’” in order to inform their own independent decisions about whether or not to remove those claims.
That said, the Fifth Circuit also lists some examples where government officials appear to have initiated a particular conversation. In one instance, for example, a White House official allegedly told an unidentified platform that it “remain[ed] concerned” that some of the content on the platform encouraged vaccine hesitancy. In another instance, Surgeon General Vivek Murthy allegedly asked the platforms to take part in an “all-of-society” approach to Covid by implementing stronger misinformation “monitoring” programs.
It’s difficult to assess the wisdom of these communications between the government and the platforms because the Fifth Circuit offers few details about what content was being discussed or why the government thought this content was sufficiently harmful that the platforms should intervene. Significantly, however, the Fifth Circuit does not identify a single example — not one — of a government official taking coercive action against a platform or threatening such action.
The court does attempt to spin a couple of examples where the White House endorsed policy changes as such a threat. In a 2022 news conference, for example, the White House press secretary said that President Biden supports reforms that would impact the social media industry — including “reforms to section 230, enacting antitrust reforms, requiring more transparency, and more.” But the president does not have the authority to enact legislative reforms without congressional approval. And the platforms themselves did not behave as if they faced any kind of threat.
Indeed, the Fifth Circuit’s own data suggests that the platforms felt perfectly free to deny the government’s requests, even when those requests came from law enforcement. The FBI often reached out to social media platforms to flag content by “Russian troll farms” and other malign foreign actors. But, as the Fifth Circuit concedes, the platforms rejected the FBI’s requests to pull down this content about half of the time.
And, regardless of how one should feel about the government communicating with media sites about whether Russian and anti-vax disinformation should remain online, the Fifth Circuit’s approach to these communications is ham-handed and unworkable.
At several points in its opinion, for example, the Fifth Circuit faults government officials who “entangled themselves in the platforms’ decision-making processes.” But the court never defines this term “entangled,” or even provides any meaningful hints about what it might mean, other than using equally vague adjectives to describe the administration’s communications with the platforms, such as “consistent and consequential.”
The Biden administration, in other words, appears to have been ordered not to have “consistent and consequential” communications with social media companies — whatever the hell that means. Normally, when courts hand down injunctions binding the government, they define the scope of that injunction clearly enough that it’s actually possible to figure out what the government is and is not allowed to do.
The common element in NetChoice and Murthy is that, in both cases, government officials (the Texas legislature in NetChoice and three Fifth Circuit judges in Murthy) were concerned about certain views being suppressed on social media. And, in both cases, they came up with a solution that is so poorly thought out that it is worse than whatever perceived problem they were trying to solve.
Thanks to the Fifth Circuit, for example, the FBI has no idea what it is allowed to do if it discovers that Vladimir Putin is flooding Facebook, YouTube, and Twitter with content that is actively trying to incite an insurrection within the United States. And, thanks to the Fifth Circuit, there’s now one First Amendment that Democrats must comply with, and a different, weaker First Amendment that applies to Republican officials.
We can only hope that the Supreme Court decides to step back and hit pause on this debate, at least until someone can come up with a sensible and workable framework that can address whatever problems the Texas legislature and the Fifth Circuit thought they were solving.




