Tech

Microsoft gambles on ‘nice guy’ strategy to close Activision megadeal

Early this month, Brad Smith, Microsoft’s president, met with Lina Khan, the chair of the Federal Trade Commission, to push for regulatory approval of Microsoft’s $69 billion acquisition of the video game company Activision Blizzard.

Smith’s gambit — which included offering to keep Activision’s blockbuster game Call of Duty widely available to satisfy competitive concerns — failed. A day after their meeting, Khan’s agency sued to prevent the deal.

But in an interview this week, Smith was sanguine. “She did not take me up on my offer, but when I said give peace a chance, she smiled at least a little,” he said of Khan. “So any time somebody can end a meeting by smiling even a little, there’s always a little hope that we can sit down together in the future.”

Smith’s peacemaking comments reflect how Microsoft intends to approach the next phase of its deal for Activision. Far from giving up on the acquisition, he said, the company intends to gamble that its “nice guy” strategy could still work.

In one plan, Microsoft hopes to win over regulators in Europe, people familiar with the approach said. European approval of the Activision deal could push U.S. officials to reach a settlement that lets the acquisition move forward, or to have the case heard in a faster, more favorable court, the people said.

Microsoft filed its response to the FTC lawsuit Thursday, arguing that the deal would expand access for gamers.

“Giving consumers high-quality content in more ways and at lower prices is what the antitrust laws are supposed to promote, not prevent,” the filing said.

The FTC has said the deal should be stopped because it would harm consumers. It said Microsoft, which makes the Xbox console, could use Call of Duty and other popular Activision titles to lure gamers from rivals, especially Sony, which makes the PlayStation console.

Microsoft’s seemingly conciliatory approach is part of a nearly complete cultural transformation by the company since the 1990s, when it was known as the “Evil Empire” because of its strong-arm tactics to block out competitors. But under Satya Nadella, who became CEO in 2014, and Smith, who is also Microsoft’s top lawyer, the company has bent over backward in recent years to show it has grown up.

Pushing the Activision deal through has implications for more than just Microsoft. The FTC lawsuit is a landmark in a new era of government scrutiny of the biggest tech companies. Khan has staked an aggressive trustbusting agenda on the case, which legal experts said might be difficult to win. If Microsoft cannot get the deal approved, other tech behemoths will be less likely to be able to force a megadeal through.
“They will fight it,” said Sid Parakh, a portfolio manager at Becker Capital, which invests in Microsoft. “It’s a bit more above and beyond this deal. It’s also a statement to the FTC.”

With Microsoft sitting on more than $100 billion to spend, he added, “they don’t want to back down now and then have every acquisition shot down.”
The acquisition of Activision must close by mid-July, or Microsoft must pay a breakup fee of as much as $3 billion. Many hurdles remain, including approval from other global regulators, notably in Britain and the European Union. If Microsoft can reach a formal settlement with them, it would leave the FTC at a critical juncture.

The FTC sued Microsoft in its administrative court, which does not have the power to stop the deal from closing while the case is pending. If other regulators approved the deal, the FTC would have to decide whether to seek an injunction in federal court to block the acquisition. That process could move quickly, potentially handing Microsoft a swift legal victory.

“There is no sensible, legitimate reason for our transaction to be prevented from closing,” the CEO of Activision, Bobby Kotick, said in a statement Wednesday. “We believe we will prevail on the merits of the case.”

The FTC declined to comment on Microsoft’s strategy or Smith’s conversation with Khan. Holly Vedova, the director of the FTC’s Bureau of Competition, said the agency is always willing to consider proposals from companies looking to settle antitrust concerns.

Microsoft is trying to strike a balance between, on the one hand, seeming open to a settlement and, on the other, preparing to destroy the FTC’s case in court. It has hired Beth Wilkinson, who prosecuted the 1995 Oklahoma City bombing case before becoming one of the United States’ premier corporate litigators, to argue on its behalf before the FTC’s in-house court.

Smith said he was optimistic that the case could avoid a messy trial, in part because of Microsoft’s previous experiences with antitrust enforcement.

In the 1990s, the company was known for its scorched-earth business tactics, bundling software products together to edge out competitors. In 1992, as regulators investigated the company, Microsoft co-founder Bill Gates dismissed the scrutiny, saying, “The worst that could come of this is that I could fall down on the steps of the FTC, hit my head and kill myself.”

Two years later, Microsoft agreed to a federal consent decree allowing personal computer makers more freedom to install programs from other companies. It staved off being broken up after a 1998 antitrust trial, and finally settled with the George W. Bush administration in 2001.

“The trial forced Microsoft to grow up, particularly in terms of its relationships with regulators and institutions beyond the tech industry,” said Margaret O’Mara, a professor at the University of Washington who researches the history of tech companies.

In 2001, Smith walked into interviews to be Microsoft’s top lawyer with a message: It was time to make peace with regulators and competitors. He got the job. Over the next several years, he reached legal settlements over competition concerns with governments around the world and other industry players.

It was not always smooth sailing. Negotiations between the company and Sun Microsystems, a server company that made the popular Java programming language, fell apart and took a year to get back on track. In 2004, Steve Ballmer, Microsoft’s CEO at the time, was on a plane to Brussels to announce a deal with the European Commission when Smith got news that the commission instead was going to sue Microsoft for unfair competition. It took five years to secure a deal.

Since Nadella took over, Microsoft has embraced an even more open stance. His first acquisition was the studio that makes Minecraft, a game in which children learn and socialize in an expansive virtual world. He also spent $7.5 billion to buy GitHub, a software platform that supports open-source code.

Microsoft is now the world’s second-most valuable public company, largely driven by its strong cloud computing offerings. The enterprise business at the heart of its growth generally attracts less government attention than social media or other consumer-facing ventures.

Globally, Smith has presented Microsoft as a friendly giant willing to work with skeptical lawmakers. He has proposed middle-ground rules on contentious issues such as app stores and supported bipartisan interests such as the expansion of broadband.

Smith maintains powerful relationships in Washington. A bundler for President Joe Biden’s campaign, he attended a White House state dinner for French President Emmanuel Macron just days before the FTC sued to block the Activision deal.

After the deal was announced in January, Microsoft went to great lengths to soothe the fears of regulators. Smith and Nadella traveled to Washington in February to promote the deal’s benefits. The company also made peace with an agitating labor union, which in turn lobbied the FTC on the deal. And it promised Sony that it would keep Call of Duty on PlayStation for years, and signed a deal to put the game on Nintendo’s Switch.

Smith said that “things moved quickly” in the final weeks before Microsoft was sued. When FTC staff met with Microsoft’s team, it became clear that the agency had serious concerns, he said.

“Our team asked, ‘Could we discuss a settlement proposal?’ And the staff said, ‘Not with us,’” he said. Later discussions with the leadership of the agency’s antitrust bureau failed to bear fruit, he added.

On Dec. 6, Microsoft drafted a formal settlement proposal for the agency. Smith declined to say exactly what it contained but said it addressed “all the issues relating to Call of Duty,” referring to fears that Microsoft could pull the title from rival consoles. Smith spoke to each of the agency’s four commissioners, virtually, for an hour the next day.

A day after that, the FTC commissioners voted 3-1 to sue.

But Smith said he refused to think of the situation in us-versus-them terms.

“I will always start by asking myself, could I have done more?” he said. “What I do know is that January brings a new year.”

Tech

AI could help scale humanitarian responses. But it could also have big downsides

NEW YORK (AP) — As the International Rescue Committee copes with dramatic increases in displaced people in recent years, the refugee aid organization has looked for efficiencies wherever it can — including using artificial intelligence.

Since 2015, the IRC has invested in Signpost — a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The Signpost project, which includes many other organizations, has reached 18 million people so far, but the IRC wants to significantly increase its reach by using AI tools — if it can do so safely.

Conflict, climate emergencies and economic hardship have driven up demand for humanitarian assistance, with more than 117 million people forcibly displaced in 2024, according to the United Nations refugee agency. The turn to artificial intelligence technologies is in part driven by the massive gap between needs and resources.

To meet its goal of reaching half of displaced people within three years, the IRC is testing a network of AI chatbots to see if they can increase the capacity of its humanitarian officers and the local organizations that directly serve people through Signpost. For now, the pilot project operates in El Salvador, Kenya, Greece and Italy and responds in 11 languages. It draws on a combination of large language models from some of the biggest technology companies, including OpenAI, Anthropic and Google.

The chatbot response system also uses customer service software from Zendesk and receives other support from Google and Cisco Systems.

If it decides the tools work, the IRC wants to extend the technical infrastructure to other nonprofit humanitarian organizations at no cost. The hope is to create shared technology resources that less technically focused organizations could use without having to negotiate directly with tech companies or manage the risks of deployment.

“We’re trying to really be clear about where the legitimate concerns are but lean into the optimism of the opportunities and not also allow the populations we serve to be left behind in solutions that have the potential to scale in a way that human to human or other technology can’t,” said Jeannie Annan, International Rescue Committee’s Chief Research and Innovation Officer.

The responses and information that Signpost chatbots deliver are vetted by local organizations to be up to date and sensitive to the precarious circumstances people could be in. In one example query the IRC shared, a woman from El Salvador traveling through Mexico to the United States with her son asks for shelter and for services for her child. The bot provides a list of providers in the area where she is.

More complex or sensitive queries are escalated for humans to respond.
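The article does not describe how Signpost is actually implemented, but the triage pattern it outlines — answer routine questions from locally vetted content, and hand anything sensitive or unanswered to a person — can be sketched roughly as follows. All keywords, answers and names below are hypothetical placeholders, not IRC code.

```python
# A minimal sketch of the triage pattern described above, not IRC's actual system.
# Every keyword, answer and name here is hypothetical.

from dataclasses import dataclass

SENSITIVE_KEYWORDS = {"violence", "medical emergency", "trafficking", "asylum hearing"}

# Vetted answers would be maintained and kept current by local partner organizations.
VETTED_ANSWERS = {
    "shelter": "Here is a list of shelter providers near your reported location.",
    "legal aid": "These organizations offer free legal counselling in your area.",
}

@dataclass
class Reply:
    text: str
    escalated: bool  # True when a human responder must follow up

def respond(query: str) -> Reply:
    q = query.lower()
    # Sensitive topics go straight to a humanitarian officer.
    if any(keyword in q for keyword in SENSITIVE_KEYWORDS):
        return Reply("A trained responder will follow up with you shortly.", escalated=True)
    # Routine questions are answered from the vetted, locally maintained content.
    for topic, answer in VETTED_ANSWERS.items():
        if topic in q:
            return Reply(answer, escalated=False)
    # If there is no vetted answer, hand off rather than let a model guess.
    return Reply("We are forwarding your question to a local team member.", escalated=True)

if __name__ == "__main__":
    print(respond("Where can my son and I find shelter tonight?"))
```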

The most important potential downside of these tools is that they might not work. For example, what if the situation on the ground changes and the chatbot doesn’t know? It could provide information that’s not just wrong, but dangerous.

A second issue is that these tools can amass a valuable honeypot of data about vulnerable people that hostile actors could target. What if a hacker succeeds in accessing data with personal information or if that data is accidentally shared with an oppressive government?

The IRC said it has agreed with the tech providers that none of their AI models will be trained on the data generated by the IRC, the local organizations or the people they serve. They’ve also worked to anonymize the data, including removing personal information and location.
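The article doesn’t say how that anonymization is done. As a rough illustration only — not a description of the IRC’s tooling — a redaction pass over incoming text might look something like this, using deliberately simplistic placeholder patterns:

```python
# Illustrative redaction pass only; the regular expressions are simple placeholders.

import re

REDACTIONS = [
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),                    # email addresses
    (re.compile(r"-?\d{1,3}\.\d{3,6},\s*-?\d{1,3}\.\d{3,6}"), "[COORDINATES]"), # lat,lon pairs
    (re.compile(r"\+?\d[\d\s().-]{7,}\d"), "[PHONE]"),                          # phone-like numbers
]

def anonymize(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(anonymize("Write to maria@example.org or call +52 55 1234 5678. I'm near 19.4326, -99.1332."))
```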

As part of the Signpost.AI project, IRC is also testing tools like a digital automated tutor and maps that can integrate many different types of data to help prepare for and respond to crises.

Cathy Petrozzino, who works for the not-for-profit research and development company MITRE, said AI tools do have high potential, but also high risks. To use these tools responsibly, she said, organizations should ask themselves, does the technology work? Is it fair? Are data and privacy protected?

She also emphasized that organizations need to convene a range of people to help govern and design the initiative — not just technical experts, but people with deep knowledge of the context, legal experts, and representatives from the groups that will use the tools.

“There are many good models sitting in the AI graveyard,” she said, “because they weren’t worked out in conjunction and collaboration with the user community.”

For any system that has potentially life-changing impacts, Petrozzino said, groups should bring in outside experts to independently assess their methodologies. Designers of AI tools need to consider the other systems their tools will interact with, she said, and they need to plan to monitor the model over time.

Consulting with displaced people or others that humanitarian organizations serve may increase the time and effort needed to design these tools, but not having their input raises many safety and ethical problems, said Helen McElhinney, executive director of CDAC Network. It can also unlock local knowledge.

People receiving services from humanitarian organizations should be told if an AI model will analyze any information they hand over, she said, even if the intention is to help the organization respond better. That requires meaningful and informed consent, she said. They should also know if an AI model is making life-changing decisions about resource allocation and where accountability for those decisions lies, she said.

Degan Ali, CEO of Adeso, a nonprofit in Somalia and Kenya, has long been an advocate for changing the power dynamics in international development to give more money and control to local organizations. She asked how IRC and others pursuing these technologies would overcome access issues, pointing to the week-long power outages caused by Hurricane Helene in the U.S. Chatbots won’t help when there’s no device, internet or electricity, she said.

Ali also warned that few local organizations have the capacity to attend big humanitarian conferences where the ethics of AI are debated. Few have staff both senior enough and knowledgeable enough to really engage with these discussions, she said, though they understand the potential power and impact these technologies may have.

“We must be extraordinarily careful not to replicate power imbalances and biases through technology,” Ali said. “The most complex questions are always going to require local, contextual and lived experience to answer in a meaningful way.”

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

___

Associated Press coverage of philanthropy and nonprofits receives support through the AP’s collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.

Tech

Ottawa orders TikTok’s Canadian arm to be dissolved

The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.

Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.

“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.

The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.

However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”

Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.

A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.

“We will challenge this order in court,” the spokesperson said.

“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”

The federal Liberals ordered a national security review of TikTok in September 2023, but the review was not public knowledge until The Canadian Press reported in March that the government was investigating the company.

At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.

A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”

Even before the review, ByteDance and TikTok were lightning rods for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.

Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.

Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.

Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment with the potential to harm national security.

While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.

Wednesday’s dissolution order was made in accordance with the act.

The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.

— With files from Anja Karadeglija in Ottawa

This report by The Canadian Press was first published Nov. 6, 2024.

Health

Here is how to prepare your online accounts for when you die

LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?

It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.

Here’s how you can prepare your digital life for your survivors:

Apple

The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system — iPhones and iPads need iOS or iPadOS 15.2, and MacBooks need macOS Monterey 12.1.

For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.

You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.

Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.

Google

Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.

When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.

You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.

There’s also an option to have your account deleted automatically three months after it’s declared inactive, so your contacts will have to download any data before that deadline.

Facebook and Instagram

Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.

When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.

The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.

You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.

TikTok

The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.

Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.

X

It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.

Passwords

Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?

Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.

But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.

___

Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.
