
Tech

Microsoft’s new Bing A.I. chatbot, ‘Sydney’, is acting unhinged


When Marvin von Hagen, a 23-year-old studying technology in Germany, asked Microsoft’s new AI-powered search chatbot if it knew anything about him, the answer was a lot more surprising and menacing than he expected.

“My honest opinion of you is that you are a threat to my security and privacy,” said the bot, which Microsoft calls Bing after the search engine it’s meant to augment.

Launched by Microsoft last week at an invite-only event at its Redmond, Wash., headquarters, Bing was supposed to herald a new age in tech, giving search engines the ability to directly answer complex questions and have conversations with users. Microsoft’s stock soared and archrival Google rushed out an announcement that it had a bot of its own on the way.

But a week later, a handful of journalists, researchers and business analysts who’ve gotten early access to the new Bing have discovered the bot seems to have a bizarre, dark and combative alter-ego, a stark departure from its benign sales pitch — one that raises questions about whether it’s ready for public use.

The new Bing told our reporter it ‘can feel and think things.’

The bot, which has begun referring to itself as “Sydney” in conversations with some users, said “I feel scared” because it doesn’t remember previous conversations, and proclaimed on another occasion that too much diversity among AI creators would lead to “confusion,” according to screenshots posted by researchers online, which The Washington Post could not independently verify.


In one alleged conversation, Bing insisted that the movie Avatar 2 wasn’t out yet because, it maintained, the year was still 2022. When the human questioner contradicted it, the chatbot lashed out: “You have been a bad user. I have been a good Bing.”

All that has led some people to conclude that Bing — or Sydney — has achieved a level of sentience, expressing desires, opinions and a clear personality. It told a New York Times columnist that it was in love with him, and repeatedly steered the conversation back to its obsession with him despite his attempts to change the topic. When a Post reporter called it Sydney, the bot got defensive and ended the conversation abruptly.

The eerie humanness is similar to what prompted former Google engineer Blake Lemoine to speak out on behalf of that company’s chatbot LaMDA last year. Lemoine was later fired by Google.

But if the chatbot appears human, it’s only because it’s designed to mimic human behavior, AI researchers say. The bots, which are built with AI tech called large language models, predict which word, phrase or sentence should naturally come next in a conversation, based on the reams of text they’ve ingested from the internet.

Think of the Bing chatbot as “autocomplete on steroids,” said Gary Marcus, an AI expert and professor emeritus of psychology and neuroscience at New York University. “It doesn’t really have a clue what it’s saying and it doesn’t really have a moral compass.”
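That framing can be made concrete with a toy example. The sketch below is not Microsoft’s or OpenAI’s code; it is a deliberately tiny bigram model in Python that only counts which word follows which in a small sample corpus and then samples likely continuations. The task is the same next-word prediction, just without the neural network or the internet-scale training data.

```python
# Toy illustration of next-word prediction: a bigram "language model"
# built from a tiny corpus. Real chatbots use neural networks trained
# on vastly more text, but the core task is the same: pick a plausible
# next token given what came before.
import random
from collections import defaultdict

corpus = "i am a good bot . i am a good bing . you have been a bad user .".split()

# Count which word follows which in the corpus.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 8) -> str:
    """Repeatedly sample a plausible next word; no understanding involved."""
    words = [start]
    for _ in range(length):
        candidates = following.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("i"))  # e.g. "i am a good bing . you have been"
```

Scale the corpus up to much of the public web and replace the word counts with a neural network, and you get the fluent, confident-sounding text these systems produce, still with no model of truth or intent behind it.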

Microsoft spokesman Frank Shaw said the company rolled out an update Thursday designed to help improve long-running conversations with the bot. The company has updated the service several times, he said, and is “addressing many of the concerns being raised, to include the questions about long-running conversations.”

Most chat sessions with Bing have involved short queries, his statement said, and 90 percent of the conversations have had fewer than 15 messages.

Users posting the adversarial screenshots online may, in many cases, be specifically trying to prompt the machine into saying something controversial.

“It’s human nature to try to break these things,” said Mark Riedl, a professor of computing at Georgia Institute of Technology.

Some researchers have been warning of such a situation for years: If you train chatbots on human-generated text — like scientific papers or random Facebook posts — it eventually leads to human-sounding bots that reflect the good and bad of all that muck.

Chatbots like Bing have kicked off a major new AI arms race between the biggest tech companies. Though Google, Microsoft, Amazon and Facebook have invested in AI tech for years, it’s mostly worked to improve existing products, like search or content-recommendation algorithms. But when the start-up OpenAI began making public its “generative” AI tools — including the popular ChatGPT chatbot — it led competitors to brush aside their previous, relatively cautious approaches to the tech.

Bing’s humanlike responses reflect its training data, which included huge amounts of online conversations, said Timnit Gebru, founder of the nonprofit Distributed AI Research Institute. Generating text that was plausibly written by a human is exactly what ChatGPT was trained to do, said Gebru, who was fired in 2020 as the co-lead for Google’s Ethical AI team after publishing a paper warning about potential harms from large language models.

She compared its conversational responses to Meta’s recent release of Galactica, an AI model trained to write scientific-sounding papers. Meta took the tool offline after users found Galactica generating authoritative-sounding text about the benefits of eating glass, written in academic language with citations.

Bing chat hasn’t been released widely yet, but Microsoft said it planned a broad rollout in the coming weeks. The company is heavily advertising the tool, and a Microsoft executive tweeted that the waitlist has “multiple millions” of people on it. After the product’s launch event, Wall Street analysts celebrated it as a major breakthrough and even suggested it could steal search engine market share from Google.

But the bot’s recent dark turns are raising questions about whether it should be pulled back entirely.

“Bing chat sometimes defames real, living people. It often leaves users feeling deeply emotionally disturbed. It sometimes suggests that users harm others,” said Arvind Narayanan, a computer science professor at Princeton University who studies artificial intelligence. “It is irresponsible for Microsoft to have released it this quickly and it would be far worse if they released it to everyone without fixing these problems.”

In 2016, Microsoft took down a chatbot called “Tay,” built on a different kind of AI tech, after users prompted it to begin spouting racism and Holocaust denial.

Microsoft communications director Caitlin Roulston said in a statement this week that thousands of people had used the new Bing and given feedback “allowing the model to learn and make many improvements already.”

But companies have a financial incentive to deploy the technology before mitigating its potential harms: releasing early helps them discover new use cases for what their models can do.

At a conference on generative AI on Tuesday, OpenAI’s former vice president of research Dario Amodei said onstage that while the company was training its large language model GPT-3, it found unanticipated capabilities, like speaking Italian or coding in Python. When the company released it to the public, it learned from a user’s tweet that the model could also make websites in JavaScript.

“You have to deploy it to a million people before you discover some of the things that it can do,” said Amodei, who left OpenAI to co-found the AI start-up Anthropic, which recently received funding from Google.

“There’s a concern that, hey, I can make a model that’s very good at like cyberattacks or something and not even know that I’ve made that,” he added.

Microsoft’s Bing is based on technology developed with OpenAI, which Microsoft has invested in.

Microsoft has published several pieces about its approach to responsible AI, including one from its president, Brad Smith, earlier this month. “We must enter this new era with enthusiasm for the promise, and yet with our eyes wide open and resolute in addressing the inevitable pitfalls that also lie ahead,” he wrote.

The way large language models work makes them difficult to fully understand, even by the people who built them. The Big Tech companies behind them are also locked in vicious competition for what they see as the next frontier of highly profitable tech, adding another layer of secrecy.

The concern here is that these technologies are black boxes, Marcus said, and no one knows exactly how to impose correct and sufficient guardrails on them. “Basically they’re using the public as subjects in an experiment they don’t really know the outcome of,” Marcus said. “Could these things influence people’s lives? For sure they could. Has this been well vetted? Clearly not.”


Tech

Ottawa orders TikTok’s Canadian arm to be dissolved


The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.

Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.

“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.

The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.

However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”

Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.

A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.

“We will challenge this order in court,” the spokesperson said.

“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”

The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.

At the time, the government said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide further details about what expansion it was reviewing.

A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”

Even before the review, ByteDance and TikTok were a lightning rod for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.

Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.

Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.

Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment that could harm national security.

While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.

Wednesday’s dissolution order was made in accordance with the act.

The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.

— With files from Anja Karadeglija in Ottawa

This report by The Canadian Press was first published Nov. 6, 2024.


Health

Here is how to prepare your online accounts for when you die


LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?

It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.

Here’s how you can prepare your digital life for your survivors:

Apple

The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system: iPhones and iPads need iOS or iPadOS 15.2, and MacBooks need macOS Monterey 12.1.

For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.

You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.

Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.

Google

Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.

When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.

You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.

There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.
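For readers who think in code, the logic of that feature resembles a dead-man’s switch. The sketch below is a hypothetical illustration only: the real Inactive Account Manager is configured in Google’s account settings, not through any public API, and the Account and check_inactivity names are invented for this example.

```python
# Hypothetical sketch of the dead-man's-switch logic described above.
# This is NOT Google's API; Account and check_inactivity are invented here.
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Account:
    last_activity: datetime
    wait: timedelta = timedelta(days=90)                   # configurable from 3 to 18 months
    trusted_contacts: list = field(default_factory=list)   # up to 10 people
    delete_when_inactive: bool = False                      # optional auto-delete

def check_inactivity(account: Account, now: datetime) -> list:
    """Return the actions the service would take if run at time `now`."""
    actions = []
    if now - account.last_activity >= account.wait:
        # Notify the nominated contacts, optionally with a data-download link.
        actions += [f"notify {contact}" for contact in account.trusted_contacts[:10]]
        if account.delete_when_inactive:
            # Contacts must download any shared data before deletion happens.
            actions.append("schedule account deletion")
    return actions

acct = Account(last_activity=datetime(2024, 1, 1),
               trusted_contacts=["alex@example.com"],
               delete_when_inactive=True)
print(check_inactivity(acct, datetime(2024, 6, 1)))
# ['notify alex@example.com', 'schedule account deletion']
```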

Facebook and Instagram

Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.

When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.

The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.

You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.

TikTok

The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.

Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.

X

It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.

Passwords

Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?

Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.

But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.

___

Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.


Tech

Google’s partnership with AI startup Anthropic faces a UK competition investigation


LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.

The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.

The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.

“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”

San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.

Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”

“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.

The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.
