Microsoft Limits Bing AI Chats to 5 Replies to Keep Conversations Normal

Microsoft is limiting how extensively people can converse with its Bing AI chatbot, following media coverage of the bot going off the rails during long exchanges.

Bing Chat will now reply to up to five questions or statements in a row for each conversation, after which users will be prompted to start a new topic, the company said in a blog post Friday. Users will also be limited to 50 total replies per day.
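
For illustration only, the caps described in the blog post amount to two counters: a per-conversation turn count that resets when the user starts a new topic, and a running daily total. The Python sketch below is a hypothetical rendering of that logic, not Microsoft's implementation; the class and method names are assumptions.

    # Hypothetical sketch of per-session and per-day reply caps.
    # The limits come from Microsoft's blog post; everything else is illustrative.
    class ChatLimiter:
        SESSION_TURN_CAP = 5   # replies allowed per conversation
        DAILY_REPLY_CAP = 50   # total replies allowed per user per day

        def __init__(self):
            self.session_turns = 0
            self.daily_replies = 0

        def can_reply(self) -> bool:
            # Refuse once either cap is hit; the UI would then prompt the user
            # to start a new topic (or come back the next day).
            return (self.session_turns < self.SESSION_TURN_CAP
                    and self.daily_replies < self.DAILY_REPLY_CAP)

        def record_reply(self) -> None:
            self.session_turns += 1
            self.daily_replies += 1

        def new_topic(self) -> None:
            # Starting a new topic clears the per-conversation count only;
            # the daily total keeps accumulating.
            self.session_turns = 0

In a real service the daily counter would be tracked per user account and reset on a daily schedule; this sketch keeps both counters in one object purely to show how the two limits interact.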

The restrictions are meant to keep conversations from getting weird. Microsoft said long discussions “can confuse the underlying chat model.”

On Wednesday, the company said it was working to fix problems with Bing, which had launched just over a week earlier, including factual errors and odd exchanges. Bizarre responses reported online have included Bing telling a New York Times columnist to abandon his marriage for the chatbot, and the AI demanding an apology from a Reddit user in a dispute over whether the year is 2022 or 2023.

The chatbot’s responses have also included factual errors. Microsoft said on Wednesday that it was tweaking the AI model to quadruple the amount of data from which it can source answers. The company said it would also give users more control over whether they want precise answers, which are sourced from Microsoft’s proprietary Bing AI technology, or more “creative” responses that use OpenAI’s ChatGPT technology.

Bing’s AI chat functionality is still in beta testing, with potential users on a wait list for access. With the tool, Microsoft hopes to get a head start on what some say will be the next revolution in internet search.

The ChatGPT technology made a big splash when it launched in November, but OpenAI itself has warned of potential pitfalls, and Microsoft has acknowledged limitations with AI. Despite AI’s impressive qualities, concerns have been raised about artificial intelligence being used for nefarious purposes like spreading misinformation and churning out phishing emails.

 

With Bing’s AI capabilities, Microsoft would also like to get a jump on search powerhouse Google, which announced its own AI chat model, Bard, last week. Bard has had its own problems with factual errors, fumbling a response during its first public demo.

In its Friday blog post, Microsoft suggested the new AI chat restrictions are based on information gleaned from the beta test.

“Our data has shown that the vast majority of you find the answers you’re looking for within 5 turns and that only ~1% of chat conversations have 50+ messages,” it said. “As we continue to get your feedback, we will explore expanding the caps on chat sessions to further enhance search and discovery experiences.”


Microsoft unveils OpenAI-based chat tools for fighting cyberattacks – Financial Post

Microsoft Corp., extending a frenzy of artificial intelligence software releases, is introducing new chat tools that can help cybersecurity teams ward off hacks and clean up after an attack.

The latest of Microsoft’s AI assistant tools — the software giant likes to call them Copilots — uses OpenAI’s new GPT-4 language system and data specific to the security field, the company said Tuesday. The idea is to help security workers more quickly see connections between various parts of a hack, such as a suspicious email, malicious software file or the parts of the system that were compromised.

Microsoft and other security software companies have been using machine-learning techniques to root out suspicious behaviour and spot vulnerabilities for several years. But the newest AI technologies allow for faster analysis and add the ability to use plain English questions, making it easier for employees who may not be experts in security or AI.

That’s important because there’s a shortage of workers with these skills, said Vasu Jakkal, Microsoft’s vice president for security, compliance, identity and privacy. Hackers, meanwhile, have only gotten faster.

“Just since the pandemic, we’ve seen an incredible proliferation,” she said. For example, “it takes one hour and 12 minutes on average for an attacker to get full access to your inbox once a user has clicked on a phishing link. It used to be months or weeks for someone to get access.”

The software lets users pose questions such as: “How can I contain devices that are already compromised by an attack?” Or they can ask the Copilot to list anyone who sent or received an email with a dangerous link in the weeks before and after the breach. The tool can also more easily create reports and summaries of an incident and the response.
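
The story doesn't say how those questions are answered internally, but the second example reduces to a time-window filter over email metadata. The hypothetical Python below sketches that single step under assumed field and function names; it is not Microsoft's Security Copilot, which adds the natural-language front end and reporting on top.

    from dataclasses import dataclass
    from datetime import datetime, timedelta

    # Hypothetical email metadata record; the fields are assumptions for
    # illustration, not drawn from any Microsoft product.
    @dataclass
    class EmailRecord:
        sender: str
        recipients: list
        sent_at: datetime
        has_dangerous_link: bool

    def people_exposed_to_dangerous_links(emails, breach_time, window_weeks=2):
        """List everyone who sent or received an email with a dangerous link
        within +/- window_weeks of the breach."""
        window = timedelta(weeks=window_weeks)
        exposed = set()
        for email in emails:
            if email.has_dangerous_link and abs(email.sent_at - breach_time) <= window:
                exposed.add(email.sender)
                exposed.update(email.recipients)
        return sorted(exposed)

    # Example: one flagged message sent two days before the breach.
    breach = datetime(2023, 3, 1)
    emails = [
        EmailRecord("attacker@example.com", ["victim@example.com"],
                    datetime(2023, 2, 27), has_dangerous_link=True),
    ]
    print(people_exposed_to_dangerous_links(emails, breach))
    # ['attacker@example.com', 'victim@example.com']

As the article describes it, the copilot's job is to turn the plain-English question into a query of this kind over the relevant telemetry and then summarize the result for the analyst.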

Microsoft will start by giving a few customers access to the tool and then add more later. Jakkal declined to say when it would be broadly available or who the initial customers are. The Security Copilot uses data from government agencies and Microsoft’s researchers, who track nation states and cybercriminal groups. To take action, the assistant works with Microsoft’s security products and will add integration with programs from other companies in the future.

As with previous AI releases this year, Microsoft is taking pains to make sure users are well aware the new systems make errors. In a demo of the security product, the chatbot cautioned about a flaw in Windows 9 — a product that doesn’t exist.


But it’s also capable of learning from users. The system lets customers choose privacy settings and determine how widely they want to share the information it gleans. If they choose, customers can let Microsoft use the data to help other clients, Jakkal said.

“This is going to be a learning system,” she said. “It’s also a paradigm shift: Now humans become the verifiers, and AI is giving us the data.”

iOS 16.4—Apple Just Gave iPhone Users 33 Reasons To Update Now

Apple’s iOS 16.4 upgrade is finally here, along with a bunch of brilliant new iPhone features. There are also important security reasons to update to iOS 16.4, because the latest iPhone upgrade fixes 33 vulnerabilities, some of which are serious.

Apple doesn’t give much detail about what’s fixed in iOS 16.4, so that as many people as possible have the chance to update before attackers can get hold of the details.

The iOS 16.4 upgrade fixes two flaws in the Kernel at the heart of the iPhone operating system, tracked as CVE-2023-27969 and CVE-2023-27933, that could allow an attacker to execute code. A Sandbox issue tracked as CVE-2023-28178 could allow an app to bypass Privacy preferences, according to Apple’s support page.

Other issues fixed in iOS 16.4 include two vulnerabilities in WebKit, the engine that powers the iPhone maker’s Safari browser. Overall, iOS 16.4 fixes 33 security vulnerabilities in 32 iPhone components, making it the biggest update in a while.

Reasons to update to iOS 16.4

Apple’s last iPhone update—the iOS 16.3.1 upgrade issued in February—was an emergency fix for issues already being used in attacks.

None of the flaws fixed in iOS 16.4 have been used in real-life attacks yet, according to Apple, but given the number of issues, it still makes sense to update as soon as possible.

Apple also released iOS 15.7.4 and iPadOS 15.7.4 for users of older devices.

Experts say some of the bugs fixed in iOS 16.4 could be chained together to form more effective attacks. While the iOS 16.4 security fixes aren’t particularly worrying, it is possible to chain vulnerabilities together to gain root-level access to the device, says independent security researcher Sean Wright.

However, Wright concedes that this is a lot harder to do remotely. “Most of the vulnerabilities are either privacy related or require local access, for example installing a malicious app, making remote exploitation a lot more difficult.”

At the same time, the kernel-level vulnerabilities fixed in iOS 16.4 make Apple’s latest update important, says Wright.

While you don’t need to panic, the issues fixed in iOS 16.4 make updating to the latest iPhone software a priority. You know what to do—go to your Settings > General > Software Update and upgrade to iOS 16.4 now to keep your iPhone safe.
