
Samsung Galaxy Note20 sales in October fail to impress



Samsung has not been doing very well with the sales of its Galaxy Note20 series, reports from South Korea reveal. According to insiders, the manufacturer was expected to produce around 900,000 units in October but demand was weaker than anticipated so it cut the number down to 600,000 units.


The Korean source reveals the number is both for the regular Galaxy Note20 and the Galaxy Note20 Ultra. It’s speculated that the poor performance of the cheaper model is taking the overall number down.

Samsung managed to report a 50% yearly increase in operating profit for Q3 2020, but given the underperforming Note flagship, it may be unable to sustain that momentum in Q4.

On the other hand, the reduction in Note20 production allows Samsung's factories in South Korea to prepare for the initial run of Galaxy S21 flagships, with first reports pointing to a 2-million-unit production run.




Federal COVID Alert app wasn't working for some users for much of November



The developers of Canada’s COVID Alert app fixed a glitch last week that left some users without exposure notifications for much of November.

An update to the app released on Nov. 23 said it would fix a “bug causing gaps in exposure checks for some users.” Without the patch, some Canadians running the app would not have been notified if they came in close contact with someone diagnosed with COVID-19.

It’s unclear how many people missed exposure notifications due to the glitch. But it does raise the prospect that certain users weren’t advised to self-isolate or seek a COVID-19 test in a timely manner, potentially delaying diagnosis.

“For two weeks, the app basically didn’t work” for those users, said Urs Hengartner, an associate professor of computer science at the University of Waterloo.

He and others on social media said their devices had not performed any exposure checks from Nov. 9 to 23. The process — when a smartphone receives codes from a central server and verifies whether the user was potentially exposed to someone with COVID-19 — is supposed to take place several times a day. 

WATCH | How the COVID Alert app works


The problem appears to have first been reported by commenters in the Google Play Store as early as Nov. 12. That’s 11 days before it was fixed.

“I noticed today that COVID Alert has done no exposure checks for the last two weeks,” a user wrote in Apple’s App Store on Nov. 20. “What good is this?”

Users are urged to check their app store (the Google Play Store for people with Android devices and Apple’s App Store for those with iPhones) to ensure their app is now up to date. Users who haven’t installed the latest update — version 1.1.2 — could still be missing exposure checks. 

COVID Alert is designed to take note when two users spend at least 15 minutes less than two metres apart. If a user later tests positive for COVID-19, they can use the app to anonymously notify contacts of potential exposure. 
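The matching logic described above can be sketched in a few lines. This is a hypothetical illustration only: the real COVID Alert app is built on the Apple/Google Exposure Notification framework, so the function names, data shapes, and thresholds below are assumptions for explanatory purposes, not the app's actual implementation.

```python
# Illustrative sketch of COVID Alert's exposure criterion (not real app code).
# A contact qualifies when two devices spend at least 15 minutes
# less than two metres apart, and the other device's anonymous key
# later appears in the diagnosis keys published by the central server.

EXPOSURE_MINUTES = 15   # minimum contact duration
EXPOSURE_METRES = 2.0   # maximum contact distance

def is_exposure(contact_minutes, distance_metres):
    """Return True when a recorded contact meets the notification criteria."""
    return contact_minutes >= EXPOSURE_MINUTES and distance_metres < EXPOSURE_METRES

def check_exposures(contacts, positive_keys):
    """Flag recorded contacts whose anonymous key matches a diagnosis key
    downloaded from the central server during a periodic check."""
    return [
        c for c in contacts
        if c["key"] in positive_keys
        and is_exposure(c["minutes"], c["metres"])
    ]
```

The periodic background check that broke in November corresponds, in this sketch, to how often `check_exposures` runs against freshly downloaded keys: if the check never runs, matching contacts simply go unreported.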

COVID Alert has been downloaded more than 5.5 million times and is touted by federal officials as a tool to help slow the spread of the virus. The app is active in the Northwest Territories and all provinces except Alberta and B.C.

During the two-week period in November when some users reported the malfunction, 1,182 people used the app to report a positive test in Ontario alone, according to provincial data.

COVID-19 infection rates continued to rise across much of the country during that time. Ontario, for example, announced lockdown measures in its two most populous regions, and P.E.I. and Newfoundland and Labrador both announced on Nov. 23 they would withdraw from the Atlantic bubble due to increasing case counts elsewhere in the region.

The COVID Alert app is designed to notice when two users spend at least 15 minutes less than two metres apart. (Ben Nelms/CBC)

Bianca Healy, a spokesperson for the Treasury Board of Canada Secretariat, which houses the app’s development team, confirmed in an email Thursday evening that “on some devices, if the app was not opened by the user for an extended period of time, COVID Alert would stop checking in the background for the random codes that would trigger a notification that a user may have been exposed to COVID-19. This bug has now been fixed.”

Healy said the app’s built-in privacy features prevent federal officials from knowing how many users may have been affected.

“We encourage Canadians to update COVID Alert as soon as possible,” she wrote. “They can also open the app to ensure that COVID Alert is checking for potential exposures.”

Hengartner, the computer science professor, said it is “a little concerning that it took two weeks to fix this bug.” He said both he and his wife experienced the same issue.

He called it “a fatal bug for this kind of system,” as it defeats the purpose of the app entirely.

It’s unknown what caused the glitch, but Hengartner said he suspects it was an error in a previous COVID Alert update.

Users weren’t immediately warned

Smartphone users can choose to automatically receive app updates or download them manually. Apple’s App Store lists 14 updates for COVID Alert since its initial release in July. 

The Canadian Digital Service, the federal agency responsible for developing the app, tweeted a message on Nov. 26 asking users to make sure they have the latest COVID Alert update. “This will ensure your app is doing what it’s supposed to do, and you’re not missing any checks or notifications,” the message read.

The tweet did not mention that the scenario it described was real and posed a potential risk to some users. It’s unclear what other steps the federal agency took to alert users of the importance of the latest update.

Hengartner stressed the problem should not discourage Canadians from installing COVID Alert.

However, Kelly Bronson, a Canada Research Chair in science and society, said the episode does highlight how the app could provide users with a “false sense of security.” She pointed to “automation bias,” a human tendency to rely on automated decision-making, which can reduce personal vigilance.

Bronson, who serves on the Global Pandemic App Watch program at the University of Ottawa, which tracks the uptake of similar tools around the world, warned the apps “are not a panacea.”

“I think it’s really important that people know the limitations of these technologies,” she said.



Google Scientist's Abrupt Exit Exposes Rift in Prominent AI Unit – BNN



(Bloomberg) — Google’s decision to part ways with a prominent researcher laid bare divisions within the company’s artificial intelligence unit and subjected its leader, the lauded software engineer Jeff Dean, to widespread scorn.

Timnit Gebru, a renowned scientist and one of the few Black women in AI, said Wednesday she was fired over an email she authored expressing dismay with management and the way it handled a review of her research. Gebru had been co-head of the team examining the ethical ramifications of AI.

What followed was a torrent of criticism of Google’s AI division, much of it aimed at Dean. “The termination is an act of retaliation against Dr. Gebru, and it heralds danger for people working for ethical and just AI — especially Black people and People of Color — across Google,” a group of hundreds of academics and researchers, many of them Google employees, wrote in an open letter. Among its demands: that Dean and his colleagues explain their decision-making around Gebru’s research.

The fallout threatens to tarnish the reputation of one of the industry’s leading research shops, a division of Alphabet Inc.’s Google that not only aids development of lucrative products but also contributes significantly to the world’s understanding of AI. And in a company brimming with computer scientists, few have been as revered as Dean. He oversees a sprawling research empire and has publicly championed more diverse hiring in AI and computer science. His programming prowess became the subject of corporate lore and glowing press coverage, including one article that called him the “Chuck Norris of the internet.”

“Ousting Timnit for having the audacity to demand research integrity severely undermines Google’s credibility for supporting rigorous research on AI ethics,” said Joy Buolamwini, the founder of the Algorithmic Justice League who wrote a ground-breaking paper, with Gebru, on racism in facial recognition software. The widely cited 2018 study showed facial recognition software misidentified dark-skinned women as much as 35% of the time, compared with near-perfect accuracy for White men.

Dean and Google representatives didn’t respond to requests for comment. In an email to colleagues Thursday that was seen by Bloomberg, Dean defended his handling of the matter. He wrote in part that Gebru hadn’t followed company policy in submitting the paper for peer review, that it ignored “too much relevant research,” and that Gebru and colleagues made unrealistic demands when they were informed “that it didn’t meet our bar for publication.”

Under Dean, Google has assembled a diverse group of AI ethics scientists with backgrounds in tech and social science, but some of those employees are now wondering if they are free to do their jobs. Inside Google’s research unit, several people openly questioned their future at the company, while others felt compelled to apologize to recently hired researchers, according to a person who asked not to be identified discussing internal matters.

“The egregiously aggressive retaliation from Jeff Dean and other senior leaders at Google is indicative of the lack of respect that they have both for Black women and academic freedom and integrity,” said Ifeoma Ozoma, a former Google policy associate.

The controversy came to a head Wednesday, when Gebru, the co-lead of Google’s Ethical Artificial Intelligence unit, posted on Twitter about her dismissal. She said that the company had demanded she retract a research paper she co-authored that criticized computer language models — including methods Google uses for its search engine and voice assistant.

In an email to colleagues earlier in the week that was also seen by Bloomberg, Gebru accused Dean’s division of not hiring enough women and silencing employees from marginalized groups. She told her colleagues to stop working “because it doesn’t make a difference.” In a subsequent message to Gebru, Google cited that email as a missive “inconsistent with the expectations of a Google manager.”

In his Thursday email to staff, Dean said he had accepted Gebru’s resignation after declining to meet her demands about the unpublished research paper. He also mentioned her comments supporting a work stoppage. “Please don’t,” the executive pleaded.

Dean’s email didn’t go over well. On Twitter, Alex Hanna, a researcher on Google’s Ethical AI team, accused Dean of “spreading misinformation and misconstruals” in the email.

“I’m extremely disappointed in @JeffDean today,” Kelly Ellis, a former Google engineer who now works at MailChimp, wrote on Twitter. “Shame on you, @JeffDean. I naively expected more from you,” said Eddie Kay, another former Google engineer.

Dean joined Google in 1999 and climbed its ranks — he’s now one of a select group of senior vice presidents — largely on his software engineering ability. In 2018, he was named the head of Google’s AI unit, widely considered the global leader in cutting-edge efforts like speech detection and image recognition.

Soon, though, that job entailed dealing with controversies. That year, Google staff rebelled against the company’s work on an AI project for the Pentagon. Researchers at the company also spoke out about how bias in AI unfairly targeted people of color in several instances, from Google’s Photo app to the algorithms used in bank loans and police work.

Since then, Google has released a set of ethical guidelines for its AI, including barring facial recognition for surveillance. The tech giant set up advisory councils, which themselves struggled to function. It also hired a handful of experts like Gebru, who had worked at Microsoft Corp., and paid them to research topics around AI and ethics.

Gebru was one of five Google staff listed on the research paper at the heart of her dismissal, along with two outside researchers. Emily M. Bender, a linguist from the University of Washington who co-authored the research, said she didn’t know about the issues Google had with the research. “[Gebru] is an incredibly respected leader in this field,” Bender said. “By pushing her out, Google is losing a major asset.”

In the past two years, several internal critics of Google’s approach to AI and ethics have left the company. On Thursday, staff on Dean’s unit referenced these departures as a sign of the low morale on the team. “The chilling effects of the decisions behind-the-scenes continue to haunt me,” Margaret Mitchell, co-head of the ethical AI team, wrote in an email viewed by Bloomberg News.

Dean took a more calibrated tone about the most recent exit. “I know we all genuinely share Timnit’s passion to make AI more equitable and inclusive,” he wrote in the email to his staff. “No doubt, wherever she goes after Google, she’ll do great work and I look forward to reading her papers and seeing what she accomplishes.”

©2020 Bloomberg L.P.



Xbox Series X/S now available on Walmart Canada's website [Now sold out] – MobileSyrup



Update 04/12/2020 at 12:07am ET: Well, that was fast — the consoles are already sold out.

Walmart Canada is now selling the Xbox Series X and S exclusively on its website.

The Xbox Series X costs $599 CAD, and the Xbox Series S is priced at $379.

This launch came a fair bit later in the day than Walmart's originally planned 12pm ET start, which was postponed shortly after the PS5 went live on the retailer's website.
