Microsoft released a free software patch Tuesday to fix a major security flaw in its Windows 10 operating system.
The flaw, which was discovered by the U.S. National Security Agency, could allow hackers to intercept seemingly secure communications.
But rather than exploit the flaw for its own intelligence needs, the NSA tipped off Microsoft so that it could fix the system for everyone.
Microsoft credited the NSA for discovering the flaw. The company said it has not seen any evidence that hackers have used the technique discovered by the NSA.
Microsoft said an attacker could exploit the vulnerability by spoofing a code-signing certificate so it looked like a file came from a trusted source.
“The user would have no way of knowing the file was malicious, because the digital signature would appear to be from a trusted provider,” the company said.
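Conceptually, the safeguard at issue is the routine certificate check an operating system runs before trusting a signed file. The sketch below is illustrative only: it uses Apple’s Security framework rather than the Windows code that actually contained the flaw, and the helper name is our own, but the chain-of-trust evaluation it performs is the kind of step a spoofed certificate would have slipped past.

```swift
import Foundation
import Security

// Hypothetical helper: accept a file's signing certificate only if its
// chain leads back to a root the system already trusts. The Windows flaw
// let attackers forge a certificate that passed a check like this one.
func certificateIsTrusted(_ derEncodedCert: Data) -> Bool {
    // Parse the DER-encoded certificate the file claims to be signed with.
    guard let cert = SecCertificateCreateWithData(nil, derEncodedCert as CFData) else {
        return false
    }
    // Build a trust object that validates the chain against the system's roots.
    var trust: SecTrust?
    let policy = SecPolicyCreateBasicX509()
    guard SecTrustCreateWithCertificates(cert, policy, &trust) == errSecSuccess,
          let trust = trust else {
        return false
    }
    // Evaluate the chain; a forged certificate should fail here.
    var error: CFError?
    return SecTrustEvaluateWithError(trust, &error)
}
```

On an unpatched system, the equivalent Windows check reported success for a forged certificate, which is why a malicious file could appear to come from a trusted provider.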
Could be used to decrypt confidential info
If successfully exploited, an attacker would have been able to conduct “man-in-the-middle” attacks and decrypt confidential information on user connections, the company said.
Some computers will get the fix automatically if they have the automatic-update option turned on. Others can get it manually. Microsoft typically releases security and other updates once a month and waited until Tuesday to disclose the flaw and the NSA’s involvement.
Priscilla Moriuchi, who retired from the NSA in 2017 after running its East Asia and Pacific operations, said this is a good example of the “constructive role” that the NSA can play in improving global information security. Moriuchi, now an analyst at the U.S. cybersecurity firm Recorded Future, said it’s likely a reflection of changes made in 2017 to how the U.S. determines whether to disclose a major vulnerability or exploit it for intelligence purposes.
The revamping of what’s known as the “Vulnerability Equities Process” put more emphasis on disclosing unpatched vulnerabilities whenever possible to protect core internet systems and the U.S. economy and general public.
Those changes happened after a group calling itself “Shadow Brokers” released a trove of high-level hacking tools stolen from the NSA.
iOS 17 beta 1 offers an overhauled Apple Translate app that is more straightforward and easier to use. The redesign is available on iPadOS 17, too.
WWDC is our favorite time of the year for many reasons. Not only do we get to try out the upcoming major updates to Apple’s operating systems, but we also sometimes get fresh hardware releases. This time around, we witnessed the debut of Apple’s Vision Pro, some new Mac models, iOS 17, iPadOS 17, macOS Sonoma, and watchOS 10. And while the Cupertino firm has provided the public with a comprehensive list of what’s new in these releases, many additions remain publicly undocumented. For example, we have just discovered that iOS 17 beta 1 redesigns the built-in Apple Translate app. The new user interface offers more intuitive controls, making the application more straightforward and easier to use.
As our screenshots above reveal, Apple Translate on iOS 17 beta 1 (right) is cleaner than the version on iOS 16 and earlier (left). The new design simplifies the entire interface, making it both more intuitive to operate and easier on the eyes. As someone who tries to rely on Google services as little as possible, I had always found Apple Translate unintuitive compared to its Google counterpart. With the iOS 17 update, users finally get a more straightforward app.
For example, Apple Translate on iOS 16 keeps shifting between the two selected languages, making it hard to tap the right field straightaway. Dismissing a translated phrase to type another one is also a pain. On iOS 17, pretty much all of these concerns have been addressed in the Apple Translate app.
While Google Translate remains superior in terms of translation accuracy and language availability, Apple Translate can handle my occasional translation needs just fine. And thanks to this overhaul, I feel even more motivated to depend on it and ditch Google’s solution completely. We can only hope that this design makes it to the final release in September, as Apple could change its mind at any given moment.
Launched during Apple’s annual Worldwide Developers Conference (WWDC) in Cupertino, Calif., the Apple Vision Pro is a wearable headset. The device will be capable of toggling between virtual reality (VR) and augmented reality (AR), which projects digital imagery while users can still see objects in the real world.
It can be used for immersive experiences in everything from work meetings and FaceTime to photos, movies and apps.
“Today marks the beginning of a new era for computing,” said Apple CEO Tim Cook.
The headset, which Apple says will be available in 2024, won’t be cheap, starting at $3,499 US, or about $4,700 Cdn.
Apple unveiled its first major new product category since the Apple Watch in 2015. The Vision Pro headset lets users blend augmented reality with everyday life, but its $4,700 Cdn price tag may be a tough sell.
“VR kind of resurfaces every 10 years or so as the big thing,” Alla Sheffer, a professor of computer science at the University of British Columbia whose research areas include virtual and augmented reality, told CBC News. “And then it goes away.”
The question on many people’s minds: is this time different?
What’s the difference between VR and AR?
To grasp the technology’s implications, it helps to understand the technology itself. Traditional virtual reality is a computer-generated environment. Typically, a user wears a head-mounted display or headset like ski goggles, Sheffer explained. But instead of looking through those goggles, users see a display.
“You only see the virtual content. You don’t see the outside world,” Sheffer said.
VR also includes motion-capture setups and software that responds to them: think, for example, of a virtual reality golf game where your hand movements are captured automatically and translated into the swing of a virtual golf club.
There are two types of augmented reality, Sheffer said: head-mounted display and cell phone. With head-mounted display AR, imagine you’re wearing the same ski goggles, but now they’re transparent. You can see what’s in front of you in the physical world, but you can also see what’s on the screen.
Cell phone AR, Sheffer explained, combines what you see on your phone’s camera with virtual elements. Imagine choosing a couch model on a retail website, and seeing it in your living room through your phone’s camera.
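For readers curious what that couch scenario looks like under the hood, here is a minimal, hypothetical sketch using Apple’s ARKit; the class name and couch dimensions are our own invention, not any retailer’s actual app. The phone’s camera feed fills the screen while the app anchors a virtual stand-in for the couch in the room.

```swift
import UIKit
import ARKit
import SceneKit

// Minimal cell phone AR sketch: the live camera feed is the backdrop,
// and a virtual "couch" (a plain box here) is placed in the real room.
class CouchPreviewViewController: UIViewController {
    let arView = ARSCNView()

    override func viewDidLoad() {
        super.viewDidLoad()
        arView.frame = view.bounds
        view.addSubview(arView)

        // Track the real world through the camera and look for the floor.
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = [.horizontal]
        arView.session.run(config)

        // Stand-in couch: a 1.8 m wide grey box, 1.5 m in front of the camera.
        let couch = SCNNode(geometry: SCNBox(width: 1.8, height: 0.8,
                                             length: 0.9, chamferRadius: 0.05))
        couch.geometry?.firstMaterial?.diffuse.contents = UIColor.systemGray
        couch.position = SCNVector3(0, -0.4, -1.5)
        arView.scene.rootNode.addChildNode(couch)
    }
}
```

A production app would place the model on a detected floor plane and let the user drag it around, but even this stripped-down version shows the core idea: real camera imagery and virtual geometry rendered in the same view.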
“You probably interact with AR a lot and don’t realize it,” said Bree McEwan, an associate professor at the Institute of Communication, Culture, Information and Technology at the University of Toronto, and the director of the McEwan Mediated Communication Lab.
Pokémon Go, Snapchat and TikTok filters, and even Google Maps already use AR, McEwan said.
What already exists in this sphere?
The Vision Pro combines both VR and AR in one device, McEwan and Sheffer explained. But Apple is far from the first company to venture into the virtual and augmented worlds.
There are a number of VR headsets already on the market, including Meta’s Oculus Quest 2 and Pro. Its Quest 3 is set to launch later this year, starting at $499 US or about $667 Cdn. That device will feature colour mixed reality, which combines augmented and virtual reality elements, according to CEO Mark Zuckerberg.
Meta’s Quest 2 and Quest Pro devices comprised nearly 80 per cent of the 8.8 million virtual reality headsets sold in 2022, according to an estimate by market research firm IDC. Still, Meta has struggled to sell its vision of an immersive “metaverse” of interconnected virtual worlds and expand the market for its devices beyond the niche of the gaming community.
A pilot project at Reddam House School in Berkshire, England, has students using VR headsets in the classroom to learn traditional subjects in a new way. Petting woolly mammoths, holding planets in their hands, and examining the human heart are just a few of the experiences students have in this future-facing take on education.
It’s also used for interpersonal skills and public speaking training, McEwan said. Education is another major opportunity, she added: in one of the classes she teaches, McEwan gives students headsets and they do five weeks of classes virtually, a model she started using during COVID instead of Zoom.
Screen-based AR is already used in several industries, such as warehousing and manufacturing, Sheffer said, where you can point your camera at an object and recognition software identifies it.
So, is the future virtual?
McEwan sees a potential future for headsets in the business sphere, and predicts more organizations may start providing them for meetings and training. And if people get comfortable using something in a business setting, that may bleed into the social environment, she said, noting that’s what happened with e-mail and intranet messaging systems.
But while there’s what she calls a “cultural imagination” for popping a device on your head and appearing in the metaverse, she said we’re not there yet. “The average person is probably not quite ready to jump into VR all of the time.”
Whether or not headsets will finally take off is what Sheffer calls “the billion-dollar question.” VR has seen surges of popularity over the last several decades, but people didn’t want to wear the headsets, she said.
“I think if anyone can make it, it’s Apple,” she continued. “If they can make the headset convenient, and make people want to wear it, then all of the sudden this can go places.”