
Tech

Reviews Of The New HomePod Reveal The Tech Media Has Work To Do In Appreciating Accessibility


The advent of the second generation HomePod brings with it yet another opportunity to acknowledge the smart speaker’s accessibility to people with disabilities. Besides ecosystem-centric amenities like Handoff, Apple supports a bevy of accessibility features in the device; they include VoiceOver, Touch Accommodations, and much more. This is an important distinction to point out, as I’ve done in this space before. This column is precisely the forum for it.

It’s important to mention because, quite frankly, most reviewers fail to do so.

As a lifelong stutterer who has always felt digital assistants, and by extension smart speakers, are exclusionary due to their voice-first interface paradigm, it disheartens me to see my peers in the reviewer racket continually undervalue the actual speech component of using these devices. It's understandable: it's difficult, if not downright impossible, to consider a perspective you cannot fully comprehend. Yet there is room for empathy (and really, empathy is ultimately what earnest DEI initiatives are meant to reflect) about how privileged the majority of journalists (and their readers) are to effortlessly shout into the ether and have Alexa or Siri or the Google Assistant swiftly spring into action.

Look no further than the embargoed HomePod 2 reviews that dropped earlier this week ahead of the product's general availability starting on Friday. Every single one of them, whether in print or on YouTube, focuses solely on sound quality. While that's perfectly sensible, it's cringeworthy to watch reviewer after reviewer utter not a single word about the speaker's accessibility features or how verbally accessible Siri may be to someone with a speech delay. Again, expertise is hard, but empathy is not. Put another way, there are very real and very important characteristics of Apple's new smart speaker that largely go ignored because it's presumed (rightly so, given how speech models are typically trained) that a person is able to competently communicate with the thing. The elephant in the room is that there's far more to the HomePod's story. Counterintuitive as it may be to most, it isn't all about sound quality or smarts or computational audio or ecosystem.

Of course, the responsibility rests not on the tech press alone. Smart speaker makers like Apple, Amazon, Google, and Sonos all have to do their part on a technical level so that using a HomePod is a more accessible experience for those with speech impairments. Back in early October, I reported on tech heavyweights Amazon, Apple, Google, Meta, and Microsoft coming together "in a way that would make Voltron blush" on an initiative with the University of Illinois to help make voice-centric products more accessible to people with speech disabilities. The project, called the Speech Accessibility Project, is described as "a new research initiative to make voice recognition technology more useful for people with a range of diverse speech patterns and disabilities." The essential idea is that current speech models favor typical speech, which makes sense for the masses but critically leaves out those who speak with atypical speech patterns. Thus, it's imperative for engineers to make the technology as inclusive as possible by feeding the underlying models the most diverse dataset possible.
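The skew the project targets can be measured directly: compare a recognizer's word error rate (WER) across groups of speakers. The sketch below is purely illustrative, using hypothetical transcripts rather than real model output, but it shows how a training set that favors typical speech surfaces as an unequal error rate.

```python
from collections import defaultdict

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance divided by reference length."""
    r, h = reference.split(), hypothesis.split()
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

# Hypothetical transcripts: (speaker group, what was said, what the model heard).
samples = [
    ("typical speech", "turn on the kitchen lights", "turn on the kitchen lights"),
    ("stuttered speech", "turn on the kitchen lights", "turn on on the kitten lights"),
]

by_group = defaultdict(list)
for group, ref, hyp in samples:
    by_group[group].append(wer(ref, hyp))

for group, scores in by_group.items():
    print(f"{group}: mean WER {sum(scores) / len(scores):.2f}")
```

A per-group breakdown like this, rather than a single aggregate score, is what reveals whether a model serves atypical speakers as well as typical ones.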

“There are millions of Americans who have speech differences or disabilities. Most of us interact with digital assistants fairly seamlessly, but for folks with less intelligible speech, there can be a barrier to access,” Clarion Mendes, a clinical professor in speech and hearing science and a speech-language pathologist, told me in an interview ahead of my report from October. “This initiative [the Speech Accessibility Project] lessens the digital divide for individuals with disabilities. Increasing access and breaking down barriers means improved quality of life and increased independence. As we embark on this project, the voices and needs of folks in the disability community will be paramount as they share their feedback.”

Astute readers will note what Mendes ultimately expresses: empathy!

It should be stressed that the thrust of this piece is not to throw my colleagues and friends under the bus or denigrate their work. They aren't unfeeling people. The thrust is simply that, as a stutterer, I feel extremely marginalized and underrepresented when I watch, say, MKBHD hurl rapid-fire commands at Siri or another assistant without trouble. By and large, the smart speaker category has long felt exclusionary to me for the speech issue alone. The uneasiness doesn't go away just because Apple's HomePod line sounds great and fits nicely with my use of HomeKit. These are issues Apple (and its contemporaries) must reckon with in the long term to create the most well-rounded digital assistant experience possible. Software tools like Siri Pause Time, a feature new to iOS 16 that lets users tell Siri how long to wait after a person stops speaking before responding, are limited in their true effectiveness. They sidestep the problem rather than meeting it at the source, putting a band-aid on something that requires more intricate treatment.
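At bottom, a pause-time setting is one tunable number: how long a stretch of silence the assistant tolerates before deciding the speaker is finished. The toy sketch below (hypothetical frame-level voice-activity flags, not Apple's actual implementation) shows why that single knob matters to a stutterer, and also why it's a band-aid: a longer window delays every response rather than helping the model understand blocked or repeated speech.

```python
def utterance_end_index(frames, pause_threshold_frames):
    """Given per-frame voice-activity flags (True = speech), return the index
    after which a listener with this silence tolerance stops waiting: the
    first point where silence has lasted pause_threshold_frames in a row."""
    silence_run = 0
    for i, is_speech in enumerate(frames):
        if is_speech:
            silence_run = 0
        else:
            silence_run += 1
            if silence_run >= pause_threshold_frames:
                return i + 1
    return len(frames)

# A speaker who blocks mid-sentence: speech, a long pause, then more speech.
frames = [True] * 10 + [False] * 8 + [True] * 12

print(utterance_end_index(frames, 5))   # -> 15: the short window cuts the speaker off mid-pause
print(utterance_end_index(frames, 12))  # -> 30: the longer window lets them finish the sentence
```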

All told, what the new HomePod reviews illustrate so well is that the technology media still has a ways to go, despite making big strides in recent times, in truly embracing accessibility as a core component of everyday coverage. The expectation shouldn't be that mainstream reviewers suddenly become experts in assistive technologies; that's unrealistic. What is highly realistic, however, is to expect editors and writers to seek out the knowledge they don't have. It's conceptually (and practically) no different from an outlet investing in other social justice reporting, on the AAPI and Black communities, for example, which feels especially important given recent events.

If reviewers can endlessly lament the perceived idiocy of Siri, it isn't a stretch to acknowledge the adjacent problem of Siri's gracelessness in parsing atypical speech. Moreover, it shouldn't be akin to pulling teeth to ask newsrooms to regularly run more nuanced takes on products alongside the general overviews. The disability viewpoint is not esoteric; it matters. It's long past time disability inclusion (and disabled reporters) figured prominently at the tech desks of newsrooms the world over. Accessibility deserves a seat at the table too.


Tech

Google Unveils AI-Powered Pixel 9 Lineup Ahead of Apple’s iPhone 16 Release


Tech News in Canada

Google has launched its next generation of Pixel phones, setting the stage for a head-to-head competition with Apple as both tech giants aim to integrate more advanced artificial intelligence (AI) features into their flagship devices. The unveiling took place near Google’s Mountain View headquarters, marking an early debut for the Pixel 9 lineup, which is designed to showcase the latest advancements in AI technology.

The Pixel 9 series, although a minor player in global smartphone sales, is a crucial platform for Google to demonstrate the cutting-edge capabilities of its Android operating system. With AI at the core of its strategy, Google is positioning the Pixel 9 phones as vessels for the transformative potential of AI, a trend that is expected to revolutionize the way people interact with technology.

Rick Osterloh, Google’s senior vice president overseeing the Pixel phones, emphasized the company’s commitment to AI, stating, “We are obsessed with the idea that AI can make life easier and more productive for people.” This echoes the narrative Apple is likely to push when it unveils its iPhone 16, which is also expected to feature advanced AI capabilities.

The Pixel 9 lineup will be the first to fully integrate Google’s Gemini AI technology, designed to enhance user experience through more natural, conversational interactions. The Gemini assistant, which features 10 different human-like voices, can perform a wide array of tasks, particularly if users allow access to their emails and documents.

In an on-stage demonstration, the Gemini assistant showcased its ability to generate creative ideas and even analyze images, although it did experience some hiccups when asked to identify a concert poster for singer Sabrina Carpenter.

To support these AI-driven features, Google has equipped the Pixel 9 with a special chip that enables many AI processes to be handled directly on the device. This not only improves performance but also enhances user privacy and security by reducing the need to send data to remote servers.

Google’s aggressive push into AI with the Pixel 9 comes as Apple prepares to unveil its iPhone 16, which is expected to feature its own AI advancements. However, Google’s decision to offer a one-year free subscription to its advanced Gemini Assistant, valued at $240, may pressure Apple to reconsider any plans to charge for its AI services.

The standard Pixel 9 will be priced at $800, a $100 increase from last year, while the Pixel 9 Pro will range between $1,000 and $1,100, depending on the model. Google also announced the next iteration of its foldable Pixel phone, priced at $1,800.

In addition to the new Pixel phones, Google also revealed updates to its Pixel Watch and wireless earbuds, directly challenging Apple’s dominance in the wearable tech market. These products, like the Pixel 9, are designed to integrate seamlessly with Google’s AI-driven ecosystem.

Google’s event took place against the backdrop of a significant legal challenge, with a judge recently ruling that its search engine constitutes an illegal monopoly. This ruling could lead to further court proceedings that may force Google to make significant changes to its business practices, potentially impacting its Android software or other key components of its $2 trillion empire.

Despite these legal hurdles, Google is pressing forward with its vision of an AI-powered future, using its latest devices to showcase what it believes will be the next big leap in technology. As the battle for AI supremacy heats up, consumers can expect both Google and Apple to push the boundaries of what their devices can do, making the choice between them more compelling than ever.


News

Microsoft Outage Hits Payment Processors


Canada News Social Media

When major payment processing systems have problems, the issues impact many critical systems that society depends on. In this article, we’ll explain the cause of the Microsoft outage and discuss the impact computer networking issues had on Canada. We’ll also examine whether or not Microsoft was at fault and what businesses can do to prevent further outages.

What Happened With the Microsoft Outage?

The outage with Microsoft’s Azure payment processor resulted from a buggy security update from an outside company, CrowdStrike. CrowdStrike offers information technology security services for many Microsoft Windows computers. The company’s software developers sent a new update out, but instead of patching up minor issues with the existing software, the code within conflicted with Windows and prevented computers from booting up. Users expecting to start their computers for a typical day were instead faced with the dreaded “Blue Screen of Death” error message.

So how did a buggy update become a payment processing issue? Many computers running payment software, along with many other kinds of software used by airlines, banks, retailers, and other essential services, couldn't start and were unable to let payments through. That's a catastrophic problem for companies heavily reliant on the speed and ease of electronic transactions.

In Canada, the outage impacted critical computer systems for air travel. Flights couldn’t be paid for and booked, which caused major problems for customers unable to make transactions while flights remained grounded. Travellers stuck waiting for flights to take off made their way over to the airports’ Starbucks and other vendors, only to discover unusually long lines due to payment issues. Even online gamblers looking to take their minds off the situation couldn’t take full advantage of one of the fastest payment options out there because of the outage.

Aside from payments, hospitals for major health systems had to use paper to complete important tasks like ordering lab work and getting meals to patients. Emergency dispatch lines were temporarily unable to function correctly while their computer systems were down.

How Was the Outage Fixed?

Thankfully, CrowdStrike fixed the problem on its end quickly, largely by pushing out corrected code that affected machines could pick up after an additional reboot. Unfortunately, for some business and private customers, rebooting wasn't enough; command-line-level adjustments were needed before the operating system would run correctly.

The Good and Bad of Outages

First, we're thankful that the outage was not caused by hackers accessing and stealing a mountain of personal data. A recent outage at automotive software provider CDK went on for much longer and ended much worse: the company reportedly paid an undisclosed sum north of $20 million to get data back and systems restored.

Separately, Microsoft is reported to have experienced an outage of its own, and many information technology professionals blame Microsoft in part for the fallout, since affected systems tried to fix the problem by rebooting over and over again, while some PCs instead had to prompt users to make a change manually. Unfortunately, any computer that required manual intervention took longer to recover, as a knowledgeable person had to access each affected machine. In some cases, between several hours of backlogged tasks and slow recovery processes, businesses took days, not hours, to get back online.

The outage brings up another major point for the cybersecurity and computer industries. CrowdStrike and Microsoft are both big companies in their respective fields. As a result, the effects of bad code spread much further than they would have if more competitors were making security products, or if more software companies were making operating systems like Windows. While only about 8 million computers were believed to be affected out of a much larger global network, those are essential computers for worldwide communication and payment processing. Perhaps companies should be putting their eggs in more than one basket.

The testing methods for the outage are unclear—did CrowdStrike test the routine software update enough to detect the potential for a major outage? Apparently not.

What Should Businesses Do Next?

Software like Microsoft Azure's payment systems comes from what information technology professionals call "the cloud": it is managed remotely over the internet, meaning the computer that runs the system is not physically present at the location. Unfortunately, this also means that an issue with the internet can take critical systems out of service.

Businesses ranging from major airlines and banks to mom-and-pop stores would be well served by backup systems at their own locations. These don't have to be as primitive as the old-fashioned carbon-copy credit card imprinter; there are options with consistent service that don't all rely on the same networks.
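In software terms, that redundancy is often a simple failover chain: try the primary processor, and if it is unreachable, fall back to a secondary route so the sale still completes. The processors below are hypothetical stand-ins, not real payment APIs; this is only a sketch of the pattern.

```python
class ProcessorDown(Exception):
    """Raised when a payment route cannot be reached."""

def charge_with_fallback(amount_cents, processors):
    """Try each (name, charge_fn) pair in order and return the first success,
    so an outage at the primary route doesn't stop the transaction."""
    failures = []
    for name, charge in processors:
        try:
            return name, charge(amount_cents)
        except ProcessorDown as exc:
            failures.append((name, str(exc)))
    raise RuntimeError(f"all payment routes failed: {failures}")

# Hypothetical routes: the cloud primary is down, the on-site backup is up.
def cloud_primary(amount_cents):
    raise ProcessorDown("cloud endpoint unreachable")

def local_backup(amount_cents):
    return {"amount_cents": amount_cents, "status": "approved"}

route, receipt = charge_with_fallback(
    1999, [("primary", cloud_primary), ("backup", local_backup)]
)
print(route, receipt["status"])  # -> backup approved
```

The key design choice is that the fallback route should not share the dependency that took down the primary, whether that's a network link, a cloud region, or, as in this outage, a single security vendor's update channel.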

Conclusion

There were certainly challenging moments for Canadian businesses and emergency services during the CrowdStrike and Microsoft outage. As they scrambled to understand the problem and waited, albeit briefly, for issues to resolve, many companies learned the importance of having local and reliable backup for their computer systems.


Tech

New photos reveal more details about Google’s Pixel 9 Pro Fold



Google’s secret new line of Pixel 9 phones isn’t that big of a secret anymore. Taiwan’s National Communications Commission (NCC) released new photos of the phones including the Pixel 9 Pro Fold from almost every conceivable angle.

Android Authority found the photos in the NCC archives and uploaded galleries of each of the four phones including the Pixel 9, 9 Pro, 9 Pro XL and 9 Pro Fold. They reveal some interesting details about the new Pixel phones.

The charging rates will be a little faster than the last generation of Pixel phones: Taiwanese authorities measured 24.12W for the base model, 25.20W for the Pro and 32.67W for the 9 Pro XL. The Pixel 9 Pro Fold, however, was the slowest of all of them at 20.25W. These numbers don’t often match up perfectly with the advertised ratings, so expect Google to be promoting higher numbers at its event.

Speaking of chargers, it looks like Google needed a bigger one to power its new phones. Photos included in the NCC leak show each phone will come with a wall charger of up to around 45W, depending on which model you purchase. The charger's plug has also moved from the middle to the top of the brick.

The Google Pixel 9 Pro Fold can fully unfold.
NCC/Android Authority

The latest photo dump also shows the 9 Pro Fold unfolded for the first time. Google has moved the selfie camera to the inside screen for a wider field of view. The 9 Pro Fold also has a slimmer top and bottom, a reduced fold crease on the display, and a full 180-degree unfolding angle, making a screen that measures just over 250 mm, or just under 10 inches.

These photos are the latest in a very long list of leaks of Google Pixel 9 photos. The last Pixel 9 leak came down yesterday showing two prototype models of the base and XL models. Google might look into buying a new combination lock for the high school locker where they apparently keep all their unreleased gear.

 
