Microsoft’s Panos Panay teases next gen AI-powered Windows 11 & Windows 12

Back when Microsoft first launched Windows 11 in 2021, Panos Panay, the company’s EVP and Chief Product Officer, said that Windows 11 was “the first chapter in the next era of Windows”. Now, more than a year later, we may finally be starting to understand what Microsoft meant by that.

At AMD’s CES 2023 keynote earlier this month, Panos Panay was invited onto the stage by the host, AMD CEO Dr. Lisa Su. The discussion was mainly about the AI engine inside AMD’s new Ryzen 7040 series chips and how it would help Microsoft usher in the next generation of AI-powered software.

He said:


AI is the defining technology of our time; it’s like nothing I have ever seen before. It’s transforming industries, it’s improving our daily lives in many ways – some of it you see, some of it you don’t see – and we are right now, right this moment, at an inflection point. This is where computing from the cloud to the edge is becoming more and more intelligent, more personal, and it’s all done by harnessing the power of AI.

[..]

… now AMD is also at the forefront of AI technology with the Ryzen 7040 series, alongside Windows 11. It is our next step in this journey together.

After that, the senior Microsoft exec also teased the next generation of Windows, which will have a lot more to do with AI. While AI is not exactly new to Windows, its integration is likely to scale up dramatically as the Redmond giant works out how to make that happen. Windows 12 could be deeply integrated with the cloud, since AI processing is very compute-intensive.

AI is going to reinvent how you do everything on Windows, quite literally. Like these large generative models – think language models, code-gen models, image models – these models are so powerful, so delightful, so useful, so personal. But they are also very compute-intensive, and so we haven’t been able to do this before. We have never seen these intense workloads at this scale before, and they’re right here. It’s going to need an operating system that blurs the line between cloud and edge, and that’s what we are doing right now.

A possible cloud-based future for Windows could also be good news for consumers when it comes to system requirements, which have been a hot topic of discussion ever since Windows 11 launched. If not, devices with dedicated AI processing hardware will likely become the norm. Perhaps a next-gen, AI-powered Windows 12 is the bigger master plan behind the recent reports of the company’s interest in investing further in OpenAI and integrating ChatGPT with Bing.





Reviews Of The New HomePod Reveal The Tech Media Has Work To Do In Appreciating Accessibility – Forbes

The advent of the second-generation HomePod brings with it yet another opportunity to acknowledge the smart speaker’s accessibility to people with disabilities. Besides ecosystem-centric amenities like Handoff, Apple supports a bevy of accessibility features on the device, including VoiceOver, Touch Accommodations, and much more. This is an important distinction to point out, as I’ve done in this space before. This column is precisely the forum for it.

It’s important to mention because, quite frankly, most reviewers fail to do so.

As a lifelong stutterer who has always felt digital assistants – and by extension, smart speakers – are exclusionary because of their voice-first interface paradigm, it disheartens me to see my peers in the reviewer racket continually undervalue the actual speech component of using these devices. It’s understandable; it’s difficult, if not downright impossible, to consider a perspective you cannot fully comprehend. Yet there is room for empathy – and really, empathy is ultimately what earnest DEI initiatives are meant to reflect – with regard to how privileged it is for the majority of journalists (and their readers) to effortlessly shout into the ether and have Alexa or Siri or the Google Assistant swiftly spring into action.


Look no further than the embargoed HomePod 2 reviews that dropped earlier this week ahead of the product’s general availability on Friday. Every single one of them, whether in print or on YouTube, focuses solely on sound quality. While that’s perfectly sensible, it’s cringeworthy that not one of them utters a single word about the speaker’s accessibility features or how verbally accessible Siri may be to someone with a speech delay. Again, expertise is hard, but empathy is not. Put another way, there are very real and very important characteristics of Apple’s new smart speaker that largely go ignored because it’s presumed (albeit rightly so, given how speech models are typically trained) that a person is able to competently communicate with the thing. The elephant in the room is that there is far more to the HomePod’s story. It’s counterintuitive to most, but it isn’t all about sound quality or smarts or computational audio or ecosystem.

Of course, the responsibility doesn’t rest on the tech press alone. Smart speaker makers – Apple, Amazon, Google, Sonos, and others – all have to do their part on a technical level so that using a HomePod is a more accessible experience for people with speech impairments. Back in early October, I reported on tech heavyweights Amazon, Apple, Google, Meta, and Microsoft coming together “in a way that would make Voltron blush” on an initiative with the University of Illinois to help make voice-centric products more accessible to people with speech disabilities. The project, called the Speech Accessibility Project, is described as “a new research initiative to make voice recognition technology more useful for people with a range of diverse speech patterns and disabilities.” The essential idea is that current speech models favor typical speech, which makes sense for the masses but critically leaves out those who speak with atypical speech patterns. It is therefore imperative for engineers to make the technology as inclusive as possible by feeding the artificial intelligence the most diverse dataset possible.

“There are millions of Americans who have speech differences or disabilities. Most of us interact with digital assistants fairly seamlessly, but for folks with less intelligible speech, there can be a barrier to access,” Clarion Mendes, a clinical professor in speech and hearing science and a speech-language pathologist, told me in an interview ahead of my report from October. “This initiative [the Speech Accessibility Project] lessens the digital divide for individuals with disabilities. Increasing access and breaking down barriers means improved quality of life and increased independence. As we embark on this project, the voices and needs of folks in the disability community will be paramount as they share their feedback.”

Astute readers will note what Mendes ultimately expresses: empathy!

It should be stressed that the thrust of this piece is not to throw my colleagues and friends under the bus or denigrate their work. They aren’t unfeeling people. The thrust here is simply that, as a stutterer, I feel extremely marginalized and underrepresented when I watch, say, MKBHD hurl rapid-fire commands at Siri or another assistant without trouble. By and large, the smart speaker category has long felt exclusionary to me for the speech issue alone. The uneasiness doesn’t go away just because Apple’s HomePod line sounds great and fits nicely with my use of HomeKit. These are issues Apple (and its contemporaries) must reckon with in the long term to create the most well-rounded digital assistant experience possible. Even a software tool like Siri Pause Time, a feature new to iOS 16 that lets users tell Siri how long to wait for a person to finish speaking before responding, is limited in its true effectiveness. The trouble is, it sidesteps the problem rather than meeting it at the source. It puts a band-aid on something that requires more intricate treatment.

All told, what the new HomePod reviews illustrate so well is that the technology media still has a ways to go – despite making big strides in recent times – in truly embracing accessibility as a core component of everyday coverage. The expectation shouldn’t be that mainstream reviewers suddenly become experts in assistive technologies in order to assess products; that’s unrealistic. What is highly realistic, however, is to expect editors and writers to seek out the knowledge they don’t have. It’s conceptually (and practically) no different from an outlet investing in other social justice reporting – coverage of the AAPI and Black communities, for example – which is especially important nowadays given recent events.

If reviewers can endlessly lament the perceived idiocy of Siri, it isn’t a stretch to also acknowledge Siri’s lack of grace in parsing atypical speech. Moreover, it shouldn’t be akin to pulling teeth to ask newsrooms to regularly run more nuanced takes on products alongside the broader overviews. The disability viewpoint is not esoteric; it matters. It’s long past time disability inclusion (and disabled reporters) figured prominently at the tech desks of newsrooms the world over. Accessibility deserves a seat at the table too.






Samsung Galaxy Unpacked 2023: all the news and updates from the event – The Verge

This year’s first Samsung Galaxy Unpacked event will take place in front of an audience in San Francisco’s Masonic Auditorium, marking the first in-person event for Samsung in three years. It kicks off on Wednesday, February 1st, at 1PM ET / 10AM PT, and we’re expecting some exciting announcements.

There have already been tons of rumors (and plenty of leaks) about its Galaxy S23 phones, which could cost a bit more than their S22 predecessors. Other leaks indicate that the flagship S23 Ultra could come with an upgraded 200-megapixel camera along with a 6.8-inch OLED display.

Unlike at recent Galaxy Unpacked events, Samsung’s product reservation page suggests that the company is also planning to launch several new laptops rather than new earbuds or smartwatches. We could see up to five variations of its brand-new Galaxy Book 3 laptops, featuring thinner and lighter OLED panels with sensors embedded directly into the touchscreens.


If you’re looking to stay up to date on this year’s Galaxy Unpacked, The Verge will keep you posted on all the news and product announcements from the event.

Feb 1, 2023, 2:00 PM UTC – Umar Shakir

These new 45W and 25W GaN fast chargers are compatible with Samsung’s Super Fast Charging 2.0 tech to quickly fill up the batteries in Galaxy phones.








Why Live Casinos are Taking the Canadian Gaming Community by Storm

Were you aware that there are currently more than 2,100 online casinos that cater solely to Canadian players? Ever since the dawn of high-speed Internet, a growing number of fans have been drawn to these platforms thanks to their flexibility and decidedly user-friendly nature. Whether you prefer slots, poker or a quick game of bingo, there are numerous options to explore.

However, it is also wise to take a look at some of the latest trends. Perhaps the most interesting involves the notion of live casinos. What do these portals offer, what makes them different from traditional platforms, and why might live dealer games represent the next wave of digital gaming?

The Basic Concept of Live Online Casinos

The main principle behind any live casino is the ability to interact with a human, normally in the form of a dealer. Rather than relying solely upon random number generation (RNG), games are hosted by a human dealer who is present via a live streaming portal. This helps to lend what some have called a rather “organic” feel to the games themselves.

For instance, a live casino in Ontario may offer players the ability to take part in a game of virtual poker. They will be competing against other members of the same table while taking careful note of which cards are dealt. In many ways, this level of interaction closely mirrors the experience of a physical gaming establishment.


What Games Can Be Accessed?

Now that we have taken a quick look at the fundamental principles of live casinos, what types of games can users play? The answer partly depends on the portal itself as well as the software technology it employs. Still, live dealer games can be segmented into a handful of general categories, including:

  • Table games such as blackjack and poker
  • Bingo
  • Slots
  • “Combination” games such as poker that leads into a final round of jackpot slots

Those who wish to learn more should navigate to the site in question and peruse the types of live games that are offered. It could also be wise to contact a representative to address any additional questions.

Are There Any Possible Downsides?

Live casinos are certainly set to make their presence known throughout the nation. Still, it is wise to point out a few potential obstacles that may need to be overcome. One possible issue is the relatively limited number of games compared to standard online platforms. It can also be difficult to access certain competitions due to a sudden influx of players. Finally, live streaming requires a fast and stable high-speed Internet connection, which may present a problem for those who live in the more remote regions of Canada.

Having said this, live casinos are already enjoyed by countless Canadian players. It is a foregone conclusion that they will become even more popular in the near future.

 
