
Amazon Gives Its Take On An ‘Ambient’ Alexa Experience – Forbes



Yesterday, Amazon hosted its third annual Alexa Live event for its 900,000+ developers and countless Alexa device users. Last year’s Alexa Live introduced many big changes that charted the future of the Alexa ecosystem. The event brings together developers, device makers, startups, entrepreneurs, press, and analysts to see what Amazon has planned next for the Alexa community.

One statistic Amazon shared that caught my attention is that one in four Alexa Smart Home interactions are now initiated by Alexa rather than by the customer. This statistic reveals where voice is heading, and I believe Amazon hit it right on the money with its vision statement. Jeff Blankenburg, Chief Technology Evangelist, stated Alexa’s vision: “to be an ambient assistant that is proactive, personal, and predictable, everywhere customers want her to be.” Alexa is meant to work ambiently in the background, assisting customers naturally rather than becoming the next distraction. The unique challenge is to route and guide information to the user from end to end without compromising that ambience. To do this, Amazon says it is working to make Alexa’s ambient experience ubiquitous, multimodal, and smarter.

Making Alexa ubiquitous

Amazon announced new interactive, customer-engaging Alexa Presentation Language (APL) features: APL Widgets and Featured Skill Cards. APL Widgets let customers interact with skill content on the home screen through glanceable, self-updating views. Featured Skill Cards let developers place their skills on the Echo Show home screen alongside the content already shown there. These are great features that enhance the multimodal experience for users and developers alike. Users should be able to engage with the skills they use most and discover new skills in a seamless interaction on the home screen.
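Widgets build on APL, which developers already use to put visuals on Echo Show screens. As a rough illustration only (Amazon’s widget packaging adds its own metadata not sketched here), this is a minimal Python skill handler that renders a simple APL document with the public ask-sdk libraries; the document content and token name are made up:

    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.handler_input import HandlerInput
    from ask_sdk_core.utils import is_request_type
    from ask_sdk_model import Response
    from ask_sdk_model.interfaces.alexa.presentation.apl import (
        RenderDocumentDirective,
    )

    # A minimal APL document: a single text block in the main template.
    APL_DOC = {
        "type": "APL",
        "version": "1.8",
        "mainTemplate": {
            "items": [{"type": "Text", "text": "Hello from my skill"}],
        },
    }

    class LaunchRequestHandler(AbstractRequestHandler):
        """On skill launch, speak a greeting and render the APL document."""

        def can_handle(self, handler_input: HandlerInput) -> bool:
            return is_request_type("LaunchRequest")(handler_input)

        def handle(self, handler_input: HandlerInput) -> Response:
            return (
                handler_input.response_builder
                .speak("Welcome back.")
                .add_directive(
                    RenderDocumentDirective(token="helloToken", document=APL_DOC)
                )
                .response
            )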

Amazon announced its Name Free Interactions (NFI) Toolkit at last year’s Alexa Live event, and this year it has made significant improvements to it. The toolkit helps developers get their skills in front of users by surfacing a skill based on a user’s request, even when the user doesn’t invoke it by name. Amazon says the toolkit has boosted traffic to useful skills, doubling it in some cases.

The NFI Toolkit has a new feature that lets skills respond to Alexa’s popular discovery-oriented utterances like “Alexa, tell me a story” or “Alexa, I need a workout.” It also has a new personalized skill suggestion feature that points users back to the skills they find most helpful. An example Amazon gave was a customer asking, “Alexa, how did the Nasdaq do today?” to which Alexa responds, “You’ve previously used the CNBC skill. Would you like to use it again?” I highlight this example because it brings a personal and ubiquitous experience to skills without being overwhelming.
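Under the hood, name-free routing has a publicly documented counterpart in the Alexa Skills Kit: the CanFulfillIntentRequest interface, where a skill reports whether it could serve a request that didn’t invoke it by name. A minimal sketch in Python with the public ask-sdk libraries, assuming a hypothetical TellStoryIntent:

    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.handler_input import HandlerInput
    from ask_sdk_core.utils import is_request_type
    from ask_sdk_model import Response
    from ask_sdk_model.canfulfill import CanFulfillIntent, CanFulfillIntentValues

    class CanFulfillRequestHandler(AbstractRequestHandler):
        """Tell Alexa whether this skill can serve a name-free request."""

        def can_handle(self, handler_input: HandlerInput) -> bool:
            return is_request_type("CanFulfillIntentRequest")(handler_input)

        def handle(self, handler_input: HandlerInput) -> Response:
            intent = handler_input.request_envelope.request.intent
            # TellStoryIntent is a hypothetical intent name for this sketch.
            verdict = (
                CanFulfillIntentValues.YES
                if intent.name == "TellStoryIntent"
                else CanFulfillIntentValues.NO
            )
            handler_input.response_builder.set_can_fulfill_intent(
                CanFulfillIntent(can_fulfill=verdict)
            )
            return handler_input.response_builder.response

Alexa weighs these answers, along with other signals, when deciding which skill should take an utterance like “tell me a story.”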

Amazon is also extending its Name Free Interactions feature to support further discovery of skills in interactions that can involve multiple skills. I think this feature is another great way to enhance customer interaction and increase discoverability.

Another interactive feature Amazon added to Alexa is the Spotlight feature on Amazon Music. Amazon says artists can now connect directly with fans by uploading messages that promote new music. Amazon also created Interactive Media Skill Components and Song Request Skill Components that shorten interaction times with radio, podcast, and music providers and give users extra modes of interaction. Users will either love or hate these features, given that most primarily want to listen to music, and music isn’t necessarily an interactive activity.

Making Alexa multimodal

Amazon announced new Food Skills APIs that let developers quickly create food delivery and pickup experiences. One of the toughest parts of going out to eat is deciding on a place. Local food offers and suggestions from Alexa should make that choice much easier for users and, in some cases, make it easier for restaurants, stores, and delivery services to get their products and services out.

Amazon also has two new features that go hand in hand: Event-Based Triggers and Proactive Suggestions. Developers can build proactive experiences that trigger skills when an event or activity happens. Alexa also has improved routines with Custom Tasks, which let users run customized routines inside of skills. Amazon also included a feature that lets users send experiences that start on an Alexa device to a connected smartphone. These features open up the multimodal capabilities of Alexa, and I think users are going to find Alexa to be a crucial part of their days.

Alexa is also opening its Device Discovery feature to include additional Alexa-compatible devices connected to the same network, letting device makers integrate Device Discovery into other smart home devices to create a connected home. Amazon has also upgraded Alexa Guard to connect to smart safety devices around the home, like smoke, carbon monoxide, and water leak detectors, which can then send notifications.

Making Alexa smarter

Amazon says engagement with skills has doubled since it made Alexa Conversations generally available. It announced that Alexa Conversations is expanding to a public beta in German and all English locales, and to a developer preview in Japan. It also announced Alexa Skill Components, which help developers build skills faster by plugging foundational skill code into existing voice models and code libraries.
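To see what Skill Components abstract away, it helps to look at the foundational code developers write by hand today: one request handler per intent, registered with a skill builder. A minimal sketch using the public ask-sdk libraries (the intent name and response text are hypothetical):

    from ask_sdk_core.skill_builder import SkillBuilder
    from ask_sdk_core.dispatch_components import AbstractRequestHandler
    from ask_sdk_core.handler_input import HandlerInput
    from ask_sdk_core.utils import is_intent_name
    from ask_sdk_model import Response

    class WorkoutIntentHandler(AbstractRequestHandler):
        """Handle the (hypothetical) WorkoutIntent from the voice model."""

        def can_handle(self, handler_input: HandlerInput) -> bool:
            return is_intent_name("WorkoutIntent")(handler_input)

        def handle(self, handler_input: HandlerInput) -> Response:
            return (
                handler_input.response_builder
                .speak("Starting your ten minute workout.")
                .response
            )

    # Register the handler; handler() becomes the AWS Lambda entry point.
    sb = SkillBuilder()
    sb.add_request_handler(WorkoutIntentHandler())
    handler = sb.lambda_handler()

Skill Components aim to package chunks like this, the handler plus the matching voice-model pieces, so developers don’t rebuild them from scratch.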

Amazon is also making it easier for users to connect their accounts to a product or service skill, or to sign up by voice, using Voice Forward Account Linking and Voice Forward Consent. Amazon also said it has upgraded its Alexa Skill Design Guide, which codifies lessons learned from Amazon’s developers and the broader skill-building community.

Amazon has included other features that make it much easier to create skills and bring services and products into the Alexa ecosystem:

  • Alexa Entities lets skills retrieve information from Alexa’s knowledge graph.
  • Customized Pronunciations lets developers add custom pronunciations to skill models.
  • Sample Utterance Recommendation Engine uses grammar induction, sequence-to-sequence transformers, and data filtering to recommend utterances for a developer’s skills.
  • Skill A/B Testing lets developers perform A/B tests and make data-driven launch decisions (a generic bucketing sketch follows this list).
  • Service and Test-Generation Tool helps developers test skill capabilities through consolidated batch testing.
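Amazon didn’t detail the A/B service’s API at the event, so what follows is a neutral illustration of the underlying idea only, not the managed Skill A/B Testing feature: a common pattern that deterministically hashes a user ID into a stable control or treatment bucket, so each user always sees the same variant.

    import hashlib

    def ab_bucket(user_id: str, experiment: str, treatment_share: float = 0.5) -> str:
        """Deterministically assign a user to 'control' or 'treatment'.

        Hashing experiment + user_id yields a stable, roughly uniform
        fraction in [0, 1); the same user always lands in the same bucket.
        """
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        fraction = int(digest[:8], 16) / 0x100000000  # first 32 bits -> [0, 1)
        return "treatment" if fraction < treatment_share else "control"

    # Example: bucket an anonymized Alexa user ID for a prompt experiment.
    print(ab_bucket("amzn1.ask.account.EXAMPLE", "new-welcome-prompt"))

In a real skill, the developer would log the bucket with each interaction and compare engagement metrics before deciding on a full launch.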

What’s great about these new features is that Amazon understands it does not have to do all the work of making Alexa smart itself. It only needs to give developers the tools and the opportunity to implement smart interactions and user experiences, and I think these releases do exactly that.

Wrapping up

Ambient computing is one of the toughest things to get right, but I believe it is the most valuable in the long run. It could take another five to ten years of work to accomplish on a global scale.

Amazon’s Alexa Live Event somehow brought more to the table than last year’s event. A large portion of creating an ambient experience that is ubiquitous, multimodal, and smart is in the hands of developers, device makers, entrepreneurs, and the Alexa community. To create an ambient experience, Amazon must create the tools and opportunities for these partners to do their part. 

Amazon created seamless interactions between skills and users with Featured Skill Cards and APL Widgets. It is giving skills more opportunity to be interactive and discoverable with the NFI Toolkit. It is making interactions between users and Alexa a bigger part of people’s days with the Food Skills APIs, Event-Based Triggers, and Proactive Suggestions. And it is making skill building easier and more accessible for developers, something the Alexa ecosystem, from end to end, can appreciate.

Ambient computing is the “win,” and based on what I saw at Alexa Live, Amazon is getting us closer to this reality. It’s a two-horse race with Google, and it appears Amazon currently holds the lead.

Note: Moor Insights & Strategy co-op Jacob Freyman contributed to this article. 

Moor Insights & Strategy, like all research and analyst firms, provides or has provided paid research, analysis, advising, or consulting to many high-tech companies in the industry, including 8×8, Advanced Micro Devices, Amazon, Applied Micro, ARM, Aruba Networks, AT&T, AWS, A-10 Strategies, Bitfusion, Blaize, Box, Broadcom, Calix, Cisco Systems, Clear Software, Cloudera, Clumio, Cognitive Systems, CompuCom, Dell, Dell EMC, Dell Technologies, Diablo Technologies, Digital Optics, Dreamchain, Echelon, Ericsson, Extreme Networks, Flex, Foxconn, Frame (now VMware), Fujitsu, Gen Z Consortium, Glue Networks, GlobalFoundries, Google (Nest-Revolve), Google Cloud, HP Inc., Hewlett Packard Enterprise, Honeywell, Huawei Technologies, IBM, Ion VR, Inseego, Infosys, Intel, Interdigital, Jabil Circuit, Konica Minolta, Lattice Semiconductor, Lenovo, Linux Foundation, MapBox, Marvell, Mavenir, Marseille Inc, Mayfair Equity, Meraki (Cisco), Mesophere, Microsoft, Mojo Networks, National Instruments, NetApp, Nightwatch, NOKIA (Alcatel-Lucent), Nortek, Novumind, NVIDIA, Nuvia, ON Semiconductor, ONUG, OpenStack Foundation, Oracle, Poly, Panasas, Peraso, Pexip, Pixelworks, Plume Design, Poly, Portworx, Pure Storage, Qualcomm, Rackspace, Rambus, Rayvolt E-Bikes, Red Hat, Residio, Samsung Electronics, SAP, SAS, Scale Computing, Schneider Electric, Silver Peak, SONY, Springpath, Spirent, Splunk, Sprint, Stratus Technologies, Symantec, Synaptics, Syniverse, Synopsys, Tanium, TE Connectivity, TensTorrent, Tobii Technology, T-Mobile, Twitter, Unity Technologies, UiPath, Verizon Communications, Vidyo, VMware, Wave Computing, Wellsmith, Xilinx, Zebra, Zededa, and Zoho which may be cited in blogs and research.


Cat simulator 'Stray' heads to PlayStation and PC in early 2022 – Engadget



The last time we saw Stray was in the form of a cinematic trailer Sony shared in 2020 that highlighted the game’s futuristic neon-soaked setting and adorable feline protagonist. At the time, we didn’t get to see the game in action, a fact that Annapurna Interactive has now remedied. The publisher shared a slice of gameplay footage from the title during its recent showcase and said it would release Stray sometime in early 2022.

In the opening moments of Stray, our feline protagonist finds himself injured and separated from his family. Gameplay involves using his physical abilities as a cat to navigate the environment and solve puzzles. In the time-honored tradition of duos like Ratchet and Clank, partway through the adventure you’ll meet a drone named B-12, which will allow you to converse with the city’s other robotic inhabitants and interact with certain objects in the environment. The cat has a playful side to his personality, and you can do things like scratch furniture, interact with vending machines and rub up against the legs of the robots you meet. Good stuff.

When Stray comes out next year, it will be available on PlayStation 4, PS5 and PC. Developer BlueTwelve Studio promised to show off more of the game before then.



Sony’s new PS5 beta update also fixes one of its silliest flaws – The Verge



The first major system update for Sony’s PlayStation 5 is arriving in beta form today, finally letting you expand the console’s 667GB of usable storage by adding your own PCIe Gen 4 SSD, as well as introducing new UI options and expanded 3D Audio support. But the full changelog also includes a few features that Sony didn’t highlight to the press — including a way to easily update your DualSense controller if you press the wrong button.

You see, the PS5 currently has a very silly flaw: the only time you can update your controller is when you boot the console. And if you say no or accidentally press the O button instead of X, you can’t trigger that update until 24 hours have passed (or you tweak your PS5’s internal clock to cheat it).

But in Beta 2.0, there’s now a dedicated menu for that under Settings > Accessories > Controllers, called Wireless Controller Device Software.

You’ll still see controller update prompts when you launch the console, too — and hitting the circle button will still instantly dismiss them.

The beta also eases another of our UI frustrations: turning off the console. It’s still a mystery why Sony switched from letting you long-press the PS button to requiring extra taps, but at least now you can change how many taps it takes. Pressing the hamburger / start button in the PS5’s quick actions menu now lets you drag any of the actions (including the PS5’s digital power button) to a different position in that menu.

Separately, did you know the PS5 lets you set up all kinds of parental controls for your kid on what they can play, watch, and do, and it lets you remotely approve their requests over the web? I didn’t realize that, and the beta update now lets you see and respond to those asks through the latest version of the mobile PlayStation App, not just via email.

Frankly, it still needs work: it’s a convoluted process that kicks you out to a web browser for setup, requires your kid to be signed into a PlayStation Network account (not just a local profile), has you set up all kinds of limits, and kicks you out to a web browser again (requiring you to log in) when you want to approve a request. And once you let your kid play a particular game, they get to keep playing until you remove it from the whitelist.

What I want is a simple rich phone notification that effectively lets me tap “yes, you can play this for 30 minutes” or “not right now, kid” and be done with it right away. Perhaps there’s time before the 2.0 software goes gold? Or perhaps in a future update.
