

The Morning After: The best of CES 2022 – Engadget


Welcome to your Monday morning! CES is a wrap. While we planned to send a small team of Engadget staff to Las Vegas to cover the show in person, Omicron appeared on the horizon, and, well, our plans changed. 


We also decided, under those circumstances, to do our own thing for our annual CES Awards. Our favorites run the gamut from familiar categories like mobile, TV and wearables through to sustainability innovations, wildcards and transport tech. Did we miss anything?

— Mat Smith

Just don’t expect a radical redesign.

Bloomberg’s Mark Gurman claims Apple is expected to introduce a third-generation iPhone SE this spring through a virtual presentation “likely” happening in March or April. As previously rumored, the new SE would keep the iPhone 8-era design but add 5G and a newer processor, bringing it closer to parity with current iPhones. The iPhone SE was last updated in 2020, running on the A13 Bionic.


It’s not the first time a Y2K-style bug has sent Honda and Acura vehicles to the past.

Since the start of the year, Honda’s forums have been flooded with reports of people complaining the clocks and calendars in their vehicles are stuck in 2002. It’s affecting Honda and Acura models with GPS navigation systems manufactured between 2004 and 2012, with reports of people encountering the problem in the US, UK and Canada. What’s more, there doesn’t appear to be a fix at the moment. Each time someone starts their car, the clock resets — even if they manually set it beforehand.


Faster processors, updated button layout and longer power cables.


IKEA and Sonos have released a second-generation version of their Symfonisk bookshelf speaker. No Earth-shattering changes here, but the update features a faster processor and more memory and draws less power when it’s on standby. It also comes with a longer power cable.

However, looking at the new model next to its first-generation counterpart, the most visible change is an updated button layout that brings the volume controls next to one another. Prices appear similar to the original in the Netherlands, but we’re still waiting to hear if or when the update will make it to the US.


It will reportedly be backed by the US dollar.

PayPal’s VP of crypto and digital currencies has confirmed to Bloomberg that the online payment provider is “exploring a stablecoin.” Jose Fernandez da Ponte added the company will work closely with relevant regulators if the project goes forward.


The company is using a strategy sometimes reserved for supercars.


Ford is giving dealerships the option to ban customers from reselling the F-150 Lightning for up to a year after purchase. As the (since-pulled) document on the F-150 Gen 14 forums revealed, the dealer could “seek injunctive relief” to block ownership transfer or even demand payment for “all value” generated from the sale.


The biggest news stories you might have missed

UK watchdog to grill Meta over child safety in VR

NASA finishes deploying the James Webb Space Telescope

Apple said to have ruled out a metaverse for its mixed reality headset

Breakthrough could help you 3D print OLED screens at home

Mars Perseverance halts rock sample storage due to debris

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you buy something through one of these links, we may earn an affiliate commission.



Photos of Samsung Galaxy A53 5G’s components confirm four rear cameras, one selfie – GSMArena.com



The Samsung Galaxy A53 5G will reuse the bump design of the A52 trio for the quad camera on its back. This was seen in speculative renders from last year, but now we have real-world confirmation from spy photos of the A53 5G’s frame and rear panel, shared by 91Mobiles.

The panel appears black, though this could be prior to painting. Either way, black is one of the rumored color options for this model, alongside white, light blue and orange. This same color palette will be used for other Ax3 phones as well, including the Galaxy A13 and A33 5G.


Samsung Galaxy A53 5G rear panel and mid-frame

As for the cameras, it will indeed have four modules, despite TENAA listing only three. The main camera is expected to have the same 64 MP resolution as the A52 models, but the ultra wide may be getting an upgrade to 32 MP (up from 12 MP).

We wouldn’t put too much stock in the TENAA specs, though: that listing also included two selfie cameras, and we haven’t seen any evidence of that, not even in TENAA’s own photos of the phone. And if you look at the photo of the phone’s mid-frame, there is only one centered punch hole for a selfie camera.

Samsung Galaxy A53 5G speculative renders

The Samsung Galaxy A53 5G will use two different chipsets, one of which is expected to be the Exynos 1200. Note that there isn’t going to be an A53 4G; both chips will power 5G units. Other than that, the two versions should share the same hardware.

The A53 is expected to be announced in the first quarter of this year, likely alongside other Ax3 models.



Xbox boss wants to revive old Activision Blizzard games – Rock Paper Shotgun



Of the many possibilities that Microsoft buying Activision Blizzard might enable, only one seems really clear: that Microsoft will put Actiblizz games on Game Pass. Beyond that, it’s all mights and maybes. Here’s another maybe: Microsoft Gaming CEO Phil Spencer says they’re hoping to dig into Actiblizz’s “franchises that I love from my childhood,” raising the likes of Hexen and King’s Quest. What better use for $69 billion than wallowing in nostalgia?



Meta researchers build an AI that learns equally well from visual, written or spoken materials – TechCrunch



Advances in the AI realm are constantly coming out, but they tend to be limited to a single domain: For instance, a cool new method for producing synthetic speech isn’t also a way to recognize expressions on human faces. Meta (AKA Facebook) researchers are working on something a little more versatile: an AI that can learn capably on its own from spoken, written or visual material.

The traditional way of training an AI model to correctly interpret something is to give it lots and lots (like millions) of labeled examples: a picture of a cat with the cat part labeled, a conversation with the speakers and words transcribed, and so on. But that approach has fallen out of favor as researchers found it was no longer feasible to manually create databases of the sizes needed to train next-gen AIs. Who wants to label 50 million cat pictures? Okay, a few people probably — but who wants to label 50 million pictures of common fruits and vegetables?

Currently, some of the most promising AI systems are what are called self-supervised: models that can work from large quantities of unlabeled data, like books or video of people interacting, and build their own structured understanding of the system’s rules. For instance, by reading a thousand books a model will learn the relative positions of words and ideas about grammatical structure without anyone telling it what objects or articles or commas are; it draws those inferences from lots of examples.

This feels intuitively more like how people learn, which is part of why researchers like it. But the models still tend to be single-modal, and all the work you do to set up a self-supervised learning system for speech recognition won’t apply at all to image analysis — they’re simply too different. That’s where Facebook/Meta’s latest research, the catchily named data2vec, comes in.

The idea for data2vec was to build an AI framework that would learn in a more abstract way, meaning that starting from scratch, you could give it books to read or images to scan or speech to sound out, and after a bit of training it would learn any of those things. It’s a bit like starting with a single seed, but depending on what plant food you give it, it grows into a daffodil, pansy or tulip.
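The mechanics are easier to see in miniature. Below is a toy sketch (our own illustration, not Meta’s code) of data2vec’s core training loop as described in the paper: a “student” network sees a masked input sequence and is trained to predict a “teacher” network’s representations of the full input, with the teacher’s weights tracking the student’s via an exponential moving average. A mean-pooled context stands in for the transformer attention the real model uses, and all sizes here are made up for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16        # representation size (arbitrary for this demo)
TAU = 0.999     # EMA decay for the teacher weights

def encode(weights, x):
    """Toy 'encoder': mixes each token with the sequence mean so masked
    positions can still be predicted from context (a crude stand-in for
    the attention layers a real data2vec model uses)."""
    ctx = x + x.mean(axis=0, keepdims=True)
    return np.tanh(ctx @ weights), ctx

student_w = rng.normal(scale=0.1, size=(DIM, DIM))
teacher_w = student_w.copy()

x = rng.normal(size=(8, DIM))   # 8 "tokens" of any modality: text, audio, image patches
mask = np.arange(8) < 4         # positions the student must infer from context

def training_step(lr=0.05):
    global student_w, teacher_w
    # Teacher sees the full input; its internal representations are the target.
    target, _ = encode(teacher_w, x)
    # Student sees the same input with masked positions zeroed out.
    x_masked = np.where(mask[:, None], 0.0, x)
    pred, ctx = encode(student_w, x_masked)
    # Regression loss on the masked positions only.
    err = pred[mask] - target[mask]
    loss = float(np.mean(err ** 2))
    # Gradient of the loss through the tanh layer, masked rows only.
    grad = ctx[mask].T @ (err * (1 - pred[mask] ** 2)) * (2 / err.size)
    student_w -= lr * grad
    # Teacher weights slowly track the student; no gradients flow to them.
    teacher_w = TAU * teacher_w + (1 - TAU) * student_w
    return loss

losses = [training_step() for _ in range(300)]
print(f"masked-prediction loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

The point the sketch makes is that nothing in the loop cares whether the feature vectors came from text, audio or images; because the target is the teacher’s continuous representation rather than a word, pixel or phoneme, the same objective works across modalities.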

Testing data2vec after letting it train on various data corpora showed that it was competitive with and even outperformed similarly sized dedicated models for that modality. (That is to say, if the models are all limited to being 100 megabytes, data2vec did better — specialized models would probably still outperform it as they grow.)

“The core idea of this approach is to learn more generally: AI should be able to learn to do many different tasks, including those that are entirely unfamiliar,” wrote the team in a blog post. “We also hope data2vec will bring us closer to a world where computers need very little labeled data in order to accomplish tasks.”

“People experience the world through a combination of sight, sound and words, and systems like this could one day understand the world the way we do,” commented CEO Mark Zuckerberg on the research.

This is still early stage research, so don’t expect the fabled “general AI” to emerge all of a sudden — but having an AI that has a generalized learning structure that works with a variety of domains and data types seems like a better, more elegant solution than the fragmented set of micro-intelligences we get by with today.

The code for data2vec is open source; it and some pretrained models are available here.
