
Now DuckDuckGo is building its own desktop browser – ZDNet



Privacy-focused search engine DuckDuckGo has offered a first look at its forthcoming desktop “browsing app” that promises simple default privacy settings. 

DuckDuckGo CEO Gabriel Weinberg details the desktop browser in a blog post recapping the company's milestones for 2021, including 150 million downloads of its all-in-one privacy apps for iOS and Android and its Chromium browser extensions. 

Weinberg attempts to distinguish the DuckDuckGo desktop browser from the likes of Chromium-based Brave and Mozilla Firefox by arguing it is not a “privacy browser”. Instead, it’s just a browser that offers “robust privacy protection” by default and works across search, browsing, email and more. 

“It’s an everyday browsing app that respects your privacy because there’s never a bad time to stop companies from spying on your search and browsing history,” writes Weinberg. 

Weinberg offers a few clues about the internals underpinning the DuckDuckGo desktop browser, or "app" as the company calls it, but also leaves out a lot of details. 

He says it won’t be based on Chromium, the open source project underpinning Google Chrome, Microsoft Edge, Brave, Vivaldi and about 30 other browsers. 

“Instead of forking Chromium or anything else, we’re building our desktop app around the OS-provided rendering engines (like on mobile), allowing us to strip away a lot of the unnecessary cruft and clutter that’s accumulated over the years in major browsers,” explains Weinberg. 

It's not clear what desktop OS-provided rendering engines he's referring to, but it's not a trivial task to build a desktop browser without Chromium's Blink rendering engine. Just ask Microsoft, which launched its Chromium-based Edge browser last year. Apple, meanwhile, uses WebKit for Safari on desktop and requires all non-Safari browsers on iOS, including Chrome, to use WebKit. 

ZDNet has asked DuckDuckGo for clarification, but DuckDuckGo's communications manager Allison Johnson has provided some details about the rendering engines to The Verge.

“macOS and Windows both now offer website rendering APIs (WebView/WebView2) that any application can use to render a website. That’s what we’ve used to build our app on desktop,” said Johnson.

Microsoft’s implementation of WebView2 in Windows allows developers to embed web technologies such as HTML, CSS and JavaScript in native Windows apps. WebView2 on Windows uses Microsoft Edge as the rendering engine to display websites in those apps. 

“We’re building the desktop app from the ground up around the OS-provided rendering APIs. This means that anything beyond website rendering (e.g., tabs & bookmark management, navigation controls, passwords etc.) we have to build ourselves,” said Johnson. 

So, the DuckDuckGo browser's rendering will rely on Edge/Chromium on Windows and on Safari/WebKit on macOS, The Verge notes. 
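As a rough illustration of what "building around the OS-provided rendering APIs" means (this is not DuckDuckGo's code), here is a minimal Python sketch using the third-party pywebview package, which wraps WebView2 on Windows and WKWebView on macOS. Everything around the rendered page, such as tabs, bookmarks and navigation controls, is left for the host application to implement:

```python
# pip install pywebview  (third-party wrapper over the OS webview APIs)
import webview

# The window's content is drawn by the OS-provided engine:
# WebView2 (Edge/Chromium) on Windows, WKWebView (WebKit) on macOS.
window = webview.create_window("Minimal browser shell", "https://duckduckgo.com")

# Runs the GUI loop; tabs, bookmarks, history and so on would all
# have to be implemented by the application itself.
webview.start()
```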

Johnson highlighted that this isn't forking Chromium. A clear example of forking a project is Google's creation of Blink, where it took the open-source code behind the WebKit rendering engine (which Google and Apple had previously maintained together) and built its own web rendering engine for Chromium.  

However DuckDuckGo ends up shipping its new desktop browser, Weinberg assures that "compared to Chrome, the DuckDuckGo app for desktop is cleaner, way more private, and early tests have found it significantly faster too!"


Photos of Samsung Galaxy A53 5G's components confirm four rear cameras, one selfie – GSMArena.com



The Samsung Galaxy A53 5G will reuse the camera bump design of the A52 trio for the quad camera on its back. This was seen in speculative renders from last year, but now we have real-world confirmation as well, in the form of spy photos of the A53 5G's frame and rear panel shared by 91Mobiles.

The panel appears black, though this could be prior to painting. Either way, black is one of the rumored color options for this model, alongside white, light blue and orange. This same color palette will be used for other Ax3 phones as well, including the Galaxy A13 and A33 5G.


Samsung Galaxy A53 5G rear panel and mid-frame

As for the cameras, it will indeed have four modules, despite TENAA listing only three. The main camera is expected to have the same 64 MP resolution as the A52 models, but the ultra wide may be getting an upgrade to 32 MP (up from 12 MP).

We wouldn't put too much stock in the TENAA specs, though: the listing also mentions two selfie cameras, and we haven't seen any evidence of that, not even in TENAA's own photos of the phone. And if you look at the photo of the phone's mid-frame, there is only one centered punch hole for a selfie camera.

Samsung Galaxy A53 5G speculative renders

The Samsung Galaxy A53 5G will use two different chipsets, one of which is expected to be the Exynos 1200. Note that there isn't going to be an A53 4G; the two different chips will both power 5G units. Other than that, they should share the same hardware.

The A53 is expected to be announced in the first quarter of this year, likely alongside other Ax3 models.


Xbox boss wants to revive old Activision Blizzard games – Rock Paper Shotgun



Of the many possibilities that Microsoft buying Activision Blizzard might enable, only one seems really clear: that Microsoft will put Actiblizz games on Game Pass. Beyond that, it’s all mights and maybes. Here’s another maybe: Microsoft Gaming CEO Phil Spencer says they’re hoping to dig into Actiblizz’s “franchises that I love from my childhood,” raising the likes of Hexen and King’s Quest. What better use for $69 billion than wallowing in nostalgia?


Meta researchers build an AI that learns equally well from visual, written or spoken materials – TechCrunch



Advances in the AI realm are constantly coming out, but they tend to be limited to a single domain: For instance, a cool new method for producing synthetic speech isn't also a way to recognize expressions on human faces. Meta (AKA Facebook) researchers are working on something a little more versatile: an AI that can learn capably on its own, whether it does so from spoken, written or visual materials.

The traditional way of training an AI model to correctly interpret something is to give it lots and lots (like millions) of labeled examples. A picture of a cat with the cat part labeled, a conversation with the speakers and words transcribed, etc. But that approach is no longer in vogue as researchers found that it was no longer feasible to manually create databases of the sizes needed to train next-gen AIs. Who wants to label 50 million cat pictures? Okay, a few people probably — but who wants to label 50 million pictures of common fruits and vegetables?

Currently some of the most promising AI systems are what are called self-supervised: models that can work from large quantities of unlabeled data, like books or video of people interacting, and build their own structured understanding of the system's rules. For instance, by reading a thousand books a model will learn the relative positions of words and ideas about grammatical structure without anyone telling it what objects or articles or commas are; it gets there by drawing inferences from lots of examples.

This feels intuitively more like how people learn, which is part of why researchers like it. But the models still tend to be single-modal, and all the work you do to set up a self-supervised learning system for speech recognition won't apply at all to image analysis; they're simply too different. That's where Facebook/Meta's latest research, the catchily named data2vec, comes in.

The idea for data2vec was to build an AI framework that would learn in a more abstract way, meaning that starting from scratch, you could give it books to read, images to scan or speech to sound out, and after a bit of training it would learn any of those things. It's a bit like starting with a single seed that, depending on what plant food you give it, grows into a daffodil, pansy or tulip.
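A minimal PyTorch sketch of the general recipe (an illustration of the idea, not Meta's actual code): a "teacher" copy of the network produces latent targets from the full input, the "student" sees a masked version and is trained to predict those targets, and the teacher's weights track the student's as an exponential moving average. Because the targets are internal representations rather than words or pixels, the same loop works for text, audio or images.

```python
import copy
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Stand-in for a Transformer encoder; returns every layer's output."""
    def __init__(self, dim=64, depth=4):
        super().__init__()
        self.layers = nn.ModuleList([nn.Linear(dim, dim) for _ in range(depth)])

    def forward(self, x):
        outs = []
        for layer in self.layers:
            x = torch.relu(layer(x))
            outs.append(x)
        return outs

student = TinyEncoder()
teacher = copy.deepcopy(student)          # teacher starts as a copy of the student
for p in teacher.parameters():
    p.requires_grad_(False)

def ema_update(student, teacher, decay=0.999):
    """Teacher weights drift slowly toward the student's (exponential moving average)."""
    with torch.no_grad():
        for ps, pt in zip(student.parameters(), teacher.parameters()):
            pt.mul_(decay).add_(ps, alpha=1 - decay)

x = torch.randn(8, 16, 64)                # a batch of embedded inputs (any modality)
mask = torch.rand(8, 16, 1) < 0.15        # hide roughly 15% of the time steps

with torch.no_grad():                     # targets: average of the teacher's top layers
    targets = torch.stack(teacher(x)[-3:]).mean(dim=0)

student_out = student(x.masked_fill(mask, 0.0))[-1]
keep = mask.squeeze(-1)                   # only the masked positions are scored
loss = nn.functional.smooth_l1_loss(student_out[keep], targets[keep])
loss.backward()
ema_update(student, teacher)
```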

Testing data2vec after letting it train on various data corpora showed that it was competitive with, and even outperformed, similarly sized models dedicated to that modality. (That is to say, if the models are all limited to 100 megabytes, data2vec did better; specialized models would probably still outperform it as they grow larger.)

“The core idea of this approach is to learn more generally: AI should be able to learn to do many different tasks, including those that are entirely unfamiliar,” wrote the team in a blog post. “We also hope data2vec will bring us closer to a world where computers need very little labeled data in order to accomplish tasks.”

“People experience the world through a combination of sight, sound and words, and systems like this could one day understand the world the way we do,” commented CEO Mark Zuckerberg on the research.

This is still early stage research, so don’t expect the fabled “general AI” to emerge all of a sudden — but having an AI that has a generalized learning structure that works with a variety of domains and data types seems like a better, more elegant solution than the fragmented set of micro-intelligences we get by with today.

The code for data2vec is open source; it and some pretrained models are available here.
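The original code lives in Meta's fairseq repository, and checkpoints have also been published on the Hugging Face hub. One hedged way to inspect a pretrained model is sketched below; the facebook/data2vec-text-base model id is an assumption, so check the hub for the names Meta actually used:

```python
# pip install transformers torch
from transformers import AutoModel, AutoTokenizer

# Assumed checkpoint name; verify against the published data2vec models.
model_id = "facebook/data2vec-text-base"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Self-supervised learning works across text, audio and images.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (batch, sequence_length, hidden_size)
```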
