
The Note 10 is the best Android phone but this is why I use a Pixel – Android Central


If you look around the web, you’ll see a broad consensus that the Samsung Galaxy Note 10 or Note 10+ is the best Android phone you can buy. It has the best screen, the most memory, the most storage, and it beats the competition hands down in almost every way. Almost.

There is one area where Samsung has traditionally been very weak: writing its own software. It’s getting a lot better at the user-experience layer; the latest Android 10 release for Galaxy devices is both functional and good-looking. What I mean is the low-level work, like building an operating system.


That’s why Samsung relies on Android. Android is designed and written for exactly this. A company like Samsung can build great hardware, then use Android as a base layer for something it can call its own. Be honest: would anyone really want to buy a Galaxy Note that ran Bada or Tizen?


Thankfully, there is no need to even ponder the question because Samsung can use Android freely and mold it into something uniquely Samsung. And that’s where my issues begin.

I really don’t like how much data Google collects, but I trust it to take good care of it.

I want to be clear: I do not like giving Google so much of my personal information, and I opt out of anything that doesn’t give me equal value in return. But I do trust Google to handle my data like the million-dollar resource it is, and until I see a reason to revoke that trust, I’ll continue to pay for Google services with user data.

More: Does Google sell your personal data?

But when that Google service (let’s use Google Calendar as an example) is reskinned, gains a few features, still relies on Google’s complete infrastructure, yet carries a Samsung name, a second terms of service and a second privacy policy come into play. In case you’ve never noticed, when you set up a Galaxy phone you have to agree to Google’s data collection and to Samsung’s collection of the very same data.

Samsung also offers services that are written in-house, like Samsung Pay, which explicitly says it will only work in full if you allow Samsung to sell your data. There is no free software in existence that I am willing to let sell my data unless I am getting a cut.

Samsung collects data for the same reason other hardware companies do — to make the next version better.

I don’t think Samsung is going to give your data away for no good reason. It uses the data to monitor the apps you use, the time you spend on social media, the websites you visit, and everything else, so it can make small changes in the next version to make it all better. All tech companies do this, and so far none have been caught purposefully making private user data available for the world to see.

And I might feel differently if I used Bixby, had a Tizen-powered Galaxy Watch, or lived in a mansion filled with connected Samsung appliances. But I don’t do any of those things and choose to buy my “free” services only once through Google.

The Pixel 4 might not be the best phone you can buy, or even on that list. But since every company seemingly wants my personal information, I choose to give it away to just one of them.


PlayStation Plus February 2020 free games announced – Polygon


PlayStation Plus subscribers will get access to three games (or five, depending on your count) in February: BioShock: The Collection, The Sims 4, and Firewall Zero Hour. Those PlayStation 4 games will be available as part of PlayStation Plus starting Tuesday, Feb. 4.

BioShock: The Collection includes the single-player content from the original BioShock, BioShock 2, and BioShock Infinite, and all single-player add-on content from those games. The collection also includes the Columbia’s Finest pack and director’s commentary, featuring Ken Levine and Shawn Robertson.

The Sims 4 is, of course, the latest in Electronic Arts’ life simulation series, which offers a wide variety of expansion packs and add-ons.

Finally, Firewall Zero Hour is a 4v4 multiplayer tactical shooter developed exclusively for PlayStation VR. The game’s new season starts on Feb. 4, the same day it goes live on PlayStation Plus.

All three games will be available to download via PS Plus through March 2.

January’s PlayStation Plus games — Naughty Dog and Bluepoint Games’ Uncharted: The Nathan Drake Collection and Double Eleven’s Goat Simulator — are available to download through Feb. 3.


Samsung’s upgraded Galaxy Tab S6 5G is the world’s first 5G tablet – The Verge


Samsung has officially announced the Galaxy Tab S6 5G, a 5G variant of 2019’s Tab S6 tablet that also takes the crown as the world’s first 5G tablet, as spotted by Android Central.

As the name suggests, the new tablet is virtually unchanged from the original Wi-Fi and LTE models, with one exception: the addition of a Snapdragon X50 5G modem (compared to no modem on the Wi-Fi model and an X24 LTE modem on the LTE version).

There is a catch, though: the Galaxy Tab S6 5G is only available in South Korea for now, in just a single 999,900 won (roughly $848) configuration with 6GB of RAM and 128GB of storage. The rest of the specs, including the 10.5-inch OLED HDR panel, Snapdragon 855 processor, and included stylus, are identical to the existing models (for better or for worse).

The Galaxy Tab S6 5G will be out on January 30th in South Korea; there’s been no word yet as to whether Samsung will be expanding that release globally in the future.


Why Google Assistant supports so many more languages than Siri, Alexa, Bixby, and Cortana – VentureBeat


Google Assistant, Apple’s Siri, Amazon’s Alexa, and Microsoft’s Cortana recognize only a narrow slice of the world’s most widely spoken languages. It wasn’t until fall 2018 that Samsung’s Bixby gained support for German, French, Italian, and Spanish — languages spoken by over 600 million people worldwide. And it took years for Cortana to become fluent in Spanish, French, and Portuguese.

But Google — which was already ahead of the competition a year ago with respect to the number of languages its assistant supported — pulled far ahead this year. With the addition of more than 20 new languages in January 2019, and more recently several Indic languages, Google Assistant cemented its lead with over 40 languages in well over 80 countries, up from eight languages and 14 countries in 2017. (Despite repeated requests, Google would not provide an exact number of languages for Google Assistant.) That’s compared with Siri’s 21 supported languages, Alexa’s and Bixby’s seven languages, and Cortana’s eight languages.

So why has Google Assistant pulled so far ahead? Naturally, some of the techniques underpinning Google’s natural language processing (NLP) remain closely guarded trade secrets. But the Mountain View company’s publicly available research sheds some — albeit not much — light on why rivals like Amazon and Apple have yet to match its linguistic prowess.

Supporting a new language is hard

Adding language support to a voice assistant is a multi-pronged process that requires considerable research into speech recognition and voice synthesis.

Most modern speech recognition systems incorporate deep neural networks that predict the phonemes, or perceptually distinct units of sound (for example, p, b, and d in the English words pad, pat, and bad). Unlike older techniques, which relied on hand-tuned statistical models that calculated probabilities for combinations of words to occur in a phrase, neural nets derive characters from representations of audio frequencies called mel-scale spectrograms. This reduces error rates while partially eliminating the need for human supervision.
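
To make that concrete, here is a minimal sketch of how raw audio becomes the mel-scale spectrogram features described above, using the open-source librosa library. The file name and parameter values are illustrative only, not anything Google has said its assistant uses.

```python
# A minimal sketch (not Google's pipeline): load a clip and compute the
# log mel-scale spectrogram features a neural acoustic model would consume.
# "utterance.wav" and the parameter values are purely illustrative.
import librosa
import numpy as np

audio, sr = librosa.load("utterance.wav", sr=16000)

# 80-band mel spectrogram: short-time Fourier transform, then a mel filterbank
mel = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=1024, hop_length=256, n_mels=80
)
log_mel = librosa.power_to_db(mel, ref=np.max)  # log scale, as models expect

print(log_mel.shape)  # (80 mel bands, number of time frames)
# An acoustic model would emit phoneme or character probabilities per frame.
```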

Speech recognition has advanced significantly, particularly in the past year or so. In a paper, Google researchers detailed techniques that employ spelling correction to reduce errors by 29%, and in another study they applied AI to sound wave visuals to achieve state-of-the-art recognition performance without the use of a language model.

Parallel efforts include SpecAugment, which achieves impressively low word error rates by applying visual analysis data augmentation to mel-scale spectrograms. In production, devices like the Pixel 4 and Pixel 4 XL (in the U.S., U.K., Canada, Ireland, Singapore, and Australia) feature an improved Google Assistant English language model that works offline and processes speech at “nearly zero” latency, delivering answers up to 10 times faster than on previous-generation devices.
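
The core trick in SpecAugment is simple enough to sketch: randomly mask out bands of frequencies and spans of time in the training spectrograms so the model cannot over-rely on any one region. The NumPy sketch below illustrates that idea; it is not the paper's implementation (which also includes a time-warping step), and the function name and default values are made up.

```python
import numpy as np

def spec_augment(log_mel, n_freq_masks=2, freq_width=15,
                 n_time_masks=2, time_width=40):
    """Illustrative SpecAugment-style masking on a (mel_bands, frames) array.
    Omits the paper's time-warping step; values here are arbitrary."""
    aug = log_mel.copy()
    n_mels, n_frames = aug.shape
    fill = aug.mean()  # replace masked cells with the mean value

    for _ in range(n_freq_masks):  # mask random frequency bands
        width = np.random.randint(0, freq_width + 1)
        start = np.random.randint(0, max(1, n_mels - width))
        aug[start:start + width, :] = fill

    for _ in range(n_time_masks):  # mask random spans of time
        width = np.random.randint(0, time_width + 1)
        start = np.random.randint(0, max(1, n_frames - width))
        aug[:, start:start + width] = fill

    return aug
```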

Of course, baseline language understanding isn’t enough. Without localization, voice assistants can’t pick up on cultural idiosyncrasies or, worse, they run the risk of misappropriation. It takes an estimated 30 to 90 days to build a query-understanding module for a new language, depending on how many intents it needs to cover. And even market-leading smart speakers from the likes of Google and Amazon have trouble understanding certain accents.

Google’s increasingly creative approaches promise to close the gap, however. In September, scientists at the company proposed a speech parser that learns to transcribe multiple languages while at the same time demonstrating “dramatic” improvements in quality, and in October they detailed a “universal” machine translation system trained on over 25 billion samples that’s capable of handling 103 languages.

This work no doubt informed Google Assistant’s multilingual mode, which, like Alexa’s multilingual mode, recognizes up to two languages simultaneously.

Speech synthesis

Generating speech is just as challenging as comprehension, if not more so.

While cutting-edge text to speech (TTS) systems like Google’s Tacotron 2 (which builds voice synthesis models based on spectrograms) and WaveNet 2 (which builds models based on waveforms) learn languages more or less from speech alone, conventional systems tap a database of phones — distinct speech sounds or gestures — strung together to verbalize words. Concatenation, as it’s called, requires capturing the complementary diphones (units of speech comprising two connected halves of phones) and triphones (phones with half of a preceding phone at the beginning and a succeeding phone at the end) in lengthy recording sessions. The number of speech units can easily exceed a thousand.
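
As a toy illustration of the concatenative approach, the sketch below strings pre-recorded unit waveforms together from a lookup table. Everything here is invented for illustration; a real system selects among thousands of carefully segmented units and smooths the joins between them.

```python
import numpy as np

# Toy unit inventory: each "diphone" maps to a pre-recorded waveform snippet.
# In a real concatenative system these are thousands of carefully segmented
# recordings; the placeholder arrays here are invented for illustration.
unit_db = {
    "p-a": np.zeros(800),  # stand-in audio for the p-to-a transition
    "a-d": np.zeros(900),  # stand-in audio for the a-to-d transition
}

def synthesize(diphones, unit_db):
    """String unit waveforms together; real systems also smooth the joins."""
    return np.concatenate([unit_db[d] for d in diphones])

waveform = synthesize(["p-a", "a-d"], unit_db)  # a crude rendering of "pad"
```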

Another technique — parametric TTS — taps mathematical models to recreate sounds that are then assembled into words and sentences. The data required to generate those sounds is stored in the parameters (variables), and the speech itself is created using a vocoder, which is a voice codec (a coder-decoder) that analyzes and synthesizes the output signals.
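
To make the spectrogram-to-waveform step concrete, the hedged sketch below inverts a mel spectrogram back into audio with librosa's Griffin-Lim-based helper, standing in for the vocoder stage; production systems use neural vocoders such as WaveNet for far higher quality. The input file name is hypothetical.

```python
import librosa
import soundfile as sf

# "utterance.wav" is a hypothetical input clip used only for this sketch.
audio, sr = librosa.load("utterance.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=audio, sr=sr, n_mels=80)

# Vocoder stage: reconstruct a waveform from the spectral parameters.
# librosa uses the Griffin-Lim algorithm; production systems use neural
# vocoders for much higher quality.
reconstructed = librosa.feature.inverse.mel_to_audio(mel, sr=sr)
sf.write("reconstructed.wav", reconstructed, sr)
```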

Still, TTS is an easier problem to tackle than language comprehension — particularly with deep neural networks like WaveNet 2 at speech engineers’ disposal. Translatotron, which was demoed last May, can translate a person’s voice into another language while retaining their tone and tenor. And in August, Google AI researchers showed that they could drastically improve the quality of speech synthesis and generation using audio data sets from both native and non-native English speakers who have neurodegenerative diseases and techniques from Parrotron, an AI tool for people with impediments.

In a related development, Google researchers recently revealed, in a pair of papers, ways to make machine-generated speech sound more natural. In a study coauthored by Tacotron co-creator Yuxuan Wang, transfer of characteristics like stress level was achieved by embedding the style of a recorded clip of human speech. The method described in the second paper identified vocal patterns to imitate speech styles such as those resulting from anger and tiredness.

How language support might improve in the future

Clearly, Google Assistant has progressed furthest on the assistant language front. So what might it take to get others on the same footing?

Improving assistants’ language support will likely require innovations in speech recognition, as well as NLP. With a “true” neural network stack — one that doesn’t rely heavily on language libraries, keywords, or dictionaries — the emphasis shifts from grammar structures to word embeddings and the relational patterns within word embeddings. Then it becomes possible to train a voice recognition system on virtually any language.
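
One way to picture the relational patterns within word embeddings: every word becomes a vector, and relationships show up as geometry that holds regardless of the surface language. The vectors below are invented purely for illustration; real models learn embeddings with hundreds of dimensions from large multilingual corpora.

```python
import numpy as np

# Invented 4-dimensional embeddings, purely for illustration; real models
# learn vectors with hundreds of dimensions from large multilingual corpora.
emb = {
    "king":  np.array([0.9, 0.7, 0.1, 0.3]),
    "queen": np.array([0.9, 0.2, 0.1, 0.8]),
    "man":   np.array([0.5, 0.7, 0.0, 0.2]),
    "woman": np.array([0.5, 0.2, 0.0, 0.7]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy: king - man + woman lands near queen, a relational
# pattern the model learns from data rather than from a dictionary.
analogy = emb["king"] - emb["man"] + emb["woman"]
print(cosine(analogy, emb["queen"]))  # ~1.0 with these toy vectors
```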

Amazon appears to be progressing toward this with Alexa. Researchers at the company managed to cut down on recognition flubs by 20% to 22% using methods that combined human and machine data labeling, and by a further 15% using a novel noise-isolating AI and machine learning technique. Separately, they proposed an approach involving “teaching” language models new tongues by adapting those trained on one language to others, in the process reducing the data requirement for new languages by up to 50%.

On the TTS side of the equation, Amazon recently rolled out neural TTS tech in Alexa that improves speech quality by increasing naturalness and expressiveness. Not to be outdone, the latest version of Apple’s iOS mobile operating system, iOS 13, introduces a WaveNet-like TTS technology that makes synthesized voices sound more natural. And last December Microsoft demoed a system called FastSpeech that speeds up realistic voice generation by eliminating errors like word skipping.

Separately, Microsoft recently open-sourced a version of Google’s popular BERT model that enables developers to deploy BERT at scale. This arrived after researchers at the Seattle company created an AI model — a Multi-Task Deep Neural Network (MT-DNN) — that incorporates BERT to achieve state-of-the-art results, and after a team of applied scientists at Microsoft proposed a baseline-besting architecture for language generation tasks.

Undoubtedly, Google, Apple, Microsoft, Amazon, Samsung, and others are already using techniques beyond those described above to bring new languages to their respective voice assistants. But some had a head start, and others have to contend with legacy systems. That’s why it will likely take time before they’re all speaking the same languages.
