Shreya Rajpal on Guardrails for Large Language Models

Roland Meertens: Welcome everyone to The InfoQ Podcast. My name is Roland Meertens and I’m your host for today. I am interviewing Shreya Rajpal, who is the CEO and Co-founder of Guardrails AI. We are talking to each other in person at the QCon San Francisco conference, just after she gave the presentation called Building Guardrails for Enterprise AI Applications with Large Language Models. Keep an eye on InfoQ.com for her presentation, as it contains many insights into how you can add guardrails to your large language model application so you can actually make it work. During today’s interview, we will dive deeper into how this works. I hope you enjoy it and can learn from it.

Welcome, Shreya, to The InfoQ Podcast. We are here at QCon in San Francisco. How do you enjoy the conference so far?

Shreya Rajpal: Yeah, it’s been a blast. Thanks for doing the podcast. I’ve really enjoyed the conference. I was also here last year and I had just a lot of fantastic conversations. I was really looking forward to it and I think it holds up to the standard.

Roland Meertens: All right, and you just gave your talk. How did it go? What was your talk about?

Shreya Rajpal: I think it was a pretty good talk. The audience was very engaged. I got a lot of questions at the end and they were very pertinent questions, so I enjoyed the engagement with the audience. The topic of my talk was on guardrails or the concept of building guardrails for large language model applications, especially from the lens of this open-source framework I created, which is also called Guardrails AI.

What is Guardrails AI [02:21]

Roland Meertens: What does Guardrails AI do? How can it help me out?

Shreya Rajpal: Guardrails AI essentially looks to solve the problem of reliability and safety for large language model applications. So if you’ve worked with generative AI and built applications on top of it, what you’ll often find is that they’re really flexible and really functional, but they’re not always useful, primarily because they’re not always as reliable. So I like comparing them with traditional software APIs. Traditional software APIs tend to have a lot of correctness baked into the API, because we’re in a framework, or a world, that’s very deterministic. Compared to that, generative AI ends up being very, very performant, but essentially not as rigorous in terms of correctness criteria. Hallucinations, for example, are a common issue that we see.

So this is the problem that Guardrails AI aims to solve. It essentially acts like a firewall around your LLM APIs and makes sure that any input you send to the LLM, or any output you receive from the LLM, is functionally correct for whatever correctness means for you. Maybe that means not hallucinating, and then it’ll check for hallucinations. Maybe it means not having any profanity in your generated text, because you know who your audience is, and it’ll check for that. Maybe it means getting the right structured outputs. All of those can be correctness criteria that are enforced.

Roland Meertens: If I, for example, ask it for a JSON document, you will guarantee me that I get correct JSON, but I assume that it can’t really check any of the content, right?

Shreya Rajpal: Oh, it does. Yeah. I think JSON correctness is something that we do and something that we do well, but the way I look at it, that’s kind of like table stakes; it can also look at each field of the JSON and make sure that’s correct. That holds even if you’re not generating JSON and you’re generating string output. So let’s say you have a question answering chatbot and you want to make sure that the string response you get from your LLM is not hallucinated, or doesn’t violate any rules or regulations of wherever you are; those are also functional things that can be checked and enforced.
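
To make that concrete, here is a minimal sketch of what checking both the JSON structure and the individual fields of an LLM response might look like. This is plain Python rather than the Guardrails AI API itself, and the expected schema and field rules are illustrative assumptions.

```python
import json

# Hand-rolled sketch of structural plus field-level checks on an LLM response.
# The expected schema and field rules below are illustrative assumptions,
# not the actual Guardrails AI API.
EXPECTED_FIELDS = {"name": str, "age": int, "email": str}

def validate_llm_json(raw_output: str) -> list:
    errors = []
    try:
        payload = json.loads(raw_output)  # 1. is it valid JSON at all?
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]

    # 2. structural check: every expected field present and of the right type
    for field, expected_type in EXPECTED_FIELDS.items():
        if field not in payload:
            errors.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            errors.append(f"{field} should be a {expected_type.__name__}")

    # 3. field-level content checks, beyond shape
    if isinstance(payload.get("age"), int) and not 0 <= payload["age"] <= 130:
        errors.append("age is outside a plausible range")
    if isinstance(payload.get("email"), str) and "@" not in payload["email"]:
        errors.append("email does not look like an address")
    return errors

print(validate_llm_json('{"name": "Ada", "age": 200, "email": "ada-example.com"}'))
# -> ['age is outside a plausible range', 'email does not look like an address']
```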

Interfacing with your LLM [04:28]

Roland Meertens: So this is basically then like an API interface on top of the large language model?

Shreya Rajpal: I like to think of it as kind of like a shell around the LLM. It acts as a sentinel at the input of the LLM and at the output of the LLM, making sure that there are no dangerous, unreliable, or insecure outputs, essentially.

Roland Meertens: Nice. And is this something which you then solve with few-shot learning, or how do you then ensure its correctness?

Shreya Rajpal: In practice, how we end up doing it is with a bunch of different techniques, depending on the problem that we solve. So for example, for JSON correctness, et cetera, we essentially look to see: okay, here’s our expected structure, here’s what’s incorrect, and you can solve it with few-shot prompting to get the right JSON output. But depending on what the problem is, we end up using different sets of techniques. A key abstraction in our framework is the idea of a validator, where a validator basically checks for a specific requirement, and you can combine all of these validators together in a guard; that guard will run alongside your LLM API and make sure the guarantees we care about hold. Our framework is both a template for creating your own custom validators and orchestrating them via the orchestration layer that we provide, as well as a library of many, many commonly used validators across a bunch of use cases.
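
Here is a rough sketch of that validator-and-guard pattern, written as plain Python rather than the actual framework; the class names, the regex rule, and the toy banned-word list are all illustrative assumptions.

```python
import re
from dataclasses import dataclass
from typing import Callable, List

# Plain-Python sketch of the "validator" and "guard" abstractions described
# above. The names and rules are illustrative, not the Guardrails AI API.
@dataclass
class ValidationResult:
    passed: bool
    message: str = ""

Validator = Callable[[str], ValidationResult]

def regex_validator(pattern: str) -> Validator:
    """Validator: the whole output must match the given regex."""
    def check(text: str) -> ValidationResult:
        ok = re.fullmatch(pattern, text) is not None
        return ValidationResult(ok, "" if ok else f"output does not match {pattern!r}")
    return check

def no_profanity(banned=("damn", "heck")) -> Validator:
    """Validator: none of the (toy) banned words may appear in the output."""
    def check(text: str) -> ValidationResult:
        hits = [w for w in banned if w in text.lower()]
        return ValidationResult(not hits, f"banned words found: {hits}" if hits else "")
    return check

def guard(text: str, validators: List[Validator]) -> List[str]:
    """Run every validator against an LLM output and collect the failures."""
    failures = []
    for validate in validators:
        result = validate(text)
        if not result.passed:
            failures.append(result.message)
    return failures

checks = [regex_validator(r"order id: [A-Z]{3}-\d{3}"), no_profanity()]
print(guard("order id: ABC-123", checks) or "all checks passed")
```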

Some of them may be rules-based validators. For example, we have one where, for any regex pattern that you provide, you can make sure that the fields in your JSON, or any string output that you get from your LLM, match that regex. We have this one that I talked about in my talk, which you can check out on InfoQ.com, called Provenance. Provenance is essentially making sure that every LLM utterance has some grounding in a source of truth that you know to be true, right? So let’s say you’re an organization that is building a chatbot. You can make sure that your chatbot only answers from your help center documents, or from documents that you know to be true and that you provide to the chatbot, and not from its own world model of the internet that it was trained on.

So Provenance looks at every utterance from the LLM, checks to see where it came from in my documents, and makes sure that it’s correct. If it’s not correct, that means it was hallucinated and can be filtered out. We have different versions of it, and they use various machine learning techniques under the hood. The simplest one basically uses embedding similarity. We have more complex ones that use LLM self-evaluation or NLI-based classification, as in natural language inference. So depending on what the problem is, we use either code, or ML models, or external APIs to make sure that the output you get is correct.
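
A toy sketch of the simplest, embedding-similarity flavour of provenance checking is shown below: each sentence from the LLM must be sufficiently similar to some source chunk, or it gets filtered out. A real system would use learned embeddings or NLI models rather than the bag-of-words vectors here, and the threshold is an illustrative assumption.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a learned model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def provenance_check(llm_answer: str, sources: list, threshold: float = 0.5):
    """Split the answer into sentences and keep only those grounded in a source."""
    grounded, hallucinated = [], []
    for sentence in filter(None, (s.strip() for s in llm_answer.split("."))):
        best = max(cosine(embed(sentence), embed(chunk)) for chunk in sources)
        (grounded if best >= threshold else hallucinated).append(sentence)
    return grounded, hallucinated

sources = ["Refunds are available within 30 days of purchase with a receipt."]
answer = "Refunds are available within 30 days of purchase. We also offer free flights to the moon."
grounded, hallucinated = provenance_check(answer, sources)
print("kept:", grounded)
print("filtered out:", hallucinated)
```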

Roland Meertens: Just for my understanding, where do you build in these guardrails? Is this something you build into the models, do you fine-tune the model, or is this something you build into essentially the beam search for the output, where you say: oh, but if you generate this, this path can’t be correct? Do you do it at that level? Or do you take the whole text already generated by your large language model and then, in hindsight, post-process it?

Shreya Rajpal: The latter. Our core assumption is that we abstract out the model completely. So you can use an open source model, you can use a commercial model. The example I like using is that, in the extreme, you can use a random string generator and we’ll check that random string generator for profanity, or for matching a regex pattern, or something.

Roland Meertens: That’s like the worst large language model.

Shreya Rajpal: Exactly, the worst large language model. It was a decision that allows developers to really be flexible and focus on application-level concerns rather than wrangling the model itself. So how we end up operating is that we are kind of like a sidecar that runs alongside your model. Any prompt that you’re sending over to your LLM can first pass through Guardrails, which checks to see if there are any safety concerns, et cetera. And then the output that comes back from your LLM passes through Guardrails before being sent to your application.
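
Here is a minimal sketch of that sidecar flow: checks run on the prompt before the model is called, and on the output before the application sees it. The model here is deliberately the "worst large language model", a random string generator, to show that the checks are model-agnostic; all function names and the toy checks are assumptions for illustration.

```python
import random
import string
from typing import Optional

def worst_llm(prompt: str) -> str:
    # Stand-in model: a random string generator.
    return "".join(random.choices(string.ascii_lowercase + " ", k=40))

def input_checks(prompt: str) -> None:
    # Toy prompt-side check; a real guard might look for injection attempts, PII, etc.
    if "ignore previous instructions" in prompt.lower():
        raise ValueError("prompt rejected by input guard")

def output_checks(text: str) -> bool:
    # Toy output-side criteria; real guards would run validators like the ones above.
    return text.strip() != "" and "damn" not in text

def guarded_call(prompt: str, llm=worst_llm) -> Optional[str]:
    input_checks(prompt)                        # 1. prompt passes through the guard first
    raw = llm(prompt)                           # 2. any model: open source, commercial, or random
    return raw if output_checks(raw) else None  # 3. output is validated before it reaches the app

print(guarded_call("Summarize our refund policy."))
```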

What constitutes an error? [08:39]

Roland Meertens: So are there any trade-offs when you’re building in these guardrails? Are there some people who say, “Oh, but I like some of the errors?”

Shreya Rajpal: That’s an interesting question. I once remember chatting with someone who was like, oh, yeah, they were building an LLM application that was used by a lot of people, and they basically said, “No, actually people like using us because our system does have profanity, and it does have a lot of things that for other commercial models are filtered out via their moderation APIs.” And so there is an audience for that as well. In that case, what we end up typically seeing is that correctness means different things to different people. So for the person that I mentioned for whom profanity was a good thing, the correct response for them is a response that contains profanity. So you can essentially configure each of these to work for you. There’s no universal definition of what correctness is, just as there’s no universal use case, et cetera.

Roland Meertens: Have you already seen any applications where your guardrails have added a significant impact to the application?

Shreya Rajpal: I think some of my most exciting applications are either in chatbots or in structured data extraction. I also think that those are where most of the LLM applications today are. So if you’re doing structured data extraction, you’re taking a whole chunk of unstructured data, and from that unstructured data you’re generating some table, you’re generating a JSON payload that can then go into your data warehouse, like a row of data. In that case, it’s essentially making sure that the data you extract is correct, uses the right context, and doesn’t veer too far off from the data you’ve historically received. I think that’s a common use case.

I think the other one that I’ve seen is when you’re building a chatbot and you care about certain concerns in that chatbot. For example, if you’re in a regulated industry, making sure that no rules are violated, like misleading your customer about some feature of your product; brand risk; using the right tone of voice that aligns with your brand’s communication requirements. I think that’s another common one. Checking for bias, et cetera, is another common one. So there tend to be a lot of these very diverse correctness criteria that people have for chatbots that we can enforce.

Enforcing bias [10:55]

Roland Meertens: So how do you enforce these things, for example, bias? Because I think that’s something which is quite hard to grasp, especially if you have only one sample instead of seeing a large overview of samples.

Shreya Rajpal: I think this is another one of those things where, depending on the application or the use case, different organizations may have different desiderata. So for example, one of the things you can check for is gendered language. Are you using very gendered language, or are you using gender-neutral language where you need to, in your press briefs, et cetera? That is one specific way of checking bias. But our core philosophy is to take these requirements and break them down into smaller chunks that can then be configured and put together.

Roland Meertens: I just remembered that Google Photos at some point had this incident where someone searched for gorillas and found images of people; I think they just stopped using that keyword altogether, which is quite interesting.

Any other applications where you already saw a significant impact or do you have any concrete examples?

Shreya Rajpal: Yeah, let’s see. If you go to our open source GitHub page, I think there are about a hundred or so projects that use Guardrails for enforcing their guarantees. I want to say most of them are around chatbots or structured data extraction. I see a lot of resume screening ones: being able to go to someone’s LinkedIn profile or look at someone’s resume and make sure they’re the right candidate for you, by looking for specific keywords and how those keywords are reflected in the resume. So I think that’s a common one. Help center support chatbots are another common use case. Analyzing contracts with LLMs, et cetera, is another one. I think those are some of the top-of-mind ones.

Roland Meertens: These sound like applications where you absolutely need to be sure that whatever you put there is-

Shreya Rajpal: Is correct, yeah.

Roland Meertens: … is very correct. Yes.

Shreya Rajpal: Absolutely.

Roland Meertens: So what kind of questions did you get after the talk? Who was interested in this? What kind of questions were there?

Shreya Rajpal: The audience was pretty excited about a lot of the content. I think one of my favorite questions was around the cost of implementing guardrails, right? At the end of the day, there’s no free lunch. This is all compute that needs to happen at runtime: you’re looking, at runtime, at where the risk areas of your system are and safeguarding against them, which typically adds some amount of latency, some amount of cost, et cetera. So I think that was an interesting question: how do we think about the cost of implementing that?

I think we’ve done a bunch of work in making the guardrails configurable enough that you can set a policy on each guardrail that says how much you care about it. Not every guardrail is pull-the-alarm, horrible-outcome. Some failures are bad, but you just shrug and move on; for some you take some programmatic action; for some you do more aggressive risk mitigation. All of that is configurable, and we did a bunch of investment in making sure the guardrails are low latency and can be parallelized very easily, et cetera.
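
A small sketch of that per-guardrail policy idea follows: each check is paired with an "on fail" action of a different severity. The action names and the example checks are illustrative assumptions, not the framework's actual options.

```python
from enum import Enum

class OnFail(Enum):
    NOOP = "log it and move on"
    FILTER = "drop the offending part"
    REASK = "ask the model to fix it"
    EXCEPTION = "pull the alarm"

# Illustrative mapping from check name to how much we care when it fails.
POLICIES = {
    "valid_json": OnFail.REASK,              # worth another model call
    "no_profanity": OnFail.FILTER,           # just strip it out
    "tone_of_voice": OnFail.NOOP,            # shrug and move on
    "no_medical_advice": OnFail.EXCEPTION,   # never ship this; hand off to a human
}

def handle_failure(check_name: str, detail: str) -> None:
    action = POLICIES.get(check_name, OnFail.NOOP)
    print(f"[{check_name}] failed ({detail}) -> policy: {action.value}")
    if action is OnFail.EXCEPTION:
        raise RuntimeError(f"{check_name} violated: escalate to a human")

handle_failure("no_profanity", "banned word found")
try:
    handle_failure("no_medical_advice", "dosage recommendation detected")
except RuntimeError as err:
    print("blocked:", err)
```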

Priorities for content correctness [13:59]

Roland Meertens: So for example, I could say I absolutely want my output to be the right API specification, but it’s okay if one of the categories didn’t exist before, or isn’t in my prompt?

Shreya Rajpal: Absolutely. Yeah, that’s exactly right. A classic example I like using is that if you’re in healthcare and you’re building a healthcare support chatbot, you do not have the authorization to give medical advice to anyone who comes on. So that’s the no-medical-advice guardrail, where you’d much rather say: I might as well not respond to this customer and let a human come in if I suspect that there’s medical advice in my output. That’s a guardrail where you either get it right or it’s not useful to your customer at all, right? So that’s one of the ones where, even if it’s slightly more expensive, you’re willing to take that on. For a lot of the other ones, like you said, if there are some extra fields, et cetera, you’re typically okay with it.

Roland Meertens: So what are the next steps then for Guardrails AI? What kind of things are you thinking about for the future? Do you get some requests all the time?

Shreya Rajpal: A common request we get, and I think this is much less a capability thing and more just making it easy for our users, is that we have support for a lot of the common models, but we keep getting requests every day to support Bard or support Anthropic, et cetera. Like I said, we have a string-to-string translator where you can substitute your favorite model and use whichever one you want. But that’s a common one: just add more integrations with the other models that are out there.
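
The "string-to-string translator" idea can be sketched as follows: as long as a model can be wrapped as a function from prompt to completion, the validation layer does not care which provider sits behind it. The provider functions below are placeholders, not real SDK calls.

```python
from typing import Callable

LLM = Callable[[str], str]

def openai_style_model(prompt: str) -> str:
    # Placeholder; a real adapter would call the provider's SDK here.
    return f"(completion from an OpenAI-style model for: {prompt})"

def anthropic_style_model(prompt: str) -> str:
    # Placeholder; a real adapter would call the provider's SDK here.
    return f"(completion from an Anthropic-style model for: {prompt})"

def run_with_checks(prompt: str, llm: LLM) -> str:
    output = llm(prompt)
    # ... validators would run here, identically for every provider ...
    return output

for model in (openai_style_model, anthropic_style_model):
    print(run_with_checks("Extract the order id from this email.", model))
```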

Roland Meertens: Is there a winning model at the moment which everybody is going for?

Shreya Rajpal: I think OpenAI typically is the one that we see most commonly. Yeah. Some of the other requests are more around specific features, like being able to create custom guardrails with less input involved. Like I mentioned, we have a framework for creating custom guardrails, but people ask, okay, how do I make it easier to see what’s happening? I think better logging and visibility is another one. So a lot of exciting changes. A few weeks ago we released a big 0.2 release, which had a lot of these changes implemented in addition to a lot of stability improvements, et cetera, and we have more releases to come.

Roland Meertens: And for fixing the errors, is this always hand-coded rules, or could you also send it back to a large language model and say: oh, we got this issue, try again, fix this?

Shreya Rajpal: Yeah, so that’s what we like to call the re-asking paradigm that we implemented. That actually was a core design principle behind Guardrails: these models have this very fascinating ability to self-heal. If you tell them why they’re wrong, they’re often able to incorporate that feedback and correct themselves. So Guardrails basically automatically constructs a prompt for you, sends it back, and then runs verification, et cetera, all over again. This is another one of those things that I walked through in my talk, which is available for viewers as well.

Fixing your output errors [16:48]

Roland Meertens: So then do you just take the existing output and then send the output back and say, “This was wrong, fix it?” Or do you just re-ask the question and hope that it gets it correct the next time?

Shreya Rajpal: That’s a great question. Typically we work on the output level. We’ve done some prompt engineering on our end to figure out how to create this prompt to get the most likely correct output. So we include the original request and we include the output. On the output, we do some optimization, which is configurable as well, where you only re-ask the incorrect parts. Often you’ll find there’s a specific localized area that is wrong: some field in the JSON, or, if you have a large string or a paragraph, some sentences in that paragraph that are incorrect. So you only send those back for re-asking, not the whole thing, and that ends up being a little bit less expensive.
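
A minimal sketch of that re-asking loop is shown below: validate the payload, build a corrective prompt that names only the failing fields, and merge the corrected values back. The field rules and the stand-in model are illustrative assumptions, not the framework's actual prompt format.

```python
import json

# Toy field-level rules; in practice these would be the guard's validators.
RULES = {
    "age": lambda v: isinstance(v, int) and 0 <= v <= 130,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def find_errors(payload: dict) -> dict:
    return {field: f"value {payload.get(field)!r} failed the {field} check"
            for field, rule in RULES.items() if not rule(payload.get(field))}

def fake_llm(reask_prompt: str) -> str:
    # Stand-in for the model call; a real re-ask would send reask_prompt to the LLM.
    return json.dumps({"age": 37, "email": "ada@example.com"})

def reask(payload: dict, max_rounds: int = 2) -> dict:
    for _ in range(max_rounds):
        errors = find_errors(payload)
        if not errors:
            break
        prompt = ("These fields were incorrect, please fix only them and "
                  "return JSON: " + json.dumps(errors))
        payload.update(json.loads(fake_llm(prompt)))  # merge corrected fields back
    return payload

print(reask({"name": "Ada", "age": 200, "email": "ada-example.com"}))
```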

Roland Meertens: Okay. Oh, interesting. So you only queried the things which you know are wrong?

Shreya Rajpal: Right, right.

Tips to improve LLM output [17:42]

Roland Meertens: Ah, smart. Yeah, that must save a lot of money. And then in terms of correctness and safety, do you have any tips for people who are writing prompts on how to structure them better? Or how do you normally evaluate whether a prompt is correct?

Shreya Rajpal: I think my response is that I kind of disagree with the premise of the question a little bit. I go over this in my talk, but what you find a lot of the time is that people invest a lot of time and energy in prompt engineering, but at the end of the day, prompts aren’t guarantees, right? First of all, the LLMs are non-deterministic. So even if you have the best prompt figured out, if you send that same prompt over ten different times, you’re going to see different outputs; you’re not guaranteed to get the right output.

I think the second point is that a prompt isn’t a guarantee. Maybe you’re saying, okay, this is what I want from you, this is the prompt communicating with the LLM, make sure you’re not violating XYZ criteria, et cetera. There’s absolutely nothing guaranteeing that the LLM is going to respect those instructions in the prompt, so you still end up getting incorrect responses. So what we say is: safer prompts, yes, definitely; the prompt is a way to prime the LLM to be more correct than normal. You can still include those instructions, don’t do XYZ, but verify. Make sure that those conditions are actually being respected; otherwise you’re opening yourself up to a world of pain.

Roland Meertens: I always find it really cute if people just put things in there like, “Oh, you’re a very nice agent. You always give the correct answer.” Ah, that will help it.

Shreya Rajpal: One of my favorite anecdotes here is from a friend of mine who works with LLMs and has been doing that for a few years now, which is a few years ahead of a lot of other people getting into the area. One of her prompts was, “a man will die if you don’t respect this constraint,” which was her way of wrangling the LLM into giving the right output. So people do all sorts of weird things, but I think she ended up moving onto this kind of verification system as well. At the end of the day, you need to make sure that the conditions you care about are respected, and prompting alone just isn’t sufficient.

Roland Meertens: I guess the lesson we learned today is: always tell your LLM that someone will die if it gets the answer incorrect.

Shreya Rajpal: Absolutely.

Roland Meertens: Yeah. Interesting. All right. Thank you very much for being on the podcast and hope you enjoy QCon.

Shreya Rajpal: Yeah, absolutely. Thank you for inviting me. Yeah, excited to be here.

Roland Meertens: Thank you very much for listening to this podcast. I hope you enjoyed the conversation. As I mentioned, we’ll upload the talk on InfoQ.com sometime in the future, so keep an eye on that. Thank you again for listening, and thank you again Shreya for joining The InfoQ Podcast.


How to Preorder the PlayStation 5 Pro in Canada

Sony has made it easy for Canadian consumers to preorder the PlayStation 5 Pro directly from PlayStation’s official website. Here’s how:

  • Visit the Official Website: Go to direct.playstation.com and navigate to the PS5 Pro section once preorders go live on September 26, 2024.
  • Create or Log in to Your PlayStation Account: If you don’t have a PlayStation account, you will need to create one. Existing users can simply log in to proceed.
  • Place Your Preorder: Once logged in, follow the instructions to preorder your PS5 Pro. Ensure you have a valid payment method ready and double-check your shipping information for accuracy.

Preorder Through Major Canadian Retailers

While preordering directly from PlayStation is a popular option, you can also secure your PS5 Pro through trusted Canadian retailers. These retailers are expected to offer preorders on or after September 26:

  • Best Buy Canada
  • Walmart Canada
  • EB Games (GameStop)
  • Amazon Canada
  • The Source

Steps to Preorder via Canadian Retailers:

  • Visit Retailer Websites: Search for “PlayStation 5 Pro” on the website of your preferred retailer starting on September 26.
  • Create or Log in to Your Account: If you’re shopping online, having an account with the retailer can speed up the preorder process.
  • Preorder in Store: For those who prefer in-person shopping, check with local stores regarding availability and preorder policies.

Sign Up for Notifications

Many retailers and websites offer the option to sign up for notifications when the preorder goes live. If you’re worried about missing out due to high demand, this can be a useful option.

  • Visit Retailer Sites: Look for a “Notify Me” or “Email Alerts” option and enter your email to stay informed.
  • Use PlayStation Alerts: Sign up for notifications directly through Sony to be one of the first to know when preorders are available.

Prepare for High Demand

Preordering the PS5 Pro is expected to be competitive, with high demand likely to result in quick sellouts, just as with the initial release of the original PS5. To maximize your chances of securing a preorder:

  • Act Quickly: Be prepared to place your order as soon as preorders open. Timing is key, as stock can run out within minutes.
  • Double-Check Payment Information: Ensure your credit card or payment method is ready to go. Any delays during the checkout process could result in losing your spot.
  • Stay Informed: Monitor PlayStation and retailer websites for updates on restocks or additional preorder windows.

Final Thoughts

The PlayStation 5 Pro is set to take gaming to the next level with its enhanced performance, graphics, and new features. Canadian gamers should be ready to act fast when preorders open on September 26, 2024, to secure their console ahead of the holiday season. Whether you choose to preorder through PlayStation’s official website or your preferred retailer, following the steps outlined above will help ensure a smooth and successful preorder experience.

For more details on the PS5 Pro and to preorder, visit direct.playstation.com or stay tuned to updates from major Canadian retailers.

Introducing the PlayStation 5 Pro: The Next Evolution in Gaming

Since the PlayStation 5 (PS5) launched four years ago, PlayStation has continuously evolved to meet the demands of its players. Today, we are excited to announce the next step in this journey: the PlayStation 5 Pro. Designed for the most dedicated players and game creators, the PS5 Pro brings groundbreaking advancements in gaming hardware, raising the bar for what’s possible.

Key Features of the PS5 Pro

The PS5 Pro comes equipped with several key performance enhancements, addressing the requests of gamers for smoother, higher-quality graphics at a consistent 60 frames per second (FPS). The console’s standout features include:

  • Upgraded GPU: The PS5 Pro’s GPU boasts 67% more Compute Units than the current PS5, combined with 28% faster memory. This allows for up to 45% faster rendering speeds, ensuring a smoother gaming experience.
  • Advanced Ray Tracing: Ray tracing capabilities have been significantly enhanced, with reflections and refractions of light being processed at double or triple the speed of the current PS5, creating more dynamic visuals.
  • AI-Driven Upscaling: Introducing PlayStation Spectral Super Resolution, an AI-based upscaling technology that adds extraordinary detail to images, resulting in sharper image clarity.
  • Backward Compatibility & Game Boost: More than 8,500 PS4 games playable on the PS5 Pro will benefit from PS5 Pro Game Boost, which stabilizes or enhances their performance. Select PS4 titles will also see improved resolution.
  • VRR & 8K Support: The PS5 Pro supports Variable Refresh Rate (VRR) and 8K gaming for the ultimate visual experience, while also launching with the latest wireless technology, Wi-Fi 7, in supported regions.

Optimized Games & Patches

Game creators have quickly embraced the new technology that comes with the PS5 Pro. Many games will receive free updates to take full advantage of the console’s new features, labeled as PS5 Pro Enhanced. Some of the highly anticipated titles include:

  • Alan Wake 2
  • Assassin’s Creed: Shadows
  • Demon’s Souls
  • Dragon’s Dogma 2
  • Final Fantasy 7 Rebirth
  • Gran Turismo 7
  • Marvel’s Spider-Man 2
  • Ratchet & Clank: Rift Apart
  • Horizon Forbidden West

These updates will allow players to experience their favorite games at a higher fidelity, taking full advantage of the console’s improved graphics and performance.

 

 

Design & Compatibility

Maintaining consistency within the PS5 family, the PS5 Pro retains the same height and width as the original PS5 model. Players will also have the option to add an Ultra HD Blu-ray Disc Drive or swap console covers when available.

Additionally, the PS5 Pro is fully compatible with all existing PS5 accessories, including the PlayStation VR2, DualSense Edge, Pulse Elite, and Access controller. This ensures seamless integration into your current gaming setup.

Pricing & Availability

The PS5 Pro will be available starting November 7, 2024, at a manufacturer’s suggested retail price (MSRP) of:

  • $699.99 USD
  • $949.99 CAD
  • £699.99 GBP
  • €799.99 EUR
  • ¥119,980 JPY

Each PS5 Pro comes with a 2TB SSD, a DualSense wireless controller, and a copy of Astro’s Playroom pre-installed. Pre-orders begin on September 26, 2024, and the console will be available at participating retailers and directly from PlayStation via direct.playstation.com.

The launch of the PS5 Pro marks a new chapter in PlayStation’s commitment to delivering cutting-edge gaming experiences. Whether players choose the standard PS5 or the PS5 Pro, PlayStation aims to provide the best possible gaming experience for everyone.

Preorder your PS5 Pro and step into the next generation of gaming this holiday season.

Google Unveils AI-Powered Pixel 9 Lineup Ahead of Apple’s iPhone 16 Release

Google has launched its next generation of Pixel phones, setting the stage for a head-to-head competition with Apple as both tech giants aim to integrate more advanced artificial intelligence (AI) features into their flagship devices. The unveiling took place near Google’s Mountain View headquarters, marking an early debut for the Pixel 9 lineup, which is designed to showcase the latest advancements in AI technology.

The Pixel 9 series, although a minor player in global smartphone sales, is a crucial platform for Google to demonstrate the cutting-edge capabilities of its Android operating system. With AI at the core of its strategy, Google is positioning the Pixel 9 phones as vessels for the transformative potential of AI, a trend that is expected to revolutionize the way people interact with technology.

Rick Osterloh, Google’s senior vice president overseeing the Pixel phones, emphasized the company’s commitment to AI, stating, “We are obsessed with the idea that AI can make life easier and more productive for people.” This echoes the narrative Apple is likely to push when it unveils its iPhone 16, which is also expected to feature advanced AI capabilities.

The Pixel 9 lineup will be the first to fully integrate Google’s Gemini AI technology, designed to enhance user experience through more natural, conversational interactions. The Gemini assistant, which features 10 different human-like voices, can perform a wide array of tasks, particularly if users allow access to their emails and documents.

In an on-stage demonstration, the Gemini assistant showcased its ability to generate creative ideas and even analyze images, although it did experience some hiccups when asked to identify a concert poster for singer Sabrina Carpenter.

To support these AI-driven features, Google has equipped the Pixel 9 with a special chip that enables many AI processes to be handled directly on the device. This not only improves performance but also enhances user privacy and security by reducing the need to send data to remote servers.

Google’s aggressive push into AI with the Pixel 9 comes as Apple prepares to unveil its iPhone 16, which is expected to feature its own AI advancements. However, Google’s decision to offer a one-year free subscription to its advanced Gemini Assistant, valued at $240, may pressure Apple to reconsider any plans to charge for its AI services.

The standard Pixel 9 will be priced at $800, a $100 increase from last year, while the Pixel 9 Pro will range between $1,000 and $1,100, depending on the model. Google also announced the next iteration of its foldable Pixel phone, priced at $1,800.

In addition to the new Pixel phones, Google also revealed updates to its Pixel Watch and wireless earbuds, directly challenging Apple’s dominance in the wearable tech market. These products, like the Pixel 9, are designed to integrate seamlessly with Google’s AI-driven ecosystem.

Google’s event took place against the backdrop of a significant legal challenge, with a judge recently ruling that its search engine constitutes an illegal monopoly. This ruling could lead to further court proceedings that may force Google to make significant changes to its business practices, potentially impacting its Android software or other key components of its $2 trillion empire.

Despite these legal hurdles, Google is pressing forward with its vision of an AI-powered future, using its latest devices to showcase what it believes will be the next big leap in technology. As the battle for AI supremacy heats up, consumers can expect both Google and Apple to push the boundaries of what their devices can do, making the choice between them more compelling than ever.
