
Apple unveils new Macs and M3 chips as PC industry slump eases


By Stephen Nellis

SAN FRANCISCO (Reuters) - Apple on Monday introduced new MacBook Pro and iMac computers and three new chips to power them, with the company saying it had redesigned its graphics processing unit (GPU), a key part of the chip where Nvidia dominates the market.

The new computers and the M3, M3 Pro and M3 Max chips were unveiled at an online event heavily focused on professional users.

In the U.S., the 14-inch MacBook Pro laptop starts at $1,599 and a 16-inch version at $2,499. The new iMac desktop with the M3 family of chips starts at $1,299. Some models will be available next week, while others will not ship until later in November.

Apple has seen a revitalization in its Mac business, roughly doubling its market share to nearly 11% since 2020 when it parted ways with Intel and started using its own custom-designed chips as the brains of the machines, according to preliminary data from IDC.

As part of the focus on business users on Monday, it showed off a new secure screen-sharing feature that would let them access their machines from remote locations.

The company’s custom chips, which use design technology from Arm Holdings, have given its Macs better battery life and, for some tasks, better performance than machines using Microsoft’s Windows operating system.

Unlike other laptop makers that might combine a central processing unit (CPU) from Intel with a GPU from Nvidia, Apple has combined both parts in its Apple silicon chips, which the company claims gives it better performance than its rivals.

Apple’s shakeup of the market has spurred Qualcomm to redouble its efforts to make Arm-based chips for Windows, announcing plans last week to release a chip that is both faster and more energy efficient than some Apple offerings. Reuters last week reported that Nvidia also plans to jump into the PC market as early as 2025.

CORPORATE BUYING

Apple aimed the new machines squarely at designers, musicians and software developers, at one point highlighting that the way it uses memory can be useful to artificial intelligence researchers, whose chatbots and other creations are often constrained by how much data can be held in the computer’s memory.

Apple also tweaked its overall lineup of computers in ways that could change the behavior of corporate buyers.

While slashing the U.S. price of the new 14-inch MacBook Pro from $1,999 to $1,599, Apple appeared to have eliminated a cheaper $1,299 13-inch model of its MacBook Pro that was a big seller to businesses, said Ben Bajarin, chief executive and principal analyst at Creative Strategies.

That move will likely simplify the choice between the company’s model lines: Apple’s productivity-oriented MacBook Air models, which top out at $1,299, or the MacBook Pro models that now start at $1,599.

At Apple, the Mac hit $40.18 billion in revenue for its fiscal 2022, or about 11% of its revenue. While that was up 14% from the previous fiscal year, sales this year have slowed along with the rest of the PC industry, which has suffered a post-pandemic slump.

Apple said the new chips would be the first for laptops and desktops that use 3 nanometer manufacturing technology, which will give the chips better performance for each watt of electricity used.

Apple did not name who is making the chips, but analysts believe it is Taiwan Semiconductor Manufacturing Co, which uses the same technology to make chips for the top-end iPhone 15 models.

Throughout the event, Apple executives compared the performance of the new MacBooks and iMac machines to older Apple machines with chips from Intel, playing up how much speed customers would notice by upgrading to devices with Apple’s own chips.

(Reporting by Stephen Nellis in San Francisco; Additional reporting by Shivani Tanna and Jahnavi Nidumolu in Bengaluru and Peter Henderson and Sayantani Ghosh in San Francisco; Editing by Marguerita Choy and Jamie Freed)



The Set Piece Strategy: Tackling Complexity in Serverless Applications


Key Takeaways

  • Decompose complexity: Break down issues into parts to effectively address each one.
  • Develop sustainable applications by leveraging the features offered by serverless technology, such as optimization, robust availability, and scalability.
  • Adopt Domain-Driven Design and a microservices-based architecture: These techniques foster team independence and streamline development processes.
  • Incorporate best practices for software delivery into serverless development by emphasizing modularity, efficiency, and observability.
  • Encourage team autonomy: Empower teams by equipping them with the tools and knowledge to manage their microservices independently.

Most of you should be familiar with the movie Mamma Mia! Here We Go Again. There are so many things in this movie to entertain us: vibrant colors, locations, sun, water, an all-star cast, etc. If you think of moviemaking, it has many stages to go through. Everything seems simple to us, but someone needs to develop a story, write a script, find the producer, bring a director on board, find the stars, location, costumes, etc. It’s a complicated process.

When it is packaged together, we could call it a monolith. However, a movie is not just one big blob; first, there is an introduction. Often, there is an interval, hyped up by the story built before, in a manner that leaves you hanging on the suspense. Then there are the credits. At this point, the movie has been broken into a few parts. Then, within each part are hundreds of scenes, simple and complex, all knitted together to bring us the entire movie experience.

Complexity is everywhere, not just in moviemaking. It’s in life; it’s in software engineering as well. It is a fact. And the way we tackle this complexity is essential. In the book A Philosophy of Software Design, the author states that the fundamental problem in engineering is problem decomposition: how we divide a problem into pieces. This is so true everywhere. Regarding the film, for example, we have a vision of the entire thing. Then, we break it into different parts so we can focus on each.

I usually use this analogy: let’s say that you watch the night sky. It’s a blanket of dots—that’s it. No matter how often you look at it, you still get the same picture. Now, get a telescope and zoom into one bright dot. What you see is a blur at first, then a galaxy. You keep going, and you find suns and star patterns behind that galaxy. Then, a planet, a cloud formation, and a landscape at some point. This is the way engineers should approach a complex problem. They need to know how to enter the problem, see the big picture first, and then keep going.

Set Pieces in Software Delivery

Usually, when planning a movie, the director identifies areas of the film called set pieces. A car chase, a loud sequence, or a long drive are some examples. They identify these parts of the movie so they can plan and film accordingly. They can do the filming rehearsal, similar to what we do in testing. This is the concept behind set pieces. Why does it matter? Because it has specific characteristics that we can apply to engineering. A set piece is a part of the whole picture.

Similarly, in software engineering, you take part of a big use case, focusing on something you can manage. Then, you can plan, rehearse, or test each part. Finally, you bring everything together to make the whole.

This approach is not specific to software engineering or serverless architectures. However, there are three reasons why we can use this approach to improve serverless applications. First, the characteristics of serverless technology allow us to do that. Second, we can use proven and familiar industry patterns and practices. Finally, we can consider application sustainability—I’ll discuss it later.

Serverless Characteristics

Let’s take a deeper look into serverless characteristics. It’s a cloud computing model, part of the cloud setup. There is no server management; we pay only for the computing and storage we use, and we get autoscaling and high availability. The service provider takes care of these things, so you don’t need to consider them. It is an ecosystem of managed services, so we can optimize things at a granular level when architecting a serverless application. This is also why we can iteratively and incrementally develop our applications.

At the same time, this ecosystem brings diversity into a team. Teams used to be a few engineers doing programming. Serverless architectures changed that dynamic because programming is only part of deploying a serverless application. You need to know how to knit the services together (infrastructure as code), provision a database table, manage queues, and set up your API authentication. There are no separate individual experts for each of these; it’s all part of the engineer’s day-to-day job. That’s why serverless brings a diversity of skills into a team.
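As a rough illustration of that day-to-day work, here is a minimal sketch, using TypeScript and the AWS CDK, of “knitting the services together” as infrastructure as code: a table, a queue, a function with explicit memory and timeout settings, and an API with authentication. The resource names, the handler path, and the settings are assumptions chosen for the example, not anything prescribed by this article.

```typescript
import { Stack, StackProps, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as sqs from "aws-cdk-lib/aws-sqs";
import * as apigateway from "aws-cdk-lib/aws-apigateway";

export class RewardsServiceStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Provision a database table: pay-per-request, no capacity planning.
    const table = new dynamodb.Table(this, "RewardsTable", {
      partitionKey: { name: "rewardId", type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Manage a queue for asynchronous work.
    const redemptionQueue = new sqs.Queue(this, "RedemptionQueue", {
      visibilityTimeout: Duration.seconds(60),
    });

    // The compute: a function with granular memory and timeout settings.
    const redeemFn = new lambda.Function(this, "RedeemFn", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "redeem.handler",
      code: lambda.Code.fromAsset("dist/redeem"), // assumed build output path
      memorySize: 256,
      timeout: Duration.seconds(10),
      environment: {
        TABLE_NAME: table.tableName,
        QUEUE_URL: redemptionQueue.queueUrl,
      },
    });
    table.grantReadWriteData(redeemFn);
    redemptionQueue.grantSendMessages(redeemFn);

    // Set up API authentication (IAM here; Cognito or a custom authorizer also work).
    const api = new apigateway.RestApi(this, "RewardsApi");
    api.root
      .addResource("redemptions")
      .addMethod("POST", new apigateway.LambdaIntegration(redeemFn), {
        authorizationType: apigateway.AuthorizationType.IAM,
      });
  }
}
```

The point is not the particular services, but that provisioning, wiring, and permissions all live in the same codebase the team already owns, which is exactly why the skill set of a serverless team broadens.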

Besides granularly and individually optimizing a serverless application (API quotas, database scaling, memory allocation, function timeouts, etc.), we can also optimize it at depth—which, in this context, means optimizing the application considering the relative importance of its functionalities. Take three data flow pipelines as an example. Say some data gets dropped into the source and goes through the pipeline. At the top, you have price-changes data, and at the bottom, product reviews. Price changes are critical data, so you want that data flowing quickly. Product reviews, however, don’t need to appear for a day or two or even a week. That means in this architecture you can adjust the resources you consume and architect to reduce cost—which translates into sustainability.
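To make “optimizing at depth” a little more concrete, here is a hedged sketch, in the same TypeScript/CDK style, of how the two pipelines might be tuned differently. The queue names, batch sizes, and delays are invented for illustration; the only point is that the critical pipeline is tuned for latency and the non-urgent one for cost.

```typescript
import { Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as sqs from "aws-cdk-lib/aws-sqs";
import { SqsEventSource } from "aws-cdk-lib/aws-lambda-event-sources";

// Wire two data pipelines of different importance onto existing processing functions.
export function wirePipelines(
  scope: Construct,
  priceChangesFn: lambda.Function,
  productReviewsFn: lambda.Function
) {
  // Critical data: deliver immediately and process in tiny batches.
  const priceQueue = new sqs.Queue(scope, "PriceChangesQueue");
  priceChangesFn.addEventSource(new SqsEventSource(priceQueue, { batchSize: 1 }));

  // Non-urgent data: delay delivery and batch aggressively, trading latency for cost.
  const reviewsQueue = new sqs.Queue(scope, "ProductReviewsQueue", {
    deliveryDelay: Duration.minutes(15),
  });
  productReviewsFn.addEventSource(
    new SqsEventSource(reviewsQueue, {
      batchSize: 100,
      maxBatchingWindow: Duration.minutes(5),
    })
  );
}
```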

Domain-Driven Design and Microservices

Let’s look at domain-driven design and microservices. With the advent of DDD, we started splitting our organization into domains and subdomains, breaking it down for more visibility and control. With that, we now had boundaries, or bounded contexts. Guarding these boundaries is the most crucial aspect of successfully developing with serverless technologies as a team or organization.

When discussing boundaries, we also need to discuss team topologies: the structure of different teams, like stream-aligned teams or platform teams. If we focus on stream-aligned teams, then once we have a boundary, we can assign a team to guard it. They are the custodians of the bounded context. We break down the organization into domains and subdomains. We identify the boundary within which, in DDD terms, the ubiquitous language—the common language—is spoken, and we now have a domain model. As a team, we are responsible for protecting the domain model. Who takes over from here? Microservices, because the team can now build microservices and applications that reside within their boundary. We will see how they interact later on.

This is why it’s essential, whether we use serverless or not, to capitalize on the proven practices and patterns in the industry as they evolve, to make use of them, and to get benefits. DDD came in 20-odd years ago. Microservices came later. Team topologies, just recently. We can still bring everything together and work harmoniously to make things happen. Domains, team autonomy, boundaries, microservices, contracts—these things should be in the mind of everyone who architects serverless applications.

Sustain Your Applications

Let’s talk about serverless application sustainability. When we talk about sustainability, most people think about green initiatives. Sustainability, as a definition, is very generic: we keep something going with a little nourishment so it doesn’t die off. This is precisely the principle we apply to our planet; we want it to keep going for future generations. But how does it relate to serverless or software engineering? Let’s go back to the old waterfall model, which I’m sure many of you have come across. Typically, it starts with the requirements and then continues through different siloed phases, often taking weeks, months, or even years to complete. After the application is released, it gets pushed into some maintenance mode.

Let’s think differently when it comes to serverless, and more specifically to what I call sustaining a serverless application—you start with an idea, design your application, build it, deploy it to the cloud, and then look after it. But it’s not finished yet; you must keep it going. You start with a minimum viable product, but your goal is to make it the most valuable product. For that, you need iteration. You need to iterate. When you do that, what you’re doing is sustaining your product. That is the different meaning of sustainability in our context.

The cloud is basically composed of three things: computing, storage, and networking. The “serverless” part is already in the picture because it’s part of the cloud. In serverless development, we use the cloud to build products using serverless technologies, using the processes to allow us to operate in the cloud successfully. This is what I call a sustainability triangle in serverless.

We have the products, the processes, and the cloud, forming a sustainability triangle. In this triangle, the processes are what allow us to deploy our products sustainably and operate sustainably in the cloud. And while a sustainable product can mean many things, it has three essential aspects: modularity, extensibility, and observability. These aspects are also interdependent. For example, if we have a modular product, it can likely be extended. And if we have better visibility into what’s happening in our modular service, we can sustain it longer. That’s the mindset we need when we work with serverless development and the services we build.
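To make those three aspects a little more tangible, here is a minimal sketch of a single serverless function written with them in mind: the handler is a thin adapter, the domain logic sits in its own easily extended module, and every request emits a structured log line. The function and field names are assumptions for illustration.

```typescript
// Pure domain logic: modular, testable in isolation, and easy to extend.
export interface RedemptionRequest {
  rewardId: string;
  customerId: string;
}

export function redeem(request: RedemptionRequest): { accepted: boolean; reason?: string } {
  if (!request.rewardId || !request.customerId) {
    return { accepted: false, reason: "missing identifiers" };
  }
  return { accepted: true };
}

// The serverless adapter: parse the request, delegate to the domain, observe the outcome.
export const handler = async (event: { body?: string }) => {
  const request = JSON.parse(event.body ?? "{}") as RedemptionRequest;
  const result = redeem(request);

  // Structured, machine-readable log line for whatever observability tooling is in place.
  console.log(
    JSON.stringify({
      level: "info",
      service: "rewards-redemption",
      rewardId: request.rewardId,
      accepted: result.accepted,
      reason: result.reason,
    })
  );

  return { statusCode: result.accepted ? 200 : 400, body: JSON.stringify(result) };
};
```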

Sustainable processes could be many things. The people behind the development—the developers and engineers—use the processes, with the cloud as their operating platform, to gain an advantage in sustaining the products and operating them sustainably. These are three different ways of looking at things, or three different aspects of sustainability, and they should be kept in mind when architecting, because the cloud is our operating environment and shapes how we operate—that’s where the cloud aspect comes in. Some of these processes, for example, are lean principles that enhance sustainability. Then there is being pragmatic with iterative or agile development: starting with something small, using the MVP mindset, and moving forward—the typical agile cycle. Then automation, having the DevOps mindset, and continuous refactoring.

With modern technologies, cloud providers release services and features daily. That means we can’t stand still after building an application, so we should be able to continuously evaluate, refactor, and improve things for the future. We are enhancing or sustaining as we go.

Something I always recommend to engineers is to architect solutions with sustainability in mind. This is very important, especially in the serverless landscape. Sustainability in the cloud is a shared responsibility in serverless architecture: all cloud providers come with certain sustainability aspects, and as customers or consumers, we are responsible for architecting our solutions to gain the benefits of sustainability, with that contribution flowing through the provider to the wider world. This is, again, an essential aspect of architecting serverless applications.

Set Piece in Practice

Let’s put everything into practice. Take a small reward system as an example. You go to an e-commerce website; you have rewards, vouchers, or codes you want to redeem. The website uses a content management system to load the reward data. It typically has a backend service to validate the code and make the redemption. Then, there may be a third-party application where some data is stored as a ledger. Let’s say those two are third parties, and we don’t focus on them too much. Our domain here is e-commerce; it could be different in your case. Let’s pretend, for argument’s sake, that the subdomain is the customer, and we have a bounded context that’s important: rewards. That’s where the architecture diagram comes in.

A traditional microservices approach usually takes one bounded context and builds one big monolithic microservice, primarily because of containerization. However, with all the characteristics of serverless that we saw earlier, we can think differently. At this scale, we often need to consider whether a particular piece of the application or service changes a lot. For example, in reward redemption, business logic changes frequently because business rules change. So why should we deploy the entire thing every time if only one small part is changing?

This is where we can introduce the thinking of identifying the pieces. Let’s leave a few of these things out and look for areas we can decouple and build as separate pieces. For example, find core services like the backend service. Then, identify the data flows. Identify those areas so they can be developed as separate microservices with different interaction patterns with others in the system. That is one way of looking at the problem. Then there is the anti-corruption layer (ACL): the protective measures that guard your domain model. Suppose the CMS data model is different from the rewards bounded context’s model. In that case, the ACL does the transformation, translation, and pushing of the data, so that if you replace the CMS, or even the CRM, you don’t need to change much within the core model.
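As a minimal sketch of that translation, the ACL can be as small as a mapper function sitting at the edge of the rewards bounded context. The CMS payload shape and the domain field names below are assumptions for illustration, not taken from any real system.

```typescript
// External model as the CMS happens to deliver it (assumed shape).
interface CmsRewardEntry {
  entry_id: string;
  promo_label: string;
  discount_pct: string; // the CMS stores numbers as strings
  valid_until: string;  // ISO date string
}

// Internal domain model, expressed in the bounded context's ubiquitous language.
interface Reward {
  rewardId: string;
  title: string;
  discountPercent: number;
  expiresAt: Date;
}

// The anti-corruption layer: translate, validate, and reject anything that would corrupt the model.
export function toDomainReward(entry: CmsRewardEntry): Reward {
  const discountPercent = Number(entry.discount_pct);
  if (Number.isNaN(discountPercent) || discountPercent <= 0 || discountPercent > 100) {
    throw new Error(`Invalid discount for CMS entry ${entry.entry_id}`);
  }
  return {
    rewardId: entry.entry_id,
    title: entry.promo_label,
    discountPercent,
    expiresAt: new Date(entry.valid_until),
  };
}
```

Because the core model only ever sees Reward, replacing the CMS later means rewriting this mapper rather than the domain logic.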

How do we piece these things together? We have a bounded context, and then we put some microservices in place. These are all smaller microservices, and they all connect to each other. But how do they connect? This is where engineers usually struggle. If we look back at the filmmaking process, how do we combine hundreds of scenes and sequences of scenes? This is mainly done with dialogue and background music carrying over from one scene to the next. What do we have in the world of microservices? You know the answer: APIs, events, and messages. This is why, even when you break these things into different pieces, the system still works beautifully as one application.
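For example, rather than one microservice calling another’s API directly, the redemption microservice might publish a domain event that the other microservices subscribe to. The sketch below assumes Amazon EventBridge and the AWS SDK for JavaScript v3; the bus name and event shape are illustrative assumptions, not details from this article.

```typescript
import { EventBridgeClient, PutEventsCommand } from "@aws-sdk/client-eventbridge";

const client = new EventBridgeClient({});

// Publish a domain event; reporting, email, and other microservices can subscribe to it
// without the redemption service knowing they exist.
export async function publishRewardRedeemed(rewardId: string, customerId: string) {
  await client.send(
    new PutEventsCommand({
      Entries: [
        {
          EventBusName: "rewards-bus",  // assumed shared event bus name
          Source: "rewards.redemption", // the owning microservice
          DetailType: "RewardRedeemed", // the event others subscribe to
          Detail: JSON.stringify({
            rewardId,
            customerId,
            redeemedAt: new Date().toISOString(),
          }),
        },
      ],
    })
  );
}
```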

If we add these aspects, then we can redraw the application diagram as above. We identify the synchronous API invocation paths, and where we can, use asynchronous or event-driven communication. These are some ways of thinking about architecture when dealing with serverless applications and taking advantage of its characteristics and patterns.

Serverless Microservices Approach

Typically, this is how your rewards system will look in a serverless world. The important thing to notice is that all microservices exist inside your bounded context. They don’t cross the boundaries. That’s where communication and contracts come in. Then, you can have independent deployment pipelines going happily to production without impacting anything else. This is the power of breaking things down and making them more manageable for everyone, including engineers and architects. For that, we need an autonomous team. They own the microservices within the bounded context. That’s important; that’s the ownership. Everything that happens is their responsibility.

You need microservices to deal with reports or data generation, to send emails to customers, to receive feedback, and so on. These are the areas we can easily decouple. When we build our application, we don’t need to start with all these things simultaneously. Email can come in later, or you can do the report generation once you know what data this bounded context deals with. Then, of course, the autonomous team operates in its own cloud account. This is important. I think many organizations are still going through this phase, and not many have achieved it yet. This is crucial for the velocity and flow of the team. They have their own account and their own repository. They don’t deal with anything outside the boundaries. If you want to talk to their services, there is an API, the event flows, the event broker, or the common event bus. That is what we aim to build and architect with serverless.

In Summary

When we look at application architecture through the serverless lens, we must think about its unique aspects. Take advantage of the serverless architecture characteristics; make use of them. Use the architectural patterns. Don’t be shy about introducing anti-corruption layers or microservices to other engineers around you. Let them learn. More importantly, encourage team autonomy.

A couple of months ago, there was an engineer who took over a particular piece of new work. He was going to create an architecture diagram. He had no clue how to tackle it, so he started drawing APIs and things. I asked, “How do you know you need an API here?” He said the system has an API—that’s just how that system works. I replied, “Why don’t you start with something like domain storytelling? Then, you draw the picture as a storyboard. Domain Storytelling is a book you can follow. It’s nice to envision it in that way. Then you explain it to everyone, stakeholders. If you see something good for the feature or the service that you’re building, you can slowly think about the design and architecture.” Challenge engineers to confront complexity. Feed them all the sound patterns and practices.



Nintendo has a cunning plan to beat the inevitable Switch 2 scalpers – make an absolute buttload of consoles


Nintendo has outlined its plans to help ensure the Switch’s successor isn’t affected by scalping as badly as its predecessor was at launch. The big idea: make sure it produces a lot more consoles, something that should be a lot more feasible now that the company says it’s resolved some component supply issues.

For those who might not have been in the market for a Switch at launch, the console was in high demand across major territories, with demand outstripping supply. This was a rich feeding ground for scalpers, who bought up consoles in bulk with plans to sell them on at inflated prices. The hope seems to be that, by bumping up production (without the worry of factors like the COVID chip shortages from 2021 onward), Nintendo can produce enough consoles to make a scalping market redundant.

As translated by IGN, Nintendo president Shuntaro Furukawa said the following on the matter in a recent investor Q&A: “As a countermeasure against resale, we believe that the most important thing is to produce a sufficient number to meet customer demand, and this idea has not changed since last year”.

“In addition to this, we are considering whether there are any other measures that can be taken to the extent allowed by laws and regulations, taking into account the circumstances of each region.

“Although we were unable to produce sufficient quantities of Nintendo Switch hardware last year and the year before due to a shortage of semiconductor components, this situation has now been resolved. At this time, we do not believe that the shortage of components will have a significant impact on the production of the successor model.”

Fingers crossed this curbs the problem! The Nintendo Switch 2 is anticipated to rock the industry when it launches, off the back of the original Nintendo Switch – which has consistently sold well since its release in 2017 due to its unique position in the console market and a steady supply of high quality, Switch-only games. Whether or not the Switch 2 can overshadow the original Switch’s sales, especially in the long haul, remains the big question leading into 2025.



Mortal Kombat 1 player wins $565 at tournament, smashes $3000 light


It was a wonderful weekend full of fighting game action thanks to CEO 2024, held live in Daytona Beach, Florida. However, amidst many a tense match in Street Fighter 6, Tekken 8, and more, it was a Mortal Kombat 1 player who shattered expectations with an over-eager pop off.

Dyloch, one of the best General Shao players in the world and a multiple-time major tournament winner, managed to take home the first place belt at CEO for Mortal Kombat 1, as well as a tasty $565 in prize money. However, a few matches prior, in the winners’ final, he celebrated a victory by picking up his chair and throwing it over the wrestling ring ropes surrounding the console setup. This chair, airborne and clearly in a juggle state, crashed down onto an Elation Chorus Line 16 LED light. You can buy one right now for the low, low price of $2,992.

In response, tournament organizer Alex Jebailey posted several tweets, including one where he (perhaps jokingly) asks for Dyloch’s PayPal because “somebody’s paying for that broken light fixture and it’s not me”, following this up with a declaration: “If one more person pops off throwing anything you will be banned from any event I ever do. This is a final warning to anyone in the future. Do not throw things”. Back in 2021, Jebailey took similar action against an attendee who pulled a fire alarm at the event, forcing a mass evacuation and a huge rock paper scissors exhibition.

It’s unclear whether Dyloch will actually end up having to pay for a new light, leaving Daytona Beach with a net -$2,427, whether his sponsor will foot the bill, or whether insurance for the light will cover it. Neither Dyloch nor Jebailey has elaborated on the matter further since it happened.

This isn’t actually the first time a chair has caused some controversy in the fighting game space in recent weeks. It was only back in May that Hungrybox, legendary Smash Bros competitor and anti-crab advocate, threw and broke a hotel chair during a match at Get On My Level. The player has a legacy of throwing furniture (see two more examples here), but thankfully no history of major equipment damage.

These actions, often defended as hype or part of the energy of a clutch win, are all well and good if you don’t have to pay for ’em. But with this latest example of chair-on-lighting violence putting a rather large dollar sign next to the act, perhaps we can go back to the good old days of calling your opponent a bum, giving them the finger, or high-fiving all your friends in the crowd instead.

Let us know what you think of this below. Should things be left as is, with responsibility left in the hands of professional video game players, or should certain players be on a list and have their seats bolted down prior to tournament matches? Either way, think of the tournament organizers, who often have to clean up after such messes.
