The Set Piece Strategy: Tackling Complexity in Serverless Applications

Key Takeaways

  • Decompose complexity: Break problems into parts so you can address each one effectively.
  • Build sustainable applications: Leverage the characteristics of serverless technology, such as granular optimization, high availability, and scalability.
  • Adopt Domain-Driven Design and a microservices-based architecture: These techniques foster team independence and streamline development processes.
  • Incorporate software delivery best practices into serverless development: Emphasize modularity, extensibility, and observability.
  • Encourage team autonomy: Empower teams with the tools and knowledge to manage their microservices independently.

Most of you should be familiar with the movie Mamma Mia! Here We Go Again. There are so many things in this movie to entertain us: vibrant colors, locations, sun, water, an all-star cast, etc. If you think of moviemaking, it has many stages to go through. Everything seems simple to us, but someone needs to develop a story, write a script, find the producer, bring a director on board, find the stars, location, costumes, etc. It’s a complicated process.

When it is packaged together, we could call a movie a monolith. However, a movie is not just one big blob. First, there is an introduction. Often, there is an interval, hyped up by the story built so far in a way that leaves you hanging in suspense. Then there are the credits. At this point, the movie has been broken into a few parts, and within each part are hundreds of scenes, simple and complex, all knitted together to bring us the entire movie experience.

Complexity is everywhere, not just in moviemaking. It's in life; it's in software engineering as well. It is a fact, and the way we tackle this complexity is essential. In A Philosophy of Software Design, John Ousterhout states that the most fundamental problem in computer science is problem decomposition: how we divide a problem into pieces. This is true everywhere. With a film, for example, we have a vision of the entire thing; then we break it into different parts so we can focus on each.

I usually use this analogy: let’s say that you watch the night sky. It’s a blanket of dots—that’s it. No matter how often you look at it, you still get the same picture. Now, get a telescope and zoom into one bright dot. What you see is a blur at first, then a galaxy. You keep going, and you find suns and star patterns behind that galaxy. Then, a planet, a cloud formation, and a landscape at some point. This is the way engineers should approach a complex problem. They need to know how to enter the problem, see the big picture first, and then keep going.

Set Pieces in Software Delivery

Usually, when planning a movie, the director identifies parts of the film called set pieces: a car chase, a loud sequence, or a long drive, for example. They identify these parts so they can plan and film them accordingly, rehearsing each one much as we rehearse through testing. This is the concept behind set pieces. Why does it matter? Because a set piece has specific characteristics that we can apply to engineering: it is a self-contained part of the whole picture.

Similarly, in software engineering, you take part of a big use case, focusing on something you can manage. Then, you can plan, rehearse, or test each part. Finally, you bring everything together to make the whole.

This approach is not specific to software engineering or serverless architectures. However, there are three reasons why we can use this approach to improve serverless applications. First, the characteristics of serverless technology allow us to do that. Second, we can use proven and familiar industry patterns and practices. Finally, we can consider application sustainability—I’ll discuss it later.

Serverless Characteristics

Let’s take a deeper look at serverless characteristics. Serverless is a cloud computing model: there is no server management, we pay only for the computing and storage we consume, and we get autoscaling and high availability. The service provider takes care of these things, so you don’t need to think about them. It is an ecosystem of managed services, so we can optimize things at a granular level when architecting a serverless application. This is also why we can develop our applications iteratively and incrementally. At the same time, this ecosystem brings diversity into a team. Teams used to be a few engineers doing programming. Serverless architectures changed that dynamic, because programming is only part of deploying a serverless application. You need to know how to knit the services together (infrastructure as code), provision a database table, manage queues, and set up your API authentication. There are no siloed specialists for each of these; it’s all part of an engineer’s day-to-day job. That’s why serverless brings a diversity of skills into a team.

Besides optimizing a serverless application granularly and individually (API quotas, database scaling, memory allocation, function timeouts, etc.), we can also optimize it in depth, which in this context means optimizing the application according to the relative importance of its functionalities. Take the three data flow pipelines above as an example. Say some data gets dropped into the source and goes through a pipeline. At the top, you see price-change data, and at the bottom, product reviews. Price changes are critical, so you want that data flowing quickly. Product reviews, however, don’t need to appear for a day or two, or even a week. That means you can adjust the resources each pipeline consumes to match its importance, which reduces cost and translates into sustainability.
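To make the idea concrete, here is a minimal Python sketch of tuning pipelines in depth. The pipeline names, batch sizes, and memory figures are all invented for illustration; in a real deployment these values would map onto concrete settings such as queue batch windows or function memory allocation.

```python
# Hypothetical "optimize in depth" profiles: each pipeline gets resources
# proportional to how urgent its data is. All names and numbers are invented.
PIPELINE_PROFILES = {
    # Critical data: small batches, short wait, generous memory.
    "price-changes":   {"batch_size": 10,  "max_wait_seconds": 5,    "memory_mb": 1024},
    # Non-critical data: large batches, long wait, minimal memory.
    "product-reviews": {"batch_size": 500, "max_wait_seconds": 3600, "memory_mb": 256},
}

def consumer_settings(pipeline: str) -> dict:
    """Return the resource profile for a pipeline, defaulting to the
    cheapest (least urgent) configuration for unknown pipelines."""
    return PIPELINE_PROFILES.get(pipeline, PIPELINE_PROFILES["product-reviews"])
```

Defaulting unknown pipelines to the cheapest profile keeps cost the baseline and makes speed an explicit, per-pipeline decision.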

Domain-Driven Design and Microservices

Let’s look at domain-driven design and microservices. With the advent of DDD, we started splitting our organizations into domains and subdomains, breaking them down for more visibility and control. With that, we gained boundaries, or bounded contexts. Guarding those boundaries is the most crucial aspect of a team or organization successfully developing with serverless technologies.

When discussing boundaries, we also need to discuss team topologies: the structure of different teams, like stream-aligned teams or platform teams. If we focus on stream-aligned teams, then once we have a boundary, we can assign a team to guard it. They are the custodians of the bounded context. We break the organization into domains and subdomains and identify the boundary within which, according to DDD, the ubiquitous language, the common language, is spoken, and we now have a domain model. As a team, we are responsible for protecting that domain model. Who takes over from here? Microservices, because the team can now build microservices and applications that reside within their boundaries. We will see how they interact later on.

This is why it’s essential, whether we use serverless or not, to capitalize on the proven practices and patterns in the industry as they evolve, to make use of them, and to get benefits. DDD came in 20-odd years ago. Microservices came later. Team topologies, just recently. We can still bring everything together and work harmoniously to make things happen. Domains, team autonomy, boundaries, microservices, contracts—these things should be in the mind of everyone who architects serverless applications.

Sustain Your Applications

Let’s talk about serverless application sustainability. When we talk about sustainability, most people think about green initiatives. As a definition, sustainability is very generic: we keep something going with a little nourishment so it doesn’t die off. This is precisely the principle we apply to our planet; we want humanity to keep going for future generations. But how does it relate to serverless or software engineering? Let’s go back to the old waterfall model, which I’m sure many of you have come across. Typically, it starts with requirements and then continues through different siloed phases, often taking weeks, months, or even years to complete. After the application is released, it gets pushed into some maintenance mode.

Let’s think differently when it comes to serverless, and more specifically to what I call sustaining a serverless application: you start with an idea, design your application, build it, deploy it to the cloud, and then look after it. But it’s not finished yet; you must keep it going. You start with a minimum viable product, but your goal is to make it the most valuable product. For that, you need to iterate, and when you do, you are sustaining your product. That is the different meaning of sustainability in our context.

The cloud is basically composed of three things: computing, storage, and networking. The “serverless” part is already in the picture because it’s part of the cloud. In serverless development, we use the cloud to build products using serverless technologies, using the processes to allow us to operate in the cloud successfully. This is what I call a sustainability triangle in serverless.

We have the products, the processes, and the cloud, forming a sustainability triangle. In this triangle, the processes are what allow us to deploy our products and operate sustainably in the cloud. And while a sustainable product can mean many things, it has three essential aspects: modularity, extensibility, and observability. These aspects are interdependent: a modular product can likely be extended, and with better visibility into what’s happening in a modular service, we can sustain it longer. That’s the mindset we need when we work with serverless development or the services we build.
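Of those three aspects, observability is the easiest to show in code. The sketch below is a hypothetical Python decorator that emits one structured log line per invocation; a real serverless platform would provide richer tracing and metrics, so treat this as an illustration of the mindset rather than a production pattern. The `redeem_reward` handler and its business rule are invented.

```python
import json
import time
from functools import wraps

def observed(fn):
    """Wrap a handler so every invocation emits a structured log line,
    a minimal stand-in for the tracing and metrics a real platform provides."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        status = "error"  # assume failure until the handler returns
        try:
            result = fn(*args, **kwargs)
            status = "ok"
            return result
        finally:
            # One JSON log line per call: queryable by handler, status, latency.
            print(json.dumps({
                "handler": fn.__name__,
                "status": status,
                "duration_ms": round((time.monotonic() - start) * 1000, 2),
            }))
    return wrapper

@observed
def redeem_reward(code: str) -> bool:
    # Hypothetical business rule: valid codes carry the "RWD-" prefix.
    return code.startswith("RWD-")
```

Because the logging lives in a decorator rather than in each handler, the modularity and observability aspects reinforce each other, as described above.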

Sustainable processes can take many forms. The mindset of the developers and engineers behind the development matters: they use the processes, and the cloud as their operating platform, to gain an advantage in sustaining the products and operating them sustainably. These are the three aspects of sustainability, and they should all be kept in mind when architecting, because the cloud is the operating environment and shapes how we operate. Examples of sustainable processes include applying lean principles; being pragmatic with iterative, agile development, starting small with an MVP mindset and moving forward through the typical agile cycle; and automation, a DevOps mindset, and continuous refactoring.

With modern technologies, cloud providers release services and features daily. That means we can’t stand still after building an application, so we should be able to continuously evaluate, refactor, and improve things for the future. We are enhancing or sustaining as we go.

Something I always recommend to engineers is to architect solutions with sustainability in mind. This is very important in the serverless landscape. Sustainability in the cloud is a shared responsibility: every cloud provider addresses certain sustainability aspects, and as customers we are responsible for architecting our solutions to gain those benefits, so that our contribution flows through the provider to the wider world. This is, again, an essential aspect of architecting serverless applications.

Set Piece in Practice

Let’s put everything into practice. Take a small reward system as an example. You go to an e-commerce website; you have rewards, vouchers, or codes you want to redeem. The website uses a content management system to load the reward data. It typically has a backend service to validate the code and make the redemption. Then, there may be a third-party application where some data is stored as a ledger. Let’s say those two are third parties, and we don’t focus on them too much. Our domain, e-commerce, could be different in your cases. Let’s pretend, for argument’s sake, that the subdomain is the customer, and we have a bounded context that’s important: rewards. That’s where the architecture diagram comes in.

A traditional microservices approach usually maps one bounded context to one big, monolithic microservice, primarily because of containerization. However, with the serverless characteristics we saw earlier, we can think differently. At this scale, we need to consider whether a particular piece of the application or service changes a lot. In reward redemption, for example, the business logic changes frequently because business rules change. So why should we deploy the entire thing every time when only one small part is changing?

This is where we can introduce the thinking of identifying the pieces. Let’s look for areas we can decouple and build as separate pieces: find the core services, like the backend service, then identify the data flows. These areas can be developed as separate microservices with different interaction patterns with the rest of the system. That is one way of looking at the problem. Then there is the anti-corruption layer (ACL): the protective measures that guard your domain model. Suppose the CMS data model is different from the rewards bounded context’s model. The ACL does the transformation and translation and pushes the data through, so that if you replace the CMS, or even the CRM, you don’t need to change much within the core model.
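A minimal sketch of such an anti-corruption layer, in Python, might look like the following. The CMS field names and the `Reward` model are invented for illustration; the point is that only this one translation function knows the external shape, so swapping the CMS touches only this layer.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reward:
    """Domain model owned by the rewards bounded context (hypothetical)."""
    reward_id: str
    title: str
    points_required: int

def from_cms(cms_item: dict) -> Reward:
    """Anti-corruption layer: translate a raw CMS record (invented shape)
    into the rewards domain model, normalizing types and whitespace."""
    return Reward(
        reward_id=str(cms_item["sys"]["id"]),
        title=cms_item["fields"]["displayName"].strip(),
        points_required=int(cms_item["fields"]["cost"]),
    )
```

The frozen dataclass also makes the domain model immutable, so nothing downstream can quietly mutate what the boundary guards.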

How do we piece these things together? We have a bounded context, and we put some smaller microservices in place, all connected to each other. But how do they connect? This is where engineers usually struggle. Looking back at the filmmaking process, how do we combine hundreds of scenes and sequences of scenes? Mainly with dialogue and background music carrying over from one scene to the next. What do we have in the world of microservices? You know the answer: APIs, events, and messages. This is why, even when you break things into different pieces, the system works beautifully as one application.
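The event side of that glue can be sketched with a toy in-process event bus in Python. A real system would use a managed broker or event bus; the event names and payloads here are invented. The point is that the publisher and subscriber never reference each other directly, only the event.

```python
from collections import defaultdict
from typing import Callable

class EventBus:
    """Toy in-process stand-in for a managed event bus: services publish
    domain events and subscribe to the ones they care about, without
    knowing about each other directly."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[event_type].append(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        # Deliver the event to every registered handler for this type.
        for handler in self._subscribers[event_type]:
            handler(payload)

# Hypothetical wiring: a ledger service records redemptions it hears about.
bus = EventBus()
ledger_entries: list[dict] = []
bus.subscribe("RewardRedeemed", ledger_entries.append)
bus.publish("RewardRedeemed", {"code": "RWD-123", "customer": "c-42"})
```

Because subscription is by event type, a new reporting or email microservice can be added later by subscribing to the same events, with no change to the publisher.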

If we add these aspects, we can redraw the application diagram as above. We identify the synchronous API invocation paths and, where we can, use asynchronous, event-driven communication. These are some ways of thinking about architecture when dealing with serverless applications, taking advantage of their characteristics and patterns.

Serverless Microservices Approach

Typically, this is how your rewards system will look in a serverless world. The important thing to notice is that all microservices exist inside your bounded context; they don’t cross the boundaries. That’s where communication and contracts come in. Then you can have independent deployment pipelines going happily to production without impacting anything else. This is the power of breaking things down and making them more manageable for everyone, including engineers and architects. For that, we need an autonomous team that owns the microservices within the bounded context. That ownership is important: everything that happens is their responsibility. You need microservices to deal with reports or data generation.

You need microservices to send emails to customers, receive feedback, and so on. These are areas we can easily decouple, and when we build our application, we don’t need to start with all of them simultaneously. Email can come later, and you can add report generation once you know what data this bounded context deals with. Then, of course, an autonomous team operates in its own cloud account. This is important; many organizations are still going through this phase, and not many have achieved it, but it is crucial for the velocity and flow of the team. They have their own account and their own repository, and they don’t deal with anything outside their boundaries. If you want to talk to their services, there is an API, the event flows, the event broker, or the common event bus. That is what we aim to build and architect with serverless.

In Summary

When we look at application architecture through the serverless lens, we must think about its unique aspects. Take advantage of the serverless architecture characteristics; make use of them. Use the architectural patterns. Don’t be shy about introducing anti-corruption layers or microservices to other engineers around you. Let them learn. More importantly, encourage team autonomy.

A couple of months ago, an engineer took over a particular piece of new work. He was going to create an architecture diagram, had no clue how to tackle it, and so started drawing APIs and things. I asked, “How do you know you need an API here?” He said the system has an API; that’s that system. I replied, “Why don’t you start with something like domain storytelling? You draw the picture as a storyboard. Domain Storytelling is a book you can follow, and it’s nice to envision things that way. Then you explain it to everyone, including stakeholders, and if you see something good for the feature or service you’re building, you can slowly think about the design and architecture.” Challenge engineers to confront complexity, and feed them all the sound patterns and practices.


AI could help scale humanitarian responses. But it could also have big downsides


NEW YORK (AP) — As the International Rescue Committee copes with dramatic increases in displaced people in recent years, the refugee aid organization has looked for efficiencies wherever it can — including using artificial intelligence.

Since 2015, the IRC has invested in Signpost — a portfolio of mobile apps and social media channels that answer questions in different languages for people in dangerous situations. The Signpost project, which includes many other organizations, has reached 18 million people so far, but IRC wants to significantly increase its reach by using AI tools — if they can do so safely.

Conflict, climate emergencies and economic hardship have driven up demand for humanitarian assistance, with more than 117 million people forcibly displaced in 2024, according to the United Nations refugee agency. The turn to artificial intelligence technologies is in part driven by the massive gap between needs and resources.

To meet its goal of reaching half of displaced people within three years, the IRC is testing a network of AI chatbots to see if they can increase the capacity of their humanitarian officers and the local organizations that directly serve people through Signpost. For now, the pilot project operates in El Salvador, Kenya, Greece and Italy and responds in 11 languages. It draws on a combination of large language models from some of the biggest technology companies, including OpenAI, Anthropic and Google.

The chatbot response system also uses customer service software from Zendesk and receives other support from Google and Cisco Systems.

If they decide the tools work, the IRC wants to extend the technical infrastructure to other nonprofit humanitarian organizations at no cost. They hope to create shared technology resources that less technically focused organizations could use without having to negotiate directly with tech companies or manage the risks of deployment.

“We’re trying to really be clear about where the legitimate concerns are but lean into the optimism of the opportunities and not also allow the populations we serve to be left behind in solutions that have the potential to scale in a way that human to human or other technology can’t,” said Jeannie Annan, International Rescue Committee’s Chief Research and Innovation Officer.

The responses and information that Signpost chatbots deliver are vetted by local organizations to be up to date and sensitive to the precarious circumstances people could be in. An example query that IRC shared is of a woman from El Salvador traveling through Mexico to the United States with her son who is looking for shelter and for services for her child. The bot provides a list of providers in the area where she is.

More complex or sensitive queries are escalated for humans to respond.

The most important potential downside of these tools would be that they don’t work. For example, what if the situation on the ground changes and the chatbot doesn’t know? It could provide information that’s not just wrong, but dangerous.

A second issue is that these tools can amass a valuable honeypot of data about vulnerable people that hostile actors could target. What if a hacker succeeds in accessing data with personal information or if that data is accidentally shared with an oppressive government?

IRC said it’s agreed with the tech providers that none of their AI models will be trained on the data that the IRC, the local organizations or the people they are serving are generating. They’ve also worked to anonymize the data, including removing personal information and location.

As part of the Signpost.AI project, IRC is also testing tools like a digital automated tutor and maps that can integrate many different types of data to help prepare for and respond to crises.

Cathy Petrozzino, who works for the not-for-profit research and development company MITRE, said AI tools do have high potential, but also high risks. To use these tools responsibly, she said, organizations should ask themselves, does the technology work? Is it fair? Are data and privacy protected?

She also emphasized that organizations need to convene a range of people to help govern and design the initiative — not just technical experts, but people with deep knowledge of the context, legal experts, and representatives from the groups that will use the tools.

“There are many good models sitting in the AI graveyard,” she said, “because they weren’t worked out in conjunction and collaboration with the user community.”

For any system that has potentially life-changing impacts, Petrozzino said, groups should bring in outside experts to independently assess their methodologies. Designers of AI tools need to consider the other systems their tools will interact with, she said, and they need to plan to monitor the model over time.

Consulting with displaced people or others that humanitarian organizations serve may increase the time and effort needed to design these tools, but not having their input raises many safety and ethical problems, said Helen McElhinney, executive director of CDAC Network. It can also unlock local knowledge.

People receiving services from humanitarian organizations should be told if an AI model will analyze any information they hand over, she said, even if the intention is to help the organization respond better. That requires meaningful and informed consent, she said. They should also know if an AI model is making life-changing decisions about resource allocation and where accountability for those decisions lies, she said.

Degan Ali, CEO of Adeso, a nonprofit in Somalia and Kenya, has long been an advocate for changing the power dynamics in international development to give more money and control to local organizations. She asked how IRC and others pursuing these technologies would overcome access issues, pointing to the week-long power outages caused by Hurricane Helene in the U.S. Chatbots won’t help when there’s no device, internet or electricity, she said.

Ali also warned that few local organizations have the capacity to attend big humanitarian conferences where the ethics of AI are debated. Few have staff both senior enough and knowledgeable enough to really engage with these discussions, she said, though they understand the potential power and impact these technologies may have.

“We must be extraordinarily careful not to replicate power imbalances and biases through technology,” Ali said. “The most complex questions are always going to require local, contextual and lived experience to answer in a meaningful way.”

___

The Associated Press and OpenAI have a licensing and technology agreement that allows OpenAI access to part of AP’s text archives.

___

Associated Press coverage of philanthropy and nonprofits receives support through the AP’s collaboration with The Conversation US, with funding from Lilly Endowment Inc. The AP is solely responsible for this content. For all of AP’s philanthropy coverage, visit https://apnews.com/hub/philanthropy.


Ottawa orders TikTok’s Canadian arm to be dissolved


The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.

Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.

“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.

The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.

However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”

Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.

A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.

“We will challenge this order in court,” the spokesperson said.

“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”

The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.

At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.

A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”

Even before the review, ByteDance and TikTok were lightning rods for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.

Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.

Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.

Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment that could harm national security.

While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.

Wednesday’s dissolution order was made in accordance with the act.

The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.

— With files from Anja Karadeglija in Ottawa

This report by The Canadian Press was first published Nov. 6, 2024.

The Canadian Press. All rights reserved.


Here is how to prepare your online accounts for when you die


LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?

It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.

Here’s how you can prepare your digital life for your survivors:

Apple

The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up, you’ll need an Apple device with a fairly recent operating system: iPhones and iPads need iOS or iPadOS 15.2, and Macs need macOS Monterey 12.1.

For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.

You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.

Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.

Google

Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.

When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.

You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.

There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.

Facebook and Instagram

Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.

When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.

The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.

You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.

TikTok

The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.

Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.

X

It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.

Passwords

Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?

Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.

But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.

___

Is there a tech challenge you need help figuring out? Write to us at onetechtip@ap.org with your questions.
