In the landscape of software development, particularly within the DevSecOps pipeline, artificial intelligence (AI) can help address inefficiencies and streamline workflows. Among the most time-consuming tasks in this arena are code creation, test generation, and the review process. AI technologies, such as code generators and AI-driven test creation tools, tackle these areas head-on, enhancing productivity and quality. For instance, AI can automate boilerplate code generation, offer real-time code suggestions, and facilitate the creation of comprehensive tests, including regression and unit tests. These capabilities speed up the development process and significantly reduce the potential for human error.
In the realm of operations, AI’s role is equally pivotal. CI/CD pipelines, a critical component of modern software development practices, benefit from AI through automated debugging, root cause analysis using machine learning algorithms, and observability improvements. Tools like k8sgpt, backed by a locally hosted LLM such as Mistral running on Ollama, analyze deployment logs and summarize critical data, allowing for quicker and more accurate decision-making. Furthermore, AI’s application in resource analysis and sustainability, exemplified by tools like Kepler, underscores the technology’s ability to optimize operations for efficiency and environmental impact.
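To make the log analysis pattern concrete, here is a minimal Python sketch that sends a log excerpt to a locally hosted Mistral model served by Ollama and asks for a summary. It illustrates the approach rather than how k8sgpt works internally; the default Ollama endpoint, the model name, and the deploy.log file are assumptions.

```python
# Minimal sketch: summarize a failed deployment's logs with a locally hosted
# Mistral model served by Ollama (assumes Ollama is running on its default
# port, http://localhost:11434, and the "mistral" model has been pulled).
import requests

def summarize_deployment_logs(log_text: str) -> str:
    prompt = (
        "You are assisting with root cause analysis of a Kubernetes deployment.\n"
        "Summarize the critical errors in these logs and suggest likely causes:\n\n"
        + log_text
    )
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # Hypothetical log file captured from a failed deployment job.
    with open("deploy.log") as f:
        print(summarize_deployment_logs(f.read()))
```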
Lastly, security within DevSecOps benefits greatly from AI, with innovations such as AI guardrails and vulnerability management systems. AI can explain security vulnerabilities clearly and recommend or implement resolutions, safeguarding applications against potential threats. Moreover, through features like controlled access to AI models and prompt validation, AI’s contribution to privacy and data security enhances the overall security posture. Transparency in AI usage and adherence to ethical principles in product development further build trust in these technologies.
After the session, InfoQ interviewed Michael Friedrich about how AI can help with DevSecOps.
InfoQ: Given your emphasis on AI’s role in streamlining DevSecOps workflows and improving efficiency, how do you suggest organizations balance the drive for rapid innovation and deployment with the imperative to maintain robust security practices?
Michael Friedrich: Think of the following steps in your AI adoption journey into DevSecOps:
Start with an assessment of your workflows and their importance for efficiency
Establish guardrails for AI, including data security, validation metrics, etc.
Require impact analysis beyond developer productivity. How will AI accelerate and motivate all teams and workflows?
Use existing DevSecOps workflows to verify AI-generated code, including security scanning, compliance frameworks, code quality, test coverage, performance observability, and more.
I’m referencing an article from the GitLab blog in my talk. The discussions with our customers and everyone involved at GitLab inspired me to think beyond workflows and encourage users to plan their AI adoption strategically.
InfoQ: Specifically, could you share your thoughts on integrating AI tools without compromising security standards, especially when dealing with sensitive data and complex infrastructure?
Michael Friedrich: A common concern is how sensitive data is being used with AI tools. Users need transparent information on data security, privacy, and how the data is used. For example, a friend works in the automotive industry with highly sophisticated and complex algorithms for car lighting. This code must never leave their network, which creates new challenges with AI adoption and SaaS models. Additionally, code must not be used to train public models and potentially be leaked into someone else’s code base. The demand for local LLMs and custom-trained models increased in 2024, and I believe that vendors are working hard to address these customer concerns.
Another example is prompts that could expose sensitive infrastructure data (FQDNs, path names, etc.) in infrastructure and cloud-native deployment logs. Specific filters and policies must be installed, and refined controls on how users adopt AI must be added to their workflows. Root cause analysis in failed CI/CD pipelines is helpful for developers but could require filtered logs for AI-assisted analysis.
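As an illustration of such a filter, the following Python sketch redacts FQDNs and filesystem paths from CI/CD logs before they are handed to an AI-assisted root cause analysis. The regular expressions are simplified assumptions; a real policy would cover more data classes (IP addresses, tokens, usernames) and be enforced centrally rather than per script.

```python
# Minimal sketch of a redaction filter that strips FQDNs and filesystem paths
# from CI/CD logs before they are sent to an AI-assisted root cause analysis.
import re

# Illustrative patterns only; production filters would be broader and audited.
FQDN_PATTERN = re.compile(r"\b(?:[a-zA-Z0-9-]+\.)+(?:com|net|org|io|internal|local)\b")
PATH_PATTERN = re.compile(r"(?:/[\w.-]+){2,}")

def redact(line: str) -> str:
    line = FQDN_PATTERN.sub("[REDACTED_HOST]", line)
    line = PATH_PATTERN.sub("[REDACTED_PATH]", line)
    return line

def filter_log(raw_log: str) -> str:
    return "\n".join(redact(line) for line in raw_log.splitlines())

if __name__ == "__main__":
    sample = "ERROR pulling image from registry.prod.example.internal:/var/lib/containers/cache failed"
    print(filter_log(sample))
```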
I recommend asking AI vendors about AI guardrails and continuing the conversation when information remains unclear. Encourage them to create an AI Transparency Center and follow the example at https://about.gitlab.com/ai-transparency-center/. Lastly, transparency on guardrails is a requirement when evaluating AI tools and platforms.
InfoQ: You highlighted several pain points within DevSecOps workflows, including maintaining legacy code and analyzing the impact of security vulnerabilities. How do you envision AI contributing to managing or reducing technical debt, particularly in legacy systems that might not have been designed with modern DevOps practices in mind?
Michael Friedrich: Companies that have not yet migrated to cloud-native technologies or refactored their code base to modern frameworks will need assistance. In earlier days, this was achieved through automation or rewriting everything from scratch. However, this is a time-consuming process that requires a lot of research, especially when source code, infrastructure, and workflows are not well documented.
The challenges are multi-faceted: Once you understand the source code, algorithms, frameworks, and dependencies, how would you ensure that nothing breaks on changes? Tests can be generated with the help of AI, and the resulting safety net also supports more extensive refactoring activities, including changes made with AI-generated code. Refactoring code can add new bugs and security vulnerabilities, requiring existing DevSecOps platforms with quality and security scanning. The challenges don’t stop there: CI/CD pipelines might fail, cloud deployments can run into resource and cost explosions, and the DevSecOps feedback loop starts anew with new features and migration plans.
My advice is to adopt AI-powered workflows in iterations. Identify the most pressing or lightweight approach for your teams and ensure that guardrails and impact analysis are in place.
For example, start with code suggestions, add code explanations and vulnerability explanations as helpful knowledge assistance, continue with chat prompts, and use Retrieval Augmented Generation (RAG) to enrich answers with custom knowledge base data (e.g., from documentation in a Git repository, using the Markdown format).
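A minimal sketch of that RAG flow, assuming the knowledge base is a directory of Markdown files from a checked-out Git repository: chunks are loaded, the most relevant ones are retrieved, and they are prepended to the prompt. For brevity it uses naive word-overlap scoring in place of the embedding model and vector store a production setup would use; the docs path and chunk size are assumptions.

```python
# Minimal RAG sketch: retrieve Markdown chunks relevant to a question and
# build an enriched prompt from them.
from pathlib import Path

def load_chunks(docs_dir: str, chunk_size: int = 800) -> list[str]:
    chunks = []
    for md_file in Path(docs_dir).rglob("*.md"):
        text = md_file.read_text(encoding="utf-8")
        chunks += [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]
    return chunks

def retrieve(question: str, chunks: list[str], top_k: int = 3) -> list[str]:
    # Naive relevance score: count of shared words between question and chunk.
    q_words = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q_words & set(c.lower().split())), reverse=True)
    return scored[:top_k]

def build_prompt(question: str, docs_dir: str) -> str:
    context = "\n---\n".join(retrieve(question, load_chunks(docs_dir)))
    return f"Answer using only this documentation:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    print(build_prompt("How do we rotate the deploy tokens?", "./docs"))
```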
If teams benefit more from AI-assisted code reviews and issue discussion summaries, shift your focus there. If developers spend most of their time looking at long-running CI/CD pipelines with a failure rate of 90%, invest in root cause analysis first. If releases are always delayed because of last-minute regressions and security vulnerability reviews, start with test generation and security explanation and resolution.
InfoQ: Are there AI-driven strategies or tools that can help bridge the gap between older architectures and the requirements of contemporary DevSecOps pipelines?
Michael Friedrich: Follow the development pattern of “explain, add tests, refactor,” and add security patterns, preferably on a DevSecOps platform where all the data for measuring impact comes together in dashboards. Take the opportunity to review tool sprawl and move from DIY DevOps to the platform approach for greater efficiency benefits.
Speaking from my own experience, I had to fix complex security vulnerabilities many years ago, and those fixes broke critical functionality in the product of my previous company. I have also introduced performance regressions and deadlocks, which are hard to trace and find in production environments. Think of a distributed cloud environment with many agents, satellites, and a central management instance. If I had had AI-assisted help understanding the CVE and the proposed fix, it could have avoided months of debugging regressions. A conversational chat prompt also invites follow-up questions, such as “Explain how this code change could create performance regressions and bugs in a distributed C++ project context.”
I’ve also learned that LLMs are capable of refactoring code into different programming languages, for example, C into Rust, improving memory safety and producing more robust code. This strategy can help migrate the code base in iterations to a new programming language and/or framework.
I’m also excited about AI agents and how they will aid code analysis, provide migration strategies, and help companies understand the challenges with older architectures and modern DevSecOps pipelines. For example, I would love to have AI-assisted incident analysis that queries live data in your cloud environment through LLM function calls. This aids observability insights for more informed prompts and could result in infrastructure security and cost optimization proposals through automated Merge Requests.
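A minimal sketch of the function-calling idea: the model is offered a small set of read-only tools that query live data, and whatever tool call it returns is dispatched to the matching function. The tool names, the stubbed data, and the shape of the tool call are illustrative assumptions; a real agent would wire these to a cloud API and to an LLM SDK’s tool-calling interface.

```python
# Minimal sketch of LLM function calling for incident analysis: dispatch the
# tool call the model asked for and hand the JSON result back for reasoning.
import json

def get_pod_restarts(namespace: str) -> dict:
    # Stand-in for a real Kubernetes/observability query.
    return {"namespace": namespace, "restarts_last_hour": 7, "top_pod": "checkout-7f9c"}

def get_cost_by_service(window: str) -> dict:
    # Stand-in for a real cloud billing query.
    return {"window": window, "most_expensive": "search-index", "usd": 412.50}

TOOLS = {"get_pod_restarts": get_pod_restarts, "get_cost_by_service": get_cost_by_service}

def dispatch(tool_call: dict) -> str:
    """Execute the tool the model requested and return JSON it can reason over."""
    func = TOOLS[tool_call["name"]]
    return json.dumps(func(**tool_call["arguments"]))

if __name__ == "__main__":
    # Pretend the LLM responded with this tool call during an incident chat.
    print(dispatch({"name": "get_pod_restarts", "arguments": {"namespace": "prod"}}))
```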
Companies working in the open, i.e., through open source or open-core models, can co-create with their customers. More refined issue summaries, better code reviews, and guided security explanations and resolutions will help everyone contribute, with a bit of help from AI.
The federal government is ordering the dissolution of TikTok’s Canadian business after a national security review of the Chinese company behind the social media platform, but stopped short of ordering people to stay off the app.
Industry Minister François-Philippe Champagne announced the government’s “wind up” demand Wednesday, saying it is meant to address “risks” related to ByteDance Ltd.’s establishment of TikTok Technology Canada Inc.
“The decision was based on the information and evidence collected over the course of the review and on the advice of Canada’s security and intelligence community and other government partners,” he said in a statement.
The announcement added that the government is not blocking Canadians’ access to the TikTok application or their ability to create content.
However, it urged people to “adopt good cybersecurity practices and assess the possible risks of using social media platforms and applications, including how their information is likely to be protected, managed, used and shared by foreign actors, as well as to be aware of which country’s laws apply.”
Champagne’s office did not immediately respond to a request for comment seeking details about what evidence led to the government’s dissolution demand, how long ByteDance has to comply and why the app is not being banned.
A TikTok spokesperson said in a statement that the shutdown of its Canadian offices will mean the loss of hundreds of well-paying local jobs.
“We will challenge this order in court,” the spokesperson said.
“The TikTok platform will remain available for creators to find an audience, explore new interests and for businesses to thrive.”
The federal Liberals ordered a national security review of TikTok in September 2023, but it was not public knowledge until The Canadian Press reported in March that it was investigating the company.
At the time, it said the review was based on the expansion of a business, which it said constituted the establishment of a new Canadian entity. It declined to provide any further details about what expansion it was reviewing.
A government database showed a notification of new business from TikTok in June 2023. It said Network Sense Ventures Ltd. in Toronto and Vancouver would engage in “marketing, advertising, and content/creator development activities in relation to the use of the TikTok app in Canada.”
Even before the review, ByteDance and TikTok were a lightning rod for privacy and safety concerns because Chinese national security laws compel organizations in the country to assist with intelligence gathering.
Such concerns led the U.S. House of Representatives to pass a bill in March designed to ban TikTok unless its China-based owner sells its stake in the business.
Champagne’s office has maintained Canada’s review was not related to the U.S. bill, which has yet to pass.
Canada’s review was carried out through the Investment Canada Act, which allows the government to investigate any foreign investment with the potential to harm national security.
While cabinet can make investors sell parts of the business or shares, Champagne has said the act doesn’t allow him to disclose details of the review.
Wednesday’s dissolution order was made in accordance with the act.
The federal government banned TikTok from its mobile devices in February 2023 following the launch of an investigation into the company by federal and provincial privacy commissioners.
— With files from Anja Karadeglija in Ottawa
This report by The Canadian Press was first published Nov. 6, 2024.
LONDON (AP) — Most people have accumulated a pile of data — selfies, emails, videos and more — on their social media and digital accounts over their lifetimes. What happens to it when we die?
It’s wise to draft a will spelling out who inherits your physical assets after you’re gone, but don’t forget to take care of your digital estate too. Friends and family might treasure files and posts you’ve left behind, but they could get lost in digital purgatory after you pass away unless you take some simple steps.
Here’s how you can prepare your digital life for your survivors:
Apple
The iPhone maker lets you nominate a “legacy contact” who can access your Apple account’s data after you die. The company says it’s a secure way to give trusted people access to photos, files and messages. To set it up you’ll need an Apple device with a fairly recent operating system — iPhones and iPads need iOS or iPadOS 15.2 and MacBooks need macOS Monterey 12.1.
For iPhones, go to settings, tap Sign-in & Security and then Legacy Contact. You can name one or more people, and they don’t need an Apple ID or device.
You’ll have to share an access key with your contact. It can be a digital version sent electronically, or you can print a copy or save it as a screenshot or PDF.
Take note that there are some types of files you won’t be able to pass on — including digital rights-protected music, movies and passwords stored in Apple’s password manager. Legacy contacts can only access a deceased user’s account for three years before Apple deletes the account.
Google
Google takes a different approach with its Inactive Account Manager, which allows you to share your data with someone if it notices that you’ve stopped using your account.
When setting it up, you need to decide how long Google should wait — from three to 18 months — before considering your account inactive. Once that time is up, Google can notify up to 10 people.
You can write a message informing them you’ve stopped using the account, and, optionally, include a link to download your data. You can choose what types of data they can access — including emails, photos, calendar entries and YouTube videos.
There’s also an option to automatically delete your account after three months of inactivity, so your contacts will have to download any data before that deadline.
Facebook and Instagram
Some social media platforms can preserve accounts for people who have died so that friends and family can honor their memories.
When users of Facebook or Instagram die, parent company Meta says it can memorialize the account if it gets a “valid request” from a friend or family member. Requests can be submitted through an online form.
The social media company strongly recommends Facebook users add a legacy contact to look after their memorial accounts. Legacy contacts can do things like respond to new friend requests and update pinned posts, but they can’t read private messages or remove or alter previous posts. You can only choose one person, who also has to have a Facebook account.
You can also ask Facebook or Instagram to delete a deceased user’s account if you’re a close family member or an executor. You’ll need to send in documents like a death certificate.
TikTok
The video-sharing platform says that if a user has died, people can submit a request to memorialize the account through the settings menu. Go to the Report a Problem section, then Account and profile, then Manage account, where you can report a deceased user.
Once an account has been memorialized, it will be labeled “Remembering.” No one will be able to log into the account, which prevents anyone from editing the profile or using the account to post new content or send messages.
X
It’s not possible to nominate a legacy contact on Elon Musk’s social media site. But family members or an authorized person can submit a request to deactivate a deceased user’s account.
Passwords
Besides the major online services, you’ll probably have dozens if not hundreds of other digital accounts that your survivors might need to access. You could just write all your login credentials down in a notebook and put it somewhere safe. But making a physical copy presents its own vulnerabilities. What if you lose track of it? What if someone finds it?
Instead, consider a password manager that has an emergency access feature. Password managers are digital vaults that you can use to store all your credentials. Some, like Keeper, Bitwarden and NordPass, allow users to nominate one or more trusted contacts who can access their keys in case of an emergency such as a death.
But there are a few catches: Those contacts also need to use the same password manager and you might have to pay for the service.
LONDON (AP) — Britain’s competition watchdog said Thursday it’s opening a formal investigation into Google’s partnership with artificial intelligence startup Anthropic.
The Competition and Markets Authority said it has “sufficient information” to launch an initial probe after it sought input earlier this year on whether the deal would stifle competition.
The CMA has until Dec. 19 to decide whether to approve the deal or escalate its investigation.
“Google is committed to building the most open and innovative AI ecosystem in the world,” the company said. “Anthropic is free to use multiple cloud providers and does, and we don’t demand exclusive tech rights.”
San Francisco-based Anthropic was founded in 2021 by siblings Dario and Daniela Amodei, who previously worked at ChatGPT maker OpenAI. The company has focused on increasing the safety and reliability of AI models. Google reportedly agreed last year to make a multibillion-dollar investment in Anthropic, which has a popular chatbot named Claude.
Anthropic said it’s cooperating with the regulator and will provide “the complete picture about Google’s investment and our commercial collaboration.”
“We are an independent company and none of our strategic partnerships or investor relationships diminish the independence of our corporate governance or our freedom to partner with others,” it said in a statement.
The U.K. regulator has been scrutinizing a raft of AI deals as investment money floods into the industry to capitalize on the artificial intelligence boom. Last month it cleared Anthropic’s $4 billion deal with Amazon and it has also signed off on Microsoft’s deals with two other AI startups, Inflection and Mistral.