SUSE AI is safe and mature, but not plug-and-play for all

Open infrastructure between closed domains

SUSECON, SUSE’s annual event, delivered what many will see as predictable updates for 2025. The focus was on the upcoming SUSE Linux Enterprise Server 16, enhancements to the Kubernetes tool Rancher, and an expanded partnership, this time with Microsoft Sentinel. However, SUSE AI, which was announced somewhat cryptically in 2024, produced the most interesting developments. What exactly does SUSE AI do? And what doesn’t it do?

In 2024, we genuinely wondered what SUSE AI really was. With the new solution, not yet launched at the time, SUSE intended to carry its principles into the GenAI era as a foundation for IT infrastructure. In other words: to offer choice, preferably open-source, control over one’s own data, and comprehensive observability, in the same vein as Linux and Rancher, among others. In 2025, this takes concrete shape as an AI infrastructure, provided by SUSE together with partners and integrations, but not exclusively.

For instance, SUSE doesn’t build its own LLM and never will. Abhinav Puri, VP & GM Portfolio Solutions & Services, and Thomas di Giacomo, Chief Technology & Product Officer of SUSE, both state this definitively. Additionally, SUSE AI stops where the verticals begin: there will likely never be a ready-made, packaged SUSE AI for retail, healthcare and the like. Partners are stepping in to fill this gap, for example with a solution developed in collaboration with Infosys and HPE. To clarify the role SUSE plays in this process, we’ll walk through the AI pipeline in the SUSE world from start to finish.

Not the models themselves, but almost everything else

Puri indicates that SUSE provides the infrastructure for AI workloads, from the operating system to workflows, container orchestration and security. This includes support for third-party solutions: Red Hat Enterprise Linux and CentOS on the OS side, and OpenWebUI and Ollama, among others, for serving LLMs.

“One of our strengths is the secure supply chain,” says Puri. “We know how to build secure open-source pipelines.” That claim rests on more than three decades of enterprise Linux support, longer than any competitor. His colleague Sanjeet Singh, Director & Head of Cloud Product Management, provides a concrete example: one of the Fortune 50 companies now outsources the approval of open-source projects for AI to SUSE. The company in question has plenty of AI engineers, but lacks the expertise to make this selection. SUSE can eliminate known CVEs (vulnerabilities) much faster, making an open-source tool enterprise-grade, Singh explains.

A major issue is that open-source maintainers don’t prioritize security. Yet open-source can actually make software more secure, as SUSE CEO Dirk-Peter van Leeuwen told us last year. Singh summarizes SUSE’s role as follows: “We offer a CVE-free framework so that organizations can use open-source projects in a secure manner.”
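
To make that concrete: the core of such a check is querying a vulnerability database for every component and version in a stack. Below is a minimal sketch of the idea in Python, using the public OSV (osv.dev) API rather than SUSE’s own tooling; the package and version are arbitrary examples.

```python
import requests

# Query the public OSV (Open Source Vulnerabilities) database for known
# advisories against a specific package version. This illustrates the kind
# of check a curated supply chain automates; it is not SUSE's tooling.
OSV_API = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    payload = {"package": {"name": name, "ecosystem": ecosystem}, "version": version}
    response = requests.post(OSV_API, json=payload, timeout=10)
    response.raise_for_status()
    # OSV returns {"vulns": [...]} when advisories exist, {} otherwise.
    return [vuln["id"] for vuln in response.json().get("vulns", [])]

if __name__ == "__main__":
    # An old, deliberately outdated release of a popular imaging library.
    ids = known_vulnerabilities("pillow", "8.0.0")
    print(f"{len(ids)} known advisories, e.g. {ids[:3]}")
```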

More than solving vulnerabilities: guardrails and observability

Security in relation to AI is a broad concept that extends beyond resolving vulnerabilities. Consider guardrails, the generic term for mechanisms that keep AI models from going off the rails. SUSE uses the open-source Guardrails AI project as its foundation; a SUSE greendoc explains exactly how this is implemented. It doesn’t stop there: the aforementioned partner Infosys also has a Guardrails solution that can be integrated with SUSE, a blueprint for further expansion of this aspect of SUSE AI.
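
To illustrate the pattern rather than the Guardrails AI API itself (which evolves between releases): a guardrail is essentially a set of checks that every model response must pass before it reaches the user. A deliberately simplified sketch with made-up rules follows.

```python
import re

# A library-agnostic sketch of the guardrail pattern: every model response
# passes through explicit checks before reaching the user. Frameworks such
# as Guardrails AI express these checks as declarative validators instead.
BLOCKED_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # US social security number
    re.compile(r"(?i)ignore (all )?previous instructions"),  # prompt-injection echo
]

def apply_guardrails(model_output: str, max_chars: int = 2000) -> str:
    """Return the output if it passes all checks, otherwise a safe refusal."""
    if len(model_output) > max_chars:
        return "[Response withheld: output exceeded the configured length limit.]"
    for pattern in BLOCKED_PATTERNS:
        if pattern.search(model_output):
            return "[Response withheld: output violated a content guardrail.]"
    return model_output

print(apply_guardrails("The answer is 42."))       # passes through unchanged
print(apply_guardrails("My SSN is 123-45-6789."))  # blocked by the PII rule
```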

Observability is a larger theme about which SUSE has much to say; we’ll therefore cover it in detail in a later article. SUSE’s Observability for AI isn’t due until April, while curated components, agentic AI workflows and guardrails are already available. On the SUSECON stage, Puri describes this list as the “most pressing needs” for organizations. Below, we take a closer look at what is already possible; observability, which includes tracking the costs of your API usage, will follow later, for us as for SUSE.

However, this is somewhat putting the cart before the horse. Singh emphasizes: “Observability is the biggest concern because the costs are enormous.” Users within their organization must be able to justify to the CFO that they’re using AI efficiently, whether on-premises, via public cloud instances or via APIs. All of this is complex and will clearly take some time, but SUSE’s plans in this area are clear. For example, it wants its dashboards to show not only LLM metrics, but also GPU performance and the use of vector databases, the foundation for GenAI with proprietary data.
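
At its core, the cost side of that observability story is bookkeeping over token counts. A minimal sketch of the idea; the model names and per-million-token prices below are placeholders, not any vendor’s actual rates.

```python
from dataclasses import dataclass, field

# Minimal cost bookkeeping for LLM API usage. The prices per million tokens
# are illustrative placeholders, not any vendor's actual rates.
PRICE_PER_MILLION = {
    "small-model": {"input": 0.50, "output": 1.50},
    "large-model": {"input": 5.00, "output": 15.00},
}

@dataclass
class UsageLedger:
    totals: dict = field(default_factory=dict)

    def record(self, model: str, input_tokens: int, output_tokens: int) -> float:
        """Record one API call and return its cost in dollars."""
        rates = PRICE_PER_MILLION[model]
        cost = (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000
        self.totals[model] = self.totals.get(model, 0.0) + cost
        return cost

ledger = UsageLedger()
ledger.record("large-model", input_tokens=1_200, output_tokens=400)
ledger.record("small-model", input_tokens=8_000, output_tokens=2_000)
print({model: f"${cost:.4f}" for model, cost in ledger.totals.items()})
```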

On the road to agentic

As mentioned, the role of SUSE AI revolves around guaranteeing secure tooling. However, organizations want to innovate quickly, and simply running GenAI models isn’t enough. Over the past year, agentic AI has gained momentum, with the technology acting ever more independently. Within SUSE’s definition (a fairly common one), agentic workflows center on the ability to plan, reason and act based on objectives and data.
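
Stripped to its essence, that definition is a loop: given an objective, the system plans a step, executes it, observes the result and repeats. The bare-bones sketch below shows only the control flow; a real agent would delegate plan() to an LLM and act() to external tools.

```python
# A deliberately bare-bones agentic loop: plan, act, observe, repeat.
# Real systems delegate plan() to an LLM and act() to external tools.
def plan(objective: str, observations: list[str]) -> str:
    # Placeholder planner: decide the next step from what is known so far.
    return "finish" if observations else f"gather data for: {objective}"

def act(step: str) -> str:
    # Placeholder executor: in practice a tool call, API request or query.
    return f"result of '{step}'"

def run_agent(objective: str, max_steps: int = 5) -> list[str]:
    observations: list[str] = []
    for _ in range(max_steps):
        step = plan(objective, observations)
        if step == "finish":
            break
        observations.append(act(step))
    return observations

print(run_agent("summarize GPU utilization"))
```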

This is implemented concretely with OpenWebUI Pipelines, an open-source framework that connects models, APIs and external tools. SUSE will add this project to its own AI Library, where it joins SUSE Security, SUSE Observability, SUSE Rancher Prime RKE2 and SUSE Linux Micro, which ships with Nvidia drivers preinstalled (AMD support is still forthcoming).
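
To give an impression of what such a pipeline looks like: it is a Python class whose pipe() method sits between the user and the model, where it can call APIs or external tools. The sketch below is modeled on the examples in the OpenWebUI Pipelines repository; the exact interface may differ between versions.

```python
from typing import Generator, Iterator, List, Union

# A minimal pipeline in the style of OpenWebUI Pipelines' own examples
# (the exact interface may differ between versions). The pipe() method
# sits between the user and the model, where it can call APIs or tools.
class Pipeline:
    def __init__(self):
        self.name = "Uppercase Echo Pipeline"

    async def on_startup(self):
        # Runs when the pipelines server loads this module, e.g. to open
        # connections to the external tools the pipeline will use.
        pass

    async def on_shutdown(self):
        pass

    def pipe(
        self, user_message: str, model_id: str, messages: List[dict], body: dict
    ) -> Union[str, Generator, Iterator]:
        # A real pipeline would plan, call models or tools, and act on the
        # results; this one only demonstrates where that logic lives.
        return f"[{model_id}] {user_message.upper()}"
```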

That’s a lot of building blocks, offering flexibility for various implementations. It’s up to other parties to prepare industry-specific applications for this. SUSE’s description remains at the level of use cases, such as “data-intensive industries.” It seems that SUSE doesn’t want to compete with partners or spread itself too thin. After all, it already maintains other people’s Linux distributions, offers long-term support windows and curates open-source components.

Conclusion: where SUSE AI begins and ends

SUSE will not be building its own models. The debate continues over how ‘open’ the ‘open-source’ LLMs on the market really are. Brent Schroeder, Global CTO at SUSE, emphasizes that no one can reproduce a given open-source model if its developers don’t provide the code, the model and the data. Consider DeepSeek’s approach: we gain insight into parts of the code and can run the model ourselves, but have no visibility into the exact training data.

The result is that SUSE AI operates in an open field between closed domains. On one side is the realm of the AI model builders, where the prominent parties don’t meet the requirement to release code, model and data; essentially none of them are fully open-source by SUSE’s (and many others’) definition. On the other side sit the AI hardware, mostly Nvidia GPUs, and the paid implementations from companies like Infosys that turn SUSE AI into an engine for ready-made use.

SUSE defines its own task as providing the infrastructure, with as much choice and flexibility as possible for end users. SUSE AI clearly states what the company’s core tasks are and, more importantly, what they are not. It cannot compete in the model space (barely anyone can) and cannot offer extra freedom when almost everyone uses the same AI chips. Nor does SUSE have the immense bandwidth of implementers to create an out-of-the-box SUSE AI variant for each sector. You’ll need to configure these things yourself (or have them configured for a fee).

The end result is a mix of solutions, curated and secure, that align with what the company has been doing for some time. Despite all the upheaval GenAI has caused in the IT world, SUSE remains somewhat conservative and predictable. SUSE AI will move with industry trends. Above all, it will give organizations peace of mind by eliminating vulnerabilities and providing integrations. That is what SUSE has set out to do, no more, no less.

Also read: How SUSE AI went from vision to platform