What technology leaders need to ensure AI delivers

Artificial intelligence is often presented as an “existential” issue for companies.

But despite all the apparent enthusiasm for, and investment in, AI tools, many technology leaders still don’t seem to treat them as a business application.


Gartner predicts that more than 40% of AI projects will be canceled by the end of 2027, often due to inadequate risk management and uncertain return on investment.

This lack of adoption wastes investment and undermines long-term confidence in the technology.

This creates a gap between organizations that consistently move AI into business operations and those that struggle to make it work. That gap will only widen as generative AI gives way to agentic AI.


Having a vision is key to AI success

Of course, vision is key to AI success, as is the data needed to underpin your company’s particular AI strategy. Combined with an initial capital investment, this can be enough to build an impressive pilot project.

But is this enough to guarantee success across the entire enterprise? Gartner’s numbers make it clear: no.

So what’s missing? What do technology leaders need to do to ensure that AI doesn’t just look promising, but actually works?

The answer is to ensure operational readiness for AI. Simply put, that is the ability to deploy, manage and scale AI beyond the lab and across the enterprise.


It takes hard work to ensure that what begins as a compelling but isolated pilot becomes integrated into the enterprise as a whole.

This means ensuring that AI runs on a unified platform that brings together compute, data and governance; a platform that can be replicated across the enterprise, on-premises, in the cloud or at the edge.

There is nothing new in the basic concept. Effectively launching business-critical workloads, such as ERP or CRM, requires equal attention to the underlying operational infrastructure.

However, AI presents particular challenges when it comes to putting these principles into practice.

Building an AI infrastructure

It’s easy to think that AI infrastructure starts and ends with GPUs. But high-bandwidth memory, fast storage and high-performance networking also play an important role, as do other processors and accelerators, depending on which part of the workflow is in question.

More importantly, this infrastructure (whether on-premises, cloud or hybrid) must be able to adapt and scale as projects move from pilot to enterprise-wide production. AI can, by its nature, be much more difficult to manage than traditional enterprise workloads.

However, it’s not just about processor performance or gigabytes of memory. Security and governance are non-negotiable in enterprise AI projects. An organization’s underlying data and models are essential to its future and must be protected accordingly.

More broadly, data sovereignty and AI regulation further complicate matters. Technology leaders need to know their data is where it belongs and be clear about who has access to it and who doesn’t.

The possibilities of artificial intelligence may be limitless, but so are the costs if the underlying infrastructure is not managed properly. Paying for GPUs and the power to run them, only to leave them underutilized, erodes return on investment and undermines ESG commitments.
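Making that underutilization visible is usually the first step. As a minimal sketch, assuming NVIDIA hardware and the pynvml Python bindings (and an arbitrary 30% threshold), a simple report like this is one way to spot idle accelerators:

```python
# Illustrative sketch: flag underutilized GPUs using NVIDIA's pynvml bindings.
# Assumes NVIDIA hardware and the nvidia-ml-py package; the 30% threshold
# below is an arbitrary placeholder, not a recommendation.
import pynvml

UTIL_THRESHOLD = 30  # percent; hypothetical cut-off for "underutilized"

def report_gpu_utilization():
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            util = pynvml.nvmlDeviceGetUtilizationRates(handle)  # .gpu / .memory in %
            power_w = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000  # reported in mW
            status = "UNDERUTILIZED" if util.gpu < UTIL_THRESHOLD else "ok"
            print(f"GPU {i}: {util.gpu}% compute, {util.memory}% memory bus, "
                  f"{power_w:.0f} W draw -> {status}")
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    report_gpu_utilization()
```

Feeding figures like these into chargeback or scheduling decisions is what turns raw GPU spend into accountable spend.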

Scaling operations

Technology leaders must plan ahead to scale capacity up or down. But they also need to be able to control and predict costs. That means being confident their platform and toolset let them do both with ease.
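As a rough illustration of what cost prediction can look like, the sketch below models monthly GPU spend under hypothetical per-GPU-hour rates and utilization figures; the numbers are placeholders, not vendor pricing.

```python
# Minimal sketch of a monthly GPU cost forecast. All rates and utilization
# figures below are hypothetical placeholders, not real vendor pricing.

HOURS_PER_MONTH = 730

def monthly_cost(gpus: int, rate_per_gpu_hour: float, avg_utilization: float) -> dict:
    """Estimate gross spend and how much of it is actually doing useful work."""
    gross = gpus * rate_per_gpu_hour * HOURS_PER_MONTH
    return {
        "gross_cost": gross,
        "utilized_cost": gross * avg_utilization,
        "idle_cost": gross * (1 - avg_utilization),
    }

# Example: compare a pilot-sized cluster with a scaled-up deployment.
for label, gpus, util in [("pilot", 8, 0.35), ("production", 64, 0.70)]:
    c = monthly_cost(gpus, rate_per_gpu_hour=2.50, avg_utilization=util)
    print(f"{label}: gross ${c['gross_cost']:,.0f}/mo, "
          f"idle spend ${c['idle_cost']:,.0f}/mo")
```

Even a toy model like this makes the trade-off explicit: adding capacity is easy, but idle capacity is where budgets quietly leak.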

This becomes even more important as AI agents enter the picture. Security, governance and compliance must still be assured when agents log in, generate data and make decisions.

The infrastructure must be able to support them and absorb demand spikes while they are active. Resource placement should be considered to reduce latency for real-time workloads, and power consumption must be kept within acceptable limits.

With all this in mind, the shape of what it takes to prepare for the age of artificial intelligence becomes clearer.

Real operational readiness requires a turnkey AI approach in the form of a complete platform with the option to include GPUs and other necessary accelerators.

It should include integrated data services that support the full range of formats that AI needs, as well as appropriate security and governance controls.

It should also support both virtual machines and containers, and be able to orchestrate them. Keeping AI running in production is already a real challenge; no one wants to be managing a cloud migration at the same time.
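To make the orchestration point concrete, here is a minimal sketch of deploying a containerized inference workload with a GPU request, assuming a Kubernetes cluster with the NVIDIA device plugin installed and the official kubernetes Python client; the image name, labels and namespace are placeholders.

```python
# Illustrative sketch: orchestrating a containerized inference workload on
# Kubernetes with a GPU request. Assumes a cluster with the NVIDIA device
# plugin and the official `kubernetes` Python client; image, labels and
# namespace are placeholders.
from kubernetes import client, config

def deploy_inference_service():
    config.load_kube_config()  # use config.load_incluster_config() when run in-cluster

    container = client.V1Container(
        name="model-server",
        image="registry.example.com/model-server:latest",  # placeholder image
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1", "memory": "16Gi"},
            requests={"cpu": "4", "memory": "16Gi"},
        ),
    )

    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "model-server"}),
        spec=client.V1PodSpec(containers=[container]),
    )

    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="model-server"),
        spec=client.V1DeploymentSpec(
            replicas=2,  # the lever for scaling capacity up or down
            selector=client.V1LabelSelector(match_labels={"app": "model-server"}),
            template=template,
        ),
    )

    client.AppsV1Api().create_namespaced_deployment(
        namespace="ai-workloads", body=deployment  # placeholder namespace
    )

if __name__ == "__main__":
    deploy_inference_service()
```

The replica count and resource limits are the levers that let the same workload grow or shrink with demand, without touching the model or the application itself.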

The role of the LLM

LLMs do not always give reproducible answers. But the infrastructure behind generative and agentic AI must be reproducible if companies want to scale it to meet demand.

This applies across cloud, on-premises and edge deployments.

With the right platform and tools, technology leaders can ensure their teams stay focused on maximizing the value of their AI investments.

They won’t waste time or resources turning a successful pilot project into a company-wide rollout.

Whether they build their business around AI or see it as one part of a broader toolset, technology leaders must recognize that AI is a business application.

And enterprise applications need an enterprise-class infrastructure that can support them from pilot to production and beyond.

Only then can they ensure the long-term sustainability of their organization.

