This article originally appeared in UC Advanced issue #17.
Following on from UC Advanced's March issue, which covered DE-CIX CEO Ivo Ivanov's thoughts on Mobile World Congress 2025, Mareike Jacobshagen – Head of DE-CIX's Global Business Partner Program – shares her thoughts with UC Advanced on the big ideas discussed at this year's event.
This year's Mobile World Congress took place in uncharacteristically warm spring weather in Barcelona, but it wasn't the weather generating the heat. Far from the usual buzz around mobile-broadband convergence or the emergence of 6G, virtually every stand in the Fira exhibition centre was showcasing, discussing, or asking important questions about artificial intelligence. MWC regulars will know that AI isn't a new topic at the conference, but the excitement and interest around it this year were palpable.
Visitors got a glimpse into a future where AI is embedded in every aspect of technology – autonomous networks, AI-powered smartphones, and even humanoid robots capable of adapting to real-time instructions. Yet, beyond the excitement of new devices and services, one fundamental question loomed: is our network infrastructure ready to support AI at scale?
According to McKinsey, the share of organisations deploying AI-based solutions almost quadrupled, from just 20% in 2017 to 78% in 2024. For generative AI – whether AI inference for real-time data processing and analytics, or AI model training for more targeted and advanced deployments – the growth is even more pronounced, more than doubling from 33% to 71% in the space of just one year. And remember, these are just business deployments – the actual impact AI will have on societies is almost too big to predict, from smart city infrastructure and medical diagnostics to self-driving vehicles and even everyday use.
AI has advanced so quickly that it has already begun to outpace our ability to support it. Legacy network infrastructure is prone to congestion and bottlenecks that are already limiting the real-world adoption of some of the use cases demonstrated at MWC. Shortly after ChatGPT was released in late 2022, even paying subscribers were greeted with the message "Sorry! We're at capacity," as OpenAI's servers buckled under the weight of demand. As businesses continue to lean into AI, they could be getting a very similar message – or worse, be stuck with sluggish AI systems that aren't performing optimally while wondering what the problem might be. The message is clear: high-performance computing, seamless data mobility, and low-latency connectivity are no longer just desirable – they're essential if AI is to deliver on its promises.
The challenge lies in AI's growing need for distributed processing. AI models are no longer confined to centralised data centres; they are deployed across cloud platforms, edge devices, and enterprise environments, each with unique latency and bandwidth requirements. AI inference – the real-time application of trained models and the most common use of AI – depends on ultra-fast data transfers between these locations. Traditional cloud architectures, which rely on unpredictable public Internet routing, introduce performance limitations that become unacceptable when milliseconds matter. Whether an AI model is making real-time decisions in a self-driving vehicle or processing predictive analytics at a financial firm, the network must be able to handle large-scale, high-speed data movement without congestion or excessive packet loss.
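To make "milliseconds matter" concrete, here is a minimal back-of-envelope sketch in Python. The hop counts and per-hop delays are illustrative assumptions, not measurements from any particular network, but they show how quickly latency accumulates on a long public Internet path compared with a short, direct one:

```python
# Back-of-envelope latency budget for a single AI inference request.
# All figures below are illustrative assumptions, not measured values.

PUBLIC_PATH_HOPS = 12     # assumed hop count for a public Internet route
INTERCONNECT_HOPS = 2     # assumed hop count for a direct, private path
PER_HOP_DELAY_MS = 1.5    # assumed mean forwarding/propagation delay per hop
PER_HOP_JITTER_MS = 0.8   # assumed extra queueing delay per hop under load

def round_trip_ms(hops: int, congested: bool = False) -> float:
    """Sum the per-hop delay one way, then double it for the round trip."""
    per_hop = PER_HOP_DELAY_MS + (PER_HOP_JITTER_MS if congested else 0.0)
    return 2 * hops * per_hop

for label, hops in [("public Internet", PUBLIC_PATH_HOPS),
                    ("direct interconnect", INTERCONNECT_HOPS)]:
    best = round_trip_ms(hops)
    worst = round_trip_ms(hops, congested=True)
    print(f"{label:20s} {best:5.1f} ms best case, {worst:5.1f} ms under congestion")
```

With these assumed figures, the twelve-hop path costs roughly 36–55 ms per round trip against 6–9 ms for the direct path – the difference between a responsive inference service and a sluggish one.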
So how can that be achieved? AI workloads require direct, low-latency pathways between cloud providers, edge computing sites, IoT devices, and enterprise infrastructure to ensure real-time responsiveness. At the same time, AI's appetite for data is growing exponentially, with models requiring continuous updates and retraining based on new inputs. This requires a shift away from traditional data centre-centric architectures toward a more distributed, dynamically interconnected model. Interconnection provides the missing link in AI's infrastructure challenge by creating direct, high-performance pathways between cloud providers, data centres, and edge computing environments. Unlike traditional networking models that rely on multiple hops over the public Internet, interconnection establishes private, low-latency connections that optimise dataflows and ensure AI workloads can move seamlessly.
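Hop count matters for reliability as well as latency, because per-hop packet loss compounds across the whole path. A short sketch, again using an assumed, purely illustrative loss rate:

```python
# How per-hop packet loss compounds with path length.
# The loss rate is an assumption for illustration, not a measured value.

PER_HOP_LOSS = 0.002  # assume 0.2% chance a packet is dropped at each hop

def delivery_rate(hops: int, per_hop_loss: float = PER_HOP_LOSS) -> float:
    """Probability a packet survives every hop on the path."""
    return (1 - per_hop_loss) ** hops

for hops in (2, 6, 12):
    lost = 1 - delivery_rate(hops)
    print(f"{hops:2d} hops -> {lost:.2%} of packets lost end to end")
```

Shortening the path does not just trim delay; it multiplies together fewer chances for a packet to go missing, which is the essence of the interconnection argument.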
Security and reliability are also critical concerns. We know that AI models depend on vast amounts of often sensitive data, requiring secure, predictable network environments. Ensuring that AI traffic moves through secure, dedicated pathways rather than competing for bandwidth on congested public networks will become essential in meeting compliance obligations and mitigating cyberthreats. Put simply, as AI models become more sophisticated, enterprises will need greater control over their dataflows. And it’s that same element of control that will eventually unlock AI growth.
Almost every business conversation at the moment is dominated by AI. It's difficult to think of a technology that has experienced such far-reaching breakthroughs in such a short span of time – but those breakthroughs alone aren't enough. The real challenge is building the digital infrastructure necessary to support AI at scale, ensuring that latency, security, and bandwidth constraints do not hold back its potential. As organisations push AI deployments beyond isolated use cases and into widespread real-world applications, our focus must expand from developing new AI capabilities to ensuring the underlying infrastructure is ready to sustain them. MWC 2025 showcased the next generation of AI-driven technology; now the industry must take on the harder task – building the foundation needed to make next-generation this generation.