Edge-forward: Akamai eyes sweet spot between centralized & decentralized AI inference
For this edition of The New Stack Makers, we sit down with two leaders at Akamai: Lena Hall, senior director, developers & AI engineering, and Thorsten Hans, senior developer advocate.
Keen to understand what’s going on in the cloud-native AI universe and where Akamai fits into that story, we caught up with the pair at KubeCon + CloudNativeCon Europe 2026 in Amsterdam.
We know Akamai as a Content Delivery Network (CDN) business with a focus on cybersecurity and software application development technologies. The Akamai of today is also a modern, developer-friendly cloud infrastructure business ready to deliver for the age of AI in every location.
But what shape does that business model take in real-world terms?
“There are so many use cases that benefit from really low latency distributed processing, and Akamai has always been known for our services around distributed computing. So this is why we have developed managed container services for Kubernetes; this technology works fluidly with our low-latency serverless functions and our distributed AI inference platform,” says Hall.
Bringing compute closer
In our discussion, Hall and Hans explain how the company achieves its proximity play. With 41 core data centers in 36 countries, Akamai extends its reach through around 4,400 smaller “distributed reach” data centers worldwide.
“The intention is to bring compute closer to wherever the user is around the planet in order to reduce latency,” Hall says. “But it’s important to remember that there are so many different types of workloads that users like to run. There are those that require really deep thinking and a lot of computing, so this is where centralized data centers do a great job. But when you combine those stacks with distributed edge capabilities, you can deliver faster feedback loops when required.”
Those faster feedback loops are critical in areas such as robotics, fraud detection and conversational agents, where a delay can quickly lead to customer loss. Bringing centralized and decentralized edge resources together is clearly the sweet spot Akamai is aiming for.
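The tradeoff Hall describes can be made concrete with a back-of-the-envelope latency budget. The sketch below uses hypothetical, illustrative figures (the round-trip times and inference cost are assumptions, not Akamai measurements): for a multi-turn interaction such as a conversational agent, network round trips compound, which is why moving compute closer to the user pays off.

```python
# Hypothetical, illustrative figures; real latencies depend on geography,
# network conditions and model size.
CENTRAL_RTT_MS = 120.0   # round trip to a distant core data center
EDGE_RTT_MS = 15.0       # round trip to a nearby edge location
INFERENCE_MS = 40.0      # model execution time, assumed equal in both cases

def total_latency_ms(rtt_ms: float, inference_ms: float, turns: int = 1) -> float:
    """User-perceived latency for `turns` request/response cycles."""
    return turns * (rtt_ms + inference_ms)

# A five-turn conversation: the network share of the budget dominates
# in the centralized case and shrinks at the edge.
central = total_latency_ms(CENTRAL_RTT_MS, INFERENCE_MS, turns=5)
edge = total_latency_ms(EDGE_RTT_MS, INFERENCE_MS, turns=5)
print(f"centralized: {central:.0f} ms, edge: {edge:.0f} ms")
```

With these assumed numbers, five turns cost 800 ms centrally versus 275 ms at the edge; the deeper “thinking” workloads Hall mentions, where inference time dwarfs the round trip, are exactly the ones where the centralized data center still does a great job.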
Brittle integration points?
But let’s question this theory for a second. In taking this approach, is Akamai not building its own integration and configuration nightmare? With a whole string of dependencies and integration points, doesn’t that create an inherently more brittle compute and data stack for AI to rest upon?
“Doing this correctly is precisely the infrastructure service layer that Akamai is capable of providing,” says Hall. “We’re used to delivering this for really large corporations in a managed way with a simplified setup. Users can then move forward to develop new services without having to manage the infrastructure element of the equation and leverage the tools we have.”
Enthusiastic advocates of self-service systems, Hall and Hans describe a computing landscape where users have all the toolkits and services they need to spin up an ecosystem and deploy at will, often with a single command.
Equally positive about the need to underpin open-source support, the Akamai pair again points to the company’s managed Kubernetes service. Akamai also has its own application platform project that runs on top of Linode Kubernetes Engine (LKE) to package a selection of frequently used open source projects. This means users can access these tools through Akamai’s interface without manually installing each piece of software.
We put developers at the center, always… a developer can go from a blinking cursor to a live production-deployed application that is globally distributed on top of the Akamai cloud in around two minutes.
Serverless suitability
Regarding who uses Akamai’s platform within any given software engineering team, developer advocate Hans says that his firm’s serverless technologies, such as Akamai Functions, are especially accessible to developers at all levels. This part of the company’s platform is designed to help developers build, deploy and scale applications and AI workloads using WebAssembly functions across Akamai’s distributed cloud without the burden of managing infrastructure.
“We put developers at the center, always. Akamai worked with the Cloud Native Computing Foundation (CNCF) to create the sandbox project known as Spin. This is a framework for building and deploying serverless applications in WebAssembly. This leads us towards NoOps, so a developer can go from a blinking cursor to a live production-deployed application that is globally distributed on top of the Akamai cloud in around two minutes, all built on different layers of popular open source projects,” says Hans.
Akamai’s work with Spin stems from its December 2025 acquisition of Fermyon, the cloud-native Wasm Function-as-a-Service company. Named to evoke its mission of driving cold start times under 1 millisecond, Spin was followed in 2024 by SpinKube, a Kubernetes runtime built by the same team. Hans has championed the increased use of Wasm within Akamai, as the team has pledged to make it easier for developers to execute lightweight code at the edge.
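The programming model Hans describes, where a developer writes only a handler and the platform owns routing, scaling and infrastructure, can be illustrated with a toy stand-in. This is a sketch of the functions style only: Akamai Functions and Spin compile handlers to WebAssembly and use their own SDKs, and the decorator, route table and request shape below are invented for illustration.

```python
from typing import Callable, Dict

# Toy stand-in for a functions platform: a route table mapping paths
# to handlers. Real platforms do this behind the scenes.
Handler = Callable[[dict], dict]
routes: Dict[str, Handler] = {}

def function(path: str):
    """Register a handler for a path, the way a functions SDK might."""
    def register(handler: Handler) -> Handler:
        routes[path] = handler
        return handler
    return register

@function("/hello")
def hello(request: dict) -> dict:
    # The developer's entire job: business logic, no servers.
    name = request.get("query", {}).get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}

def dispatch(path: str, request: dict) -> dict:
    """What the platform does on each incoming request."""
    handler = routes.get(path)
    if handler is None:
        return {"status": 404, "body": "not found"}
    return handler(request)

print(dispatch("/hello", {"query": {"name": "edge"}}))
```

The “blinking cursor to production” claim rests on the platform owning everything outside the handler body; the sub-millisecond cold starts come from instantiating a Wasm module per request rather than booting a container or VM.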
Focus on the logic, not the logistics
Promising to always “meet developers where they are” in terms of individual skills, Hans says that Akamai has provided guidance via tutorials, hands-on labs, and ready-to-use applications. In practice, it’s all about telling developers that they shouldn’t spend time worrying about server provisioning and management; they should be able to see what’s inside the box (of any given Akamai service) and think about how they can apply that to their environment’s requirements.
Hall and Hans say they appreciate that there will always be software engineering teams that need to work with the internal infrastructure they run on. But for the majority of its customer base today, Akamai provides a way to work with a higher level of abstraction. This means engineers don’t have to worry about the server underneath; all they need to focus on is the business processes they aim to encapsulate in application logic and the functionality their users need.