Stack vs malloc: real-world benchmark shows 2–6x difference
Usually, we assume that malloc is fast, and in most cases it is. However, sometimes "reasonable" code can lead to very unreasonable performance.

In a previous post, I looked at using stack-based allocation (VLA / fixed-size) for temporary data, and in another at estimating available stack space so it can be used safely.

This time I wanted to measure the actual impact in a realistic workload.

Full article (Medium, no paywall): https://blog.stackademic.com/temporary-memory-isnt-free-allocation-strategies-and-their-hidden-costs-159247f7f856
I built a benchmark around a loan portfolio PV calculation, where each loan creates several temporary arrays (thousands of elements each). This is fairly typical code: clean, modular, nothing unusual.
I compared:
- stack allocation (VLA)
- heap per-loan (malloc/free)
- heap reuse
- static (baseline)
Results:
- stack allocation stays very close to optimal
- heap per-loan can be ~2.5x slower (glibc) and up to ~6x slower (musl)
- even optimized allocators show pattern-dependent behavior
The main takeaway for me: allocation cost is usually hidden, but once it's in the hot path, it really matters.
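If you want to see the effect on your own machine, a minimal harness is enough; this is a hypothetical micro-benchmark in the same spirit, not the article's code, and the size and iteration counts are made up:

```c
#define _POSIX_C_SOURCE 199309L
#include <stdlib.h>
#include <time.h>

#define N 2048   /* elements per temporary array (illustrative size) */

static double now_sec(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (double)ts.tv_sec + (double)ts.tv_nsec * 1e-9;
}

/* volatile sink keeps the optimizer from deleting the work */
static volatile double sink;

/* Fill a stack-allocated (VLA) temporary `iters` times; return seconds. */
double bench_stack(int iters) {
    double t0 = now_sec();
    for (int it = 0; it < iters; it++) {
        double tmp[N];                       /* one stack-pointer bump */
        for (int i = 0; i < N; i++) tmp[i] = (double)i;
        sink = tmp[N - 1];
    }
    return now_sec() - t0;
}

/* Same work, but malloc/free a fresh buffer every iteration. */
double bench_heap(int iters) {
    double t0 = now_sec();
    for (int it = 0; it < iters; it++) {
        double *tmp = malloc(N * sizeof *tmp);
        if (!tmp) abort();                   /* sketch: no graceful error path */
        for (int i = 0; i < N; i++) tmp[i] = (double)i;
        sink = tmp[N - 1];
        free(tmp);
    }
    return now_sec() - t0;
}
```

The ratio `bench_heap(n) / bench_stack(n)` will vary with the allocator, array size, and optimization level, which is exactly the pattern-dependent behavior the results above describe.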
Curious how others approach temporary workspace in performance-sensitive code.
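One common answer is a grow-only scratch buffer that amortizes allocation across many calls. The sketch below is one possible shape under assumed names (`Scratch`, `scratch_get`), not a reference to any particular library:

```c
#include <stdlib.h>

/* Grow-only scratch buffer: allocates only when the requested size
   exceeds the current capacity, so steady-state use is allocation-free. */
typedef struct {
    void  *data;
    size_t cap;
} Scratch;

/* Return a buffer of at least `need` bytes, or NULL on failure.
   The returned pointer is valid until the next scratch_get/scratch_free. */
void *scratch_get(Scratch *s, size_t need) {
    if (need > s->cap) {
        size_t cap = s->cap ? s->cap : 64;
        while (cap < need) cap *= 2;         /* geometric growth */
        void *p = realloc(s->data, cap);
        if (!p) return NULL;
        s->data = p;
        s->cap  = cap;
    }
    return s->data;
}

void scratch_free(Scratch *s) {
    free(s->data);
    s->data = NULL;
    s->cap  = 0;
}
```

This sits between "heap per-loan" and "heap reuse" in the comparison above: the first call pays for an allocation, but a loop over thousands of loans touches the allocator only when the workspace needs to grow.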
DEV Community: https://dev.to/yairlenga/stack-vs-malloc-real-world-benchmark-shows-2-6x-difference-4601
