Securing Asgard: Why I Built a Card Game Suite for Docker Security
This is a submission for the DEV April Fools Challenge
What I Built
What do you do when you have a series of narrative-driven Docker security workshops featuring 10 elite "Commandos" fighting CVE monsters in Asgard?
You could write more documentation. You could add more tests. Or, you could do the most "anti-value" thing possible: Build a full-featured arcade suite where these security characters play Blackjack and Swiss Jass.
Presenting the Asgard Arcade: A collection of four utterly useless but technically over-engineered games designed to distract developers from actual security work while simultaneously drilling "Security Metaphors" into their brains.
The Lore: Docker Commandos & Black Forest Shadow
The Docker Commandos are a team of 10 elite specialists, each representing a core Docker security feature (e.g., Gord is docker init, Jack is docker scout). Their journey began in the Black Forest Shadow universe—a dark fantasy retelling of container security where warriors fight shadowy monsters called CVEs in the year 1865.
From the 19th-century Black Forest to the futuristic golden districts of Asgard, these characters teach DevSecOps through immersive storytelling.
Black Forest Shadow — A Dark Fantasy Guide to Docker and Kubernetes Security
A dark fantasy novel set in the Black Forest of 1865 that teaches Docker and Kubernetes security through narrative — covering CVE hunting, SBOM generation, runtime hardening, and container security.
dockersecurity.io
The Games:

- Asgard Siege (Tactical Defense): A game where you must counter CVE threats (like "The Supply Chain Hydra") by deploying the correct Commando. Choose wrong, and Asgard's security level crashes.
- Blackjack with Jack: Standard Blackjack, but played against Angra (the shadow villain). If you are dealt Jack (the Cyborg Commando), you get a "Scout Bonus" that reveals the dealer's hidden card.
- Asgardian Jass (Schieber): A four-player Swiss trick-taking game. We replaced the standard suits with Shields, Attestations, Hardened Images, and Signatures. Jack is the "Bure" (the highest trump).
- The Reference Deck: A simple card-comparison game for learning each character's "Power," "Stealth," and "Legacy" stats.
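The "Scout Bonus" rule in Blackjack with Jack can be pictured as a small piece of game logic. This is a hypothetical sketch (the names `hasScoutBonus` and `visibleDealerCards` are my own, not from the project): if the player's opening hand contains the Jack Commando card, the dealer's hole card is rendered face-up instead of hidden.

```typescript
type Card = { rank: string; value: number; isJackCommando?: boolean };

// Sketch: the Scout Bonus activates when the initial hand
// contains the Jack Commando card.
function hasScoutBonus(hand: Card[]): boolean {
  return hand.some((c) => c.isJackCommando === true);
}

// With the bonus active, the whole dealer hand is visible;
// otherwise only the up-card (first card) is shown.
function visibleDealerCards(dealer: Card[], scoutBonus: boolean): Card[] {
  return scoutBonus ? dealer : dealer.slice(0, 1);
}
```

In other words, the bonus is purely informational: it changes what the player can see, not how the dealer plays.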
Demo
You can experience the arcade yourself at dockersecurity.io/commandos (scroll down to the "Asgard Arcade") or jump directly into a game below:
The Tactical Siege
Docker and Kubernetes Security
From supply chain to runtime: build safer images, lock down clusters, instrument logging & audit trails, and stay ahead of emerging threats. The comprehensive guide by Mohammad-Ali A'râbi.
dockersecurity.io
Blackjack with Jack
Asgardian Jass
Code
The project is built within the official DockerSecurity.io website repository.
How I Built It
Full Disclosure: Every single game in this arcade, the UI components, the AI logic, and even this very blog post were entirely developed and written by Gemini CLI, an interactive agent. I simply provided the "utterly useless" vision, and the agent executed the over-engineering.
Built with Next.js 14, Tailwind CSS, and Radix UI.
- The Jass Engine: Features a heuristic AI for your partner (Evie) and opponents (Angra & Jack the Miner) that follows suit rules, handles trump logic, and manages complex turn states.
- Dynamic State: Uses React state machines to manage trick resolution, the "Zero-Day Exploit" dealer logic in Blackjack, and Asgard's deteriorating security level during sieges.
- Accessible Visuals: Custom character portraits with responsive aspect ratios and high-visibility suit indicators (e.g., Shields for SBOMs, Fingerprints for Identity).
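The heart of any Jass engine is deciding which cards a player may legally play. Here is a simplified sketch of that check, assuming the game's four custom suits and ignoring Schieber's finer rules (undertrumping restrictions, the trump-Jack exemption); the function name `legalCards` is hypothetical, not the project's actual API:

```typescript
type Suit = "shields" | "attestations" | "hardened" | "signatures";
interface JassCard {
  suit: Suit;
  rank: string;
}

// Simplified follow-suit rule: if the hand contains the led suit,
// the player must follow it or play a trump; otherwise any card is legal.
function legalCards(hand: JassCard[], led: Suit, trump: Suit): JassCard[] {
  const canFollow = hand.some((c) => c.suit === led);
  if (!canFollow) return hand; // out of the led suit: anything goes
  return hand.filter((c) => c.suit === led || c.suit === trump);
}
```

The heuristic AI would then score the cards this filter returns, e.g. preferring the Bure when a trick is worth winning.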
Prize Category
I am submitting this for the Community Favorite category.
While it solves exactly zero real-world security vulnerabilities, it turns the grueling task of learning supply-chain security (SBOMs, Provenance, VEX) into a series of addictive arcade games. It’s the ultimate "Anti-Value" tool: it encourages developers to spend their "Build Time" playing cards with a cyborg cowboy instead of fixing their Dockerfile.
Created by Mohammad-Ali A'râbi (Docker Captain) & Gemini CLI
DEV Community
https://dev.to/aerabi/securing-asgard-why-i-built-a-card-game-suite-for-docker-security-32hn
