Meta-Programming and Macro capabilities of various languages
Meta-programming = the broad idea of “programs that manipulate or generate programs”. It can happen at runtime (reflection) or at compile time (macros).

Macros = one specific style of meta-programming, usually tied to transforming syntax at compile time (in a pre-processor or AST transformer). A macro takes a piece of code as input and replaces it with another piece of code as output, often based on patterns or parameters. Macros consist of:

- Rule‑based transformation: a macro is specified as a pattern (e.g., a template, an AST pattern, or a token pattern) plus a replacement that is generated when that pattern is matched.
- Expansion, not function call: a macro use is not a runtime call; the macro is expanded before execution, so the final code is the result of replacing the macro invocation with its generated code.
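To make these two points concrete, here is a minimal sketch in Rust (one of the languages surveyed below) using a declarative `macro_rules!` macro; the macro name and example are illustrative:

```rust
// A rule-based macro: each `pattern => replacement` arm is matched
// against the invocation's tokens, and the invocation is expanded into
// the replacement at compile time -- no function call happens at runtime.
macro_rules! square {
    ($x:expr) => {
        ($x) * ($x)
    };
}

fn main() {
    // `square!(3 + 1)` is replaced by `((3 + 1) * (3 + 1))` before the
    // program is compiled, so this prints 16.
    println!("{}", square!(3 + 1));
}
```

Note the parentheses around `$x` in the replacement: because the expansion is textual-by-pattern rather than a call, an unparenthesized `$x * $x` would expand `square!(3 + 1)` to `3 + 1 * 3 + 1`, a classic macro pitfall.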
Here are some programming languages and their meta-programming and macro capabilities.
NB! Take with a grain of salt. The result comes from working with perplexity.ai, and I have not had a chance to personally verify every cell. They look generally correct to me, though. Corrections are welcome!
Metaprogramming + macro features
Here are the programming languages with their total scores (out of 15):
- Racket: 15
- Common Lisp (CL): 13
- Scheme (R7RS‑small): 12
- Rust: 11
- Nim: 10
- Clojure: 10
- Carp: 9
- Jai: 5
- C++: 5
- Zig: 4
- Ruby: 4
Scores are out of 15 = 4 (metaprogramming) + 3 (compile‑time facilities) + 8 (macro features).
Each cell is either ✅ (yes) or – (no / limited).
Metaprogramming features:

| Feature / language | Racket | Common Lisp | Scheme (R7RS‑small) | Rust | Nim | Clojure | Carp | Jai | C++ | Zig comptime | Ruby |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Runtime metaprogramming (e.g., open classes, `define_method`, method hooks) | ✅ | ✅ | – | – | – | ✅ | – | – | – | – | ✅ |
| Runtime reflection / introspection | ✅ | ✅ | ✅ | – | – | ✅ | – | – | ✅ | – | ✅ |
| Runtime eval / dynamic code loading | ✅ | ✅ | ✅ | – | – | ✅ | – | – | – | – | ✅ |
| Build‑ or tooling‑level code generation supported | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Metaprogramming score (out of 4) | 4 | 4 | 3 | 1 | 1 | 4 | 1 | 1 | 2 | 1 | 4 |
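The one row where every language gets a ✅ is build‑ or tooling‑level code generation: any language can have source files produced by an external step before compilation. A minimal sketch of what this might look like as a Rust `build.rs` build script (the generated table and file name are illustrative, not from the article):

```rust
use std::{env, fs, path::Path};

// Sketch of a Cargo build script (build.rs): it runs before the crate
// is compiled and may emit Rust source text that the crate later pulls
// in with `include!(concat!(env!("OUT_DIR"), "/generated.rs"))`.
fn generate() -> String {
    // Generate a lookup table as ordinary source text.
    let mut src = String::from("pub const SQUARES: [u32; 4] = [");
    for i in 0..4u32 {
        src.push_str(&format!("{}, ", i * i));
    }
    src.push_str("];\n");
    src
}

fn main() {
    // Cargo sets OUT_DIR for build scripts; fall back to "." elsewhere.
    let out_dir = env::var("OUT_DIR").unwrap_or_else(|_| ".".into());
    fs::write(Path::new(&out_dir).join("generated.rs"), generate())
        .expect("failed to write generated source");
}
```

Unlike macros, this kind of code generation happens outside the language proper, which is why even macro-less languages earn the ✅ here.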
Compile‑time facilities (not strictly macros):

| Feature / language | Racket | Common Lisp | Scheme (R7RS‑small) | Rust | Nim | Clojure | Carp | Jai | C++ | Zig comptime | Ruby |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Run arbitrary code at compile time | ✅ | ✅ | ✅ | ✅ | ✅ | – | ✅ | ✅ | ✅ (constexpr) | ✅ | – |
| Types as values at compile time | ✅ (in Typed Racket) | – | – | ✅ | ✅ | – | – | ✅ | ✅ (constexpr + templates) | ✅ | – |
| constexpr‑style type‑level / compile‑time computation | ✅ | – | – | ✅ (const‑eval) | ✅ | – | ✅ | ✅ | ✅ (via constexpr) | ✅ | – |
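For the “run arbitrary code at compile time” and “constexpr‑style computation” rows, here is a minimal Rust sketch using `const fn` evaluation (the function and constant names are illustrative):

```rust
// A `const fn` can be evaluated by the compiler: the factorial below is
// computed at compile time and baked into the binary as a constant.
const fn factorial(n: u64) -> u64 {
    // Loops and `if` are allowed in const fns on stable Rust.
    let mut acc = 1;
    let mut i = 2;
    while i <= n {
        acc *= i;
        i += 1;
    }
    acc
}

// Both uses force compile-time evaluation: a `const` initializer and an
// array length must be known before the program ever runs.
const FACT_5: u64 = factorial(5);
static TABLE: [u8; factorial(4) as usize] = [0; 24];

fn main() {
    println!("{} {}", FACT_5, TABLE.len()); // 120 and 24
}
```

This is compile-time computation without being a macro: no syntax is transformed, the compiler simply evaluates ordinary code early, which is why the article lists it separately from the macro features below.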
Macro features:

| Feature / language | Racket | Common Lisp | Scheme (R7RS‑small) | Rust | Nim | Clojure | Carp | Jai | C++ | Zig comptime | Ruby |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Hygienic identifier binding | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ (gensym, but manual) | ✅ | – | – | – |
| Operate on AST / syntax tree | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Pattern‑based transformations | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Define new syntactic forms | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Define new keywords / syntax | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Override core language forms | ✅ | ✅ | ✅ | – | – | – | – | – | – | – | – |
| Multi‑phase / macros of macros | ✅ | ✅ | ✅ | ✅ | – | – | – | – | – | – | – |
| Full‑fledged DSL / language building (via macros) | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | – | – | – | – |
| Macro & compile‑time features score (out of 11) | 11 | 9 | 9 | 10 | 9 | 6 | 8 | 4 | 3 | 3 | 0 |
| Total score (out of 15) | 15 | 13 | 12 | 11 | 10 | 10 | 9 | 5 | 5 | 4 | 4 |
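To illustrate the “hygienic identifier binding” row: in a hygienic macro system, identifiers introduced by an expansion cannot accidentally capture or clobber the caller's bindings. A minimal Rust sketch (names illustrative):

```rust
// Hygiene: a local binding introduced inside a macro expansion lives in
// the macro's own scope, distinct from any same-named binding at the
// call site.
macro_rules! shadow_test {
    () => {
        let x = 99; // a *different* `x` than the caller's
        let _ = x;  // silence the unused-variable lint
    };
}

fn main() {
    let x = 1;
    shadow_test!(); // expands to `let x = 99;`, but hygienically
    // The caller's `x` is untouched; a non-hygienic textual expansion
    // (as in the C preprocessor) could have shadowed or clobbered it.
    println!("{}", x); // prints 1, not 99
}
```

Languages marked “gensym, but manual” reach the same safety only if the macro author remembers to generate fresh symbols by hand.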
The score counts one point per row where the language can reasonably do what the feature describes (DSL‑building is counted as a full feature, even if “limited” in some languages).
The feature score is not an ultimate measure of meta-programming power: a language like C++ may score higher than a language like Ruby yet generally be considered less tailored for meta-programming (Ruby is widely praised for its powerful meta-programming abilities).
Because macro features are many and varied, they carry disproportionate weight in the total score, even though runtime meta-programming can be just as powerful, or more so.
Lisp-style languages (with their homoiconic S-expressions) make up 5 of the 11 languages in our list: Racket, CL, Scheme, Clojure, and Carp.
For further reading: https://github.com/oils-for-unix/oils/wiki/Metaprogramming
Originally published on DEV Community: https://dev.to/redbar0n/meta-programming-and-macro-capabilities-of-various-languages-1hgd
