Bayesian Additive Regression Trees for functional ANOVA model
arXiv:2509.03317v4 Announce Type: replace
Abstract: Bayesian Additive Regression Trees (BART) is a powerful statistical model that leverages the strengths of Bayesian inference and regression trees. It has received significant attention for capturing complex non-linear relationships and interactions among predictors. However, the accuracy of BART often comes at the cost of interpretability. To address this limitation, we propose ANOVA Bayesian Additive Regression Trees (ANOVA-BART), a novel extension of BART based on the functional ANOVA decomposition, which decomposes the variability of a function into interaction components, each representing the contribution of a different set of covariates or factors. Our proposed ANOVA-BART enhances interpretability, preserves and extends the theoretical guarantees of BART, and achieves comparable prediction performance. Specifically, we establish that the posterior concentration rate of ANOVA-BART is nearly minimax optimal, and we further provide convergence rates for each interaction component, results that are not available for BART. Moreover, comprehensive experiments confirm that ANOVA-BART is comparable to BART in both accuracy and uncertainty quantification, while also demonstrating its effectiveness in component selection. These results suggest that ANOVA-BART offers a compelling alternative to BART by balancing predictive accuracy, interpretability, and theoretical consistency.
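For context (this sketch is not part of the abstract, and the paper's exact formulation may differ): the functional ANOVA decomposition referenced above is conventionally written as a sum of components indexed by subsets of the covariates,

```latex
f(x_1,\dots,x_p)
  \;=\; f_{\emptyset}
  \;+\; \sum_{j=1}^{p} f_{\{j\}}(x_j)
  \;+\; \sum_{j<k} f_{\{j,k\}}(x_j,x_k)
  \;+\;\cdots
  \;=\; \sum_{S \subseteq \{1,\dots,p\}} f_{S}(x_S),
```

where $f_{\emptyset}$ is a constant, each $f_{S}$ depends only on the covariates indexed by $S$, and identifiability is typically enforced by requiring each component to integrate to zero over each of its arguments. Under this standard setup, "convergence rates for each interaction" refers to posterior contraction of each $f_S$ to its true counterpart, not just of the overall sum $f$.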
Subjects: Machine Learning (stat.ML); Machine Learning (cs.LG)
Cite as: arXiv:2509.03317 [stat.ML]
(or arXiv:2509.03317v4 [stat.ML] for this version)
https://doi.org/10.48550/arXiv.2509.03317
arXiv-issued DOI via DataCite
Submission history
From: Seokhun Park [view email] [v1] Wed, 3 Sep 2025 13:50:45 UTC (466 KB) [v2] Thu, 4 Sep 2025 12:40:40 UTC (462 KB) [v3] Wed, 4 Feb 2026 06:28:53 UTC (540 KB) [v4] Tue, 31 Mar 2026 06:49:05 UTC (540 KB)