Democratizing Foundations of Problem-Solving with AI: A Breadth-First Search Curriculum for Middle School Students
Abstract: As AI becomes more common in students' everyday experiences, a major challenge for K-12 AI education is designing learning experiences that can be meaningfully integrated into existing subject-area instruction. This paper presents the design and implementation of an AI4K12-aligned curriculum that embeds AI learning goals within a rural middle school science classroom using Breadth-First Search (BFS) as an accessible entry point to AI problem-solving. Through unplugged activities and an interactive simulation environment, students learned BFS as a strategy for exploring networks and identifying shortest paths, then applied it to science contexts involving virus spread and contact tracing. To examine engagement and learning, we analyzed pre- and post-assessments, student work artifacts, and a teacher interview. Results suggest that students engaged productively with the curriculum, improved their understanding of BFS and AI problem-solving, and benefited from learning these ideas within ongoing science instruction. Teacher feedback further indicated that the module fit well within the science curriculum while supporting intended science learning outcomes. We conclude with curriculum and design considerations for broadening access to learning about problem-solving with AI in education.
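The abstract describes BFS as a strategy for exploring networks and finding shortest paths, applied to contact tracing. As a minimal illustration of that idea (not code from the paper; the contact network below is hypothetical), a BFS from a "patient zero" assigns each person the length of the shortest contact chain that reaches them:

```python
from collections import deque

def bfs_shortest_paths(graph, start):
    """Breadth-first search: hop distance from `start` to each reachable node.

    In an unweighted graph, the first time BFS reaches a node is via a
    shortest path, so the recorded distance is minimal.
    """
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:  # first visit = shortest contact chain
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Hypothetical contact network: "P0" is the initial case,
# edges are recorded contacts between people.
contacts = {
    "P0": ["A", "B"],
    "A": ["C"],
    "B": ["C", "D"],
    "C": ["E"],
    "D": [],
    "E": [],
}

print(bfs_shortest_paths(contacts, "P0"))
# {'P0': 0, 'A': 1, 'B': 1, 'C': 2, 'D': 2, 'E': 3}
```

The hop distances correspond to "generations" of potential transmission, which is why BFS maps naturally onto the virus-spread context the curriculum uses.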
Comments: Paper accepted to the 27th International Conference on AI in Education (AIED 2026)
Subjects: Computers and Society (cs.CY); Emerging Technologies (cs.ET); Human-Computer Interaction (cs.HC)
Cite as: arXiv:2604.01396 [cs.CY]
(or arXiv:2604.01396v1 [cs.CY] for this version)
https://doi.org/10.48550/arXiv.2604.01396
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Griffin Pitts [v1] Wed, 1 Apr 2026 20:54:14 UTC (6,009 KB)
