OneComp: One-Line Revolution for Generative AI Model Compression
arXiv:2603.28845v1 Announce Type: new
Authors: Yuma Ichikawa, Keiji Kimura, Akihiro Yoshida, Yudai Fujimoto, Hiroki Tokura, Yamato Arai, Yoshiyuki Ishii, Yusei Kawakami, Genki Shikada, Achille Jacquemond, Yoshihiko Fujisawa, Katsuki Fujisawa, Takumi Honda, Akira Sakai
Abstract: Deploying foundation models is increasingly constrained by memory footprint, latency, and hardware costs. Post-training compression can mitigate these bottlenecks by reducing the precision of model parameters without significantly degrading performance; however, its practical implementation remains challenging as practitioners navigate a fragmented landscape of quantization algorithms, precision budgets, data-driven calibration strategies, and hardware-dependent execution regimes. We present OneComp, an open-source compression framework that transforms this expert workflow into a reproducible, resource-adaptive pipeline. Given a model identifier and available hardware, OneComp automatically inspects the model, plans mixed-precision assignments, and executes progressive quantization stages, ranging from layer-wise compression to block-wise refinement and global refinement. A key architectural choice is treating the first quantized checkpoint as a deployable pivot, ensuring that each subsequent stage improves the same model and that quality increases as more compute is invested. By converting state-of-the-art compression research into an extensible, open-source, hardware-aware pipeline, OneComp bridges the gap between algorithmic innovation and production-grade model deployment.
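The progressive, pivot-based workflow the abstract describes can be sketched as follows. This is a minimal illustration, not the actual OneComp API: every name (`CompressionPlan`, `plan_mixed_precision`, `run_pipeline`) and the toy bit-width heuristic are hypothetical.

```python
# Hypothetical sketch of a resource-adaptive, progressive post-training
# compression pipeline in the spirit of the OneComp abstract.
# All names and heuristics below are illustrative assumptions.
from dataclasses import dataclass, field


@dataclass
class CompressionPlan:
    model_id: str
    gpu_memory_gb: float
    bits: dict = field(default_factory=dict)  # layer name -> bit width


def plan_mixed_precision(layers, gpu_memory_gb):
    """Toy heuristic: quantize large layers more aggressively on small GPUs."""
    bits = {}
    for name, params_millions in layers.items():
        if gpu_memory_gb < 24 and params_millions > 100:
            bits[name] = 4   # aggressive 4-bit for big layers under tight memory
        else:
            bits[name] = 8   # default 8-bit
    return bits


def run_pipeline(model_id, layers, gpu_memory_gb,
                 stages=("layer-wise", "block-wise", "global")):
    """Inspect -> plan -> progressively refine one deployable checkpoint."""
    plan = CompressionPlan(model_id, gpu_memory_gb,
                           plan_mixed_precision(layers, gpu_memory_gb))
    # Stage 1 yields a deployable "pivot"; later stages refine the SAME
    # checkpoint, so quality only improves as more compute is invested.
    checkpoint = {"model": model_id, "bits": plan.bits, "stage": None}
    history = []
    for stage in stages:
        checkpoint = {**checkpoint, "stage": stage}
        history.append(stage)
    return checkpoint, history
```

The key design point mirrored here is that each stage returns a complete, deployable checkpoint, so a user can stop after any stage and still have a usable model.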
Comments: 31 pages, 6 figures
Subjects:
Machine Learning (cs.LG); Artificial Intelligence (cs.AI); Computational Engineering, Finance, and Science (cs.CE); Computation and Language (cs.CL)
Cite as: arXiv:2603.28845 [cs.LG]
(or arXiv:2603.28845v1 [cs.LG] for this version)
https://doi.org/10.48550/arXiv.2603.28845
arXiv-issued DOI via DataCite (pending registration)
Submission history
From: Yuma Ichikawa [v1] Mon, 30 Mar 2026 17:43:32 UTC (323 KB)

Inter-Speaker Relative Cues for Two-Stage Text-Guided Target Speech Extraction
arXiv:2603.01316v2 Announce Type: replace Abstract: This paper investigates the use of relative cues for text-based target speech extraction (TSE). We first provide a theoretical justification for relative cues from the perspectives of human perception and label quantization, showing that relative cues preserve fine-grained distinctions that are often lost in absolute categorical representations for continuous-valued attributes. Building on this analysis, we propose a two-stage TSE framework in which a speech separation model first generates candidate sources, followed by a text-guided classifier that selects the target speaker based on embedding similarity. Within this framework, we train two separate classification models to evaluate the advantages of relative cues over independent cues…
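The second stage of the framework above, selecting the candidate source whose embedding best matches a text-derived query, reduces to a similarity argmax. A toy sketch, with plain vectors standing in for learned speech/text embeddings:

```python
# Toy sketch of stage two: pick the separated candidate whose embedding
# is most similar (cosine) to the text-guided query embedding.
# Embeddings here are plain lists; real ones would come from learned encoders.
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)


def select_target(candidate_embs, query_emb):
    """Return the index of the candidate most similar to the query."""
    sims = [cosine(c, query_emb) for c in candidate_embs]
    return max(range(len(sims)), key=sims.__getitem__)
```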

Empirical and Statistical Characterisation of 28 GHz mmWave Propagation in Office Environments
arXiv:2604.01814v1 Announce Type: new Abstract: Millimeter wave (mmWave) technology at 28 GHz is vital for beyond-5G systems, but indoor deployment remains challenging due to limited statistical evidence on propagation. This study investigates path loss, material penetration, and coverage enhancement using TMYTEK-based measurements. Statistical tests and confidence interval analysis show that path loss aligns with free-space theory, with an exponent of n = 2.07 ± 0.073 (p = 0.385), confirming the suitability of classical models. Material analysis reveals significant variation: desk dividers introduce 3.4 dB more attenuation than display boards (95% CI: 1.81 to 4.98 dB, p < 0.01), contradicting thickness-based assumptions. Reflector optimisation yields a significa…
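The classical model the exponent above refers to is the log-distance form PL(d) = PL(d0) + 10 n log10(d/d0). A minimal sketch of how n could be fit by least squares, using synthetic data (the 61.4 dB reference is the free-space path loss at 1 m for 28 GHz; the function and data are illustrative, not the paper's actual measurements):

```python
# Fit the path-loss exponent n in PL(d) = PL(d0) + 10*n*log10(d/d0).
# pl_d0 = 61.4 dB is the 1 m free-space path loss at 28 GHz
# (20*log10(f_Hz) + 20*log10(d_m) - 147.55).
import math


def fit_path_loss_exponent(distances_m, pl_db, d0=1.0, pl_d0=61.4):
    """Least-squares estimate of n from measured path loss in dB."""
    xs = [10.0 * math.log10(d / d0) for d in distances_m]
    ys = [pl - pl_d0 for pl in pl_db]
    # Regression through the origin: n = sum(x*y) / sum(x*x)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```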

Mitigating Implicit Inconsistencies in Patch Porting
arXiv:2604.01680v1 Announce Type: new Abstract: Promptly porting patches from a source codebase to its variants (e.g., forks and branches) is essential for mitigating propagated defects and vulnerabilities. Recent studies have explored automated patch porting to reduce manual effort and delay, but existing approaches mainly handle inconsistencies visible in a patch's local context and struggle with those requiring global mapping knowledge between codebases. We refer to such non-local inconsistencies as implicit inconsistencies. Implicit inconsistencies pose greater challenges for developers to resolve due to their non-local nature. To address them, we propose MIP, which enables collaboration among an LLM, a compiler, and code analysis utilities. MIP adopts different strategies for differen…
More in Models

Tracking the emergence of linguistic structure in self-supervised models learning from speech
arXiv:2604.02043v1 Announce Type: cross Abstract: Self-supervised speech models learn effective representations of spoken language, which have been shown to reflect various aspects of linguistic structure. But when does such structure emerge in model training? We study the encoding of a wide range of linguistic structures, across layers and intermediate checkpoints of six Wav2Vec2 and HuBERT models trained on spoken Dutch. We find that different levels of linguistic structure show notably distinct layerwise patterns as well as learning trajectories, which can partially be explained by differences in their degree of abstraction from the acoustic signal and the timescale at which information from the input is integrated. Moreover, we find that the level at which pre-training objectives are d…

My most common research advice: do quick sanity checks
Written quickly as part of the Inkhaven Residency. At a high level, the research feedback I give to more junior research collaborators often falls into one of three categories: doing quick sanity checks, saying precisely what you want to say, and asking "why" one more time. In each case, I think the advice can be taken to an extreme I no longer endorse. Accordingly, I've tried to spell out the degree to which you should implement the advice, as well as what "taking it too far" might look like. This piece covers doing quick sanity checks, which is the most common advice I give to junior researchers. I'll cover the other two pieces of advice in a subsequent piece. Doing quick sanity checks: Research is hard (almost by definition) and people are often wrong. Every researcher has wasted countless hours…

Fast dynamical similarity analysis
arXiv:2511.22828v2 Announce Type: replace-cross Abstract: Understanding how nonlinear dynamical systems (e.g., artificial neural networks and neural circuits) process information requires comparing their underlying dynamics at scale, across diverse architectures and large neural recordings. While many similarity metrics exist, current approaches fall short for large-scale comparisons. Geometric methods are computationally efficient but fail to capture governing dynamics, limiting their accuracy. In contrast, traditional dynamical similarity methods are faithful to system dynamics but are often computationally prohibitive. We bridge this gap by combining the efficiency of geometric approaches with the fidelity of dynamical methods. We introduce fast dynamical similarity analysis (fastDSA)…
