Structured identification of multivariable modal systems
arXiv:2510.10820v2 Announce Type: replace-cross
Abstract: Physically interpretable models are essential for next-generation industrial systems, as these representations enable effective control, support design validation, and provide a foundation for monitoring strategies. The aim of this paper is to develop a system identification framework for estimating modal models of complex multivariable mechanical systems from frequency response data. To achieve this, a two-step structured identification algorithm is presented, where an additive model is first estimated using a refined instrumental variable method and subsequently projected onto a modal form. The developed identification method provides accurate, physically relevant, minimal-order models for both generally damped and proportionally damped modal systems. The effectiveness of the proposed method is demonstrated through experimental validation on a prototype wafer-stage system, which features a large number of spatially distributed actuators and sensors and exhibits complex flexible dynamics.
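As a rough illustration of the second step the abstract describes (projecting an identified model onto modal form), the sketch below recovers a mode's natural frequency and damping ratio from the eigenvalues of a second-order state matrix. The matrix, values, and variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

# Illustrative second-order mode: wn = 2*pi*10 rad/s, zeta = 0.02
# (values assumed for the example, not from the paper).
wn, zeta = 2 * np.pi * 10.0, 0.02
A = np.array([[0.0, 1.0],
              [-wn**2, -2 * zeta * wn]])

# For an underdamped mode the eigenvalues come as a complex-conjugate
# pair s = -zeta*wn +/- j*wn*sqrt(1 - zeta^2); pick the upper-half-plane one.
eigvals = np.linalg.eigvals(A)
lam = eigvals[np.imag(eigvals) > 0][0]

wn_est = abs(lam)                # natural frequency equals |eigenvalue|
zeta_est = -lam.real / abs(lam)  # damping ratio from the real part

print(wn_est / (2 * np.pi), zeta_est)  # recovers ~10.0 Hz and 0.02
```

The same eigendecomposition idea extends to a multivariable state-space model, where diagonalizing the state matrix yields one such pole pair (and associated mode shape) per flexible mode.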
Comments: 23 pages, 13 figures
Subjects:
Systems and Control (eess.SY); Signal Processing (eess.SP)
Cite as: arXiv:2510.10820 [eess.SY]
(or arXiv:2510.10820v2 [eess.SY] for this version)
https://doi.org/10.48550/arXiv.2510.10820
Journal reference: Mechanical Systems and Signal Processing, vol. 247, p. 113948, 2026, ISSN 0888-3270
Related DOI: https://doi.org/10.1016/j.ymssp.2026.113948
Submission history
From: Maarten Van Der Hulst
[v1] Sun, 12 Oct 2025 22:06:16 UTC (4,760 KB)
[v2] Tue, 31 Mar 2026 11:33:53 UTC (3,211 KB)