The DCT Neuron for Estimation and Compensation of Amplitude Distortions in OFDM Systems
Abstract: We present a receiver-side framework for identifying amplitude distortions in frequency-selective OFDM channels. The core novelty is the use of the DCT Neuron, a compact adaptive processor based on the discrete cosine transform (DCT), to characterize the channel's nonlinear response, leveraging its properties for highly efficient estimation. Operating directly in the time domain, the method builds an accurate signal model and tracks channel variations adaptively, achieving reliable identification with as few as two OFDM symbols. The learned nonlinear response can then be exploited for predistortion and iterative decoding, enabling low-complexity, real-time adaptive compensation of complex responses in multicarrier systems.
Comments: Paper submitted to URSI 2026
Subjects:
Signal Processing (eess.SP)
Cite as: arXiv:2603.29680 [eess.SP]
(or arXiv:2603.29680v1 [eess.SP] for this version)
https://doi.org/10.48550/arXiv.2603.29680
arXiv-issued DOI via DataCite
Submission history
From: Marc Martinez-Gost [view email] [v1] Tue, 31 Mar 2026 12:35:07 UTC (217 KB)
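The abstract does not give the DCT Neuron's exact update equations, but the general idea it describes — modeling an unknown amplitude (AM/AM) response as a truncated cosine-basis expansion whose coefficients are adapted in the time domain — can be sketched as follows. Everything here is an illustrative assumption: the basis size `K`, the LMS update, the step size, and the stand-in saturating nonlinearity are all placeholders, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

K = 8                 # number of cosine basis functions (assumed model order)
MU = 0.1              # LMS step size (assumed)
N_SAMPLES = 2 * 1024  # roughly two OFDM symbols' worth of time-domain samples

def dct_features(r, K):
    """Cosine features cos(pi*k*r), k = 0..K-1, for normalized amplitude r in [0, 1]."""
    k = np.arange(K)
    return np.cos(np.pi * k * r)

def true_amplitude_distortion(r):
    """Stand-in saturating AM/AM curve (placeholder for the unknown channel response)."""
    return np.tanh(2.5 * r) / np.tanh(2.5)

# Known (pilot) time-domain amplitudes and their distorted, noisy observations.
r = rng.uniform(0.0, 1.0, N_SAMPLES)
y = true_amplitude_distortion(r) + 0.01 * rng.standard_normal(N_SAMPLES)

# Adaptive estimation: LMS over the cosine coefficients (the "neuron" weights).
w = np.zeros(K)
for ri, yi in zip(r, y):
    phi = dct_features(ri, K)
    err = yi - phi @ w        # instantaneous prediction error
    w += MU * err * phi       # stochastic-gradient (LMS) weight update

# Evaluate the learned amplitude response on a test grid.
r_test = np.linspace(0.0, 1.0, 256)
g_hat = dct_features(r_test[:, None], K).reshape(-1, K) @ w
rmse = np.sqrt(np.mean((g_hat - true_amplitude_distortion(r_test)) ** 2))
print(f"RMSE of learned amplitude response: {rmse:.4f}")
```

Once `w` has converged, the learned curve `g_hat` could in principle be inverted for predistortion or reused inside an iterative decoding loop, as the abstract suggests; those steps are omitted here.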