Newegg shaves $240 off this well-equipped RTX 5070, 7800X3D gaming PC — at $1,929, this CyberpowerPC is at least $100 less than the current cost of its components
(Image credit: CyberpowerPC / Newegg)
With AI demand driving up the prices (and driving down availability) of all manner of PC components, opting for a prebuilt continues to be a good way to save money if you want a new gaming rig. And at $1,929 for a liquid-cooled Ryzen 7 7800X3D / RTX 5070 rig, this system from CyberpowerPC is the best prebuilt deal we’ve spotted today.
- Grab this deal at Newegg
The RTX 5070 is our favorite mid-range GPU at the moment, thanks to roughly 40% better performance than the 5060 Ti 16GB when games aren’t constrained by memory. And the 5070 has 12GB of GDDR7, a significant jump over the limiting 8GB that’s common with lower-end cards.
(Image credit: Future)
Pair that with the Ryzen 7 7800X3D (which is about 10% faster than Intel’s new Core Ultra 7 270K Plus at gaming), plus a 1TB PCIe 4.0 SSD and 32GB of RAM, and this is a potent gaming combo.
Of course, cooling and presentation matter in a gaming rig as well. CyberpowerPC includes a 360mm AIO cooler and a total of seven RGB fans packed into Phanteks’ attractive NV5S case. You get glass on the front and side panels, and RGB light strips around the front and the side PSU shroud as well. Intake happens via the three side fans in front of the motherboard, which get air through vents in the back side panel.
To top it all off, Nvidia tosses a copy of Resident Evil Requiem in with the graphics card (while supplies last), and Cyberpower’s in-box keyboard and mouse tend to be a step above what ships with most big-box gaming PCs.
After a rough start with the Mattel Aquarius as a child, Matt built his first PC in the late 1990s and ventured into mild PC modding in the early 2000s. He’s spent the last 15 years covering emerging technology for Smithsonian, Popular Science, and Consumer Reports, while testing components and PCs for Computer Shopper, PCMag and Digital Trends.