Supermicro co-founder pleads not guilty to smuggling billions of dollars of Nvidia servers to China — suspected smuggler released on $5 million bond
Super Micro Computer co-founder Yih-Shyan "Wally" Liaw pleaded not guilty on Wednesday in a Manhattan federal court to charges that he helped illegally divert billions of dollars' worth of Nvidia-powered servers to China, Bloomberg reported. Co-defendant Ting-Wei "Willy" Sun, an outside contractor described by prosecutors as a "fixer" in the smuggling scheme, also entered a not-guilty plea at the hearing before U.S. District Judge Edgardo Ramos.
Liaw has been released on a $5 million bond, while Sun's lawyer told the judge that he's negotiating a bail package with prosecutors. The third defendant, Ruei-Tsang "Steven" Chang, a former general manager in Super Micro's Taiwan office, is not in U.S. custody. Judge Ramos set a November 2 trial date for the case.
Super Micro itself isn't named as a defendant in the indictment, but the company acknowledged in an official statement that the three accused individuals are "associated" with it. The server maker called the alleged conduct a violation of its internal policies and said it maintains a compliance program covering U.S. export and re-export control laws. Nvidia also distanced itself from the scheme, telling Tom's Hardware that strict compliance is a priority and that it does not provide service or support for unlawfully diverted systems.
Source: https://www.tomshardware.com/tech-industry/super-micro-co-founder-wally-liaw-pleads-not-guilty-to-nvidia-smuggling-charges