
Shiwei Liu starts his position as new Group Leader

is.mpg.de · July 14, 2025

He joins the ELLIS Institute Tübingen as PI and Hector Endowed Fellow and has a co-affiliation with the MPI-IS and the Tübingen AI Center as Independent Research Group Leader.

Shiwei Liu will start his new group on July 15. His WEI Lab (which stands for Wild, Efficient, and Innovative AI) will focus on empirically understanding the behavior of deep neural networks and on developing deep learning algorithms and architectures that learn better, faster, and cheaper. A central theme of Shiwei's research is to leverage, understand, and expand the role of low-dimensionality in neural networks, which bears on many important topics, such as efficient training, inference, and scaling of large foundation models; robustness and trustworthiness; and generative AI.
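As a rough illustration of why low-dimensionality matters for efficiency, the sketch below (hypothetical, not the lab's code) compares the parameter count of a full linear layer with a rank-r factorization W ≈ A·B; the layer sizes and rank are made-up example values.

```python
# Hypothetical parameter-count comparison: a dense d_in x d_out weight
# matrix vs. a low-rank factorization W ≈ A @ B with A: d_in x r, B: r x d_out.

def dense_params(d_in, d_out):
    """Parameters in a full d_in x d_out weight matrix."""
    return d_in * d_out

def low_rank_params(d_in, d_out, r):
    """Parameters when W is factored into A (d_in x r) and B (r x d_out)."""
    return d_in * r + r * d_out

full = dense_params(4096, 4096)        # 16,777,216 weights
factored = low_rank_params(4096, 4096, 64)  # 524,288 weights
print(full, factored, full / factored)      # the rank-64 factorization is 32x smaller
```

The same arithmetic underlies many efficiency techniques for large foundation models: if the useful signal in a weight matrix lives in a low-dimensional subspace, storing and updating the factors is far cheaper than the full matrix.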

Shiwei Liu is a Royal Society Newton International Fellow at the University of Oxford. He was previously a Postdoctoral Fellow at the University of Texas at Austin and obtained his Ph.D. cum laude from Eindhoven University of Technology in 2022. Liu has received two Rising Star Awards, from the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia and from the Conference on Parsimony and Learning (CPAL). His Ph.D. thesis received the 2023 Best Dissertation Award from Informatics Europe.

In March 2024, Shiwei Liu gave a talk on sparsity in neural networks at the ELLIS Institute Scientific Symposium, held at MPI-IS. While existing research predominantly focuses on exploiting sparsity for model compression, such as deriving sparse neural networks from pre-trained dense ones, many other promising benefits, including scalability, robustness, and fairness, remain under-explored. His talk delved into these overlooked advantages. Specifically, he showcased how sparsity can boost the scalability of neural networks by efficiently training sparse models from scratch, an approach that significantly increases model capacity without proportionally escalating computational or memory requirements. He also explored the future implications of sparsity for large language models, discussing its potential to enable efficient LLM scaling, lossless LLM compression, and trustworthy AI.
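To make the compression-oriented use of sparsity concrete, here is a minimal sketch (an assumption for illustration, not code from the talk) of magnitude pruning: keeping only the largest-magnitude weights and zeroing the rest, the standard way a sparse network is derived from a pre-trained dense one. In sparse training from scratch, a similar 0/1 mask is instead fixed or evolved during training and reapplied after every weight update.

```python
# Hypothetical magnitude-pruning sketch on a flat list of weights.

def magnitude_mask(weights, sparsity):
    """Return a 0/1 mask keeping the largest-magnitude (1 - sparsity) fraction."""
    n_keep = int(round(len(weights) * (1.0 - sparsity)))
    # Indices of the n_keep weights with the largest absolute value.
    keep = set(sorted(range(len(weights)), key=lambda i: -abs(weights[i]))[:n_keep])
    return [1 if i in keep else 0 for i in range(len(weights))]

def apply_mask(weights, mask):
    """Zero out pruned weights; sparse training reapplies the mask each step."""
    return [w * m for w, m in zip(weights, mask)]

w = [0.9, -0.1, 0.05, -1.2, 0.3, 0.0, 0.7, -0.4]
mask = magnitude_mask(w, sparsity=0.5)  # keep 4 of the 8 weights
print(apply_mask(w, mask))
```

At 50% sparsity, half of the multiplications and stored weights can in principle be skipped, which is the starting point for the compression results the talk contrasts with the less-explored benefits of sparsity.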
