Shiwei Liu starts his position as new Group Leader
He joins the ELLIS Institute Tübingen as a PI and Hector Endowed Fellow, with co-affiliations at the MPI-IS and the Tübingen AI Center as an Independent Research Group Leader.
Shiwei Liu will start his new group on July 15. His WEI Lab (short for Wild, Efficient, and Innovative AI) will focus on empirically understanding the behavior of deep neural networks and on developing deep learning algorithms and architectures that learn better, faster, and cheaper. A central theme of Shiwei's research is to leverage, understand, and expand the role of low-dimensionality in neural networks, with impacts spanning many important topics such as efficient training, inference, and scaling of large foundation models, robustness and trustworthiness, and generative AI.
Shiwei Liu is a Royal Society Newton International Fellow at the University of Oxford. He was previously a Postdoctoral Fellow at the University of Texas at Austin and obtained his Ph.D. cum laude from Eindhoven University of Technology in 2022. Liu has received two Rising Star Awards, from the King Abdullah University of Science and Technology (KAUST) in Saudi Arabia and from the Conference on Parsimony and Learning (CPAL). His Ph.D. thesis received the 2023 Best Dissertation Award from Informatics Europe.
In March 2024, Shiwei Liu gave a talk on sparsity in neural networks at the ELLIS Institute Scientific Symposium, held at MPI-IS. While existing research predominantly focuses on exploiting sparsity for model compression, such as deriving sparse neural networks from pre-trained dense ones, many other promising benefits, including scalability, robustness, and fairness, remain under-explored. His talk delved into these overlooked advantages. Specifically, he showcased how sparsity can boost the scalability of neural networks by efficiently training sparse models from scratch. This approach enables a significant increase in model capacity without proportionally escalating computational or memory requirements. Additionally, he explored the future implications of sparsity in the realm of large language models, discussing its potential benefits for efficient LLM scaling, lossless LLM compression, and fostering trustworthy AI.
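Training sparse models from scratch, as mentioned in the talk, is often done with dynamic sparse training: a fixed budget of active weights is maintained throughout training, with the smallest-magnitude weights periodically pruned and new connections regrown elsewhere. The sketch below is a hypothetical, minimal NumPy illustration of that prune-and-regrow idea on a toy regression problem; it is not code from Liu's papers, and the task, hyperparameters, and regrowth rule (random regrowth) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: dense teacher y = X @ w_true.
X = rng.normal(size=(256, 32))
w_true = rng.normal(size=(32, 1))
y = X @ w_true

# Sparse student: only ~20% of weights are active, enforced by a binary mask.
density = 0.2
w = rng.normal(size=(32, 1)) * 0.1
mask = (rng.random(w.shape) < density).astype(float)

lr = 0.01
for step in range(500):
    # Gradient of the MSE loss, restricted to the active weights.
    grad = X.T @ (X @ (w * mask) - y) / len(X)
    w -= lr * grad * mask

    # Every 100 steps: prune the smallest-magnitude active weights
    # and regrow the same number of connections at random (budget stays fixed).
    if step % 100 == 99:
        active = np.flatnonzero(mask)
        k = max(1, int(0.1 * len(active)))
        drop = active[np.argsort(np.abs((w * mask).ravel()[active]))[:k]]
        mask.ravel()[drop] = 0.0
        inactive = np.flatnonzero(mask.ravel() == 0)
        grow = rng.choice(inactive, size=k, replace=False)
        mask.ravel()[grow] = 1.0
        w.ravel()[grow] = 0.0  # regrown weights restart from zero

loss = float(np.mean((X @ (w * mask) - y) ** 2))
print(f"final sparse-model MSE: {loss:.4f}, active fraction: {mask.mean():.2f}")
```

Because the prune step removes exactly as many connections as the regrow step adds, the parameter budget (and hence the per-step compute and memory cost) stays constant throughout training, which is what makes this style of training attractive for scaling model capacity.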