Upload speeds extremely slow / stalling since April 1st
Since yesterday afternoon (April 1st), I've been experiencing extremely slow upload speeds when uploading GGUF model files to the Hub using hf upload. Uploads start at a reasonable speed (~1 MB/s), then progressively degrade to a few KB/s, and eventually stall at ~110 KB/s with seemingly no progress at all.

What I've tried:
- Uploading all files at once vs. a single file: same issue
- Disabling xet (HF_HUB_ENABLE_XET=0) and hf-transfer (HF_HUB_ENABLE_HF_TRANSFER=0): same issue
- Using an older version of huggingface-hub (0.36.2): same issue
- Checked status.huggingface.co: no reported issues
- My internet connection is fine for everything else

The pattern is consistent: uploads begin at normal speed, then gradually degrade over a few minutes until they complete.
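For reference, the backend-disabling step above can be sketched as a single command (the environment variable names are taken from the post itself; the repo id and filename are hypothetical placeholders):

```shell
# Disable the Xet and hf-transfer upload backends for one run only
# (variable names as reported in the post; repo/file are placeholders).
HF_HUB_ENABLE_XET=0 HF_HUB_ENABLE_HF_TRANSFER=0 \
  hf upload your-username/your-repo model-Q8_0.gguf
```

Setting the variables inline like this keeps the change scoped to a single invocation, which makes it easier to compare runs with and without each backend.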
Read on discuss.huggingface.co: https://discuss.huggingface.co/t/upload-speeds-extremely-slow-stalling-since-april-1st/174910

More in Open Source AI


With hf cli, how do I resume an interrupted model download?
I have a slow internet connection and the download of a large file was interrupted 30 GB in! I download using the hf CLI command, like this:

hf download unsloth/gemma-4-31B-it-GGUF gemma-4-31B-it-UD-Q8_K_XL.gguf

When I ran it again, it started over instead of resuming, to my horror. How do I avoid redownloading a partial model next time? I don't see a resume option in hf download --help.

1 post - 1 participant. Read full topic
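A hedged sketch of things worth trying here (assumption: recent versions of huggingface_hub keep partially downloaded files in the local cache and resume them when the identical command is re-run, which is why there is no explicit resume flag; behavior may differ when the hf-transfer backend is enabled):

```shell
# Re-run the exact same command; huggingface_hub should pick up the
# partial file from its cache rather than starting over (assumption).
hf download unsloth/gemma-4-31B-it-GGUF gemma-4-31B-it-UD-Q8_K_XL.gguf

# If it still restarts from zero, try with hf-transfer disabled
# (assumption: the accelerated backend may not resume partial files).
HF_HUB_ENABLE_HF_TRANSFER=0 \
  hf download unsloth/gemma-4-31B-it-GGUF gemma-4-31B-it-UD-Q8_K_XL.gguf
```

If neither run resumes, checking the cache directory for leftover partial files before re-running would confirm whether the partial data survived the interruption at all.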

Gemma 4 is great at real-time Japanese-English translation for games
When Gemma 3 27B QAT IT was released last year, it was SOTA for local real-time Japanese-English translation of visual novels for a while, so I wanted to see how Gemma 4 handles this use case.

Model: Unsloth's gemma-4-26B-A4B-it-UD-Q5_K_M
Context: 8192
Reasoning: OFF
Software: Luna Translator (front end), LM Studio (back end)

Workflow:
- Luna hooks the dialogue and the speaker's name from the game.
- A Python script structures the hooked text (adds name, gender).
- Luna sends the structured text and a system prompt to LM Studio.
- Luna shows the translation.

What Gemma 4 does great: even with reasoning disabled, Gemma 4 follows the instructions in the system prompt very well. With structured text, Gemma 4 deals with pronouns well. This is one of the biggest challenges, because Japanese spoken dialogue often omits the subject.
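The structuring step in the workflow above could look something like this (a minimal sketch; the function name and the bracketed metadata format are assumptions for illustration, not Luna Translator's actual API):

```python
def structure_dialogue(speaker: str, gender: str, text: str) -> str:
    """Prefix raw hooked game text with speaker metadata so the model
    can resolve the subjects and pronouns that Japanese dialogue omits.
    The [speaker: ...] format is a hypothetical convention."""
    return f"[speaker: {speaker} ({gender})] {text}"

# Example: tag a hooked line before sending it to the back end.
print(structure_dialogue("Yuki", "female", "もう行かなきゃ。"))
```

Passing the speaker's name and gender alongside each line gives the model the context it needs to pick "he"/"she"/"I" correctly even when the Japanese source drops the subject entirely.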


