Pro subscription that I didn’t ask for
I cancelled my Pro subscription last year; it has been almost twelve months since I cancelled it. But a few minutes ago I was charged $9 for the Pro subscription. I wasn’t even using Hugging Face when my credit card app notified me: my computer was hibernating and I wasn’t logged in on my phone. How do I deal with this situation? How do I get my money back? 2 posts - 2 participants Read full topic
Read on discuss.huggingface.co: https://discuss.huggingface.co/t/pro-subscription-that-i-didn-t-asked-for/174875

More about huggingface
Quantizers appreciation post
Hey everyone, Yesterday I decided to try to learn how to quantize GGUFs myself with reasonable quality, in order to understand the magic behind the curtain. Holy... I did not expect how much work it is, how long it takes, or how much storage it needs: 500 GB just for Gemma-4-26B-A4B in various sizes. There really is an art to configuring them too, with variations between architectures and quant types. Thanks to unsloth releasing their imatrix file and Hugging Face showing the weight types in its viewer, I managed to cobble something together without LLM assistance. I ran into a few hiccups and some of the information is a bit confusing, so I documented my process in the hope of making it easier for someone else to learn and experiment. My recipe and full setup guide can be f
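To make the "magic behind the curtain" a bit more concrete, here is a minimal toy sketch of the core idea behind blockwise 4-bit quantization, in the spirit of GGUF's Q4_0 layout (one scale per block of 32 weights). This is an illustration of the principle only, not llama.cpp's actual code or on-disk format:

```python
import numpy as np

def quantize_q4_blocks(weights, block_size=32):
    """Symmetric 4-bit blockwise quantization: one float scale per block
    of 32 weights. Illustrative sketch, not the real Q4_0 bit layout."""
    w = weights.reshape(-1, block_size)
    # Scale each block so its largest magnitude maps to the int4 extreme 7.
    scales = np.abs(w).max(axis=1, keepdims=True) / 7.0
    scales[scales == 0] = 1.0          # all-zero block: avoid divide-by-zero
    q = np.clip(np.round(w / scales), -8, 7).astype(np.int8)
    return q, scales

def dequantize(q, scales):
    return (q.astype(np.float32) * scales).reshape(-1)

rng = np.random.default_rng(0)
w = rng.standard_normal(1024).astype(np.float32)
q, scales = quantize_q4_blocks(w)
err = np.abs(dequantize(q, scales) - w).max()
# Storage: 4 bits per weight plus one scale per 32 weights,
# roughly 4.5 bits/weight instead of 32 -- which is why one model
# in many quant sizes still eats hundreds of GB of scratch space.
```

An imatrix (importance matrix), as released by unsloth, refines this idea by weighting the rounding error by how much each weight actually matters on calibration data, rather than treating all weights in a block equally.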

Gemma 4 vs Qwen 3.5 Benchmark Comparison
I took the official benchmarks for Qwen 3.5 and Gemma 4 and compiled them into a neck-and-neck comparison here.

The Benchmark Table

| Benchmark | Qwen 2B | Gemma E2B | Qwen 4B | Gemma E4B | Qwen 27B | Gemma 31B | Qwen 35B (MoE) | Gemma 26B (MoE) |
|---|---|---|---|---|---|---|---|---|
| MMLU-Pro | 66.5% | 60.0% | 79.1% | 69.4% | 86.1% | 85.2% | 85.3% | 82.6% |
| GPQA Diamond | 51.6% | 43.4% | 76.2% | 58.6% | 85.5% | 84.3% | 84.2% | 82.3% |
| LiveCodeBench v6 | 69.4% | 44.0% | 55.8% | 52.0% | 80.7% | 80.0% | 74.6% | 77.1% |
| Codeforces ELO | N/A | 633 | 24.1 | 940 | 1899 | 2150 | 2028 | 1718 |
| TAU2-Bench | 48.8% | 24.5% | 79.9% | 42.2% | 79.0% | 76.9% | 81.2% | 68.2% |
| MMMLU (Multilingual) | 63.1% | 60.0% | 76.1% | 69.4% | 85.9% | 85.2% | 85.2% | 86.3% |
| HLE-n (No tools) | N/A | N/A | N/A | N/A | 24.3% | 19.5% | 22.4% | 8.7% |
| HLE-t (With tools) | N/A | N/A | N/A | N/A | 48.5% | 26.5% | 47.4% | 17.2% |
| AIME 2026 | N/A | N/A | N/A | 42.5% | N/A | 89.2% | N/A | 88.3% |
| MMMU Pro (Vision) | N/A | N/A | N/A | N/A | 75.0% | 76.9% | | |
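For a quick tally of who leads where, the mid-size column pair (Qwen 27B vs Gemma 31B) can be scored with a few lines of Python. The numbers are copied straight from the table; higher is treated as better on every row, including Codeforces ELO:

```python
# Mid-size tier from the table: (Qwen 27B, Gemma 31B) per benchmark.
pairs = {
    "MMLU-Pro":          (86.1, 85.2),
    "GPQA Diamond":      (85.5, 84.3),
    "LiveCodeBench v6":  (80.7, 80.0),
    "Codeforces ELO":    (1899, 2150),
    "TAU2-Bench":        (79.0, 76.9),
    "MMMLU":             (85.9, 85.2),
    "HLE-n (no tools)":  (24.3, 19.5),
    "HLE-t (tools)":     (48.5, 26.5),
    "MMMU Pro (vision)": (75.0, 76.9),
}
for name, (qwen, gemma) in pairs.items():
    lead = "Qwen 27B" if qwen > gemma else "Gemma 31B"
    print(f"{name:20s} {lead:9s} (+{abs(qwen - gemma):.1f})")
qwen_wins = sum(1 for q, g in pairs.values() if q > g)
```

At this tier Qwen 27B leads on 7 of the 9 rows with both models reporting scores, with Gemma 31B ahead on Codeforces ELO and MMMU Pro.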

RFT FPCM OV - a Hugging Face Space by RFTSystems
RFT Fixed-Parameter Cosmology Model, Open Validation

1. Fixed-Parameter Cosmology Panel (FPCM-OV)

This side of the Space shows the core RFT cosmology running on one locked parameter set. Nothing adjusts itself: the whole model stands or falls on this single solution. What people can see here:

- Age at z = 13.67: RFT gives 568.52 Myr, which lines up with JWST early-galaxy maturity without any tuning.
- Horizon Ratio: the model naturally produces a horizon about 490× larger than ΛCDM's. (This removes the horizon problem without inflation.)
- Unified Expansion Curve (H_RFT): the purple curve shows how expansion behaves across all redshifts using the same fixed parameters.
- JWST Maturity Plot: the cyan and red lines show how RFT's predicted
More in Products

Building the Memory Layer for a Voice AI Agent
Voice AI completely raises the bar for responsiveness. In a chatbot, a two- or three-second delay feels acceptable. In voice, that same delay feels strange: people start wondering if the app heard them, whether the microphone failed, or if they should repeat themselves. Voice is much less forgiving. That was the main thing I kept running into while experimenting with a voice journal app: a voice-first app powered by Sarvam AI for speech-to-text and text-to-speech conversion and Redis Agent Memory Server for memory. It’s a pretty straightforward app. A user speaks, the app transcribes the audio, decides whether the user wants to save something or ask something, fetches the right context, and then responds in voice. What makes it interesting is build
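The loop described above (transcribe, decide save-vs-ask, hit memory, reply) can be sketched roughly as below. The keyword router and the `save`/`recall` callables are stand-ins invented for illustration; the real app would call Sarvam AI for STT/TTS and the Redis Agent Memory Server API, neither of which is shown here:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Turn:
    text: str
    intent: str   # "save" or "ask"
    reply: str

def route_intent(text: str) -> str:
    """Toy keyword router deciding between saving a journal entry and
    answering a question. A real app would use an LLM or a classifier."""
    save_cues = ("remember", "note that", "journal:")
    return "save" if any(c in text.lower() for c in save_cues) else "ask"

def handle_turn(text: str, save: Callable[[str], None],
                recall: Callable[[str], str]) -> Turn:
    """One transcribed utterance -> intent -> memory -> text reply.
    STT before and TTS after this function are omitted; `save` and
    `recall` stand in for the memory-server calls."""
    intent = route_intent(text)
    if intent == "save":
        save(text)
        reply = "Saved to your journal."
    else:
        reply = recall(text)
    return Turn(text, intent, reply)

# In-memory stand-in for the memory server:
memory: list = []
turn = handle_turn("Note that I ran 5k today", memory.append,
                   lambda q: memory[-1] if memory else "Nothing saved yet.")
```

Because the reply path is just function calls around the two model round-trips, the latency budget the post worries about is dominated by STT, TTS, and memory retrieval, which is exactly where the design effort goes.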


