India and Brazil Are the Anti-AI Trade. Why Their Markets Are Ready to Shine. - Barron's

Gemma-4 E4B model's vision seems to be surprisingly poor
The E4B model is performing very poorly in my tests, and since no one seems to be talking about it, I had to unlurk myself and post this. It's performing badly even compared to qwen3.5-4b. Can someone confirm or dis...uh...firm (?) My test suite has roughly 100 vision-related tasks: single-turn with no tools, only an input image and a prompt, but with definitive answers (not all of them are VQA, though). Most of these tasks are upstream of any kind of agentic use case. To give a sense: in some tests the inputs are screenshots from which certain text information has to be extracted; in others the model has to perform some inference on an image (for example: geoguessing on travel images, calculating total cost of a grocery list given an image of the relevant supermarket display
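For anyone wanting to run a similar check, a minimal single-turn vision eval loop might look like the following. This is a sketch, not the poster's actual suite: `run_model(image_path, prompt)` is a hypothetical callable standing in for whatever inference backend you use, and the exact-match scoring only makes sense because the tasks have definitive answers.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class VisionTask:
    """One single-turn vision task: an image, a prompt, and a definitive answer."""
    image_path: str
    prompt: str
    expected: str

def score_suite(tasks: List[VisionTask],
                run_model: Callable[[str, str], str]) -> float:
    """Run each task through the model and return exact-match accuracy.

    `run_model` is assumed to take (image_path, prompt) and return the
    model's text answer; plug in your own inference wrapper here.
    """
    correct = 0
    for task in tasks:
        answer = run_model(task.image_path, task.prompt)
        # Definitive answers allow strict comparison after light normalisation.
        if answer.strip().lower() == task.expected.strip().lower():
            correct += 1
    return correct / len(tasks)
```

With a harness like this, comparing E4B against qwen3.5-4b is just a matter of swapping the `run_model` callable and re-scoring the same task list.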

Resume Skills Section: Best Layout + Examples (2026)
Your skills section is the most-scanned part of your resume after your name and current title. ATS systems use it for keyword matching. Recruiters use it as a 2-second compatibility check. If it's poorly organized, buried at the bottom, or filled with the wrong skills, both audiences move on.

Where to Place Your Skills Section

| Situation | Best Placement | Why |
|-----------|----------------|-----|
| Technical role (SWE, DevOps, data) | Below name, above experience | Recruiters check your stack before reading bullets |
| Non-technical role (PM, marketing, ops) | Below experience | Experience and results matter more |
| Career changer | Below name, above experience | Establishes relevant skills before unrelated job titles |
| New grad / intern | Below education, above projects | Education sets context, skills show what you can do |

The rule: place skills where they
More in Models

Got Gemma 4 running locally on CUDA, both float and GGUF quantized, with benchmarks
Spent the last week getting Gemma 4 working on CUDA with both full-precision (BF16) and GGUF quantized inference. Here's a video of it running. Sharing some findings because this model has some quirks that aren't obvious.

Performance (Gemma4 E2B, RTX 3090):

| Config | BF16 Float | Q4_K_M GGUF |
|-------------------------|------------|-------------|
| short gen (p=1, g=32) | 110 tok/s | 170 tok/s |
| long gen (p=512, g=128) | 72 tok/s | 93 tok/s |

The precision trap nobody warns you about

Honestly, making it work was harder than I thought. Gemma 4 uses attention_scale=1.0 (QK-norm instead of the usual 1/sqrt(d_k) scaling). This makes it roughly 22x more sensitive to precision errors than standard transformers. Things that work fine on LLaMA or Qwen will silently produce garbage on Gemma 4: F1
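The scaling difference described above can be sketched in a few lines of numpy. This is an illustration of the two conventions only (classic 1/sqrt(d_k) scaling vs. L2-normalized queries/keys with scale 1.0), not Gemma 4's actual kernel; the function names and single-head layout are mine. The point of QK-norm is that the logits become cosine similarities, bounded in [-1, 1] regardless of head dimension, which is why no 1/sqrt(d_k) rescaling is applied.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def standard_attention(q, k, v):
    """Classic scaled dot-product attention: logits divided by sqrt(d_k)."""
    d_k = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d_k)
    return softmax(logits) @ v

def qknorm_attention(q, k, v):
    """QK-norm style: L2-normalise q and k, then use attention_scale = 1.0.

    Logits are cosine similarities, so they stay within [-1, 1]
    for any head dimension -- no 1/sqrt(d_k) factor is needed.
    """
    qn = q / np.linalg.norm(q, axis=-1, keepdims=True)
    kn = k / np.linalg.norm(k, axis=-1, keepdims=True)
    logits = qn @ kn.T  # attention_scale = 1.0
    return softmax(logits) @ v
```

Because the QK-norm logits live in such a narrow range, small absolute errors from low-precision arithmetic eat a proportionally larger share of the signal, which matches the sensitivity the post describes.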



