Jeremy Howard on winning the Predict Grant Applications Competition
Because I have recently started employment with Kaggle, I am not eligible to win any prizes, which means the prize-winner for this competition is Quan Sun (team 'student1'). Congratulations!

My approach to this competition was to first analyze the data in Excel pivot tables, looking for groups with unusually high or low application success rates. In this way, I found a large number of strong predictors, including date-based ones (New Year's Day is a strong predictor, as are applications processed on a Sunday); for many fields, a null value was also highly predictive.

I then used C# to normalize the data into Grants and Persons objects, and constructed a dataset for modeling that included these features: CatCode, NumPerPerson, PersonId, NumOnDate, AnyHasPhd, Country, Dept, DayOfWeek, HasPhd, IsNY, Month, NoClass, No…
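The teaser cuts off mid-list, but the described pipeline (normalize raw rows into per-grant and per-person records, then derive date and null-indicator features) is straightforward to sketch. The author did this in C#; below is a minimal pandas version for illustration only, where the file name and raw column names (ApplicationDate, PersonId, Category) are hypothetical stand-ins for the competition's actual fields:

```python
import pandas as pd

# Illustrative sketch only: the post's pipeline was written in C#, and these
# raw column names (ApplicationDate, PersonId, Category) are hypothetical.
apps = pd.read_csv("grant_applications.csv", parse_dates=["ApplicationDate"])

feats = pd.DataFrame(index=apps.index)
feats["CatCode"] = apps["Category"]
feats["DayOfWeek"] = apps["ApplicationDate"].dt.dayofweek          # Sunday was predictive
feats["Month"] = apps["ApplicationDate"].dt.month
feats["IsNY"] = ((apps["ApplicationDate"].dt.month == 1) &
                 (apps["ApplicationDate"].dt.day == 1))            # New Year's Day flag
feats["NumOnDate"] = apps.groupby("ApplicationDate")["PersonId"].transform("size")
feats["NumPerPerson"] = apps.groupby("PersonId")["PersonId"].transform("size")
feats["NoClass"] = apps["Category"].isna()                         # null values were predictive
```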

China cuts cost of military-grade infrared chips to as little as a few dozen USD
A research team at a Chinese university has developed a new way to make high-end infrared chips that could slash their cost dramatically and improve the performance of smartphone cameras and self-driving cars. The key breakthrough was finding a way to make the chips using conventional manufacturing techniques, rather than the exotic, costly materials that were relied on before. Mass production is set to begin by the end of the year, according to a press release from Xidian University. The chips...
[llama.cpp] 3.1x Q8_0 speedup on Intel Arc GPUs - reorder optimization fix (PR submitted)
TL;DR: Q8_0 quantization on Intel Xe2 (Battlemage/Arc B-series) GPUs was achieving only 21% of theoretical memory bandwidth. My AI agent and I found the root cause and submitted a fix that brings it to 66%, a 3.1x speedup in token generation.

The problem: On an Intel Arc Pro B70, Q8_0 models ran at 4.88 t/s while Q4_K_M ran at 20.56 t/s, a 4x gap that shouldn't exist since Q8_0 only has 1.7x more data. After ruling out VRAM pressure, drivers, and backend issues, we traced it to the SYCL kernel dispatch path.

Root cause: llama.cpp's SYCL backend has a "reorder" optimization that separates quantization scale factors from weight data for coalesced GPU memory access. This was implemented for Q4_0, Q4_K, and Q6_K, but Q8_0 was never added. Q8_0's 34-byte blocks (not a power of two) make the non-reorder…
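The teaser cuts off there, but the layout problem is easy to picture with a toy model. Here is a minimal NumPy sketch of the reorder idea; the function name and array handling are mine, not the SYCL backend's. The 34-byte figure comes from the post itself, and the 2 + 32 split below is ggml's standard Q8_0 block (one fp16 scale followed by 32 int8 weights):

```python
import numpy as np

QK8_0 = 32                # int8 weights per Q8_0 block
BLOCK_BYTES = 2 + QK8_0   # fp16 scale (2 bytes) + 32 weights = 34 bytes, not a power of two

def reorder_q8_0(raw: np.ndarray):
    """Toy model of the 'reorder' optimization (names are illustrative, not
    llama.cpp's real API): split interleaved [scale | weights] blocks into two
    contiguous arrays, so the GPU can issue coalesced loads over the weights
    instead of strided 34-byte accesses that straddle cache lines."""
    blocks = raw.reshape(-1, BLOCK_BYTES)
    scales = np.ascontiguousarray(blocks[:, :2]).view(np.float16).ravel()
    weights = np.ascontiguousarray(blocks[:, 2:]).view(np.int8)
    return scales, weights

# Example: 4 raw Q8_0 blocks in array-of-structs layout
raw = np.zeros(4 * BLOCK_BYTES, dtype=np.uint8)
scales, weights = reorder_q8_0(raw)
print(scales.shape, weights.shape)  # (4,) (4, 32)
```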

Got Gemma 4 running locally on CUDA, both float and GGUF quantized, with benchmarks
Spent the last week getting Gemma 4 working on CUDA with both full-precision (BF16) and GGUF quantized inference. Here's a video of it running. Sharing some findings because this model has some quirks that aren't obvious.

Performance (Gemma 4 E2B, RTX 3090):

| Config                   | BF16 Float | Q4_K_M GGUF |
|--------------------------|------------|-------------|
| short gen (p=1, g=32)    | 110 tok/s  | 170 tok/s   |
| long gen (p=512, g=128)  | 72 tok/s   | 93 tok/s    |

The precision trap nobody warns you about: honestly, making it work was harder than I thought. Gemma 4 uses attention_scale=1.0 (QK-norm instead of the usual 1/sqrt(d_k) scaling). This makes it roughly 22x more sensitive to precision errors than standard transformers. Things that work fine on LLaMA or Qwen will silently produce garbage on Gemma 4: F1…
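For intuition, here is a minimal PyTorch sketch contrasting the usual 1/sqrt(d_k) scaling with the QK-norm, scale-1.0 scheme the post describes. This is an assumption-laden toy, not the model's actual code: the RMSNorm below omits the learned per-head gain a real implementation applies.

```python
import torch

def rms_norm(x: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # RMSNorm without the learned gain the real model would apply
    return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + eps)

def scores_standard(q, k):
    # Usual transformer attention logits: QK^T / sqrt(d_k)
    return (q @ k.transpose(-2, -1)) / q.shape[-1] ** 0.5

def scores_qk_norm(q, k):
    # QK-norm with attention_scale = 1.0, as described in the post:
    # normalize q and k per head, then take raw dot products, no 1/sqrt(d_k)
    return rms_norm(q) @ rms_norm(k).transpose(-2, -1)

q = torch.randn(1, 8, 16, 64)  # (batch, heads, seq_len, head_dim)
k = torch.randn(1, 8, 16, 64)

# With the scale pinned at 1.0, per-element rounding error in q and k feeds
# straight into the softmax logits rather than being shrunk by 1/sqrt(d_k),
# which is one intuition for the precision sensitivity the author reports.
print(scores_standard(q, k).std(), scores_qk_norm(q, k).std())
```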

Tech companies are cutting jobs and betting on AI. The payoff is far from guaranteed
AI experts say we’re living in an experiment that may fundamentally change the model of work. Hundreds of thousands of tech workers are facing a harsh reality: their well-paying jobs are no longer safe. Now that artificial intelligence (AI) is here, their futures don’t look as bright as they did a decade ago. As US tech companies have ramped up investments in AI, they’ve slashed a staggering number of jobs. Microsoft cut 15,000 workers last year. Amazon laid off 30,000 employees in the last six months. Financial-services company Block eliminated more than 4,000 people, or 40% of its workforce, in February. Meta laid off more than 1,000 in the last six months and, according to a Reuters report, may cut 20% of all…

Resume Skills Section: Best Layout + Examples (2026)
Your skills section is the most-scanned part of your resume after your name and current title. ATS systems use it for keyword matching. Recruiters use it as a 2-second compatibility check. If it's poorly organized, buried at the bottom, or filled with the wrong skills, both audiences move on.

Where to Place Your Skills Section

| Situation | Best Placement | Why |
|-----------|----------------|-----|
| Technical role (SWE, DevOps, data) | Below name, above experience | Recruiters check your stack before reading bullets |
| Non-technical role (PM, marketing, ops) | Below experience | Experience and results matter more |
| Career changer | Below name, above experience | Establishes relevant skills before unrelated job titles |
| New grad / intern | Below education, above projects | Education sets context, skills show what you can do |

The rule: place skills where they…

How AI Is Transforming Cybersecurity and Compliance — A Deep Dive into PCI DSS
The intersection of artificial intelligence and cybersecurity is no longer a future concept; it is the present reality shaping how organizations defend their data, detect threats, and demonstrate regulatory compliance. As cyber threats grow in sophistication and volume, traditional rule-based security tools are struggling to keep pace. AI is filling that gap with speed, precision, and adaptability that human analysts alone cannot match. Nowhere is this transformation more consequential than in the world of payment security and compliance. The Payment Card Industry Data Security Standard (PCI DSS), the global framework governing how organizations handle cardholder data, has long been a compliance burden for businesses of all sizes. AI is now fundamentally changing how companies achieve…

Securing Plex on Synology NAS with Post-Quantum Cryptography via Cloudflare Tunnel
Introduction

Securing remote access to a Plex media server hosted on a Synology NAS presents a critical challenge, particularly in the face of advancing quantum computing capabilities. Traditional encryption algorithms, such as RSA and Elliptic Curve Cryptography (ECC), rely on the computational infeasibility of tasks like integer factorization and the discrete logarithm problem. Quantum computers running Shor's algorithm can solve these problems exponentially faster, rendering traditional encryption obsolete. This vulnerability is not a speculative future concern but an imminent threat, especially for internet-exposed services like Plex. Without post-quantum cryptography (PQC), Plex servers (and the sensitive data stored on Synology NAS devices) are susceptible to quantum-enabled decryption…
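To make the Shor's-algorithm point concrete, here is a toy sketch (not real cryptography: toy-sized primes, trial-division factoring standing in for a quantum computer) showing that anyone who can factor an RSA modulus immediately recovers the private key, which is exactly the capability Shor's algorithm would provide at real key sizes:

```python
def factor(n: int):
    """Trial division, feasible only for toy moduli; Shor's algorithm
    would do this efficiently for real 2048-bit keys."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            return p, n // p
        p += 1
    raise ValueError("n is prime")

# Toy RSA keypair: n = p*q public, d private with e*d = 1 mod (p-1)(q-1)
p, q, e = 1009, 1013, 65537
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))   # modular inverse (Python 3.8+)

msg = 42
cipher = pow(msg, e, n)              # anyone can encrypt with (n, e)

# An attacker who factors n recovers the private exponent and decrypts:
pp, qq = factor(n)
d_attacker = pow(e, -1, (pp - 1) * (qq - 1))
assert pow(cipher, d_attacker, n) == msg
```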

