OpenAI ChatGPT Enables Location Sharing For More Localized Near Me Results - Search Engine Roundtable
<a href="https://news.google.com/rss/articles/CBMickFVX3lxTE5KU01PckZUNHc0U3ZDWHdQbXpkNW1wSC1qRGo2REUxRkg0Mk5KUEFRLWxHQWJNVnB5dGlXQ3JtRk9TMjV1RTJmOFNRbjh2VEd2NHNfWVp4NzZkYXl6VEIxVzBOYm1JZHNPRG5BLXdhVWVCdw?oc=5" target="_blank">OpenAI ChatGPT Enables Location Sharing For More Localized Near Me Results</a> <font color="#6f6f6f">Search Engine Roundtable</font>

More about ChatGPT
How I Replaced 6 Paid AI Subscriptions With One Free Tool (Saved $86/Month)
I was paying $86/month for AI tools. Then I found one free platform that replaced all of them. Here's the exact breakdown.

The tools I cancelled:

| Tool | Cost | What I replaced it with |
| --- | --- | --- |
| ChatGPT Plus | $20/mo | Free GPT-4o on Kelora |
| Otter.ai | $17/mo | Free audio transcription |
| Jasper | $49/mo | Free AI text tools |
| **Total** | **$86/mo** | **$0** |

**GPT-4o (free).** Kelora gives direct access to GPT-4o, the same model inside ChatGPT Plus. No subscription, no credit card. I use it daily for code reviews, email drafts, and research summaries.

**Audio transcription (free).** Upload any audio file (meeting recordings, lectures, podcasts) and get accurate text back in seconds. Replaced my Otter.ai subscription instantly.

**AI writing (free).** Blog drafts, product copy, social posts. The text tools cover everything Jasper did for me at $49/month
More in Models
[P] GPU friendly lossless 12-bit BF16 format with 0.03% escape rate and 1 integer ADD decode works for AMD & NVIDIA
Hi everyone, I am from Australia :) I just released a new research prototype. It's a lossless BF16 compression format that stores weights in 12 bits by replacing the 8-bit exponent with a 4-bit group code. For 99.97% of weights, decoding is just one integer ADD. Byte-aligned split storage: a true 12 bits per weight (yes, 12 bits, not 11!), no 16-bit padding waste, and zero HBM read amplification. The main idea was not just "compress weights more", but to make the format GPU-friendly enough to use directly during inference: sign + mantissa take exactly 1 byte per element, and the group codes pack two nibbles into exactly 1 byte as well. 1.33x smaller than BF16. Fixed-rate 12 bits per weight, no
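The decode path described above can be sketched roughly as follows. This is a toy illustration of the general idea, not the author's actual format: the group table, the exact bit layout, and the `encode`/`decode` helper names are all assumptions. BF16 is 1 sign bit, 8 exponent bits, 7 mantissa bits; if the 16 most common exponents map to 4-bit group codes whose bases are stored pre-shifted, the decode really is a single integer ADD of two non-overlapping bit fields, with rare exponents taking the escape path.

```python
# Hypothetical sketch of a 12-bit lossless BF16 scheme (all names assumed).
# BF16 layout: [sign:1][exponent:8][mantissa:7].

# Assumed group table: 16 common exponents, stored pre-shifted (exp << 7)
# so decoding needs no shift, only one ADD.
GROUP_EXP = list(range(120, 136))
GROUP_BASE = [e << 7 for e in GROUP_EXP]
EXP_TO_GROUP = {e: g for g, e in enumerate(GROUP_EXP)}

def encode(bf16: int):
    """Split a 16-bit BF16 word into (sign+mantissa byte, group nibble).

    Returns None for the ~0.03% escape case (exponent not in the table),
    where the raw 16-bit value would be stored instead.
    """
    exp = (bf16 >> 7) & 0xFF
    g = EXP_TO_GROUP.get(exp)
    if g is None:
        return None                                # escape path
    sm = ((bf16 >> 8) & 0x80) | (bf16 & 0x7F)      # sign bit + 7 mantissa bits
    return sm, g

def decode(sm: int, g: int) -> int:
    """Reconstruct the BF16 word: one table lookup plus one integer ADD."""
    expanded = ((sm & 0x80) << 8) | (sm & 0x7F)    # move sign back to bit 15
    return expanded + GROUP_BASE[g]                # fields don't overlap, so ADD == OR

# Round trip: 1.0 in BF16 is 0x3F80 (exponent 127, mantissa 0).
sm, g = encode(0x3F80)
assert decode(sm, g) == 0x3F80
```

Because the sign+mantissa byte and the pre-shifted exponent base occupy disjoint bit positions, the ADD can never carry, which is what makes it equivalent to an OR and keeps the per-element decode down to one integer operation.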

Quoting Greg Kroah-Hartman
Months ago, we were getting what we called "AI slop": AI-generated security reports that were obviously wrong or low quality. It was kind of funny. It didn't really worry us. Something happened a month ago, and the world switched. Now we have real reports. All open source projects have real reports that are made with AI, but they're good, and they're real.

Greg Kroah-Hartman, Linux kernel maintainer, in conversation with Steven J. Vaughan-Nichols

Tags: security, linux, generative-ai, ai, llms, ai-security-research

Quoting Daniel Stenberg
The challenge with AI in open source security has transitioned from an AI slop tsunami into more of a ... plain security report tsunami. Less slop but lots of reports. Many of them really good. I'm spending hours per day on this now. It's intense.

Daniel Stenberg, lead developer of cURL

Tags: daniel-stenberg, security, curl, generative-ai, ai, llms, ai-security-research

Vulnerability Research Is Cooked
Thomas Ptacek's take on the sudden and enormous impact the latest frontier models are having on the field of vulnerability research:

Within the next few months, coding agents will drastically alter both the practice and the economics of exploit development. Frontier model improvement won't be a slow burn, but rather a step function. Substantial amounts of high-impact vulnerability research (maybe even most of it) will happen simply by pointing an agent at a source tree and typing "find me zero days".

Why are agents so good at this? A combination of baked-in knowledge, pattern-matching ability, and brute force:

You can't design a better problem for an LLM agent than exploitation research. Before you feed it a single token of context, a frontier LLM already en

