Exclusive | The Sudden Fall of OpenAI’s Most Hyped Product Since ChatGPT - WSJ
Could not retrieve the full article text.
Read on Google News: OpenAI

More about
Google’s Nano Banana Pro might be the ‘ChatGPT moment’ for AI image generation
By combining advanced reasoning with real-time data, Google's Nano Banana Pro redefines what's possible in image-generation AI. The post first appeared on TechTalks.
Jailbreaking Generative AI: Multivector Phishing Threats and Transformer-Based Defenses
arXiv:2507.12185v2 Abstract: The rise of Generative AI (GenAI) has reshaped the cybersecurity landscape by enabling new attack vectors and lowering the barrier to executing advanced social engineering campaigns. This study conducts an empirical analysis of jailbreaking vulnerabilities in ChatGPT-4o-Mini, showing that novices can bypass safeguards to generate complete multivector phishing attacks across email, web, SMS, and voice channels. Controlled experiments reveal that role-based jailbreaks produce fully operational attack paths capable of credential harvesting. User studies further demonstrate the disruptive potential of GenAI: novice participants exhibited a 240% increase in perceived phishing competence, a 400% improvement in task completion rates, and a 57
More in Models
Are 1-bit and TurboQuant the future of OSS? A simulation for Qwen3.5 models.
A simulation of what the Qwen3.5 model family would look like using 1-bit weights and TurboQuant KV-cache compression. The table below shows the results; this would be a revolution:

| Model | Parameters | Q4_K_M File (Current) | KV Cache, 256K (Current) | Hypothetical 1-bit Weights | KV Cache, 256K with TurboQuant | Hypothetical Total Memory |
|---|---|---|---|---|---|---|
| Qwen3.5-122B-A10B | 122B total / 10B active | 74.99 GB | 81.43 GB | 17.13 GB | 1.07 GB | 18.20 GB |
| Qwen3.5-35B-A3B | 35B total / 3B active | 21.40 GB | 26.77 GB | 4.91 GB | 0.89 GB | 5.81 GB |
| Qwen3.5-27B | 27B | 17.13 GB | 34.31 GB | 3.79 GB | 2.86 GB | 6.65 GB |
| Qwen3.5-9B | 9B | 5.89 GB | 14.48 GB | 1.26 GB | 1.43 GB | 2.69 GB |
| Qwen3.5-4B | 4B | 2.87 GB | 11.46 GB | 0.56 GB | 1.43 GB | 1.99 GB |
| Qwen3.5-2B | 2B | 1.33 GB | 4.55 GB | 0.28 GB | 0.54 GB | 0.82 GB |

submitted by /u/GizmoR13
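As a rough sanity check on figures like these: weight memory scales linearly with bit width, so a back-of-envelope helper makes the table's magnitudes easy to verify. This sketch is not from the post; it assumes decimal GB and a flat bits-per-weight rate, ignoring embeddings, scales, and metadata kept at higher precision:

```python
def weight_memory_gb(n_params_billion: float, bits_per_weight: float) -> float:
    """Naive weight-storage estimate in decimal GB: params * bits / 8 bytes."""
    return n_params_billion * 1e9 * bits_per_weight / 8 / 1e9

# 122B parameters at a pure 1 bit/weight: 15.25 GB, in the same ballpark as
# the table's 17.13 GB once higher-precision embeddings/scales are added.
print(weight_memory_gb(122, 1.0))  # 15.25

# Q4_K_M's effective rate is closer to ~4.9 bits/weight, which is why the
# current file is 74.99 GB rather than the 61.0 GB a flat 4-bit rate gives.
print(weight_memory_gb(122, 4.0))  # 61.0
```

The same arithmetic applied to the smaller models reproduces the hypothetical 1-bit column to within the overhead the poster presumably included.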
SOTA Language Models Under 14B?
Hey guys, I was wondering what recent state-of-the-art small language models are best for general question-answering tasks (diverse topics, including math)? Any good or bad experiences with specific models? Thank you! submitted by /u/No-Mud-1902
[New Model] - CatGen v2 - generate 128px images of cats with this GAN
Hey, r/LocalLLaMA! I am back with a new model - no transformer but a GAN! It is called CatGen v2 and it generates 128x128px images of cats. You can find the full source code, samples, and the final model here: https://huggingface.co/LH-Tech-AI/CatGen-v2 Look at this sample after epoch 165 (trained on a single Kaggle T4 GPU): https://preview.redd.it/t1k3v71auqsg1.png?width=1146&format=png&auto=webp&s=26b4639eb7f9635d8b58a24633f8e4125859fd9e Feedback is very welcome :D submitted by /u/LH-Tech_AI

