Sony's gaming division just bought an AI startup that turns photos into 3D volumes
Sony Interactive Entertainment, owner of the PlayStation brand, has acquired Cinemersive Labs, a UK startup developing tools to convert 2D photos and videos into 3D volumes. The startup team will join Sony's Visual Computing Group, a research engineering team focused on graphical technology, including game rendering, video coding and generative AI models.
Cinemersive's most recent product is a virtual reality app called Parallax that works as a viewer for parallax photos — three-dimensional images that you can peer around with natural head movements — captured using traditional smartphones and professional cameras with stereo lenses. The startup developed custom AI tools to convert 2D images into 3D volumes to make Parallax possible, and Sony apparently wants to apply that expertise to its own projects.
"Following the acquisition, the Cinemersive Labs team will join SIE’s Visual Computing Group (VCG) and contribute to our broader efforts in advancing state of the art visual computing within games," Sony says. "This includes applying machine learning to enhance gameplay visuals, improve rendering techniques, and unlock new levels of visual fidelity for players."
Machine learning has been a major focus of Sony's efforts to improve graphical performance on the PlayStation 5 and future hardware. The PlayStation 5 Pro was designed around a new GPU, faster storage and PlayStation Spectral Super Resolution (PSSR), custom AI upscaling tech that lets the console render games at a lower resolution and then upscale them to 4K. The company recently squeezed even more performance out of the Pro with an updated version of PSSR it released in March. And with AMD, Sony is working on Project Amethyst, a multi-pronged collaboration to improve ray tracing and upscaling on future consoles.