Ideogram v2 is an outstanding new inpainting model
We've partnered with Ideogram to bring their inpainting model to Replicate's API.
Posted October 22, 2024 by andreasjansson
Today Ideogram is launching its new inpainting feature for Ideogram v2, and we're thrilled to be partnering with them to bring it to Replicate's API. We've been blown away by the quality of this model. It's really good.
Ideogram v2 comes in two flavors:
- ideogram-ai/ideogram-v2 - produces the best image quality.
- ideogram-ai/ideogram-v2-turbo - still high quality, but faster.
For example, here is a herd of dinosaurs grazing on bucolic green hills:
Ideogram v2 is not just for inpainting: you can use it to generate any type of image. In our tests, we found it to be particularly good at generating text.
Run Ideogram v2 with an API on Replicate
To inpaint an image with the Replicate Python client, run:
Or in JavaScript:
Live demo
We updated our open-source inpainter.app demo to use Ideogram v2.
You can try the inpainter live in the browser. Type a prompt to get started, and draw with your mouse to mask out parts of the image. Then enter a new prompt and hit submit.
Getting the best inpainting results
Here are some tips and tricks. Your mileage may vary!
- As a rule of thumb, when Magic Prompt is off, describe the whole scene, not just the inpainted region.
- When Magic Prompt is on, the model rewrites your prompt based on both the original prompt and the image, so you don't necessarily need to describe the whole image.
- If you describe only the inpainted region, the model puts more emphasis on the prompt, which can produce better results.
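As an illustration, a region-only prompt with Magic Prompt turned off might look like this. The `magic_prompt_option` parameter name and its values are assumptions; check the model's input schema on Replicate for the exact parameter.

```python
# Illustrative input for a region-only prompt with Magic Prompt disabled.
# "magic_prompt_option" and its values are assumptions; check the model's
# input schema on Replicate for the exact parameter name.
inpaint_input = {
    "prompt": "a brass reading lamp on the side table",  # describes only the masked region
    "magic_prompt_option": "Off",  # "On" lets the model rewrite and expand the prompt
}
```

Pass this dict, together with your `image` and `mask` files, as the `input` to `replicate.run`.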
Next steps
Inpainting has lots of fun applications. You can duplicate fonts, place objects in rooms, generate sprite maps for games, and lots more.
Check out these example projects:
- Inpainter - an open-source Next.js app for inpainting images.
- Outpainter - an open-source Nuxt.js app for extending images beyond their original canvas.
Let us know what you build on X or Discord.