How I Built an Islamic Storytelling App with AI, Audio Narration & 8 Languages
An indie developer's journey from side project to 270+ paying subscribers, 8 markets, and a content engine that runs itself.
There are over 1.8 billion Muslims worldwide, yet the App Store has surprisingly few high-quality apps that tell the stories of the Quran in an engaging, audio-first way. Most existing apps felt like digitized textbooks -- walls of text, no narration, no atmosphere. I wanted something that felt more like Audible meets Duolingo, but for Islamic stories.
That idea became Qissah, an iOS app with 40+ professionally narrated Quran and prophet stories in 8 languages, an AI-powered Islamic Q&A chat, a full Quran reader, dhikr counter, and prayer times. This is the story of how I built it as a solo developer, the technical decisions that worked, the ones that didn't, and what I learned shipping to 8 markets.
The Problem: Islamic Content Deserves Better UX
I grew up listening to stories of the prophets. The Story of Yusuf. The trials of Ayub. Musa and the parting of the sea. These are some of the most dramatic, emotionally powerful narratives in human history, and they were being delivered through apps that looked like they were built in 2012.
Parents wanted something they could put on for their kids during car rides. Adults wanted something they could listen to during commutes. Converts wanted accessible entry points into Islamic knowledge. The audience was there. The product wasn't.
I started sketching out what a modern Quran stories experience should look like: professional voice narration, ambient background sounds (rain, wind, desert atmosphere), synchronized subtitles, and beautiful illustrations. Not a reading app -- a listening app.
The Tech Stack: Boring Choices, Fast Shipping
I'm an iOS-first developer, so the core app is native Swift with SwiftUI. Here is what powers each layer:
Client (iOS):
- Swift + SwiftUI, minimum iOS 16
- Clean Architecture with MVVM-C (Model-View-ViewModel-Coordinator)
- Feature-based module structure: each feature (Stories, Chat, Dhikr, Quran, Prayer Times) lives in its own Domain/Data/Presentation stack
Backend:
- Fastify v5 (TypeScript) deployed on Vercel serverless
- Firebase Firestore for user data, content metadata, and feature flags
- Apple App Store Server Library for JWS-verified server-to-server notifications
Monetization & Analytics:
- RevenueCat for subscription management (weekly, monthly, yearly, lifetime)
- Superwall for paywall A/B testing -- this was a game-changer for conversion optimization
- AppsFlyer for attribution
- Firebase Analytics (GA4) for event tracking
- Microsoft Clarity for session replay
Website & SEO:
- Static HTML deployed on Cloudflare Workers
- Python scripts for generating story pages at build time
No React Native. No Flutter. No over-engineered microservices. The philosophy was: use boring, proven tools and spend my energy on the content and experience instead of the infrastructure.
The MVVM-C architecture deserves a quick mention. Each feature module follows the same structure: domain entities and use case protocols at the top, repository implementations and DTOs in the data layer, and views with their view models in presentation. A single AppCoordinator manages navigation. This pattern scales well for a solo developer -- when I added the Quran reader six months after launch, I just dropped in a new feature module and wired it to the coordinator. Zero changes to existing code.
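The coordinator idea is language-agnostic, so here is a minimal sketch of it in Python rather than the app's actual Swift code. The class and route names are illustrative: features register an entry point with a single coordinator, and adding a new module is one registration call with no changes to existing features.

```python
# Hypothetical sketch of a single app coordinator that owns navigation.
# Feature modules register themselves; nothing else needs to change when
# a new feature (like the Quran reader) is dropped in later.
class AppCoordinator:
    def __init__(self):
        self.routes = {}

    def register(self, name, entry_point):
        self.routes[name] = entry_point

    def navigate(self, name):
        return self.routes[name]()

coordinator = AppCoordinator()
coordinator.register("stories", lambda: "StoriesView")
# Added six months after launch -- zero changes to existing registrations:
coordinator.register("quran", lambda: "QuranReaderView")
print(coordinator.navigate("quran"))  # QuranReaderView
```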
The Audio-First Approach: Where the Magic Lives
This is the part that took the most time and had the biggest impact on retention.
Each of the 40+ stories has professional voice narration recorded in 8 languages: English, Arabic, German, Dutch, Turkish, French, Swedish, and Spanish. That is not machine-generated TTS -- these are real voice actors, recorded in professional studios, with proper intonation and emotion.
On top of the narration, I layered ambient background sounds. When a story describes a storm, you hear rain and thunder. Desert scenes get wind and sand ambience. Ocean scenes get waves. The user can toggle these on or off, adjust the mix, and even swap background sounds.
The playback system tracks detailed engagement metrics: playback started, paused, resumed, completed, abandoned (with progress percentage). This data told me something crucial early on -- users who enabled background sounds had significantly higher completion rates. That insight shaped the entire onboarding flow: now the app gently introduces background sounds during the first story playback.
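The kind of analysis behind that insight can be sketched as a small cohort comparison. The event shape below is an illustrative assumption, not Qissah's actual analytics schema:

```python
# Hypothetical sketch: segment story completion rates by whether background
# sounds were enabled. Event fields are made up for illustration.
from collections import defaultdict

def completion_rate_by_cohort(events):
    """events: dicts like {"user", "story", "bg_sounds": bool, "completed": bool}"""
    totals = defaultdict(lambda: [0, 0])  # cohort -> [completed, started]
    for e in events:
        cohort = "bg_on" if e["bg_sounds"] else "bg_off"
        totals[cohort][1] += 1
        if e["completed"]:
            totals[cohort][0] += 1
    return {c: done / started for c, (done, started) in totals.items()}

sample = [
    {"user": 1, "story": "yusuf", "bg_sounds": True,  "completed": True},
    {"user": 2, "story": "yusuf", "bg_sounds": True,  "completed": True},
    {"user": 3, "story": "musa",  "bg_sounds": False, "completed": False},
    {"user": 4, "story": "musa",  "bg_sounds": False, "completed": True},
]
print(completion_rate_by_cohort(sample))  # {'bg_on': 1.0, 'bg_off': 0.5}
```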
Subtitles are synchronized to the audio timeline and work across all 8 languages. For Arabic and any future RTL languages, the entire UI flips -- layout direction, text alignment, even the swipe gestures reverse. SwiftUI's environment-based locale system made this surprisingly painless once I set up the infrastructure.
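The core of subtitle synchronization is a lookup from playback position to the active cue. A minimal sketch, assuming a simple (start, end, text) cue format rather than the app's actual subtitle data model:

```python
import bisect

# Given cues sorted by start time, find the cue active at the current
# playback position; return None in gaps between cues.
def active_cue(cues, position):
    """cues: list of (start_sec, end_sec, text), sorted by start_sec."""
    starts = [c[0] for c in cues]
    i = bisect.bisect_right(starts, position) - 1
    if i >= 0 and cues[i][0] <= position < cues[i][1]:
        return cues[i][2]
    return None

cues = [
    (0.0, 3.5, "In the name of Allah..."),
    (4.0, 9.2, "This is the story of Yusuf."),
]
print(active_cue(cues, 5.0))  # This is the story of Yusuf.
print(active_cue(cues, 3.7))  # None (gap between cues)
```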
The content pipeline works like this: I commission narrations per story per language, process the audio files, generate subtitle timing data, and push everything to the CDN. A Python script on the backend generates the web versions of each story page from the same source data. One source of truth, two outputs (app + web).
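The "one source of truth, two outputs" idea can be sketched like this. The story record's field names, the CDN URL, and the HTML template are all illustrative assumptions, not the real pipeline's schema:

```python
# Hypothetical single story record driving both outputs: an app manifest
# entry and a static web page for the same story and language.
import json

story = {
    "slug": "prophet-yusuf",
    "title": {"en": "The Story of Prophet Yusuf", "de": "Die Geschichte von Prophet Yusuf"},
    "audio": {"en": "yusuf_en.m4a", "de": "yusuf_de.m4a"},
    "duration_sec": 812,
}

def app_manifest_entry(story, lang):
    # Output 1: metadata the iOS client downloads.
    return {"slug": story["slug"], "title": story["title"][lang],
            "audio_url": f"https://cdn.example.com/{story['audio'][lang]}"}

def web_page(story, lang):
    # Output 2: the static HTML story page.
    return (f"<h1>{story['title'][lang]}</h1>\n"
            f'<audio src="/audio/{story["audio"][lang]}"></audio>')

print(json.dumps(app_manifest_entry(story, "en")))
print(web_page(story, "de"))
```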
Adding AI Chat: Firebase VertexAI for Islamic Q&A
About four months after launch, I noticed an interesting pattern in user feedback: people were finishing stories and wanting to ask follow-up questions. "What does this story teach us about patience?" "How does the story of Yusuf relate to modern life?" "Can you explain the context of this Quranic verse?"
Instead of building a traditional FAQ, I added an AI chat feature powered by Firebase VertexAI (Google's Gemini models accessed through the Firebase SDK). The chat is scoped specifically to Islamic knowledge -- it can discuss prophets in Islam, Quranic context, Islamic history, and ethical lessons from the stories.
The implementation sits behind the same Clean Architecture boundary as everything else: a Chat feature module with its own domain entities (conversations, messages, subjects), a data layer that handles the VertexAI streaming API, and a SwiftUI presentation layer with a conversational UI.
A few things I learned building the AI chat:
Context matters more than model quality. The system prompt that scopes the AI to Islamic topics and connects it to the story the user just listened to made a bigger difference than any model parameter tuning.
Streaming is non-negotiable. Users expect to see tokens appear in real-time. A 3-second wait for a complete response feels broken. Firebase's streaming API made this straightforward on iOS with Swift's async/await concurrency model.
Guardrails are essential. The AI needs to stay in its lane. It should discuss Islamic knowledge thoughtfully and refuse to generate content outside its scope. Getting this right took multiple iterations of the system prompt and testing across edge cases.
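The scoping and guardrail ideas above can be sketched independently of any SDK. The prompt text, topic list, and function names below are illustrative stand-ins; the real app relies on a carefully iterated system prompt inside Firebase VertexAI, not a keyword filter:

```python
# Hedged sketch: scope the assistant via the system prompt and connect it to
# the story the user just heard, plus a cheap client-side pre-filter.
def build_system_prompt(last_story):
    prompt = ("You are an assistant for Islamic knowledge: prophets in Islam, "
              "Quranic context, Islamic history, and ethical lessons. "
              "Politely decline questions outside this scope.")
    if last_story:
        prompt += f" The user just finished listening to: {last_story}."
    return prompt

IN_SCOPE = ("prophet", "quran", "islam", "dua", "surah", "patience", "yusuf")

def passes_prefilter(user_message):
    """Cheap pre-filter only; the system prompt is the real guardrail."""
    text = user_message.lower()
    return any(keyword in text for keyword in IN_SCOPE)

print(build_system_prompt("The Story of Yusuf"))
print(passes_prefilter("What does the story of Yusuf teach about patience?"))  # True
```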
The chat feature is now one of the strongest conversion drivers -- users who engage with the AI chat are significantly more likely to convert to paid subscribers.
Growth & SEO: A Static Website Doing Heavy Lifting
Here is something that might surprise other indie developers: a hand-built static HTML website is one of my strongest growth channels.
The Qissah website is not a fancy Next.js app. It is static HTML, hand-authored for the main pages and Python-generated for the 41 individual story pages. It runs on a Cloudflare Worker. Total infrastructure cost: essentially zero.
But the SEO work behind it is deliberate and systematic:
- 56 URLs in the sitemap, each with proper structured data (MobileApplication, Article, FAQPage, BreadcrumbList schemas), hreflang alternates for 6 language variants, and Apple Smart App Banners that deep-link directly into the app.
- Content that serves real queries. Each story page (like the Story of Prophet Musa or Prophet Yusuf) targets long-tail keywords that people actually search for. The pages provide genuine value -- story summaries, Quranic references, lessons -- not thin SEO bait.
- Localized homepages in Arabic (with full RTL support), German, French, Dutch, and Turkish. Each locale gets its own hreflang tags, localized meta descriptions, and culturally appropriate copy.
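Generating hreflang alternates at build time is a few lines of Python. The domain comes from the article; the URL layout and locale-to-path mapping are assumptions for illustration:

```python
# Build-time sketch: emit one hreflang <link> per locale plus x-default.
LOCALES = {"en": "", "ar": "ar/", "de": "de/", "fr": "fr/", "nl": "nl/", "tr": "tr/"}

def hreflang_tags(base="https://qissahapp.com/"):
    tags = [f'<link rel="alternate" hreflang="{code}" href="{base}{path}" />'
            for code, path in LOCALES.items()]
    # x-default tells search engines which page to show for unmatched locales.
    tags.append(f'<link rel="alternate" hreflang="x-default" href="{base}" />')
    return "\n".join(tags)

print(hreflang_tags())
```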
The result: the website now drives 2,200+ daily impressions from Google Search, with a meaningful chunk of that traffic converting to app installs via the Smart App Banner. Around 60% of the website traffic comes from ChatGPT and AI assistants surfacing the content, which was completely unexpected but has become a significant acquisition channel.
The key insight: for a niche content app, SEO compounds in a way that paid acquisition doesn't. Every story page I publish keeps working forever. Every Apple Search Ads dollar I spend stops working the moment I pause the campaign.
The Analytics Stack: Measuring Everything, Twice
One slightly unusual thing about Qissah's analytics setup: every event fires to both Firebase Analytics (GA4) and AppsFlyer simultaneously. This dual-logging approach gives me two independent data sources to cross-reference, which has caught discrepancies more than once.
The app tracks 102 distinct event keys across onboarding, story playback, chat engagement, dhikr sessions, Quran reading, prayer times, streaks, and subscription lifecycle. Nine user attributes (subscription status, locale, stories completed, streak length, primary feature, etc.) are synced to both Firebase and Superwall for segmentation.
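The dual-logging pattern is a small facade that fans every event out to independent sinks. The sink classes below are stand-ins for the Firebase and AppsFlyer SDK calls, sketched in Python rather than the app's Swift:

```python
# Sketch: one Analytics facade, two sinks, every event goes to both so the
# resulting data sets can be cross-checked for discrepancies.
class ListSink:
    def __init__(self):
        self.events = []

    def log(self, name, params):
        self.events.append((name, params))

class Analytics:
    def __init__(self, *sinks):
        self.sinks = sinks

    def log(self, name, **params):
        for sink in self.sinks:  # fan out to every configured sink
            sink.log(name, params)

firebase, appsflyer = ListSink(), ListSink()
analytics = Analytics(firebase, appsflyer)
analytics.log("story_playback_completed", story="yusuf", progress=1.0)
assert firebase.events == appsflyer.events  # both sinks saw the same event
```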
The full acquisition funnel looks like this:
Apple Search Ads (spend/taps) -> AppsFlyer (attribution) -> GA4 (in-app engagement) -> Superwall (paywall A/B tests) -> RevenueCat (trial -> paid conversion)
This cross-platform chain lets me calculate true ROAS: how much I spent on Apple Ads, how many attributed installs converted to trials, and how many of those trials became paying subscribers. Most indie developers stop at "cost per install." Knowing cost per paying subscriber changed how I allocate budget across markets.
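The funnel math is simple but worth making explicit. All numbers below are made up for illustration; the point is that cost per install is only the first step:

```python
# Worked example: from ad spend down to cost per paying subscriber and
# first-month ROAS. Rates and prices are hypothetical.
def funnel_economics(spend, installs, trial_rate, trial_to_paid, monthly_price):
    trials = installs * trial_rate
    paid = trials * trial_to_paid
    return {
        "cost_per_install": spend / installs,
        "cost_per_subscriber": spend / paid,
        "first_month_roas": (paid * monthly_price) / spend,
    }

m = funnel_economics(spend=500, installs=400, trial_rate=0.10,
                     trial_to_paid=0.40, monthly_price=9.99)
print(m)  # cost_per_install 1.25, cost_per_subscriber 31.25, roas ~0.32
```

A $1.25 install can still be a $31 subscriber; budget allocation across markets only makes sense at the subscriber level.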
Lessons Learned
What worked:
- Audio-first, not text-first. The narration quality is the single biggest differentiator. Users mention it in almost every 5-star review. Investing in professional voice actors instead of TTS was expensive but worth every dollar.
- Superwall for paywall experimentation. Being able to A/B test different paywall designs, copy, and trigger points without shipping app updates accelerated my monetization learning by months. The paywall exposure problem (showing the paywall at the right moment, not too early, not too late) is still the biggest lever I'm optimizing.
- Localization as a growth strategy. Supporting 8 audio languages and 22 app languages opened up markets (Germany, Netherlands, France, Turkey) that most English-only competitors ignore entirely. Some of my best-performing keywords have zero competition in non-English App Stores.
- Static SEO over paid acquisition. The website's organic traffic compounds monthly. Paid ads gave me an initial boost but the ROI of content marketing has been dramatically better over a 12-month horizon.
What didn't work:
- Launching with too many features. The first version had stories, Quran reader, dhikr, prayer times, and chat. I should have launched with stories only and added features based on demand. The Quran reader, for instance, gets moderate usage but consumed weeks of development time that could have gone into more story content.
- Pricing experiments without guardrails. I ran an aggressive low-price test early on that drove a 4x spike in MRR but created a cohort of subscribers at unsustainably low prices. When I corrected pricing, churn spiked. Now I have formal guardrails around any pricing change: define the hypothesis, set a rollback trigger, and never change more than one variable at a time.
- Underestimating content operations. Recording narrations in 8 languages, synchronizing subtitles, QA-ing pronunciation, and managing voice actor relationships is a content operations challenge, not an engineering challenge. I spent too long optimizing the playback engine and not enough time building content pipelines.
- Ignoring Android for too long. Almost half my website visitors are on Android. I have a waitlist page collecting emails, but every month without an Android app is leaving significant revenue on the table.
What's Next
The roadmap is shaped by what the data says, not what I think is cool:
Android launch. The Kotlin/Compose version is in development. Same architecture patterns, same content pipeline, different platform. The backend and content are already platform-agnostic, so the heavy lifting is the native UI and audio playback.
More stories, more languages. The content library is expanding from 40+ to 60+ stories, and I am exploring Urdu, Malay, and Indonesian as the next language tiers based on App Store keyword demand in those markets.
Deeper AI integration. The chat feature proved that users want to engage with the content, not just consume it. I am exploring personalized story recommendations based on listening history and an AI-guided study mode that connects stories to their Quranic sources.
Community features. Streak sharing, family accounts, and group listening are all in the research phase. The dhikr streaks already drive strong daily retention -- extending that social layer could meaningfully improve word-of-mouth growth.
Final Thoughts
Building Qissah taught me that the best indie app opportunities are in underserved niches where the audience is large but the existing products are mediocre. 1.8 billion Muslims is not a small market. It is a market that has been largely ignored by quality-focused app developers.
The technical stack is intentionally boring. The content is not. That imbalance is the whole strategy.
If you are thinking about building a niche content app, my one piece of advice: invest disproportionately in the content quality over the technology. Users do not care that you used Clean Architecture with MVVM-C. They care that the narration gave them chills.
Qissah is free to download on iOS. You can explore the stories at qissahapp.com/quran-stories or learn more about the prophets at qissahapp.com/prophets-in-islam.
Tags: #ios #swift #indiehacker #ai #firebase #mobiledevelopment #startup