
REST vs GraphQL vs WebSockets vs Webhooks: A Real-World Decision Guide (With Code)

DEV Community · by Rose Wabere · March 31, 2026 · 9 min read
🧒 Explain Like I'm 5 (simple language)

Hey there, little explorer! Imagine you have lots of toys, right? Some toys are for building, some for drawing, and some for playing outside!

Computers are a bit like that. They have different ways to "talk" to each other, like sending messages back and forth.

This article is like a superhero guide for grown-ups, teaching them which "talking tool" is best for different jobs.

Sometimes, a computer just asks for one thing, like "What color is the sky?" (That's like REST). Other times, it wants to keep a special "walkie-talkie" open to hear new things all the time, like when you're watching a live cartoon! (That's like WebSockets).

And sometimes, when a computer is waiting for an answer, it's super smart and does other things instead of just sitting there doing nothing. It's like you playing with blocks while waiting for your toast to pop! That's called async/await.

So, it's all about picking the right way for computers to chat so they can play together really well!


You have used all of these. But when someone asks you, maybe in an interview, or in a system design meeting, why you chose WebSockets over polling, or webhooks over a queue, can you answer precisely?

This isn't another definitions post. This article is about knowing which tool to reach for and why, with code you can actually use.

Quick mental model before we start:

```
Communication patterns → REST | GraphQL | WebSockets | Webhooks
Code execution model   → async/await
```


These live at different layers. Conflating them is the most common source of confusion.

async/await: The Foundation, Not the Feature

Let's kill one myth immediately: async/await is not a communication pattern. It's how your server handles waiting.

Every I/O operation — database queries, HTTP calls, file reads — makes your code wait. async/await ensures that waiting doesn't freeze every other user's request.

```python
# BAD: blocks the event loop - all other requests stall for 40ms
@app.get("/order/{id}")
def get_order(id: int):
    order = db.execute("SELECT * FROM orders WHERE id = %s", id)  # blocking
    return order

# GOOD: yields control during the wait - other requests run while DB responds
@app.get("/order/{id}")
async def get_order(id: int):
    order = await db.fetch_one("SELECT * FROM orders WHERE id = $1", id)
    return order
```


When it matters most: High-concurrency services. A delivery platform handling 500 simultaneous drivers checking order status. An NGO dashboard pulling live survey data.
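To make the concurrency claim concrete, here is a toy, self-contained sketch (the "database" is just `asyncio.sleep`, so the numbers are illustrative, not a benchmark): fifty awaits on one event loop overlap instead of queuing.

```python
import asyncio
import time

async def fake_db_query(i: int) -> int:
    await asyncio.sleep(0.1)  # stand-in for a ~100ms database round trip
    return i

async def main() -> float:
    start = time.perf_counter()
    # All 50 "queries" wait concurrently on the same event loop
    await asyncio.gather(*(fake_db_query(i) for i in range(50)))
    return time.perf_counter() - start

elapsed = asyncio.run(main())
# Overlapped waits: total is close to 0.1s, not 50 * 0.1s = 5s
print(f"{elapsed:.2f}s")
```

Run the same fifty calls with a blocking `time.sleep(0.1)` and the total jumps to five seconds; that gap is the entire case for `async def` on I/O-bound endpoints.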

The trap: Calling a synchronous library inside an async function. This blocks the entire event loop.

```python
import requests  # synchronous - don't use this inside async functions
import httpx     # async-capable - use this instead

# WRONG
async def fetch_rate():
    r = requests.get("https://api.exchangerate.host/latest")  # blocks event loop
    return r.json()

# CORRECT
async def fetch_rate():
    async with httpx.AsyncClient() as client:
        r = await client.get("https://api.exchangerate.host/latest")
        return r.json()
```


REST: Default Choice for Good Reason

Use REST when:

  • The client initiates all interactions

  • Data doesn't change faster than the user refreshes

  • You're building standard CRUD

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    customer_id: int
    items: list[str]
    total: float

@app.get("/orders/{order_id}")
async def get_order(order_id: int):
    order = await db.fetch_one("SELECT * FROM orders WHERE id = $1", order_id)
    return order

@app.post("/orders")
async def create_order(order: Order):
    result = await db.execute(
        "INSERT INTO orders (customer_id, items, total) VALUES ($1, $2, $3) RETURNING id",
        order.customer_id, order.items, order.total
    )
    return {"order_id": result["id"]}

@app.delete("/orders/{order_id}")
async def cancel_order(order_id: int):
    await db.execute("UPDATE orders SET status = 'cancelled' WHERE id = $1", order_id)
    return {"status": "cancelled"}
```


Real-world scenario: A SACCO member portal. Members log in, check their loan balance, submit a loan application. All request/response. No data is changing while they're looking at a page. REST is perfect.

Production considerations:

  • Add proper HTTP caching headers (Cache-Control, ETag) - REST can be highly cacheable

  • Version your APIs (/v1/orders) from day one

  • Return proper status codes (201 for created, 404 for not found, 422 for validation errors - FastAPI handles this automatically)

GraphQL: REST With Client-Controlled Queries

Use GraphQL when:

  • Multiple clients (mobile, web, third-party integrations) need different shapes of the same data

  • You're constantly over-fetching or under-fetching with REST

  • You have deeply nested, relational data

```python
# Using Strawberry (best GraphQL library for FastAPI)
import strawberry
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter

@strawberry.type
class Order:
    id: int
    total: float
    status: str

@strawberry.type
class User:
    id: int
    name: str
    email: str
    orders: list[Order]

@strawberry.type
class Query:
    @strawberry.field
    async def user(self, id: int) -> User:
        return await get_user_with_orders(id)

schema = strawberry.Schema(query=Query)
graphql_router = GraphQLRouter(schema)

app = FastAPI()
app.include_router(graphql_router, prefix="/graphql")
```


Now a mobile app can request exactly what it needs:

```graphql
# Mobile - bandwidth-conscious, needs minimal data
query {
  user(id: 1) {
    name
    orders(last: 3) {
      id
      status
    }
  }
}
```

```graphql
# Web dashboard - needs full details
query {
  user(id: 1) {
    name
    email
    orders {
      id
      total
      status
      items {
        name
        price
      }
    }
  }
}
```


Real-world scenario: An NGO platform serving both field officers on 2G mobile and HQ analysts on desktops. Field officers need lightweight data. Analysts need full datasets. GraphQL lets one API serve both without maintaining separate endpoints.

Production considerations:

  • Implement depth limiting to prevent abusive nested queries

  • Add query complexity analysis: prevent user → orders → user → orders recursion

  • GraphQL doesn't cache well at HTTP layer: use DataLoader for N+1 query prevention

  • Don't default to GraphQL for simple services: it adds real overhead
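The depth-limiting idea can be illustrated with a crude brace counter (a pre-parse sketch of my own; a production server should use its GraphQL library's depth-limit extension and walk the parsed AST, since this version ignores strings and comments):

```python
def query_depth(query: str) -> int:
    # Track the deepest nesting of selection sets by counting braces
    depth = max_depth = 0
    for ch in query:
        if ch == "{":
            depth += 1
            max_depth = max(max_depth, depth)
        elif ch == "}":
            depth -= 1
    return max_depth

MAX_DEPTH = 4

abusive = "query { user(id: 1) { orders { user { orders { id } } } } }"
if query_depth(abusive) > MAX_DEPTH:
    print("rejected: query too deep")  # reject before hitting the resolvers
```

The point is to reject `user → orders → user → orders` cycles before any resolver runs, so one hostile query can't fan out into thousands of database hits.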

WebSockets: When the Server Needs to Talk First

Use WebSockets when:

  • Data changes continuously, and the user needs to see updates immediately

  • Polling would generate unacceptable load or latency

  • Both client and server need to send messages freely

```python
from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

# Connection manager for multiple clients
class ConnectionManager:
    def __init__(self):
        self.active: dict[str, WebSocket] = {}

    async def connect(self, user_id: str, websocket: WebSocket):
        await websocket.accept()
        self.active[user_id] = websocket

    def disconnect(self, user_id: str):
        self.active.pop(user_id, None)

    async def send_to_user(self, user_id: str, message: dict):
        ws = self.active.get(user_id)
        if ws:
            await ws.send_json(message)

    async def broadcast(self, message: dict):
        for ws in self.active.values():
            await ws.send_json(message)

manager = ConnectionManager()

@app.websocket("/ws/track/{driver_id}")
async def track_driver(websocket: WebSocket, driver_id: str):
    await manager.connect(driver_id, websocket)
    try:
        while True:
            data = await websocket.receive_json()
            # Driver sent a location update - forward it to the assigned rider
            if data["type"] == "location_update":
                await manager.send_to_user(
                    data["rider_id"],
                    {"type": "driver_location", "lat": data["lat"], "lng": data["lng"]}
                )
    except WebSocketDisconnect:
        manager.disconnect(driver_id)
```


Real-world scenario: A ride-hailing app showing a driver moving on the map in real time. The driver's app sends GPS coordinates every 3 seconds over a persistent WebSocket, and the rider's app receives them without polling. Polling at the same 3-second interval would instead cost roughly 1,000 REST requests per ride.

Production considerations:

  • Persistent connections consume server resources — plan for horizontal scaling early

  • Use Redis Pub/Sub to share WebSocket state across multiple server instances

  • Always handle WebSocketDisconnect — clients drop off constantly (network, battery, background app)

  • Heartbeats keep connections alive through load balancers that close idle connections after 60 seconds

```python
import asyncio

# Heartbeat to keep connection alive
@app.websocket("/ws/live")
async def live_feed(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            try:
                # Wait for a message with a 30s timeout
                data = await asyncio.wait_for(websocket.receive_json(), timeout=30.0)
                await handle_message(data)
            except asyncio.TimeoutError:
                # Send a ping to keep the connection alive
                await websocket.send_json({"type": "ping"})
    except WebSocketDisconnect:
        pass
```


Webhooks: Event Notification Between Services

Use webhooks when:

  • An external system needs to notify your system that something happened

  • You don't control the other system's push mechanism

  • You want event-driven integration without maintaining a persistent connection

```python
import hmac
import hashlib
from fastapi import FastAPI, Request, HTTPException, BackgroundTasks

app = FastAPI()

WEBHOOK_SECRET = "your_flutterwave_webhook_secret"

def verify_flutterwave_signature(payload: bytes, signature: str) -> bool:
    expected = hmac.new(
        WEBHOOK_SECRET.encode(), payload, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

@app.post("/webhooks/payment")
async def payment_webhook(request: Request, background_tasks: BackgroundTasks):
    # 1. Verify the request is actually from Flutterwave
    signature = request.headers.get("verif-hash", "")
    body = await request.body()

    if not verify_flutterwave_signature(body, signature):
        raise HTTPException(status_code=401, detail="Invalid signature")

    payload = await request.json()
    event_id = payload["data"]["id"]

    # 2. Idempotency - Flutterwave WILL retry on 500s
    if await db.webhook_event_exists(event_id):
        return {"status": "already_processed"}

    # 3. Acknowledge immediately, process async
    # Don't do heavy work here -- return 200 fast, process in background
    background_tasks.add_task(process_payment_event, payload)
    await db.mark_webhook_received(event_id)

    return {"status": "received"}

async def process_payment_event(payload: dict):
    order_id = payload["data"]["meta"]["order_id"]
    await db.execute(
        "UPDATE orders SET status = 'paid', paid_at = NOW() WHERE id = $1",
        order_id
    )
    await send_confirmation_email(order_id)
```


Real-world scenario: Your Kenyan e-commerce platform integrates Mpesa via Daraja API. When a customer pays, Safaricom calls your /webhooks/mpesa endpoint with the transaction details. You mark the order paid and send a confirmation SMS. No polling. No persistent connection.

Production considerations:

  • Always return 200 fast; the webhook caller will retry if you're slow or error

  • Never trust webhook data without verifying the signature

  • Log every webhook payload for debugging; payment disputes will happen

  • Queue heavy processing (emails, SMS, inventory updates) with a background worker

Real-World Architecture: All Four Together

Here's how a production fintech app in Kenya uses all of these together:

```
┌──────────────────────────────────────────────────────────┐
│                    Mobile/Web Client                     │
└───────┬───────────────────┬─────────────────┬────────────┘
        │                   │                 │
   REST (CRUD)          WebSocket          GraphQL
   POST /loans      /ws/notifications     /graphql
   GET /balance    (real-time alerts)    (analytics)
        │                   │                 │
┌───────▼───────────────────▼─────────────────▼────────────┐
│                     FastAPI Backend                      │
│               (all async/await internally)               │
└───────┬──────────────────────────────────────────────────┘
        │
   Webhook receiver
   POST /webhooks/mpesa   ← Safaricom calls this
   POST /webhooks/credit  ← Credit bureau calls this
```


Each pattern handles exactly what it's good at. The async/await inside FastAPI makes sure none of them block each other.

When to Use What: Decision Framework

| Signal | Use |
| --- | --- |
| Client requests data on demand | REST |
| Multiple clients need different data shapes | GraphQL |
| User needs to see live updates without refreshing | WebSocket |
| External service needs to notify your backend of events | Webhook |
| Your code waits on database, HTTP, or file I/O | async/await |

Hard rules:

  • If polling interval < 5 seconds and data changes frequently → switch to WebSocket

  • If you have > 3 different client types with different data needs → consider GraphQL

  • If you're integrating a payment provider, shipping tracker, or auth service → expect webhooks

  • If you're running FastAPI → use async def everywhere there's I/O, no exceptions

Common Mistakes (The Ones That Hurt in Production)

  1. requests inside async def: blocks the entire event loop. Use httpx with await.

  2. No idempotency on webhook handler: payment events get retried. Without idempotency checks, you'll charge customers twice.

  3. WebSocket without reconnection logic: mobile networks drop. Your client-side WebSocket needs exponential backoff reconnection, or users see frozen data silently.

  4. Assuming GraphQL is real-time: GraphQL subscriptions require a separate WebSocket-based setup. Standard queries/mutations are still request/response.

  5. No signature verification on webhooks: your endpoint is public. Anyone can POST to it. Always verify HMAC signatures.

  6. Keeping heavy processing in the webhook handler: the caller expects a fast response. Queue everything with a task worker (Celery, ARQ) and return 200 immediately.
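For mistake #3, the delay schedule is the part worth getting right. A common choice is full-jitter exponential backoff; here is a small sketch (the function name and defaults are mine, not from the article):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    # Full jitter: pick a random delay in [0, min(cap, base * 2^attempt)],
    # so a fleet of dropped clients doesn't reconnect in lockstep
    return random.uniform(0, min(cap, base * (2 ** attempt)))

# Reconnect loop sketch around whatever WebSocket client you use:
#   attempt = 0
#   while not connected:
#       sleep(backoff_delay(attempt))
#       try: connect(); attempt = 0
#       except ConnectionError: attempt += 1

for attempt in range(5):
    print(round(backoff_delay(attempt), 2))
```

Resetting `attempt` to zero after a successful connect matters: without it, one flaky hour leaves the client permanently reconnecting at the 30-second cap.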

If you found this helpful, please share it with others it might help. And if you have questions, kindly drop them in the comments below!

Rose Wabere - Data & Analytics Engineer, Nairobi. Building real-world data systems with Python, FastAPI, and whatever tool the problem actually needs.
