Building a Python Workflow That Catches Bugs Before Production
Using modern tooling to identify defects earlier in the software lifecycle. This post appeared first on Towards Data Science.
Python is one of those languages that can make you feel productive almost immediately.
That is a big part of why it’s so popular. Moving from idea to working code can be very quick. You don’t need a lot of scaffolding just to test an idea. Some input parsing, a few functions maybe, stitch them together, and very often you’ll have something useful in front of you within minutes.
The downside is that Python can also be very forgiving in places where you would sometimes prefer it not to be.
It will quite happily assume a dictionary key exists when it does not. It will allow you to pass around data structures with slightly different shapes until one finally breaks at runtime. It will let a typo survive longer than it should. And perhaps, sneakily, it will let the code be “correct” while still being far too slow for real-world use.
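A tiny, self-contained sketch of that forgiveness (the dictionary and key names here are invented for illustration, not taken from any real module):

```python
# Python defers these mistakes to runtime rather than flagging them upfront.
order = {"id": "o1", "totl": 9.99}  # "totl" is a typo that nothing flags

# A missing key accessed via .get() silently becomes None...
email = order.get("customer_email")
print(email)  # None

# ...and only blows up later, far from the original mistake.
try:
    email.lower()
except AttributeError as exc:
    print(exc)  # 'NoneType' object has no attribute 'lower'
```

The error eventually surfaces, but at a distance from the line that caused it, which is exactly what the tooling below is meant to prevent.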
That’s why I have become more interested in code development workflows in general rather than in any single testing technique.
When people talk about code quality, the conversation usually goes straight to tests. Tests matter, and I use them constantly, but I don’t think they should carry the whole burden. It would be better if most mistakes were caught before the code is even run. Maybe some issues should be caught as soon as you save your code file. Others, when you commit your changes to GitHub. And if those pass OK, perhaps you want to run a series of tests to verify that the code behaves properly and performs well enough to withstand real-world contact.
In this article, I want to walk through a set of tools you can use to build a Python workflow to automate the tasks mentioned above. Not a giant enterprise setup or an elaborate DevOps platform. Just a practical, relatively simple toolchain that helps catch bugs in your code before deployment to production.
To make that concrete, I am going to use a small but realistic example. Imagine I am building a Python module that processes order payloads, calculates totals, and generates recent-order summaries. Here’s a deliberately rough first pass.
```python
from datetime import datetime
import json


def normalize_order(order):
    created = datetime.fromisoformat(order["created_at"])
    return {
        "id": order["id"],
        "customer_email": order.get("customer_email"),
        "items": order["items"],
        "created_at": created,
        "discount_code": order.get("discount_code"),
    }


def calculate_total(order):
    total = 0
    discount = None

    for item in order["items"]:
        total += item["price"] * item["quantity"]

    if order.get("discount_code"):
        discount = 0.1
        total *= 0.9

    return round(total, 2)


def build_order_summary(order):
    normalized = normalize_order(order)
    total = calculate_total(order)
    return {
        "id": normalized["id"],
        "email": normalized["customer_email"].lower(),
        "created_at": normalized["created_at"].isoformat(),
        "total": total,
        "item_count": len(normalized["items"]),
    }


def recent_order_totals(orders):
    summaries = []
    for order in orders:
        summaries.append(build_order_summary(order))

    summaries.sort(key=lambda x: x["created_at"], reverse=True)
    return summaries[:10]
```
There’s a lot to like about code like this when you’re “moving fast and breaking things”. It’s short and readable, and probably even works on the first couple of sample inputs you try.
But there are also several bugs or design problems waiting in the wings. If customer_email is missing, for example, the .lower() method will raise an AttributeError. There is also an assumption that the items variable always contains the expected keys. There’s an unused import and a leftover variable from what appears to be an incomplete refactor. And in the final function, the entire result set is sorted even though only the 10 most recent items are needed. That last point matters because we want our code to be as efficient as possible. If we only need the top ten, we should avoid fully sorting the dataset whenever possible.
It’s code like this where a good workflow starts paying for itself.
With that being said, let’s have a look at some of the tools you can use in your code development pipeline, which will ensure your code has the best possible chance to be correct, maintainable and performant. All the tools I’ll discuss are free to download, install and use.
Note that some of the tools I mention are multi-purpose. For example, some of the formatting that the black utility performs can also be done with the ruff tool. Often it's just down to personal preference which ones you use.
Tool #1: Readable code with no formatting noise
The first tool I usually install is called Black. Black is a Python code formatter. Its job is very simple: it takes your source code and automatically applies a consistent style and format.
Installation and use
Install it using pip or your preferred Python package manager. After that, you can run it like this:

```shell
$ black your_python_file.py
```

or

```shell
$ python -m black your_python_file.py
```
Black requires Python version 3.10 or later to run.
Using a code formatter might seem cosmetic, but I think formatters are more important than people sometimes admit. You don’t want to spend mental energy deciding how a function call should wrap, where a line break should go, or whether you have formatted a dictionary “nicely enough.” Your code should be consistent so you can focus on logic rather than presentation.
Suppose you have written this function in a hurry.
```python
def build_order_summary(order):
    normalized=normalize_order(order); total=calculate_total(order)
    return {"id":normalized["id"],"email":normalized["customer_email"].lower(),"created_at":normalized["created_at"].isoformat(),"total":total,"item_count":len(normalized["items"])}
```

It's messy, but Black turns that into this.
```python
def build_order_summary(order):
    normalized = normalize_order(order)
    total = calculate_total(order)
    return {
        "id": normalized["id"],
        "email": normalized["customer_email"].lower(),
        "created_at": normalized["created_at"].isoformat(),
        "total": total,
        "item_count": len(normalized["items"]),
    }
```

Black hasn't fixed any business logic here. But it has done something extremely useful: it has made the code easier to inspect. When the formatting disappears as a source of friction, any real coding problems become much easier to see.
Black is configurable in many different ways, which you can read about in its official documentation. (Links to this and all the tools mentioned are at the end of the article)
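For example, settings can live in your project's pyproject.toml. A minimal sketch for illustration (the keys are real Black options; the values here are arbitrary examples, not recommendations):

```toml
[tool.black]
line-length = 100            # override Black's default of 88
target-version = ["py311"]   # format for Python 3.11 syntax
```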
Tool #2: Catching the small suspicious mistakes
Once formatting is handled, I usually add Ruff to the pipeline. Ruff is a Python linter written in Rust; it's fast, efficient, and very good at what it does.
Installation and use
Like Black, Ruff can be installed with any Python package manager.
```shell
$ pip install ruff

$ # And used like this
$ ruff check your_python_code.py
```
Linting is useful because many bugs begin life as little suspicious details. Not deep logic flaws or clever edge cases. Just slightly wrong code.
For example, in our sample module there are a couple of unused imports and a variable that is assigned but never really needed:
```python
from datetime import datetime
import json


def calculate_total(order):
    total = 0
    discount = 0

    for item in order["items"]:
        total += item["price"] * item["quantity"]

    if order.get("discount_code"):
        total *= 0.9

    return round(total, 2)
```
Ruff can catch those immediately:
```
$ ruff check test1.py
F401 [*] `datetime.datetime` imported but unused
 --> test1.py:1:22
  |
1 | from datetime import datetime
  |                      ^^^^^^^^
2 | import json
  |
help: Remove unused import: `datetime.datetime`

F401 [*] `json` imported but unused
 --> test1.py:2:8
  |
1 | from datetime import datetime
2 | import json
  |        ^^^^
3 |
4 | def calculate_total(order):
  |
help: Remove unused import: `json`

F841 Local variable `discount` is assigned to but never used
 --> test1.py:6:5
  |
4 | def calculate_total(order):
5 |     total = 0
6 |     discount = 0
  |     ^^^^^^^^
7 |
8 |     for item in order["items"]:
  |
help: Remove assignment to unused variable `discount`

Found 3 errors.
[*] 2 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
```
Tool #3: Python starts feeling much safer
Formatting and linting help, but neither really addresses the source of much of the trouble in Python: assumptions about data.
That’s where mypy comes in. Mypy is a static type checker for Python.
Installation and use
Install it with pip, then run it like this:

```shell
$ pip install mypy

$ # To run use this
$ mypy test3.py
```
Mypy will run a type check on your code (without actually executing it). This is an important step because many Python bugs are really data-shape bugs. You assume a field exists. You assume a value is a string or that a function returns one thing when in reality it sometimes returns another.
To see it in action, let’s add some types to our order example.
```python
from datetime import datetime
from typing import NotRequired, TypedDict


class Item(TypedDict):
    price: float
    quantity: int


class RawOrder(TypedDict):
    id: str
    items: list[Item]
    created_at: str
    customer_email: NotRequired[str]
    discount_code: NotRequired[str]


class NormalizedOrder(TypedDict):
    id: str
    customer_email: str | None
    items: list[Item]
    created_at: datetime
    discount_code: str | None


class OrderSummary(TypedDict):
    id: str
    email: str
    created_at: str
    total: float
    item_count: int
```
Now we can annotate our functions.
```python
def normalize_order(order: RawOrder) -> NormalizedOrder:
    return {
        "id": order["id"],
        "customer_email": order.get("customer_email"),
        "items": order["items"],
        "created_at": datetime.fromisoformat(order["created_at"]),
        "discount_code": order.get("discount_code"),
    }


def calculate_total(order: RawOrder) -> float:
    total = 0.0

    for item in order["items"]:
        total += item["price"] * item["quantity"]

    if order.get("discount_code"):
        total *= 0.9

    return round(total, 2)


def build_order_summary(order: RawOrder) -> OrderSummary:
    normalized = normalize_order(order)
    total = calculate_total(order)

    return {
        "id": normalized["id"],
        "email": normalized["customer_email"].lower(),
        "created_at": normalized["created_at"].isoformat(),
        "total": total,
        "item_count": len(normalized["items"]),
    }
```
Now the bug is much harder to hide. For example,
```
$ mypy test3.py
test3.py:36: error: Item "None" of "str | None" has no attribute "lower"  [union-attr]
Found 1 error in 1 file (checked 1 source file)
```

customer_email comes from order.get("customer_email"), which means it may be absent, in which case it evaluates to None. Mypy tracks that as `str | None`, and correctly rejects calling .lower() on it without first handling the None case.
It may seem a simple thing, but I think it’s a big win. Mypy forces you to be more honest about the shape of the data that you’re actually handling. It turns vague runtime surprises into early, clearer feedback.
Tool #4: Testing, testing 1..2..3
At the start of this article, we identified three problems in our order-processing code: a crash when customer_email is missing, unchecked assumptions about item keys, and an inefficient sort, which we’ll return to later. Black, Ruff and Mypy have already helped us address the first two structurally. But tools that analyse code statically can only go so far. At some point, you need to verify that the code actually behaves correctly when it runs. That’s what pytest is for.
Installation and use
```shell
$ pip install pytest

$ # run it with
$ pytest your_test_file.py
```

Pytest has a great deal of functionality, but its simplest and most useful feature is also its most direct: the assert statement. If the condition you assert is false, the test fails. That's it. No elaborate framework to learn before you can write something useful.
Assuming we now have a version of the code that handles missing emails gracefully, along with a sample base_order, here is a test that protects the discount logic:
```python
import pytest


@pytest.fixture
def base_order():
    return {
        "id": "order-123",
        "customer_email": "customer@example.com",
        "created_at": "2025-01-15T10:30:00",
        "items": [
            {"price": 20, "quantity": 2},
            {"price": 5, "quantity": 1},
        ],
    }


def test_calculate_total_applies_10_percent_discount(base_order):
    base_order["discount_code"] = "SAVE10"

    total = calculate_total(base_order)

    subtotal = (20 * 2) + (5 * 1)
    expected = subtotal * 0.9

    assert total == expected
```
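If you want to cover both the discounted and undiscounted paths without duplicating the test body, pytest's parametrize marker is one option. A sketch, using a simplified stand-in for the article's calculate_total so the block is self-contained:

```python
import pytest


def calculate_total(order):  # simplified stand-in for the function under test
    total = sum(item["price"] * item["quantity"] for item in order["items"])
    if order.get("discount_code"):
        total *= 0.9  # flat 10% discount, as in the article's example
    return round(total, 2)


@pytest.mark.parametrize(
    "discount_code, expected",
    [
        (None, 45.0),      # (20*2 + 5*1) with no discount
        ("SAVE10", 40.5),  # the same subtotal with 10% off
    ],
)
def test_calculate_total(discount_code, expected):
    order = {"items": [{"price": 20, "quantity": 2}, {"price": 5, "quantity": 1}]}
    if discount_code:
        order["discount_code"] = discount_code
    assert calculate_total(order) == expected
```

Each parameter tuple runs as its own test case, so a regression in either branch shows up as a separately-named failure.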
And here are the tests that protect the email handling, specifically the crash we flagged at the start, where calling .lower() on a missing email would bring the whole function down:
```python
def test_build_order_summary_returns_valid_email(base_order):
    summary = build_order_summary(base_order)

    assert "email" in summary
    assert summary["email"].endswith("@example.com")


def test_build_order_summary_when_email_missing(base_order):
    base_order.pop("customer_email")

    summary = build_order_summary(base_order)

    assert summary["email"] == ""
```
That second test is important too. Without it, a missing email is a silent assumption — code that works fine in development and then throws an AttributeError the first time a real order comes in without that field. With it, the assumption is explicit and checked every time the test suite runs.
This is the division of labour worth keeping in mind. Ruff catches unused imports and dead variables. Mypy catches bad assumptions about data types. Pytest catches something different: it protects behaviour. When you change the way build_order_summary handles missing fields, or refactor calculate_total, pytest is what tells you whether you’ve broken something that was previously working. That’s a different kind of safety net, and it operates at a different level from everything that came before it.
Tool #5: Because your memory is not a reliable quality-control system
Even with a good toolchain, there's still one obvious weakness: you can forget to run it. That's where a tool like pre-commit comes into its own. Pre-commit is a framework for managing and maintaining multi-language git hooks, scripts that run automatically when you commit or push your code.
Installation and use
The standard setup is to pip install it, add a .pre-commit-config.yaml file, and run pre-commit install so the hooks run automatically before each commit.
A simple config might look like this:
```yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.10.0
    hooks:
      - id: black

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.11.13
    hooks:
      - id: ruff
      - id: ruff-format

  - repo: local
    hooks:
      - id: mypy
        name: mypy
        entry: mypy
        language: system
        types: [python]
        stages: [pre-push]

      - id: pytest
        name: pytest
        entry: pytest
        language: system
        pass_filenames: false
        stages: [pre-push]
```
Now install the hooks:

```
$ pre-commit install
pre-commit installed at .git/hooks/pre-commit

$ pre-commit install --hook-type pre-push
pre-commit installed at .git/hooks/pre-push
```
From that point on, the checks run automatically when your code is changed and committed/pushed.
- `git commit` → triggers black, ruff, ruff-format
- `git push` → triggers mypy and pytest
Here’s an example.
Let’s say we have the following Python code in file test1.py
```python
from datetime import datetime
import json


def calculate_total(order):
    total = 0
    discount = 0

    for item in order["items"]:
        total += item["price"] * item["quantity"]

    if order.get("discount_code"):
        total *= 0.9

    return round(total, 2)
```
Create a file called .pre-commit-config.yaml with the YAML code from above. Now if test1.py is being tracked by git, here’s the type of output to expect when you commit it.
```
$ git commit test1.py
[INFO] Initializing environment for https://github.com/psf/black.
[INFO] Initializing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Installing environment for https://github.com/psf/black.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
[INFO] Installing environment for https://github.com/astral-sh/ruff-pre-commit.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
black....................................................................Failed
- hook id: black
- files were modified by this hook

reformatted test1.py

All done! ✨ 🍰 ✨
1 file reformatted.

ruff (legacy alias)......................................................Failed
- hook id: ruff
- exit code: 1

test1.py:1:22: F401 [*] `datetime.datetime` imported but unused
  |
1 | from datetime import datetime
  |                      ^^^^^^^^ F401
2 | import json
  |
  = help: Remove unused import: `datetime.datetime`

test1.py:2:8: F401 [*] `json` imported but unused
  |
1 | from datetime import datetime
2 | import json
  |        ^^^^ F401
3 |
4 | def calculate_total(order):
  |
  = help: Remove unused import: `json`

test1.py:7:5: F841 Local variable `discount` is assigned to but never used
  |
5 | def calculate_total(order):
6 |     total = 0
7 |     discount = 0
  |     ^^^^^^^^ F841
8 |
9 |     for item in order["items"]:
  |
  = help: Remove assignment to unused variable `discount`

Found 3 errors.
[*] 2 fixable with the `--fix` option (1 hidden fix can be enabled with the `--unsafe-fixes` option).
```
Tool #6: Because “correct” code can still be broken
There is one final category of problems that I think gets underestimated when developing code: performance. A function can be logically correct and still be wrong in practice if it’s too slow or too memory-hungry.
A profiling tool I like for this is called py-spy. Py-spy is a sampling profiler for Python programs: it can profile a running process without restarting it or modifying its code. This tool is different from the others we've discussed, as you typically wouldn't use it in an automated pipeline. Instead, it's more of a one-off process to be run against code that has already been formatted, linted, type checked and tested.
Installation and use
```shell
$ pip install py-spy
```
Now let's revisit the "top ten" example. Here is the original function again:

```python
def recent_order_totals(orders):
    summaries = []
    for order in orders:
        summaries.append(build_order_summary(order))

    summaries.sort(key=lambda x: x["created_at"], reverse=True)
    return summaries[:10]
```
If all you have is an unsorted collection in memory, then yes, you still need some ordering logic to know which ten orders are the most recent. The point is not to avoid ordering entirely, but to avoid fully sorting the entire dataset when you only need the top ten.
There are many different commands you can run to profile your code using py-spy. Perhaps the simplest is:
```
$ py-spy top -- python test3.py

Collecting samples from 'python test3.py' (python v3.11.13)
Total Samples 100
GIL: 22.22%, Active: 51.11%, Threads: 1

  %Own   %Total   OwnTime  TotalTime  Function (filename)
 16.67%  16.67%    0.160s     0.160s  _path_stat ()
 13.33%  13.33%    0.120s     0.120s  get_data ()
  7.78%   7.78%    0.070s     0.070s  _compile_bytecode ()
  5.56%   6.67%    0.060s     0.070s  _init_module_attrs ()
  2.22%   2.22%    0.020s     0.020s  _classify_pyc ()
  1.11%   1.11%    0.010s     0.010s  _check_name_wrapper ()
  1.11%  51.11%    0.010s     0.490s  _load_unlocked ()
  1.11%   1.11%    0.010s     0.010s  cache_from_source ()
  1.11%   1.11%    0.010s     0.010s  _parse_sub (re/_parser.py)
  1.11%   1.11%    0.010s     0.010s  (importlib/metadata/_collections.py)
  0.00%  51.11%    0.010s     0.490s  _find_and_load ()
  0.00%   4.44%    0.000s     0.040s  (pygments/formatters/__init__.py)
  0.00%   1.11%    0.000s     0.010s  _parse (re/_parser.py)
  0.00%   0.00%    0.000s     0.010s  _path_importer_cache ()
  0.00%   4.44%    0.000s     0.040s  (pygments/formatter.py)
  0.00%   1.11%    0.000s     0.010s  compile (re/_compiler.py)
  0.00%  50.00%    0.000s     0.470s  (_pytest/_code/code.py)
  0.00%  27.78%    0.000s     0.250s  get_code ()
  0.00%   1.11%    0.000s     0.010s  (importlib/metadata/_adapters.py)
  0.00%   1.11%    0.000s     0.010s  (email/charset.py)
  0.00%  51.11%    0.000s     0.490s  (pytest/__init__.py)
  0.00%  13.33%    0.000s     0.130s  _find_spec ()

Press Control-C to quit, or ? for help.
```
The `top` subcommand gives you a live view of which functions are consuming the most time, which makes it the fastest way to get oriented before doing anything more detailed.
Once we realise there may be an issue, we can consider alternative implementations of our code. In our example case, one option would be to use heapq.nlargest in our function:
```python
from datetime import datetime
from heapq import nlargest


def recent_order_totals(orders):
    return nlargest(
        10,
        (build_order_summary(order) for order in orders),
        key=lambda x: datetime.fromisoformat(x["created_at"]),
    )
```
The new code still performs comparisons, but it avoids fully sorting every summary just to discard almost all of them. In my tests on large inputs, the version using heapq was 2–3 times faster than the original function. And in a real system, the best optimisation is often not to solve this in Python at all. If the data comes from a database, I would usually prefer to ask the database for the 10 most recent rows directly.
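If you want to sanity-check that kind of claim on your own machine, a quick timeit comparison on synthetic data is enough. A sketch (the 100,000-element list and repetition count are arbitrary choices; absolute numbers will vary by machine):

```python
import heapq
import random
import timeit

# Synthetic stand-in for a large collection of order timestamps.
data = [random.random() for _ in range(100_000)]

t_sort = timeit.timeit(lambda: sorted(data, reverse=True)[:10], number=10)
t_heap = timeit.timeit(lambda: heapq.nlargest(10, data), number=10)

# Both approaches agree on the result; only the amount of work differs.
assert heapq.nlargest(10, data) == sorted(data, reverse=True)[:10]
print(f"full sort: {t_sort:.3f}s   nlargest: {t_heap:.3f}s")
```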
The reason I bring this up is that performance advice gets vague very quickly. “Make it faster” is not useful. “Avoid sorting everything when I only need ten results” is useful. A profiler helps you get to that more precise level.
Resources
Here are the official GitHub links for each tool:
| Tool       | Official page                            |
|------------|------------------------------------------|
| Ruff       | https://github.com/astral-sh/ruff        |
| Black      | https://github.com/psf/black             |
| mypy       | https://github.com/python/mypy           |
| pytest     | https://github.com/pytest-dev/pytest     |
| pre-commit | https://github.com/pre-commit/pre-commit |
| py-spy     | https://github.com/benfred/py-spy        |

Note also that many modern IDEs, such as VSCode and PyCharm, have plugins for these tools that provide feedback as you type, making them even more useful.
Summary
Python’s greatest strength — the speed at which you can go from idea to working code — is also the thing that makes disciplined tooling worth investing in. The language won’t stop you from making assumptions about data shapes, leaving dead code around, or writing a function that works perfectly on your test input but falls over in production. That’s not a criticism of Python. It’s just the trade-off you’re making.
The tools in this article help recover some of that safety without sacrificing speed.
Black handles formatting so you never have to think about it again. Ruff catches the small suspicious details — unused imports, assigned-but-ignored variables — before they quietly survive into a release. Mypy forces you to be honest about the shape of the data you’re actually passing around, turning vague runtime crashes into early, specific feedback. Pytest protects behaviour so that when you change something, you know immediately what you broke. Pre-commit makes all of this automatic, removing the single biggest weakness in any manual process: remembering to run it.
Py-spy sits slightly apart from the others. You don’t run it on every commit. You reach for it when something correct is still too slow — when you need to move from “make it faster” to something precise enough to actually act on.
None of these tools is a substitute for thinking carefully about your code. What they do is give mistakes fewer places to hide. And in a language as permissive as Python, that’s worth quite a lot.
Note that there are several tools that can replace any one of those mentioned above, so if you have a favourite linter that’s not ruff, for example, feel free to use it in your workflow instead.