
I Started Learning AI-Assisted Development — And It Completely Changed How I Think About Coding

DEV Community · by ARAFAT AMAN ALIM · April 5, 2026 · 18 min read


I'll be honest with you.

When I first heard the phrase "AI-assisted development," I rolled my eyes a little. I'd seen the Twitter threads. I'd watched developers rave about tools that supposedly wrote entire apps in seconds. I was skeptical. Wasn't this just autocomplete with better marketing?

Then I actually sat down and took a proper course on it.

What happened over the next few hours genuinely shifted my perspective — not just about the tools, but about what being a developer even means in 2026. This isn't a hype piece. It's the real story of what I learned, what surprised me, what scared me a little, and what I'm going to keep using going forward.

Let's get into it.

First: The Stuff They Don't Tell You in the Ads

Before touching a single tool, the course started with fundamentals. And I'm glad it did, because understanding how these AI systems actually work is what separates developers who use AI effectively from those who just get frustrated by it.

Tokens — The Currency of AI

Every time you type something to an AI, it doesn't read words the way you do. It reads tokens — small chunks that might be a word, part of a word, a punctuation mark, or even a space. The word function might be a single token. The word functionalize might get broken into two or three.

Why does this matter to you as a developer? Because most AI services have token limits. Free tiers restrict how many tokens you can use per day or month. Longer prompts cost more. Understanding this helps you stop wondering why you hit limits and start thinking strategically about how you write prompts.
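You can get a rough feel for token counts without running a real tokenizer. A common rule of thumb for English text is about four characters per token. This sketch is an approximation only — the heuristic and the function name are mine, and real BPE tokenizers vary by model:

```typescript
// Rough token estimate using the common ~4-characters-per-token heuristic.
// Real tokenizers (BPE-based) split differently per model, so treat this
// as a budgeting tool, not an exact count.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

const prompt = "write an async TypeScript function that fetches a user by ID";
console.log(estimateTokens(prompt)); // 15 by this heuristic (60 chars / 4)
```

A quick estimate like this is enough to tell you whether a prompt is a few hundred tokens or tens of thousands — which is the scale at which limits and costs actually bite.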

Context Windows — The AI's Short-Term Memory

The context window is how much information the AI can hold in its "mind" at once during a conversation.

Here's the real-world difference this makes:

  • GPT-4: ~128,000 tokens

  • Claude: up to 200,000 tokens

  • Gemini: over 1 million tokens

A small context window means the AI can only look at a few files at a time. A large context window means it can analyze your entire codebase in one pass. This isn't just a spec sheet detail — it fundamentally changes what kind of tasks you can assign to each tool.
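To make that concrete, here's a back-of-envelope check for whether a codebase fits a given window. The numbers mirror the list above; the 4-chars-per-token heuristic and the 20% headroom factor are my assumptions, not official figures:

```typescript
// Approximate published context limits, in tokens.
const CONTEXT_WINDOWS: Record<string, number> = {
  "gpt-4": 128_000,
  "claude": 200_000,
  "gemini": 1_000_000,
};

// Will a set of source files plausibly fit in one pass? Uses the rough
// 4-chars-per-token heuristic and leaves ~20% headroom for the system
// prompt and the model's reply.
function fitsInContext(fileSizesInChars: number[], model: string): boolean {
  const totalChars = fileSizesInChars.reduce((a, b) => a + b, 0);
  const estimatedTokens = Math.ceil(totalChars / 4);
  return estimatedTokens <= CONTEXT_WINDOWS[model] * 0.8;
}

// A 2 MB codebase (~500k tokens) overflows GPT-4 but fits Gemini:
console.log(fitsInContext([2_000_000], "gpt-4"));  // false
console.log(fitsInContext([2_000_000], "gemini")); // true
```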

Hallucinations — The Thing That Should Keep You Humble

This one matters most. A hallucination is when the AI confidently suggests something that's completely wrong. Not "maybe wrong." Wrong — like referencing a library that was deprecated years ago, inventing an API method that never existed, or calling a function that doesn't exist anywhere.

The AI isn't lying to you. It's doing exactly what it was trained to do: predicting what token should come next. Sometimes that prediction is perfect. Sometimes it's completely made up — and delivered with the exact same confident tone.

This is why you never, ever blindly accept AI-generated code. Test it. Check the docs. Verify the functions actually exist. This isn't optional.

The Mental Model That Changed Everything For Me

Here's the framing that clicked for me, and I haven't stopped thinking about it since:

Think of AI as a very fast, very knowledgeable junior developer.

They can write code quickly. They know a lot of syntax. But you need to review their work, guide the architecture, and make the important decisions.

AI is phenomenal at the how: how to implement something, how to write the syntax, how to structure a function. But you should be deciding the what and the why. What you're building and why you're building it that way.

Use AI for:

  • Boilerplate code (getters, setters, basic CRUD)

  • Learning new frameworks or syntax

  • Writing tests and documentation

  • Refactoring repetitive patterns

  • Getting unstuck on syntax errors

Keep these to yourself:

  • System architecture decisions

  • Security-critical logic

  • Complex business logic

  • Performance-critical optimizations

  • Learning a new concept for the first time

GitHub Copilot: Your Pair Programmer Lives in VS Code

The first tool I got hands-on with was GitHub Copilot, and it's the most accessible entry point for most developers.

Setting It Up

Install the GitHub Copilot Chat extension in VS Code (not the old deprecated one). Sign in with your GitHub account, and you're done. The extension now handles both code completions and chat in one package.

On pricing: the free tier gives you 2,000 code completions and 50 chat/agent requests per month. If you're a student, teacher, or open-source maintainer, you can get the Pro plan free — which gives you unlimited completions and access to premium models.

The "Neighboring Tabs" Trick

This was the first genuine "whoa" moment for me.

Copilot doesn't just look at the file you're currently editing. It also scans the other tabs you have open in VS Code. So if you have your CSS file, your test file, and your component file all open simultaneously, Copilot will suggest code that uses the actual class names from your CSS and the actual test IDs from your test file.

Without those tabs open, it might suggest something generic like container or user-box. With them open, it knows you need user-card-name because that's what your test is looking for. This is the difference between suggestions that feel generic and suggestions that feel like the tool actually understands your project.

Three Modes You Need to Know

Ask Mode is for questions, learning, and exploration. It won't touch a single line of your code unless you manually copy something. Safe to use when you're in a "what is this" headspace.

Edit Mode is for targeted refactoring. Point it at a file and give it a specific instruction — like "refactor this to async/await with a 5-second timeout" — and it applies changes directly to your file in a diff view. Red = old code, green = new code. You review each line and choose keep or undo. This is the fastest way to safely refactor.

Agent Mode is where things get genuinely powerful — and genuinely require your attention. Give it a massive task like "create a full REST API with JWT authentication and an SQLite database" and it will plan, create files, install npm packages, write middleware, and even run test builds to check for syntax errors. It's autonomous. It asks for permission before sensitive operations. And it will make decisions you might not agree with — so you need to watch the plan it generates and stay engaged.

Teaching Copilot Your Project's Rules

You can create a file called .github/copilot-instructions.md and write plain-English rules for your project — always use TypeScript, always use functional components, use bcrypt for passwords — and Copilot will follow them automatically.

Even better: just type /init in the Copilot chat, and it will scan your codebase and generate this file for you. Every suggestion it makes from that point forward will respect your project's conventions automatically.
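As a hypothetical example (these rules are illustrative, not from the course), such a file might contain:

```markdown
# Project conventions
- Always use TypeScript with strict mode enabled
- Use functional React components with hooks; no class components
- Hash passwords with bcrypt; never log or store them in plain text
- Use parameterized queries for all database access
- Prefer async/await over raw promise chains
```

Short, plain-English statements like these are enough — Copilot folds them into every suggestion it makes for the project.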

CodeRabbit: The AI That Reviews Your Code So You Don't Have To (Well, Sort Of)

This is the one that genuinely impressed me most, and it's the one I didn't expect to care about.

Here's the workflow problem CodeRabbit solves: you push code, you wait a day or two for a human code review, and then you have to context-switch back to fix issues that are now stale in your memory. With AI code review, you get feedback in minutes — while the context is still fresh.

What CodeRabbit Actually Does

The moment you create a pull request, CodeRabbit:

  • Analyzes your changes and generates a summary

  • Flags security vulnerabilities before they reach production

  • Identifies bugs and code quality issues

  • Assigns severity levels: critical, major, minor, nitpick

  • Provides specific, implementable suggested fixes

  • Lets you chat with it directly in the PR comments

It integrates with GitHub, GitLab, Bitbucket, and Azure DevOps.

A Real Example That Got Me

During the course, a demo project had these issues caught by CodeRabbit on a PR:

  • Major: Hard-coded admin password in source code (with a one-click committable fix)

  • Critical: Admin list and delete endpoints were completely unauthenticated — anyone could enumerate or delete discount codes

  • Major: Cart total was being accepted from the client, which could be manipulated — it should be computed server-side

  • Minor: Dead code with unfixed bugs

And here's the part that got me: you can just type @coderabbit ai can you show me how to properly secure these admin endpoints using middleware? right in a PR comment, and it will give you a complete, contextual implementation guide.

The CLI Power Move

CodeRabbit also has a CLI. You can run coderabbit from your terminal before you've even committed anything. This means you can set up a loop: write code → review locally → fix → commit. And it gets better: if you're using an AI agent like Copilot or Claude, you can tell the agent to "run coderabbit --prompt-only and fix any issues it finds" — and the agent will handle the entire review-and-fix cycle autonomously.

One AI writing code. Another AI reviewing it. That's not science fiction anymore.

CodeRabbit Plan — This One's a Game Changer

Before you write a single line of code, create a GitHub issue describing what you want. Then comment @coderabbit ai plan on the issue. CodeRabbit will analyze your entire codebase and generate:

  • A detailed implementation plan

  • Specific files to modify

  • Patterns to follow

  • Acceptance criteria

  • Ready-to-use prompts for AI agents

Instead of telling an AI agent "add a wish list feature and figure it out," you give it this context-rich plan. The agent knows exactly where to touch the codebase and what the success criteria look like.

Claude Code: When You Need Deep Reasoning in Your Terminal

Claude Code is Anthropic's CLI tool, and it occupies a different niche from Copilot. Where Copilot lives inside your IDE and excels at quick completions and inline suggestions, Claude Code is a terminal-first tool designed for complex reasoning, large-scale analysis, and autonomous multi-step tasks.

Installing and Getting Started

npm install -g @anthropic-ai/claude-code
claude


Claude Code requires a Claude Pro subscription or API credits — there's no free tier. But for what it can do, it's worth it.

Thinking Modes

This is a genuinely clever design decision. When you give Claude Code a command, you can prefix it with different thinking modes:

  • think — quick analysis

  • think hard — deeper reasoning

  • think harder — even more thorough

  • ultrathink — maximum depth, maximum time

When you're making an architectural decision or asking it to analyze your entire codebase for issues, telling it to ultrathink means it'll spend significantly more time reasoning through the problem before it responds. For complex tasks, this produces noticeably better output.

The Claude.md File

Create a claude.md file in your project directory. In plain English, describe what your application is, what technologies you use, your coding standards, your project structure. Claude will read this and use it as context for everything it does in your project — following your conventions automatically.
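A minimal claude.md might look like this (the contents are illustrative — write whatever describes your own project):

```markdown
# TechMart — e-commerce demo app

## Stack
- Node.js + Express backend, React frontend, SQLite database

## Conventions
- TypeScript everywhere; strict mode on
- REST endpoints live in src/routes/, one file per resource
- All database access goes through src/db/ with parameterized queries
- Run `npm test` before declaring any task complete
```

Because Claude reads this at the start of every session, you stop repeating the same context in every prompt.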

What It Can Actually Do

During the demo, the instructor told Claude to analyze the TechMart codebase and suggest improvements. Claude read through every file, understood the architecture, and identified issues that the team hadn't caught — including a race condition in user ID collection during registration and CORS being configured to allow all origins with credentials. Then, with a single "yes, please fix all five," it made all the changes, installed the necessary packages, and summarized exactly what it changed.

That's a meaningfully different category of assistance than code completion.

Gemini CLI: Google's Secret Weapon Is That Context Window

Gemini CLI comes from Google DeepMind and has one spec that makes it uniquely powerful: over 1 million tokens in its context window. That's not a typo. It can hold an almost comically large amount of information in its working memory at once.

Combined with a free tier of 1,000 messages per day, it's the most accessible of the powerful CLI tools.

The Multimodal Trick

Gemini can process images. This means you can take a photo of a whiteboard sketch and ask it to turn it into a React component. Or — as demonstrated in the course — take a screenshot of an SVG keyboard image, pass it in with a command, and ask it to add more keys to the SVG file. It found the keyboard, understood the structure, and added the keys. That's multimodal AI working in a developer context, and it's wild.

Something I Didn't Expect

When Gemini CLI started up in the demo, it didn't wait for a command. It just started exploring the codebase on its own — running the application, logging into the demo account, verifying the session. All before the instructor typed anything. That kind of autonomous initialization is either impressive or slightly unsettling depending on your perspective. Probably both.

OpenClaw: The AI That Never Sleeps, Lives on Your Computer, and You Text From WhatsApp

This is the one that sounds the most like a product someone invented for a science fiction story.

OpenClaw is an open-source AI assistant that runs directly on your own computer — your laptop, a Mac Mini, a cloud VM. Once it's set up, you can message it from WhatsApp, Telegram, Discord, or any chat app you already use. You're essentially texting a co-worker who has full access to your development environment and never goes offline.

What Makes It Different

Always available: It's not a chat interface you have to open. It's a background process you message from wherever you are.

Persistent memory: Unlike most AI tools where every conversation starts fresh, OpenClaw maintains context continuously. It knows your projects, your preferences, your ongoing tasks — because it's been running the whole time.

It takes real actions: It doesn't just suggest things. It can run commands, open browsers, manage files, deploy projects, send emails, manage GitHub notifications. This is an AI that does things, not just recommends them.

The Cron Jobs Feature

This is where it gets genuinely interesting for developer productivity. You can set up scheduled tasks in OpenClaw — like a cron job, but you just describe it in plain English:

  • "Check my email every morning and summarize important messages"

  • "Work on my SaaS project every night"

  • "Generate one YouTube script with slides every week"

  • "Monitor my PRs and ping me on Discord when CodeRabbit finds issues"

It builds the automation, runs it, and keeps it running. If you can describe it, it can probably build it.

The Orchestration Layer

Here's the insight that reframed all the other tools for me: OpenClaw can use any CLI tool, which means it can use Claude Code, it can use CodeRabbit, it can use Gemini. It becomes the orchestration layer for your entire AI workflow.

Imagine you're on a walk and get a notification that tests are failing. You text your OpenClaw: "fix the failing tests and open a PR." It spawns a Claude Code session, runs the tests, figures out what's wrong, fixes it, and opens the PR. You finish your walk. This is real. This is available right now.

The Full Stack AI Workflow

By the end of the course, the picture of how all these tools fit together became clear. Here's the pattern that was demonstrated:

  • Plan with Claude Code (ultrathink the architecture)

  • Generate a coding plan with CodeRabbit Plan from a GitHub issue

  • Implement with GitHub Copilot Agent Mode in VS Code

  • Write tests with Claude Code first (so the AI has concrete goals to code against)

  • Review with CodeRabbit (catches the edge cases, security issues, and code smells)

  • Orchestrate with OpenClaw (tie everything together, run it while you're not at your desk)

And with MCP (Model Context Protocol) — a way to give AI tools additional capabilities — you can extend Claude Code to browse the web, run Puppeteer for browser automation, do DuckDuckGo searches, query databases, and manage GitHub repos. MCP turns these tools from powerful into extensible.
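Concretely, Claude Code can read MCP server definitions from a .mcp.json file at the project root. Here's a sketch of one entry, assuming the mcpServers shape used by Anthropic's tools and a reference Puppeteer server package — verify the exact format and package names against the MCP documentation before relying on this:

```json
{
  "mcpServers": {
    "puppeteer": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-puppeteer"]
    }
  }
}
```

Each entry just tells the tool how to launch a server process; once running, the server's capabilities (browser automation, in this case) become available to the AI as callable tools.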

The Security Reality Check You Need to Read

This is not optional reading. This matters.

AI doesn't understand your security context. Only you do. Here's what CodeRabbit and the course both flagged as things AI commonly gets wrong — and that you must check every single time:

  • Hard-coded passwords or API keys in source code (there are countless stories of devs "vibe coding" secrets directly onto GitHub)

  • SQL string concatenation instead of parameterized queries (classic SQL injection vulnerability)

  • eval() with user input — never

  • Sensitive data exposed in error messages

  • Unauthenticated admin endpoints

  • Client-supplied totals (cart totals, discount amounts — always compute server-side)

  • Disabled security features

  • Secrets not stored in environment variables

The course showed a concrete example of AI generating a SQL query using string concatenation instead of parameterized queries. Classic vulnerability. CodeRabbit caught it automatically. But if you don't have CodeRabbit, you need to catch it yourself.
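In TypeScript terms, the difference between the two patterns looks like this. The function names are mine; the { text, values } shape matches what node-postgres-style drivers accept:

```typescript
// Vulnerable pattern AI tools sometimes generate: user input spliced
// directly into the SQL text — classic SQL injection.
function unsafeUserQuery(userId: string): string {
  return `SELECT * FROM users WHERE id = '${userId}'`;
}

// Safe pattern: parameterized query. The SQL text is fixed; user input
// travels separately in the values array, so the driver treats it as
// data, never as SQL.
function safeUserQuery(userId: string): { text: string; values: string[] } {
  return { text: "SELECT * FROM users WHERE id = $1", values: [userId] };
}

const malicious = "1'; DROP TABLE users; --";
console.log(unsafeUserQuery(malicious)); // injected SQL ends up in the query text
console.log(safeUserQuery(malicious));   // SQL text unchanged; input stays data
```

The unsafe version is especially insidious because it works perfectly in every demo — it only fails when someone hostile shows up.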

AI code review tools are especially valuable when you're using AI to write code in the first place — precisely because the thing generating the vulnerabilities isn't the thing catching them.

Better Prompts = Better Code (The Formula)

One of the most practical sections of the course was about prompt engineering. The principle is simple: vague prompts get vague code. Specific prompts get specific, useful code.

When you ask an AI to write something, include:

  • Input parameters and their types

  • Expected output format

  • Error handling requirements

  • Style preferences

  • Edge cases you care about

Instead of: "write a function to get users"

Write: "write an async TypeScript function that fetches a user by ID from a PostgreSQL database, returns a typed User object or null if not found, handles connection errors by throwing a DatabaseError, and uses parameterized queries"

The second prompt gets you production-ready code. The first gets you a guess.
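For illustration, here's one plausible shape of what that second prompt produces — a sketch, not canonical AI output. The query runner is injected so the example runs without a real PostgreSQL connection; with node-postgres you'd pass the pool's query method instead:

```typescript
interface User {
  id: number;
  name: string;
  email: string;
}

class DatabaseError extends Error {}

type QueryFn = (text: string, values: unknown[]) => Promise<{ rows: User[] }>;

// Fetches a user by ID; returns null if not found; wraps connection
// failures in DatabaseError; uses a parameterized query throughout.
async function getUserById(query: QueryFn, id: number): Promise<User | null> {
  try {
    const result = await query(
      "SELECT id, name, email FROM users WHERE id = $1",
      [id] // id travels as a value, never interpolated into the SQL text
    );
    return result.rows[0] ?? null;
  } catch (err) {
    throw new DatabaseError(`failed to fetch user ${id}: ${(err as Error).message}`);
  }
}

// Usage with a stubbed query function standing in for a real pool:
const fakeQuery: QueryFn = async (_text, values) =>
  values[0] === 1
    ? { rows: [{ id: 1, name: "Ada", email: "ada@example.com" }] }
    : { rows: [] };

getUserById(fakeQuery, 1).then(u => console.log(u?.name)); // "Ada"
```

Notice that every requirement in the prompt maps to a visible line of code — that's what makes the output reviewable rather than a black box.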

My Honest Assessment After Going Through All of This

Here's what I'll keep using:

GitHub Copilot for daily coding — the inline completions are genuinely magic once you have the neighboring tabs trick wired into your workflow. The /init instruction file has saved me from repeating project conventions in every prompt.

CodeRabbit for every pull request — I'm not a team of one forever, and even solo, the security catches alone are worth it. The plan feature is becoming part of how I approach new features.

Claude Code when I need to think big — architecture analysis, large refactors, complex reasoning tasks. The ultrathink mode for hard problems is something I didn't know I needed.

Gemini CLI when the context window matters — there are tasks where you need the AI to hold your entire codebase in mind at once. Gemini's free tier and 1M token window make it the right tool for those moments.

OpenClaw — I'm in the process of setting this up. The idea of a background AI that can run automated tasks while I'm away from my desk is too compelling to pass on.

The Mindset Shift

After going through all of this, the thing I keep coming back to isn't a specific tool or feature. It's a mental model:

AI speeds up implementation. You make the decisions.

The architectural choices, the security design, the business logic — these are yours. The AI is a fast, knowledgeable colleague who can write code quickly but needs you to review it, guide it, and catch what it misses.

The developers who will thrive aren't the ones who hand everything to the AI. They're the ones who understand the tools well enough to know when to use them, when not to, and how to catch the mistakes before they ship.

That's the course. That's what I learned. And if you're a developer who hasn't seriously explored this space yet — now's a good time.

Have you tried any of these tools? Which ones are in your workflow? Drop a comment — I'm genuinely curious what's working for other developers.

Tools mentioned in this post:

  • GitHub Copilot

  • CodeRabbit

  • Claude Code by Anthropic

  • Gemini CLI by Google

  • OpenClaw
