Migrating From Laravel Forge to Deploynix: A Step-by-Step Guide
Switching server management platforms is one of those tasks that sounds risky but, with proper planning, is remarkably straightforward. If you are currently using Laravel Forge and considering a move to Deploynix, this guide will walk you through the entire migration process. We will cover how to document your existing configuration, provision equivalent infrastructure on Deploynix, migrate your data, and cut over DNS with minimal downtime.
The key principle throughout this migration is simple: run both platforms in parallel until you are confident the new setup works perfectly, then switch.
Why Migrate?
Before diving into the how, let us briefly address the why. Developers move between server management platforms for various reasons: pricing, features, support, or simply wanting a fresh approach. While Forge has added features like zero-downtime deployments and server monitoring in its recent overhaul, Deploynix offers a different set of strengths: built-in load balancing with multiple balancing methods (Round Robin, Least Connections, IP Hash), seven specialized server types for granular architecture control, scheduled deployments via API, Valkey as a truly open-source Redis alternative, support for modern frontend frameworks (Next.js, Nuxt, SvelteKit) alongside Laravel, and a free tier with vanity domains on deploynix.cloud to get started without a credit card.
But ultimately, the decision to migrate is yours. This guide assumes you have already made that decision and need a clear path from A to B.
Phase 1: Document Your Forge Configuration
The first step is to create a complete inventory of everything running on your Forge-managed servers. This is your migration blueprint.
Server Inventory
For each server on Forge, document:
- Provider and region (e.g., DigitalOcean NYC3, Hetzner Falkenstein)
- Server size (RAM, CPU, disk)
- PHP version (8.1, 8.2, 8.3, or 8.4)
- Database type and version (MySQL 8, PostgreSQL 15, etc.)
- Operating system (Ubuntu 22.04 or 24.04)
- Any custom packages installed via recipes or SSH
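As a starting point, here is a sketch of commands you might run (over SSH) on each Forge server to fill in this inventory. Adjust for your stack, e.g. psql instead of mysql:

```shell
# Gather the basics for the server inventory (run on the Forge server).
uname -a                                  # kernel and OS details
nproc                                     # CPU count
free -h | awk '/^Mem:/ {print $2}'        # total RAM
df -h / | awk 'NR==2 {print $2, $4}'      # disk size and free space
# These are guarded in case a binary is missing on the machine you test on:
command -v php   >/dev/null && php -v   | head -n 1 || true
command -v mysql >/dev/null && mysql --version || true
```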
Site Configuration
For each site, document:
- Domain name(s) and any aliases
- Repository URL and branch
- Web directory (usually /public)
- Environment variables: export or copy your entire .env file
- SSL certificate type (Let's Encrypt, custom)
- Nginx configuration: copy any custom directives
- Deployment script: if you have customized the default deploy script, copy it
Background Services
Document all:
- Queue workers: command, number of processes, connection, and queue names
- Daemons: any custom long-running processes
- Cron jobs / scheduled tasks: what is in your scheduler and which cron entries Forge manages
- Firewall rules: any custom rules beyond the defaults
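To capture the live state rather than reconstructing it from memory, a sketch like the following can be run on the Forge server ("forge" is Forge's default system user; the commands fall back gracefully if something is absent):

```shell
# List the cron entries Forge manages for the forge user.
crontab -l -u forge 2>/dev/null || crontab -l 2>/dev/null || echo "(no crontab found)"
# List queue workers and daemons under Supervisor, if present.
command -v supervisorctl >/dev/null && sudo -n supervisorctl status || echo "(supervisor not available)"
```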
Database Details
For each database:
- Database name
- Database users and their permissions
- Database size (so you can plan server sizing appropriately)
- Whether you use database backups and where they are stored
DNS Configuration
Document your current DNS records:
- A records pointing to your Forge server IPs
- CNAME records for any subdomains
- MX records (these typically will not change, but document them)
- TXT records (SPF, DKIM, DMARC)
Take your time with this inventory. Missing a cron job or environment variable can cause subtle issues that are hard to debug after migration.
Phase 2: Set Up Deploynix
Create Your Account and Organization
Sign up at deploynix.io and create your organization. If you have team members, you can invite them with appropriate roles: Owner, Admin, Manager, Developer, or Viewer. Each role has different permissions, so assign them according to your team's needs.
Connect Your Cloud Provider
Navigate to your organization's settings and connect the same cloud provider you are using with Forge. Deploynix supports DigitalOcean, Vultr, Hetzner, Linode, AWS, and custom servers.
Using the same cloud provider means your new servers will be in the same network, which makes database migration faster and simplifies the transition.
Connect Your Git Provider
Connect GitHub, GitLab, Bitbucket, or your custom Git provider. Deploynix needs access to the same repositories that Forge is currently deploying from.
Phase 3: Provision Servers on Deploynix
Now, provision servers on Deploynix that mirror your Forge infrastructure. For each Forge server, create an equivalent Deploynix server.
Choose the same region as your existing server. This keeps latency consistent for your users and allows for easy data migration between old and new servers.
Match the server size. If your Forge server is a 2GB/2vCPU instance, provision the same on Deploynix. You can always resize later.
Select the right server type. Deploynix offers specialized server types: App, Web, Database, Cache, Worker, Meilisearch, and Load Balancer. For a standard Forge server that runs everything, an App Server is the equivalent.
Match the database. If your Forge server runs MySQL 8, select MySQL during Deploynix server provisioning.
Match the PHP version. Deploynix supports PHP 8.0 through 8.4, so you can match your current Forge PHP version exactly. If your Forge server runs an older version, this is also a good opportunity to upgrade, but only if your application supports it.
Wait for provisioning to complete. Deploynix will configure Nginx, PHP-FPM, your database, Valkey, and Supervisor automatically.
Phase 4: Create Sites and Configure
Create Sites
For each site on your Forge server, create a corresponding site on your Deploynix server. Enter the domain name, select the repository, choose the branch, and set the web directory.
Important: Do not change your DNS yet. The new sites will not be receiving traffic at this point. You are building the new environment in parallel.
Configure Environment Variables
Copy your .env from Forge and paste it into the environment editor on Deploynix. Update the following values:
- DB_HOST: should be 127.0.0.1 or localhost for an App Server
- DB_DATABASE, DB_USERNAME, DB_PASSWORD: use the credentials Deploynix generated, or create a database matching your existing names
- CACHE_STORE: set to redis to use Valkey
- SESSION_DRIVER: set to redis
- QUEUE_CONNECTION: set to redis if you use queues
If you have external service credentials (mail, payment, etc.), those stay the same.
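Putting that together, the handful of values that typically change might look like this. Every value below is a placeholder; use the actual credentials from your Deploynix dashboard:

```shell
# .env values that change on the new App Server (placeholders throughout).
DB_HOST=127.0.0.1
DB_DATABASE=your_database
DB_USERNAME=your_db_user
DB_PASSWORD=generated_by_deploynix
CACHE_STORE=redis
SESSION_DRIVER=redis
QUEUE_CONNECTION=redis
REDIS_HOST=127.0.0.1
```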
Configure Custom Nginx Rules
If you had custom Nginx configuration on Forge, apply the same rules in Deploynix. Navigate to the Nginx configuration for your site and add your custom directives. Deploynix validates the configuration before applying, so you will catch syntax errors immediately.
Set Up Queue Workers
Recreate your queue workers as Daemons in Deploynix. For each queue worker on Forge, create a corresponding daemon with the same command, number of processes, and configuration.
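As an illustration, a daemon mirroring a typical Forge queue worker might be defined like this. The queue names, process count, and directory are assumptions; copy the exact values from your Phase 1 inventory:

```
Command:    php artisan queue:work redis --queue=default --sleep=3 --tries=3 --max-time=3600
Processes:  2
Directory:  /home/deploynix/yourdomain.com
```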
Set Up Cron Jobs
Add your scheduled tasks through the Deploynix Cron Jobs interface. At minimum, ensure the Laravel scheduler cron entry is configured: php artisan schedule:run running every minute.
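The scheduler entry itself is a single crontab line along these lines (the site path is an assumption; use your site's actual directory):

```
* * * * * cd /home/deploynix/yourdomain.com && php artisan schedule:run >> /dev/null 2>&1
```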
Set Up Firewall Rules
Review your Forge firewall rules and recreate any custom rules in Deploynix. The default rules (SSH, HTTP, HTTPS) are created automatically.
Phase 5: Migrate Your Database
This is the most critical step. You need to move your data from the old server to the new one.
Export from Forge Server
SSH into your Forge server and create a database dump:
```shell
mysqldump -u forge -p --single-transaction --routines --triggers your_database > /tmp/database_backup.sql
```
For PostgreSQL:

```shell
pg_dump -U forge -h localhost your_database > /tmp/database_backup.sql
```
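Before transferring the file, it is worth a quick sanity check that the export actually completed: mysqldump ends a successful dump with a "-- Dump completed" comment, and pg_dump's plain format ends with a "database dump complete" comment. A minimal sketch:

```shell
# Verify the dump exists and looks complete (path from the export step above).
DUMP=/tmp/database_backup.sql
if [ -f "$DUMP" ]; then
  ls -lh "$DUMP"     # expect a non-trivial file size
  tail -n 1 "$DUMP"  # expect the completion marker as the last line
else
  echo "no dump found at $DUMP"
fi
```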
Transfer to Deploynix Server
Copy the dump file to your new Deploynix server:
```shell
scp /tmp/database_backup.sql deploynix@your-new-server-ip:/tmp/
```
Import on Deploynix Server
SSH into your Deploynix server (or use the web terminal) and import:
```shell
mysql -u your_db_user -p your_database < /tmp/database_backup.sql
```
For PostgreSQL:

```shell
psql -U your_db_user -h localhost your_database < /tmp/database_backup.sql
```
Verify the Import
Run a few quick checks: count records in key tables, verify that recent data is present, and test a few queries. Make sure the import is complete before proceeding.
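One cheap cross-check is to compare the number of tables defined in the dump file with what actually landed in the new database. The grep half runs anywhere; the mysql query is shown for reference and assumes the credentials from the import step:

```shell
# Count table definitions in the dump (path from the export step).
DUMP=/tmp/database_backup.sql
if [ -f "$DUMP" ]; then
  echo "Tables in dump: $(grep -c '^CREATE TABLE' "$DUMP")"
else
  echo "no dump found at $DUMP"
fi
# Then compare against the imported database, run manually on the new server:
#   mysql -u your_db_user -p -e "SELECT COUNT(*) FROM information_schema.tables
#     WHERE table_schema = 'your_database';"
```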
Phase 6: Deploy and Test
Initial Deployment
Trigger a deployment for each site on Deploynix. Watch the deployment log to ensure everything completes successfully: code cloned, dependencies installed, migrations run (they should report nothing to migrate since you imported an existing database), assets built.
Test Without DNS
You can test your new sites without changing DNS by modifying your local /etc/hosts file:
```
your-new-server-ip yourdomain.com
```
This makes your browser connect to the new server when visiting your domain, while everyone else still hits the old server. Browse through your application, test critical flows (login, signup, payments, API endpoints), and verify that everything works correctly.
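If you prefer not to edit /etc/hosts, curl's --resolve flag pins a hostname to an IP for a single request, which is handy for spot-checking headers and redirects. The IP below is a documentation placeholder, so this exact command will not actually connect:

```shell
# Probe the new server for yourdomain.com without touching DNS or /etc/hosts.
curl -sI --max-time 5 --resolve yourdomain.com:443:203.0.113.10 https://yourdomain.com/ \
  || echo "placeholder IP not reachable (expected outside a real migration)"
```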
Verify Background Services
Check that your queue workers are processing jobs, scheduled tasks are running, and any daemons are functioning correctly. The Deploynix dashboard shows the status of all managed processes.
Phase 7: DNS Cutover
Once you are confident the new setup works perfectly, it is time to switch DNS.
Reduce TTL (Optional but Recommended)
If your DNS records have a high TTL (Time to Live), consider reducing it to 300 seconds (5 minutes) a few hours before the cutover. This ensures that after you update DNS records, the change propagates quickly.
Update DNS Records
Update your A records to point to the IP address of your Deploynix server. If you use CNAME records for subdomains, update those as well.
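In zone-file terms, the cutover is just a change of target (the IPs and the reduced 300-second TTL below are illustrative):

```
; Before: yourdomain.com.     300  IN  A      198.51.100.7   (old Forge server)
; After:
yourdomain.com.       300  IN  A      203.0.113.10   ; new Deploynix server
www.yourdomain.com.   300  IN  CNAME  yourdomain.com.
```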
Provision SSL Certificates
Once DNS points to your new server, provision SSL certificates through Deploynix. If you are using Let's Encrypt, Deploynix handles the entire ACME challenge process. If you are using a DNS provider supported by Deploynix (Cloudflare, DigitalOcean, AWS Route 53, Vultr), you can provision wildcard certificates using DNS validation.
Monitor
After the DNS switch, monitor your application closely for the first few hours. Check error logs, response times, and user reports. The Deploynix dashboard provides real-time monitoring and health alerts to help you catch issues quickly.
Phase 8: Cleanup
Keep Forge Running Temporarily
Do not cancel your Forge subscription or destroy your old servers immediately. Keep them running for at least a week after the migration. This gives you a safety net: if something goes wrong, you can revert DNS back to the old servers instantly.
Final Database Sync (If Needed)
If your application continued receiving writes on the old server during DNS propagation, you may need to do a final data sync. This depends on your application's tolerance for brief data gaps and whether you can put the old site in maintenance mode during the cutover.
Verify Backups
Configure automated database backups on Deploynix before decommissioning your old servers. Deploynix supports backups to AWS S3, DigitalOcean Spaces, Wasabi, and any S3-compatible storage. Verify that at least one backup completes successfully.
Decommission Old Servers
Once you are satisfied that everything is working on Deploynix and you have verified backups, you can safely destroy your Forge servers and cancel your Forge subscription.
Common Gotchas
SSH keys: If your application needs to pull from private repositories or connect to other servers via SSH, make sure the necessary SSH keys are configured on your Deploynix server.
File storage: If your application stores user uploads on the local filesystem (not S3), you need to copy the storage directory from the old server to the new one. Consider migrating to S3-compatible storage during this transition.
Environment-specific settings: Double-check any settings that reference the old server's IP address, such as trusted proxies, CORS origins, or webhook URLs.
Email configuration: If your old server ran a local mail server, you will need to configure an external mail service on the new setup or ensure the same mail configuration works.
Cron timing: After migration, verify that your scheduled tasks are running at the expected times. Check the timezone configuration in both your server and your Laravel application.
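A quick way to check the server side of this (Laravel's default application timezone is UTC, so a mismatch usually means the server was provisioned in a local zone):

```shell
# Print the server's current timezone abbreviation and, where available,
# the configured system time zone.
date +%Z
timedatectl 2>/dev/null | grep -i 'time zone' || true
```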
Conclusion
Migrating from Forge to Deploynix is a methodical process, not a risky one. By running both platforms in parallel, testing thoroughly before DNS cutover, and keeping the old infrastructure as a fallback, you eliminate virtually all risk.
The entire migration can typically be completed in an afternoon for simple setups, or over a weekend for more complex multi-server environments. Once complete, you gain access to Deploynix's full feature set: real-time monitoring, health alerts, web terminal, scheduled deployments, load balancing, and more.
Start your migration today at deploynix.io. Your first server is free.
Originally published on DEV Community: https://dev.to/deploynix/migrating-from-laravel-forge-to-deploynix-a-step-by-step-guide-230n
