Rails Deployment — From Server Prep to Production Traffic
Rails deployment is not a single problem. It is a collection of tightly coupled decisions about server provisioning, process management, reverse proxying, database configuration, secrets handling and monitoring, each of which can quietly break the others if you get the ordering wrong. This topic page maps out the full deployment surface for Rails applications, covering VPS, container and PaaS approaches, with emphasis on the failure modes that actually consume engineering time. The sub-topics below connect to specific guides and learning paths where each area gets step-by-step treatment. If you are deploying a Rails app for the first time, the Deploy Rails on Your Own Server learning path is the recommended starting point.
Server provisioning and initial setup
Every Rails deployment starts with a machine. Whether that is a $6/month VPS, an EC2 instance or a container running behind a load balancer, the first few minutes of setup determine how much friction you encounter later.
The critical early decisions are: which Linux distribution, which user account model, which firewall rules, and whether you are going to use a configuration management tool or do it by hand. For teams of one or two developers shipping a single application, manual setup with good notes is often faster and more debuggable than learning Ansible at the same time as learning deployment. For anything larger, some form of repeatable provisioning becomes essential.
Ubuntu LTS remains the most common choice for Rails servers, mostly because it has the widest documentation coverage for the Ruby ecosystem. Debian is equally capable. CentOS and its successors work fine but have a thinner Rails-specific knowledge base.
What catches people first: SSH key management, firewall configuration that locks them out, and user permission issues when installing Ruby. These are all solvable in the first 20 minutes if you have done it before, and capable of absorbing a full afternoon if you have not.
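The firewall lockout in particular is avoidable with careful ordering. A minimal first-session sketch for Ubuntu follows; the deploy user name and the decision to reuse your root key are assumptions, not requirements:

```shell
# Create a non-root deploy user and reuse the SSH key you logged in with.
adduser --disabled-password deploy
usermod -aG sudo deploy
mkdir -p /home/deploy/.ssh
cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh

# Allow SSH BEFORE enabling the firewall, or "ufw enable" can
# cut off the very session you are configuring it from.
ufw allow OpenSSH
ufw allow 80/tcp
ufw allow 443/tcp
ufw enable
```

Verify you can open a second SSH session as the deploy user before closing the first one.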
Ruby version management
Getting the right Ruby version on your server is simple in concept and surprisingly error-prone in practice. The two dominant approaches are rbenv with ruby-build, and asdf with the Ruby plugin. Both work. The choice matters less than consistency.
What does matter: make sure your deploy user, your application process and your background job runner all resolve to the same Ruby binary. Mismatched Ruby paths are one of the most common "works on my machine" problems in Rails deployment. The which ruby command should return the same path whether you run it interactively, in a deploy script, or from a systemd service unit.
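One way to surface a path mismatch early is to check all three contexts explicitly. This sketch assumes rbenv and a systemd unit named myapp (both assumptions):

```shell
# Interactive shells load ~/.bashrc, so rbenv shims resolve:
which ruby                          # e.g. /home/deploy/.rbenv/shims/ruby

# Non-interactive SSH (what most deploy scripts actually use)
# often skips that shell init, and resolves a different Ruby:
ssh deploy@server 'which ruby'

# systemd units get a minimal PATH; either set PATH in the unit or
# reference the shim by absolute path in ExecStart, for example:
#   ExecStart=/home/deploy/.rbenv/shims/bundle exec puma -C config/puma.rb
systemctl cat myapp | grep ExecStart
```

If the three answers differ, fix the deploy script and unit file before debugging anything else.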
Ruby versioning interacts with your Bundler version, your Gemfile.lock expectations, and your native extension compilation requirements. If you upgrade Ruby on the server without re-running bundle install, native gems compiled against the old Ruby will segfault or fail silently.
Application server: Puma configuration
Puma is the default Rails application server and the right choice for most deployments. The configuration surface is small but the defaults are not always appropriate for production.
Key decisions: how many workers (processes), how many threads per worker, and how you handle preloading. The worker count should roughly match your available CPU cores. The thread count depends on your application's I/O profile. A heavily database-dependent application that spends most request time waiting on PostgreSQL can safely use more threads. A CPU-bound application doing image processing or heavy computation should use fewer threads and more workers.
The preload_app! directive loads your application code once and then forks workers, which reduces memory through copy-on-write sharing. It also changes how you need to handle database connection management and background job connections, because connections opened before the fork are not safe to use after it.
A common failure mode: Puma starts, serves a few requests, then hangs. The cause is usually a database connection pool mismatch. If your thread count exceeds your pool setting in database.yml, threads will queue waiting for connections during traffic spikes.
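A minimal config/puma.rb reflecting these decisions might look like the following; the worker and thread counts are illustrative defaults, not recommendations:

```ruby
# config/puma.rb -- illustrative values; tune to your core count and I/O profile.
workers ENV.fetch("WEB_CONCURRENCY") { 4 }           # roughly one per CPU core
threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }.to_i
threads threads_count, threads_count

preload_app!                                         # load once, then fork workers

on_worker_boot do
  # Connections opened before the fork are not safe afterwards;
  # each worker must establish its own.
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord)
end

# Keep the pool setting in database.yml >= RAILS_MAX_THREADS, or threads
# will queue waiting for connections during traffic spikes.
```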
Reverse proxy: Nginx
Nginx sits in front of Puma and handles SSL termination, static asset serving, request buffering and connection management. It is not optional for production Rails, even though Puma can technically serve HTTP directly.
The Nginx configuration that most Rails developers need is a single upstream block pointing at Puma's socket, a server block with SSL configuration, and a location block for static files. The detailed walkthrough is in the Nginx for Rails Apps guide.
What trips people up: getting the upstream socket path wrong, forgetting to set client_max_body_size for file uploads, and misconfiguring SSL certificate paths. Each of these produces a different cryptic error message that does not obviously point at Nginx configuration as the cause.
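A condensed sketch of that configuration follows; the socket path, domain and certificate paths are placeholders you must adapt:

```nginx
# Illustrative sketch -- socket path, domain and cert paths are assumptions.
upstream puma {
  server unix:/home/deploy/app/shared/tmp/sockets/puma.sock fail_timeout=0;
}

server {
  listen 443 ssl;
  server_name example.com;

  ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

  root /home/deploy/app/current/public;
  client_max_body_size 20m;        # default is only 1m; raise for uploads

  # Serve precompiled assets directly, bypassing Puma entirely.
  location ^~ /assets/ {
    expires max;
    add_header Cache-Control public;
  }

  location / {
    try_files $uri @puma;
  }

  location @puma {
    proxy_pass http://puma;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
  }
}
```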
Database: PostgreSQL in production
PostgreSQL is the standard production database for Rails. The setup itself is straightforward, but the ongoing care—connection pooling, indexing, backup strategy and vacuum tuning—is where real deployment maturity shows.
Connection pooling is the single most impactful PostgreSQL configuration decision for Rails applications. Without a connection pooler like PgBouncer, each Puma thread holds its own persistent connection to PostgreSQL. At 4 workers × 5 threads, that is 20 connections per application server. Add background jobs and you can easily exceed PostgreSQL's default max_connections of 100.
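The arithmetic is worth doing explicitly before choosing pool sizes. A back-of-envelope calculation in Ruby; the Sidekiq concurrency of 10 is an assumed example, not a recommendation:

```ruby
# Each Puma thread and each Sidekiq thread can hold one PostgreSQL connection.
def total_connections(web_servers:, puma_workers:, puma_threads:,
                      sidekiq_processes:, sidekiq_concurrency:)
  web  = web_servers * puma_workers * puma_threads
  jobs = sidekiq_processes * sidekiq_concurrency
  web + jobs
end

# One app server (4 workers x 5 threads) plus one Sidekiq process
# at concurrency 10 -- illustrative numbers:
puts total_connections(web_servers: 1, puma_workers: 4, puma_threads: 5,
                       sidekiq_processes: 1, sidekiq_concurrency: 10)
# => 30, already a third of PostgreSQL's default max_connections of 100
```

Run this with your real numbers before a second app server or a second Sidekiq process pushes you past the limit.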
Indexing strategy deserves its own focused treatment. See the PostgreSQL Indexing for Rails guide for detailed coverage of which indexes matter and how to read EXPLAIN output.
Backup strategy should be decided before the first deploy, not after the first data loss scare. The minimum viable approach is daily pg_dump with compressed output stored off-server. For larger databases, WAL archiving with pg_basebackup provides point-in-time recovery.
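A sketch of the minimum viable version as a cron entry; the database name, paths and S3 bucket are placeholders, and the off-server copy step assumes the AWS CLI is installed:

```shell
# /etc/cron.d/pg-backup -- daily compressed dump, then shipped off-server.
# Database name, paths and bucket are assumptions; adapt before use.
# (% is special in crontab and must be escaped as \%.)
30 3 * * * postgres pg_dump myapp_production | gzip > /var/backups/myapp-$(date +\%F).sql.gz
45 3 * * * postgres aws s3 cp /var/backups/myapp-$(date +\%F).sql.gz s3://myapp-backups/
```

Test the restore path, not just the backup path: a dump you have never restored is not a backup.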
Background jobs
Almost every production Rails application needs background job processing. Emails, webhook deliveries, report generation, image processing and periodic cleanup tasks all belong in a background queue rather than the request cycle.
Sidekiq is the dominant choice, and for good reason. It is fast, well-documented and actively maintained. The Sidekiq Background Jobs Patterns guide covers queue design, retry behaviour and idempotency patterns in detail.
The deployment-specific concern is ensuring Sidekiq starts reliably, restarts on failure, and shuts down cleanly during deploys. A systemd unit file for Sidekiq should use Type=notify or Type=simple, set appropriate memory limits, and configure graceful shutdown signals so in-flight jobs complete before the process exits.
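A hedged sketch of such a unit file; the user, paths and memory ceiling are assumptions to adapt:

```ini
# /etc/systemd/system/sidekiq.service -- sketch; user and paths are assumptions.
[Unit]
Description=Sidekiq background job processor
After=network.target redis.service

[Service]
Type=notify                      # Sidekiq 6+ integrates with systemd's sd_notify
User=deploy
WorkingDirectory=/home/deploy/app/current
ExecStart=/home/deploy/.rbenv/shims/bundle exec sidekiq -e production
Environment=MALLOC_ARENA_MAX=2   # tames glibc memory fragmentation
MemoryMax=1G                     # hard ceiling; pair with Restart= below
Restart=on-failure
RestartSec=5

# SIGTERM tells Sidekiq to stop fetching and finish in-flight jobs;
# give it time to drain before systemd escalates to SIGKILL.
KillSignal=SIGTERM
TimeoutStopSec=30

[Install]
WantedBy=multi-user.target
```

After editing, run systemctl daemon-reload and systemctl enable --now sidekiq, then confirm it survives a reboot.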
A common deployment mistake: starting Sidekiq manually in a screen/tmux session instead of using a process manager. This works until the server reboots or the SSH session drops, at which point background jobs silently stop processing and nobody notices for hours.
If Sidekiq is not managed by systemd (or an equivalent supervisor), it is not deployed; it is merely running for now.
Secrets and environment management
You have two main options: Rails credentials (encrypted secrets committed alongside the code) or environment variables delivered through dotenv files, systemd, or a secrets manager. The approach matters less than the discipline with which you apply it.
Hard rules: no secrets in Git, no secrets in Docker images pushed to registries, no secrets in shell history. The RAILS_MASTER_KEY must be on the server but not in the repository. Environment variables should be set through a mechanism that survives reboots—typically a systemd service unit's EnvironmentFile directive pointing at a protected file in /etc.
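A sketch of that wiring, assuming a unit named myapp and an env file at /etc/myapp/env (both names are assumptions):

```ini
# /etc/systemd/system/myapp.service (excerpt)
[Service]
# Root-owned, chmod 600 file; survives reboots and never enters Git.
# It contains plain KEY=value lines such as RAILS_MASTER_KEY and DATABASE_URL.
EnvironmentFile=/etc/myapp/env
```

Confirm the permissions with ls -l /etc/myapp/env before starting the service; a world-readable secrets file defeats the whole arrangement.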
SSL and domain configuration
Let's Encrypt with Certbot is the standard approach for SSL on Rails servers. The setup is a one-time cost of about 15 minutes, and auto-renewal handles ongoing maintenance.
The most common SSL-related deployment failure is a certificate that expires because the renewal cron job stopped working. Set a monitoring check for certificate expiry. It costs nothing and saves the midnight scramble when browsers start blocking your site.
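Two quick checks catch this before browsers do; the domain is a placeholder:

```shell
# Read the expiry date straight off the live certificate:
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -enddate

# Confirm the renewal machinery still works:
systemctl list-timers | grep certbot
certbot renew --dry-run
```

If the dry run fails, fix it now; the certificate that is currently valid is hiding the breakage.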
Monitoring and failure detection
A deployment without monitoring is a deployment that fails silently. At minimum, you need:
- Process monitoring: is Puma running? Is Sidekiq running? Is Nginx running?
- Resource monitoring: CPU, memory, disk space and swap usage
- Application monitoring: error rates, response times and background job queue depth
- External uptime checking: an outside service that hits your health endpoint every few minutes
For small deployments, a combination of systemd watchdog, basic Prometheus node_exporter and a free uptime service covers the essentials.
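For the health endpoint, Rails 7.1+ ships a basic liveness check at /up; a custom endpoint can additionally verify the database and Redis. A sketch, assuming a route pointing at it (the /healthz path and controller name are assumptions):

```ruby
# app/controllers/health_controller.rb -- sketch; pair with
# `get "/healthz", to: "health#show"` in config/routes.rb.
class HealthController < ActionController::Base
  def show
    ActiveRecord::Base.connection.execute("SELECT 1")  # database reachable?
    Sidekiq.redis { |conn| conn.ping }                 # Redis reachable?
    head :ok
  rescue StandardError
    head :service_unavailable
  end
end
```

Point the external uptime checker at this endpoint rather than the home page, so a dead database registers as downtime instead of a cached 200.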
Common mistakes and anti-patterns
After years of watching Rails deployments fail, these are the patterns that come up most frequently:
- Deploying without a rollback plan. If the new release breaks something, how do you get back to the previous version in under 60 seconds? If you cannot answer that, you are not ready to deploy.
- Skipping database migration dry-runs. Running migrations directly on production without first testing them against a copy of the production schema is how you get table-locking migrations that take down the site mid-deploy.
- Ignoring disk space. Log files, temporary uploads and old releases accumulate. Disk full errors produce bizarre symptoms that do not point at disk space as the cause.
- No connection pooling. Running without PgBouncer and hitting PostgreSQL's connection limit during traffic spikes.
- Manual server configuration without notes. Making changes by hand, not documenting them, and then being unable to recreate the server when it fails.
Sub-topic map
| Sub-topic | Coverage | Related guide |
|---|---|---|
| Server provisioning | VPS setup, users, firewall, SSH | Deploy on a VPS |
| Ruby versioning | rbenv, asdf, path consistency | Deploy on a VPS |
| Puma configuration | Workers, threads, preloading | Deploy on a VPS |
| Nginx reverse proxy | SSL, static files, buffering | Nginx for Rails Apps |
| PostgreSQL | Connection pooling, backups, indexes | PostgreSQL Indexing |
| Background jobs | Sidekiq, queues, retry, idempotency | Sidekiq Patterns |
| Secrets management | Credentials, env vars, key storage | Deploy on a VPS |
| Performance | Response times, profiling, caching | Web Performance |
Frequently asked questions
Should I use a PaaS instead of a VPS?
If deployment mechanics are not interesting to you and your budget supports it, a PaaS like Render or Fly.io removes most of the infrastructure work. The trade-offs are cost at scale, less control over the runtime environment, and platform-specific debugging when things go wrong. For learning purposes, deploying on a VPS at least once is extremely valuable.
How many servers do I need?
For most early-stage applications: one. A single server running Nginx, Puma, Sidekiq and PostgreSQL handles more traffic than most new applications receive. Separate your database to its own server when PostgreSQL needs more memory or CPU than your application server can spare.
What about Docker in production?
Docker adds a layer of abstraction that can simplify multi-service deployments and CI/CD pipelines. It also adds a layer of complexity to debugging, logging and storage management. If your team already uses Docker effectively, deploying Rails in containers is fine. If you are learning deployment for the first time, skip Docker initially and add it later when you understand what it abstracts away.
When should I worry about zero-downtime deploys?
When your deployment process causes visible errors or timeouts for users. For many applications, a Puma phased restart (triggered by the USR1 signal) achieves near-zero downtime without additional tooling, though note that phased restarts are incompatible with preload_app!. For applications that need true zero downtime, including during database migrations, you need a more sophisticated approach involving rolling deploys and backward-compatible migrations.
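A sketch of the phased-restart mechanics; the pidfile path is an assumption:

```shell
# Phased restart: Puma recycles workers one at a time, so the listener
# keeps serving throughout. Requires cluster mode WITHOUT preload_app!.
kill -USR1 $(cat /home/deploy/app/shared/tmp/pids/puma.pid)

# Equivalent via pumactl, given a control/pidfile configuration:
#   bundle exec pumactl -P /home/deploy/app/shared/tmp/pids/puma.pid phased-restart

# With preload_app! enabled, fall back to a full restart (brief gap):
#   kill -USR2 <pid>
```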
Follow the step-by-step deployment path from blank VPS to production traffic.