Docker + Rails: Solutions to Common Hurdles
Eli Fatsi, Former Development Director
Tips n Tricks for working with Rails applications in Docker
This is the follow-on to an earlier post: Docker: Right for Us. Right for You?
Docker has made a number of large problems go away, and replaced them with (usually) (hopefully) a smaller set of its own problems. Sometimes it's not worth it to make that sacrifice, but it often is for us, and here are the tricks we've picked up for working with Rails sites and Docker Compose:
Rails Tips & Tricks
Trying to docker-ify a Rails app and running into this?
could not connect to server: No such file or directory Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Make sure you have a `db` service defined in `docker-compose.yml`, and add `host: db` to your `database.yml` file. All of the services you define in the compose file become network-addressable names to each other, so adding `host: db` tells Rails that the hostname of the database server is `db`, which points to your database service.
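As a minimal sketch, a `database.yml` pointing at the compose service might look like this. The adapter, credentials, and database name here are placeholders, not from the original post:

```yaml
# config/database.yml -- a sketch; credentials and database names
# are assumptions and will differ per project
default: &default
  adapter: postgresql
  host: db        # matches the `db` service name in docker-compose.yml
  username: postgres
  password: password

development:
  <<: *default
  database: app_development
```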
Using pry
AKA the one tool I use the most by far; I gotta have it. TL;DR on the solution: https://stackoverflow.com/questions/35211638/how-to-debug-a-rails-app-in-docker-with-pry/37264588#37264588
The gist is that you need to add some Docker Compose flags to the service you want to run pry inside, and then `docker attach [running_container_name]` after you've started the service container.
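For reference, the flags in question (per the linked Stack Overflow answer) are `stdin_open` and `tty`; a sketch, assuming the service is named `app`:

```yaml
services:
  app:
    # Keep STDIN open and allocate a pseudo-TTY so that `docker attach`
    # drops you into an interactive pry session
    stdin_open: true
    tty: true
```

Then `docker attach $(docker-compose ps -q app)` connects your terminal to the running Rails process. Detach with Ctrl-P Ctrl-Q rather than Ctrl-C, or you'll stop the container.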
Viewing your site during Capybara tests
If you're testing with Capybara and want to get a glimpse of your site while a feature test is running, Docker is going to make that difficult for you. TL;DR: https://github.com/copiousfreetime/launchy/issues/108#issuecomment-319644326
And the gist? You need to configure a volume so screenshots saved in the Docker container are accessible to the host machine. You then need to rig up `guard` locally to watch the volume directory and automatically open any HTML files that land there.
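A sketch of the volume half of that setup, assuming screenshots are written to `tmp/capybara` (the path is an assumption; Capybara's save path varies by configuration):

```yaml
services:
  app:
    volumes:
      # Screenshots Capybara saves inside the container show up in
      # ./tmp/capybara on the host, where guard can watch for them
      - ./tmp/capybara:/usr/src/app/tmp/capybara
```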
caching bundler and npm builds
If you don't configure this properly, you could find yourself re-installing dependencies every time you spin up your containers. We lean on persisted volumes for this, and pick up the `tmp` directories while we're at it, like so:
```yaml
services:
  app:
    entrypoint: ./.docker-config/entrypoint.sh
    volumes:
      - ./node_modules:/usr/src/app/node_modules
      - gems:/usr/local/bundle
      - log:/usr/src/app/log
      - tmp:/usr/src/app/tmp
```
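One gotcha worth noting: named volumes like `gems`, `log`, and `tmp` above must also be declared at the top level of `docker-compose.yml`, or Compose will refuse to start:

```yaml
# Top-level declaration for the named volumes referenced by the service
volumes:
  gems:
  log:
  tmp:
```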
I'll also point out that we prefer to move the dependency installation steps into an entrypoint script, instead of within the Dockerfile build definition. This means that we're rarely rebuilding our baseline image (unless something deeper changes, like needing an additional runtime available), and our dependencies are kept up to date every time we cycle the containers. Here's an example of an entrypoint script for an app with Ruby gems and NPM packages:
```bash
#!/bin/bash
set -e

# Ensure PID is reset. This can happen if docker isn't cleanly shut down.
rm -rf /usr/src/app/tmp/pids

# Verify node_modules are up to date
yarn install --silent

# Verify gems are up to date
if ! bundle check > /dev/null; then
  echo "Gem dependencies are out of date. Installing..."
  bundle install
fi

exec "$@"
```
Speeding up slow asset requests
Out of the box, sites that run perfectly quickly outside of Docker would take a lonnnnng time to respond to local asset requests. One solution we settled on was to add `:delegated` to the volume dedicated to syncing the codebase. For example:
```yaml
app:
  image: ruby:3.0
  command: rails server -p 3000 -b '0.0.0.0'
  ports:
    - 3000:3000
  volumes:
    # Ensure changes to the codebase are picked up by the docker container
    # The `:delegated` here is a critical config for speedy development
    - .:/usr/src/app:delegated
```
- The docs have a lot to say about this if you want to learn more: http://docs.docker.oeynet.com/docker-for-mac/osxfs-caching/
- And this StackOverflow answer has a good recap in case that docs link goes down: https://stackoverflow.com/a/63437557/1655658
Shelling into containers
This one took me a while to figure out as I was learning the differences between a running Docker container, a service, an image, and a volume. Once I had a clearer picture of how Docker (and Docker Compose) work, opening up a shell into a running thing stopped being tricky. Here's what I've picked up, though; hopefully these words mean something to you:
If you have a running container based off of a docker-compose service, you can bash into it like so:
```shell
docker-compose exec [name-of-service] bash
```
If you have a non-running container based off of a docker-compose service, you can bash into it like so:
```shell
docker-compose run --rm [name-of-service] bash
```
The difference here is `run --rm`. You can probably guess what each does. The `--rm` flag is important so that the newly spun-up containers don't hang around forever after you close out of the session.
importing database dumps
If you're running something like PostgreSQL within Docker, it becomes a bit trickier to `psql` your way in from a local command line. As is often the case with Docker, there are a few ways to get the job done here, and the one that's right for you depends on what you're trying to do.
One path requires you to expose a port and run commands through that port. This can be done in `docker-compose.yml`:
```yaml
services:
  db:
    image: mdillon/postgis:11-alpine
    ports:
      # Expose Postgres to host networking (your computer) at port 5433.
      # This prevents conflicts with Postgres installed on your machine
      # while still allowing the database to be browsed at port 5432
      - 5433:5432
    volumes:
      - db-data:/var/lib/postgresql/data
```
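With the port published, you can connect from the `psql` on your host; a sketch, assuming the default `postgres` user (the username is an assumption):

```shell
# Connect from the host to the containerized Postgres published on 5433
psql -h localhost -p 5433 -U postgres
```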
Another option is to define a new volume within your database service which makes a database dump on your local machine accessible to the running Docker container. If you had a database dump stored within your project directory at `./local/db-backups`, you could make it accessible within Docker like so:
```yaml
services:
  db:
    image: mdillon/postgis:11-alpine
    volumes:
      - db-data:/var/lib/postgresql/data
      - ./local/db-backups:/app/db-backups
```
Then you can bash into the running container and have at it with all your usual `psql` commands. Note: I'm including the `db-data` volume definition to point out that we always have a persistent volume configured for database services, so the data persists when the containers are wound down.
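From there, restoring is a matter of shelling into the `db` service and pointing the Postgres tools at the mounted dump; a sketch (the database name and dump filenames are assumptions):

```shell
# On the host: open a shell in the running db container
docker-compose exec db bash

# Inside the container: restore a custom-format dump...
pg_restore -U postgres -d app_development /app/db-backups/latest.dump

# ...or pipe in a plain-SQL dump
psql -U postgres -d app_development < /app/db-backups/latest.sql
```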
Using local SSH credentials on Docker container
We have a lot of deploy processes that lean on SSH private keys for authorization. If you attempt to run one of those commands in a running Docker container, it'll fail because its sandboxed environment can't reach out to your local machine for the credentials. This is our Docker Compose solution for that:
```yaml
services:
  app:
    ...
    volumes:
      - type: bind
        source: /run/host-services/ssh-auth.sock
        target: /run/host-services/ssh-auth.sock
```
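For the forwarded agent to actually be used, SSH clients inside the container also need to be told where the socket lives, typically via the `SSH_AUTH_SOCK` environment variable. This half isn't shown in the snippet above, so treat it as an assumption about your setup:

```yaml
services:
  app:
    environment:
      # Point SSH clients inside the container at the forwarded
      # host agent socket
      - SSH_AUTH_SOCK=/run/host-services/ssh-auth.sock
```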
That's about all I can recall now that it's been ~1.5 years since I dug in on my own Docker journey. Hopefully something in there was useful to you. If you have another tip or trick that solved an early challenge, drop a line in the comments below!