Containerize Redwood Sides with Docker Compose

Docker Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a YAML file to configure your application’s services. You can create and start all your services with a single command.

This example containerizes Redwood’s Web and API sides into individual containers that can be run with Docker Compose. The code for this example can be found on my GitHub.

Create Project

This example will start with a new Redwood project.

yarn create redwood-app redwood-docker-compose
cd redwood-docker-compose

Create the following files:

  • Dockerfiles inside the web and api directories
  • Nginx configuration file in web directory
  • docker-compose.yml and .dockerignore files in the root of the project

touch web/Dockerfile api/Dockerfile web/nginx.conf \
  docker-compose.yml .dockerignore

Set up API Side

To set up the API side, we need CORS configured, an apiUrl specified, and a database migration applied to a production database.

Configure CORS

Our backend and frontend will each be in their own containers, and possibly on entirely separate domains. To ensure the frontend can query the backend, we will set origin to * and credentials to true in the cors option of our GraphQL handler.

// api/src/functions/graphql.js

import { createGraphQLHandler } from '@redwoodjs/graphql-server'

import directives from 'src/directives/**/*.{js,ts}'
import sdls from 'src/graphql/**/*.sdl.{js,ts}'
import services from 'src/services/**/*.{js,ts}'

import { db } from 'src/lib/db'
import { logger } from 'src/lib/logger'

export const handler = createGraphQLHandler({
  loggerConfig: { logger, options: {} },
  directives,
  sdls,
  services,
  cors: {
    origin: '*',
    credentials: true,
  },
  onException: () => {
    db.$disconnect()
  },
})

Set apiUrl

Inside redwood.toml set the apiUrl to http://localhost:8911/api. If you are deploying this to a service like Fly or Qovery, you will need to set this to the endpoint of your deployed GraphQL handler.

[web]
  title = "Redwood App"
  port = 8910
  apiUrl = "http://localhost:8911/api"
[api]
  port = 8911
[browser]
  open = true
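
As a purely hypothetical illustration, in a deployed setup you would point apiUrl at the public URL of your API container instead of localhost, for example:

[web]
  apiUrl = "https://your-api.example.com/api"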

Prisma Schema

Our schema has the same Post model used in the Redwood tutorial.

datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model Post {
  id        Int      @id @default(autoincrement())
  title     String
  body      String
  createdAt DateTime @default(now())
}

Add Database Environment Variables

Normally .env lives in the root of your project, but for this setup it needs to live inside your api/db folder: the Dockerfiles copy the api directory (but not the root .env) into the image, so this is the simplest way to get DATABASE_URL into the container.

touch api/db/.env
rm -rf .env .env.defaults

Include DATABASE_URL in api/db/.env. See this post for instructions on quickly setting up a remote database on Railway.

DATABASE_URL=postgresql://postgres:password@containers-us-west-10.railway.app:5513/railway

Apply Database Migration

yarn rw prisma migrate dev --name posts
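
This creates a migration and applies it to the DATABASE_URL in api/db/.env. If you later point the containers at a separate production database, you would typically apply the existing migrations there with migrate deploy instead of migrate dev; a minimal sketch:

yarn rw prisma migrate deploy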

Set up Web Side

Create a home page and generate a cell called BlogPostsCell to perform our data fetching.

yarn rw g page home /
yarn rw g cell BlogPosts

BlogPostsCell

The query returns an array of posts, each of which has an id, title, body, and createdAt date.

// web/src/components/BlogPostsCell/BlogPostsCell.js

export const QUERY = gql`
  query POSTS {
    posts {
      id
      title
      body
      createdAt
    }
  }
`

export const Loading = () => <div>Loading...</div>
export const Empty = () => <div>Empty</div>
export const Failure = ({ error }) => (
  <div style={{ color: 'red' }}>Error: {error.message}</div>
)

export const Success = ({ posts }) => {
  return posts.map((post) => (
    <article key={post.id}>
      <header>
        <h2>{post.title}</h2>
      </header>

      <p>{post.body}</p>
      <time>{post.createdAt}</time>
    </article>
  ))
}

HomePage

Import the BlogPostsCell into HomePage and return a <BlogPostsCell /> component.

// web/src/pages/HomePage/HomePage.js

import BlogPostsCell from 'src/components/BlogPostsCell'
import { MetaTags } from '@redwoodjs/web'

const HomePage = () => {
  return (
    <>
      <MetaTags
        title="Home"
        description="This is the home page"
      />

      <h1>Redwood+Docker 🐳</h1>
      <BlogPostsCell />
    </>
  )
}

export default HomePage

Scaffold Admin Dashboard

yarn rw g scaffold post

Set up Docker

We will have two Dockerfiles, a .dockerignore file, an nginx.conf configuration file, and a docker-compose.yml file to stitch it all together.

.dockerignore

node_modules
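
If you have already run yarn install locally, you may optionally also keep the nested node_modules folders and the Git history out of the build context so they are not sent to the Docker daemon; a possible variation:

node_modules
**/node_modules
.git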

API Dockerfile

These Dockerfiles use the node:14-alpine image. This may cause issues if you are on an M1 and want to build the images locally; change node:14-alpine to node:14 if you encounter this issue.

In each Dockerfile we set the working directory to /app and copy either the api side or the web side along with .nvmrc, graphql.config.js, package.json, redwood.toml, and yarn.lock.

FROM node:14-alpine

WORKDIR /app

COPY api api
COPY .nvmrc .
COPY graphql.config.js .
COPY package.json .
COPY redwood.toml .
COPY yarn.lock .

RUN yarn install --frozen-lockfile
RUN yarn add react react-dom --ignore-workspace-root-check
RUN yarn rw build api
RUN rm -rf ./api/src

WORKDIR /app/api

EXPOSE 8911

ENTRYPOINT [ "yarn", "rw", "serve", "api", "--port", "8911", "--rootPath", "/api" ]

Web Dockerfile

FROM node:14-alpine as builder

WORKDIR /app

COPY web web
COPY .nvmrc .
COPY graphql.config.js .
COPY package.json .
COPY redwood.toml .
COPY yarn.lock .

RUN yarn install --frozen-lockfile
RUN yarn rw build web
RUN rm -rf ./web/src

FROM nginx as runner

COPY --from=builder /app/web/dist /usr/share/nginx/html
COPY web/nginx.conf /etc/nginx/conf.d/default.conf

RUN ls -lA /usr/share/nginx/html

EXPOSE 8910

Nginx Configuration

Nginx is a web server that can also be used as a reverse proxy, load balancer, mail proxy, or HTTP cache. Don’t ask me to explain every line, but in short: this configuration listens on port 8910, serves the built files from /usr/share/nginx/html, caches CSS/JS for an hour and images for a week, and falls back to index.html so client-side routing keeps working.

server {
  listen 8910 default_server;
  root /usr/share/nginx/html;

  location ~* \.(?:css|js)$ {
    expires 1h;
    add_header Pragma public;
    add_header Cache-Control "public";
    access_log off;
  }

  location ~* \.(?:ico|gif|jpe?g|png)$ {
    expires 7d;
    add_header Pragma public;
    add_header Cache-Control "public";
    access_log off;
  }

  location / {
    try_files $uri $uri/ /index.html;
  }
}
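
Once the stack is up (see Build Images below), you can sanity-check any changes to this file from inside the running web service with nginx's built-in config test:

docker compose exec web nginx -t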

Docker Compose File

We will have two services in our docker-compose.yml file: web and api. Each exposes its port and has a build section that uses the project root as the context and points to the corresponding Dockerfile.

version: "3.9"
services:
  web:
    build:
      context: .
      dockerfile: ./web/Dockerfile
    ports:
      - "8910:8910"
  api:
    build:
      context: .
      dockerfile: ./api/Dockerfile
    ports:
      - "8911:8911"

Build Images

The docker compose up command builds, (re)creates, starts, and attaches to the containers for our services, aggregating the output of each container.

docker compose up
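
Note that Compose will not rebuild the images automatically when the Dockerfiles change; you can force a rebuild, or run the stack in the background, with:

docker compose up --build
docker compose up -d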

Open http://localhost:8910/posts to create a test post and return to http://localhost:8910/ to see the result.

Check the image information with docker images.

docker images

Keep in mind that I am on an M1, so the API image is much larger than it would be with the Alpine version of Node. To see different approaches to optimizing your containers, see Dockerize RedwoodJS and redwoodjs-docker.

REPOSITORY           TAG       IMAGE ID       CREATED          SIZE
redwood-docker_api   latest    243369952fa0   57 seconds ago   2.96GB
redwood-docker_web   latest    c1610495648c   42 minutes ago   137MB

See the specific running containers with docker ps.

docker ps
CONTAINER ID   IMAGE                COMMAND                  CREATED          STATUS          PORTS                            NAMES
a4d2a221278f   redwood-docker_web   "/docker-entrypoint.…"   35 seconds ago   Up 34 seconds   80/tcp, 0.0.0.0:8910->8910/tcp   redwood-docker-web-1
f5ab7bf289a9   redwood-docker_api   "yarn rw serve api -…"   35 seconds ago   Up 34 seconds   0.0.0.0:8911->8911/tcp           redwood-docker-api-1

Test GraphQL Endpoint

Hit localhost:8911/api/graphql with your favorite API tool or curl. Send a query to the root schema asking for the current version.

curl \
  --request POST \
  --header 'content-type: application/json' \
  --url 'http://localhost:8911/api/graphql' \
  --data '{"query":"{ redwood { version } }"}'
{
  "data":{
    "redwood":{
      "version":"0.41.0"
    }
  }
}

Send another query for the title and body of the posts in the database.

curl \
  --request POST \
  --header 'content-type: application/json' \
  --url 'http://localhost:8911/api/graphql' \
  --data '{"query":"{ posts { title body } }"}'
{
  "data":{
    "posts":[
      {
        "title":"Docker Compose",
        "body":"How to compose a Redwood app"
      }
    ]
  }
}

Publish to GitHub Container Registry

GitHub Packages is a platform for hosting and managing packages, including containers and other dependencies, alongside your source code. You can integrate GitHub Packages with GitHub APIs, GitHub Actions, and webhooks to create an end-to-end DevOps workflow that includes your code, CI, and deployment solutions.

GitHub Packages offers different package registries for commonly used package managers, such as npm, RubyGems, Maven, Gradle, and Docker. GitHub’s Container registry is optimized for containers and supports Docker and OCI images. To publish our images to the GitHub Container Registry, we need to first push our project to a GitHub repository.

Initialize Git

git init
git add .
git commit -m "I can barely contain my excitement"

Create a New Repository

You can create a blank repository by visiting repo.new or using the gh repo create command with the GitHub CLI. Enter the following command to create a new repository, set the remote name from the current directory, and push the project to the newly created repository.

gh repo create redwood-docker-compose \
  --public \
  --source=. \
  --remote=upstream \
  --push

If you created a repository from the GitHub website instead of the CLI then you will need to set the remote and push the project with the following commands.

git remote add origin https://github.com/YOUR_USERNAME/redwood-docker-compose.git
git push -u origin main

Login to ghcr

To log in, create a PAT (personal access token) and export it in place of xxxx.

export CR_PAT=xxxx

Log in with your own username in place of YOUR_USERNAME.

echo $CR_PAT | docker login ghcr.io -u YOUR_USERNAME --password-stdin

Tag Images

Docker tags are mutable named references for pulling and running images, similar to branch refs in Git.

docker tag redwood-docker-compose_web ghcr.io/YOUR_USERNAME/redwood-docker-compose_web
docker tag redwood-docker-compose_api ghcr.io/YOUR_USERNAME/redwood-docker-compose_api
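
These commands default to the latest tag; if you want reproducible deploys you can also tag an explicit version alongside it (the version string here is arbitrary):

docker tag redwood-docker-compose_web ghcr.io/YOUR_USERNAME/redwood-docker-compose_web:1.0.0
docker tag redwood-docker-compose_api ghcr.io/YOUR_USERNAME/redwood-docker-compose_api:1.0.0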

Push to Registry

Once you have tagged your image, you can push and pull the images much like you would push or pull a Git repository.

docker push ghcr.io/YOUR_USERNAME/redwood-docker-compose_web:latest
docker push ghcr.io/YOUR_USERNAME/redwood-docker-compose_api:latest

Pull from Registry

To test that our project has a docker image published to a public registry, pull it from your local development environment.

docker pull ghcr.io/YOUR_USERNAME/redwood-docker-compose_web:latest
docker pull ghcr.io/YOUR_USERNAME/redwood-docker-compose_api:latest

Amazing work on this tutorial!

I figure this is a good place to start a discussion about optimizations. Apologies if this is not the right place.

Context here: my current project is a consumer-facing app with lots of dependencies.

  • redwood v0.40 w/ modified dbAuth
  • 3 packages in /packages which must be built prior to building api

I’ve tested the Dockerfiles in jeliasson/redwoodjs-docker, which are similar to the ones here. The web side is great! However, the api side is a nightmare: the container size is >4GB. Further, all dependencies are installed (both dev and production, both web and api), which takes 15+ minutes to build the container.

I made some modifications below that cut the image down to 1GB, which is still way too large, and the build still takes 15+ minutes.

# ==
# Base
FROM node:14 as base

WORKDIR /app

ARG NODE_ENV
ARG RUNTIME_ENV

ENV NODE_ENV=$NODE_ENV
ENV RUNTIME_ENV=$RUNTIME_ENV

COPY package.json package.json
COPY yarn.lock yarn.lock

COPY redwood.toml redwood.toml
COPY graphql.config.js graphql.config.js

COPY packages packages
COPY api/package.json api/package.json
COPY web/package.json web/package.json

RUN yarn install --frozen-lockfile

# ==
# Build
FROM base as build

COPY api api
COPY packages packages

RUN yarn build-packages && yarn rw build api

# ==
# Serve
FROM node:14 as serve

WORKDIR /app

COPY tasks/environment/ tasks/environment/
COPY serve-api.sh serve-api.sh

COPY --from=build /app/packages /app/packages

COPY --from=build /app/node_modules/.prisma /app/node_modules/.prisma
COPY --from=build /app/api/dist /app/api/dist
COPY --from=build /app/package.json /app/package.json
COPY --from=build /app/api/package.json /app/api/package.json
COPY --from=build /app/yarn.lock /app/api/yarn.lock

COPY --from=build /app/redwood.toml /app/redwood.toml

RUN yarn --cwd "api" --frozen-lockfile install

# Expose RedwoodJS api port
EXPOSE 8911

# Entrypoint to @redwoodjs/api-server binary
ENTRYPOINT ["./serve-api.sh"]

The script ./serve-api.sh injects some environment variables before running yarn rw serve api.

My plan to remedy this is to:

  1. Come up with the bare minimum package.json needed to run yarn rw serve api
  2. Build everything outside the container, which allows npm caching to skip the install step. Then copy the built files into the container.

Thoughts and suggestions welcome!

To make sure I’m understanding correctly, are you running the base Node image like that code snippet shows, or are you using node:14-alpine like the Dockerfiles in the redwoodjs-docker repo?

This is definitely @jeliasson’s specialty more than mine, so I’ll defer to his advice, but I think the biggest difference you’ll see will come just from switching to a more lightweight base image for Node, i.e. from node:14 to node:14-alpine.

I’m using node:14. The alpine version had some errors with one of the packages I’m using. Maybe I should revisit that error to get alpine working instead.

Ok, ROUND 2 of the fight with Docker image size. I got the image down to 1.15GB from ~2GB. I documented my journey here:

Apologies if this is not the right place for this; however, I have been on a similar path, leveraging a devcontainer.json / Dockerfile combo.

I am currently running into oddities around the default volume mount containing the host’s node_modules.

The workaround mentioned in Add Option to ignore certain folder · Issue #620 · microsoft/vscode-remote-release does not work well for me.

I took a look at pi0neerpat/redwood-devops-example and might be able to leverage some learnings from that, but was wondering if anyone had ideas on how to better mount the project?

Current impl: feat(docker): add dev container by virtuoushub · Pull Request #34 · redwoodjs/example-storybook

Thanks for sharing, super helpful. I’ve been dockerizing my redwood projects for deploying on a Pi with Balena, and the images are massive and slow. This will help significantly.

While looking into how you start the service, I noticed you’re using an entrypoint script to migrate the DB and then start the API process.

Sharing in case it helps someone: one thing I’ve found useful is defining both ENTRYPOINT and CMD in the Dockerfile. This way you can easily change or override the image’s start command within docker-compose.yml without having to rebuild the image, for example if you want to add a debug param or change the port.

For example,

# Dockerfile.api: 
# ...
ENTRYPOINT [ "./entrypoint.sh" ]
CMD [ "yarn", "redwood", "serve", "api" ]

ENTRYPOINT vs CMD is confusing, but the way I understand it, the ENTRYPOINT is always called, while the CMD can be overridden because the CMD array is passed to the ENTRYPOINT script as arguments.

So after the entrypoint does its job, it just needs to call exec "$@" to run the startup command.

#!/bin/sh
set -e

if [ -z "${DB_MIGRATE}" ] || [ "${DB_MIGRATE}" = "true" ]; then
  echo "entrypoint.sh: MIGRATING Prisma schema (\$DB_MIGRATE=${DB_MIGRATE})"
  yarn redwood prisma migrate deploy
else
  echo "entrypoint.sh: SKIPPING Prisma schema migration (\$DB_MIGRATE=${DB_MIGRATE})"
fi

echo " "

echo "entrypoint.sh Starting service: $@"
exec "$@"     # <<<----------------------------Run command

And finally in docker-compose.yml you can either let the image start with a default image command, or override it by specifying a command.

# docker-compose.yml
...
  api-server:
    build:
      context: .
      dockerfile: Dockerfile.api-server
    restart: always
    command: yarn redwood serve api --apiRootPath api
    environment:
      - DATABASE_URL=postgresql://postgres:${POSTGRES_PASSWORD}@postgresdb:5432/postgres
...

Hope it helps, cheers

I got this error

#15 47.28 error Couldn't find any versions for "web" that matches "workspace:^"

Ok, so I don’t know how this happened, but I had this line in my package.json devDependencies:

  "devDependencies": {
    "@redwoodjs/core": "^2.1.0",
    "storybook-addon-headless": "^2.1.3",
    "web": "workspace:^",
    "workspace": "^0.0.1-preview.1"
  },

Getting rid of that gets me past the error, but now I get:

error Your lockfile needs to be updated, but yarn was run with --frozen-lockfile

Hmmm, weird. Any ideas what’s going on @jeliasson or how to resolve that error? If not I can take a look at it this week. This blog post is also like 3 major versions behind at this point so that probably doesn’t help.

I had to get corepack installed (replaces the yarn version manager) and specify yarn 1

In package.json:

  "packageManager": "yarn@1.22.19",
  "engines": {
    "node": ">=14.19 <=16.x",
    "yarn": "=1.22.19"
  },

Then install and activate the yarn version I wanted:

corepack prepare yarn@1.22.19 --activate

now I’m getting success with docker-compose up

Cheers!

Basically, I think something changed in more modern yarn that broke something.

I was trying to follow this work by @pi0neerpat