Faster Rails Docker Feedback Loop: Runtime Bundler, Yarn Cache, and HMR
How I reduced Docker development friction across three Rails apps by removing rebuild-heavy steps, adding runtime dependency sync, and enabling webpack HMR.
When local development runs in Docker, the feedback loop often gets slower over time.
I hit the same 3 problems repeatedly in wfb, n2r, and turtle:
- bundle update forced image rebuilds to apply gem changes.
- yarn install repeated expensive work.
- asset precompile-related startup paths slowed each iteration.
This post documents the improvements I applied across all three projects, with exact patterns you can reuse.
1. What was slowing the loop
The main anti-pattern was build-time coupling:
- dependencies were treated like immutable build artifacts
- day-to-day dependency changes were frequent in development
- webpack dev flow was not isolated for true hot updates
That combination meant a small Gemfile.lock or yarn.lock edit could trigger a full rebuild and stall development.
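For contrast, here is a minimal sketch of that build-time pattern (illustrative only, not the actual Dockerfiles from these projects):

# Illustrative only: dependency installs baked into image layers.
# Any Gemfile.lock or yarn.lock change invalidates these layers and forces a rebuild.
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./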
2. Design goals
I optimized for these goals:
- no image rebuild needed for normal Gemfile.lock or yarn.lock changes
- dependency installs are incremental and cached
- web process and webpack dev server are split for HMR
- worker/scheduler do not waste time on JS dependency checks
- startup ordering is deterministic with healthchecks
3. Core implementation
3.1 Move install logic from Docker build to container entrypoint
Instead of baking bundle install and yarn install into the Dockerfile, keep the Dockerfile focused on runtime and toolchain setup and run dependency checks at container startup.
Example (Dockerfile pattern):
FROM ruby:4.0.1-slim
ENV NODE_MAJOR=${NODE_MAJOR:-24}
ARG YARN_VERSION=1.22.22
RUN apt-get update -qq \
&& apt-get install -y ca-certificates curl gnupg \
&& mkdir -p /etc/apt/keyrings \
&& curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key \
| gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg \
&& echo "deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main" \
| tee /etc/apt/sources.list.d/nodesource.list
RUN apt-get update -qq \
&& DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
build-essential libpq-dev nodejs postgresql-client python3 git libyaml-dev \
&& npm install -g yarn@${YARN_VERSION}
ARG app=/opt/wfb
WORKDIR $app
RUN adduser --disabled-login app \
&& mkdir -p $app $app/node_modules /usr/local/bundle /usr/local/share/.cache/yarn \
&& chown -R app:app $app /usr/local/bundle /usr/local/share/.cache/yarn \
&& gem install bundler --no-document
ENV BUNDLE_PATH=/usr/local/bundle \
YARN_CACHE_FOLDER=/usr/local/share/.cache/yarn \
RAILS_ENV=${RAILS_ENV:-development} \
NODE_ENV=${NODE_ENV:-development}
COPY bin/docker-entrypoint /usr/local/bin/docker-entrypoint
RUN chmod +x /usr/local/bin/docker-entrypoint
COPY --chown=app:app . ./
USER app
ENTRYPOINT ["docker-entrypoint"]
CMD ["bundle", "exec", "puma", "-b", "tcp://0.0.0.0:8080"]
3.2 Runtime dependency sync with lock and change detection
I added bin/docker-entrypoint in each project to:
- run bundle check || bundle install
- run yarn install only when needed
- avoid races using lock directories
Example entrypoint:
#!/bin/sh
set -e

with_lock() {
  lock_name="$1"
  shift
  lock_dir="/tmp/${lock_name}.lock"
  lock_timeout="${INSTALL_LOCK_TIMEOUT:-120}"
  waited=0
  while ! mkdir "$lock_dir" 2>/dev/null; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$lock_timeout" ]; then
      echo "Timed out waiting for lock: $lock_name" >&2
      return 1
    fi
  done
  trap 'rmdir "$lock_dir" 2>/dev/null || true' EXIT INT TERM
  "$@"
  status=$?
  rmdir "$lock_dir" 2>/dev/null || true
  trap - EXIT INT TERM
  return "$status"
}

ensure_bundle() {
  bundle config set path "${BUNDLE_PATH:-/usr/local/bundle}"
  bundle check || bundle install -j "${BUNDLE_JOBS:-4}" --retry "${BUNDLE_RETRY:-5}"
}

ensure_yarn() {
  yarn install --frozen-lockfile --check-files --prefer-offline --no-progress
}

if [ -f Gemfile ] && [ "${SKIP_BUNDLE_INSTALL:-0}" != "1" ] && [ "${AUTO_BUNDLE_INSTALL:-1}" = "1" ]; then
  with_lock app-bundle-install ensure_bundle
fi

if [ -f yarn.lock ] && [ "${SKIP_YARN_INSTALL:-0}" != "1" ] && [ "${AUTO_YARN_INSTALL:-1}" = "1" ]; then
  integrity_file="node_modules/.yarn-integrity"
  if [ ! -d node_modules ] || [ ! -f "$integrity_file" ] || [ yarn.lock -nt "$integrity_file" ] || [ package.json -nt "$integrity_file" ]; then
    with_lock app-yarn-install ensure_yarn
  fi
fi

exec "$@"
Notes:
- In turtle, Yarn required --ignore-engines to match existing constraints.
- In turtle, BUNDLE_APP_CONFIG was set to $app/.bundle to avoid permissions friction with mounted bundle paths.
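For reference, turtle's variant of the function above differs by a single flag:

ensure_yarn() {
  # turtle only: --ignore-engines matches the project's existing engine constraints
  yarn install --frozen-lockfile --check-files --prefer-offline --no-progress --ignore-engines
}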
3.3 Persist dependency state in named Docker volumes
Named volumes keep expensive installs across container restarts:
x-app-service-template: &app
  build:
    context: .
    dockerfile: ${DOCKERFILE:-Dockerfile}
  volumes:
    - .:/opt/app
    - bundle:/usr/local/bundle
    - node_modules:/opt/app/node_modules
    - yarn_cache:/usr/local/share/.cache/yarn
This is the biggest practical win for day-to-day commands.
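One easy thing to miss: the named volumes referenced in the service template also need a top-level volumes: declaration in the compose file, matching the names above:

volumes:
  bundle:
  node_modules:
  yarn_cache: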
3.4 Separate webpack dev server for HMR
I added a dedicated webpack service:
webpack:
  <<: *app
  command: bundle exec ./bin/webpack-dev-server
  ports:
    - "3035:3035"
  environment:
    - "RAILS_ENV=development"
    - "NODE_ENV=development"
    - "WEBPACKER_DEV_SERVER_POLL=1000"
And updated webpacker.yml development settings:
development:
  compile: true
  dev_server:
    host: webpack
    port: 3035
    public: localhost:3035
    hmr: true
    inline: true
    watch_options:
      poll: <%= ENV.fetch('WEBPACKER_DEV_SERVER_POLL', 0) %>
      ignored: '**/node_modules/**'
Key detail: host: webpack points Rails at the Compose service name on Docker's internal network, while public: localhost:3035 is the address the browser uses to reach the dev server.
3.5 Healthchecks and smarter dependency startup
I added healthchecks for the DB/Redis services and switched depends_on to condition: service_healthy.
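As a sketch of the shape this takes (the Postgres image, credentials, and service names here are assumptions; adapt them to your compose file):

db:
  image: postgres:16
  environment:
    - "POSTGRES_PASSWORD=postgres"
  healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres"]
    interval: 5s
    timeout: 5s
    retries: 10

web:
  <<: *app
  depends_on:
    db:
      condition: service_healthy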
For non-web services (worker, scheduler) I set:
environment:
  - "AUTO_YARN_INSTALL=0"
This skips JS dependency checks in containers that never build or serve assets.
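For example, a worker service built from the same template might look like this (the Sidekiq command is an assumption; substitute your own job runner):

worker:
  <<: *app
  command: bundle exec sidekiq
  environment:
    - "RAILS_ENV=development"
    - "AUTO_YARN_INSTALL=0"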
4. What changed in each app
wfb
- Dockerfile now installs Node/Yarn toolchain but defers dependency installation to runtime.
- Added the bin/docker-entrypoint lock-based install flow.
- Added a webpack service, HMR config, healthchecks, and cache volumes.
n2r
- Applied the same Dockerfile + entrypoint + compose + webpacker pattern.
- Preserved project-specific env keys (DATABASE_HOST, etc.) while adopting the shared dev-loop improvements.
turtle
- Applied the same pattern with two project-specific adaptations:
  - ensure_yarn includes --ignore-engines.
  - BUNDLE_APP_CONFIG uses $app/.bundle to prevent permission issues.
5. Operational checklist (copy/paste)
Use this to migrate another Rails + Docker project.
- Add bin/docker-entrypoint with lock-based bundle/yarn checks.
- Set the Docker ENTRYPOINT to that script.
- Remove build-time dependency install commands from the Dockerfile.
- Add named volumes for bundle, node_modules, and yarn_cache.
- Add a dedicated webpack service and expose port 3035.
- Update the webpacker.yml dev server host to webpack and enable hmr.
- Add a healthcheck to stateful dependencies and use depends_on with condition: service_healthy.
- Disable unnecessary Yarn startup checks on worker-like services with AUTO_YARN_INSTALL=0.
6. Trade-offs and caveats
- Startup still performs dependency checks, so first boot can be slow.
- Native gem compile time (for example, grpc) can still dominate some workflows.
- Long-term reproducibility still depends on lockfiles and pinned tool versions.
- For production images, keep precompile/build optimization in a dedicated production Dockerfile path.
7. Result
The workflow is now optimized for iterative development instead of immutable image rebuilding.
Practical impact:
- Gemfile.lock changes apply on the next container start without an image rebuild (see the example below).
- Yarn reuse is significantly better with persisted caches and install gating.
- Frontend edits get immediate feedback through the dedicated webpack HMR service.
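A typical gem-update loop now looks like this (assuming the Rails service is named web in compose):

# Update Gemfile.lock and install the gem into the shared bundle volume
docker compose run --rm web bundle update some_gem
# Restart web; the entrypoint's bundle check now passes, so boot stays fast
docker compose restart web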
For Rails teams using Docker as the default dev runtime, this pattern is low-risk and high-impact.