Supplemental development process for Docker #1319

Merged
9 commits merged on Aug 24, 2022
1 change: 1 addition & 0 deletions .gitignore
@@ -27,3 +27,4 @@ sphinx_rtd_theme/static/js/html5shiv.min.js
sphinx_rtd_theme/static/js/html5shiv-printshiv.min.js
.direnv/
.envrc
.container_id
52 changes: 52 additions & 0 deletions Dockerfile
@@ -0,0 +1,52 @@
# python:3-alpine contains node 18, so it has to go first
# in order to get overwritten by node:14-alpine
FROM python:3-alpine
FROM node:14-alpine

# Do not use --update since that will also fetch the
# latest node-current package
# 'make' is needed for building documentation
RUN apk add npm make py3-pip py3-wheel

# Add an extra verification that we have the right node
# because the above caused issues
RUN node -v && node -v | grep -q v14

RUN pip install pip --upgrade

RUN mkdir -p /project/src/ &&\
mkdir -p /project/docs/ &&\
mkdir -p /project-static-copy

WORKDIR /project

COPY package.json /project/

# COPY package-lock.json /project/

COPY bin/preinstall.js /project/bin/preinstall.js

RUN cd /project

# It matters that the node environment is installed into the same
# folder, i.e. /project where we will run the environment from
RUN npm install --package-lock-only &&\
npm audit fix &&\
npm install

# This is strictly speaking not necessary, just makes
# running the container faster...
# Install dependencies, then uninstall project itself
COPY . /project-static-copy
RUN cd /project-static-copy &&\
pip install ".[dev]" &&\
/usr/bin/yes | pip uninstall sphinx_rtd_theme


# Copy in stuff we need to run the project
COPY webpack.common.js webpack.dev.js webpack.prod.js /project/

COPY docker-entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh

ENTRYPOINT ["/entrypoint.sh"]
19 changes: 19 additions & 0 deletions Makefile
@@ -0,0 +1,19 @@
SHELL := /bin/bash
CWD := $(shell cd -P -- '$(shell dirname -- "$0")' && pwd -P)

docker-images:
	docker build -t sphinx_rtd_theme:latest .

docker-run:
	rm -f .container_id
	docker run --cidfile=.container_id -t -i -p 1919:1919 \
		--network host \
		--mount type=bind,source="$(CWD)",target=/project-readonly,readonly \
		--mount type=volume,dst=/project-readonly/sphinx_rtd_theme.egg-info,volume-driver=local \
		--mount type=bind,source="$(CWD)/src",target=/project/src,readonly \
		--mount type=bind,source="$(CWD)/docs",target=/project/docs \
		sphinx_rtd_theme:latest $(command)

docker-copy-assets:
	docker cp "$(shell cat .container_id):/project/sphinx_rtd_theme" .
	docker cp "$(shell cat .container_id):/project/package-lock.json" .
Collaborator

As discussed elsewhere already, but just noting here: this homebrew configuration seems to overlap with what docker-compose provides. I'd put some preference on using standard tooling, as it's more approachable.

@humitos would have the most input/opinions here

Contributor Author

I don't know anything about Homebrew; it's a macOS thing, right?

Contributor Author

The idea of docker cp is to avoid write operations on a bind mount: file system permissions are usually troublesome, so it can be better to be explicit about what you expect to get out of the Docker runtime.

I'm already fixing this per your request, @agjohnson.
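
For illustration, a condensed sketch of that pattern, based on the docker-run and docker-copy-assets targets in this diff (the docs mount and the egg-info volume are left out for brevity):

```console
# Run the asset build in a container and record its id, then copy the built
# assets out of the stopped container instead of writing to a bind mount.
$ docker run --cidfile=.container_id \
    --mount type=bind,source="$(pwd)",target=/project-readonly,readonly \
    --mount type=bind,source="$(pwd)/src",target=/project/src,readonly \
    sphinx_rtd_theme:latest build
$ docker cp "$(cat .container_id):/project/sphinx_rtd_theme" .
```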

Contributor Author

This is now removed; the docker-compose command mounts the current git repo and builds inside it.
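
For reference, a rough sketch of how that workflow might be invoked; the compose file itself is not part of this diff, so the service name below is only an assumption:

```console
# Rebuild the image and run the asset build in a one-off container; the
# repository checkout is assumed to be bind-mounted by docker-compose.yml.
$ docker-compose build
$ docker-compose run --rm build
```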

Collaborator

> I don't know anything about Homebrew; it's a macOS thing, right?

I meant this in the sense that this is a homebrewed process -- it is a manual, one-off creation, as opposed to a standardized, repeatable process using docker-compose.

Member

I'm not sure I understand what this step is for. Why are we copying these files from inside the container to the host?

Contributor Author

I had to backtrack again from the approach where a host directory was mounted and written to by the container.

The reason was that I had issues with a mix of files owned by my user and files owned by root. I also needed to run npm cache clear, and I never understood why.

But the fundamental reason was that I was mounting host system stuff into the container and writing irrelevant files there.

The best approach IMO is to keep everything containerized and copy out the artifacts directly from their known location. If you can separate out a bind mount and ensure that ONLY intended artifacts are written there, then that's also a fine approach.
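
A minimal sketch of that second option, for illustration; the output path is taken from the docker-copy-assets target in this diff, and only that directory is writable from the container:

```console
# Bind-mount only the directory the build writes its assets into, so the
# container cannot touch anything else in the host checkout.
$ docker run \
    --mount type=bind,source="$(pwd)",target=/project-readonly,readonly \
    --mount type=bind,source="$(pwd)/src",target=/project/src,readonly \
    --mount type=bind,source="$(pwd)/sphinx_rtd_theme",target=/project/sphinx_rtd_theme \
    sphinx_rtd_theme:latest build
```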

Member

Gotcha! Yeah, creating files inside the container that are then required outside the container usually generates permission problems. The only way to do that correctly is to map the UID/GID between the host and the container, and that is where things get more complicated. We've been there, and I didn't enjoy it 😄

I like the pattern you used here with docker cp for this requirement, where files generated inside the container are required to be distributed/used outside the container.

We currently have this problem when creating migrations on Read the Docs, for example, where the resulting .py file is owned by root and left in a directory owned by "your user" on the host. Then you have to run sudo chown to fix the permissions on that file.
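
A minimal sketch of that UID/GID mapping, for reference; whether this particular image actually works when run as a non-root user is a separate question:

```console
# Run the container as the invoking host user, so anything written to a
# bind mount ends up owned by that user instead of root.
$ docker run --user "$(id -u):$(id -g)" \
    --mount type=bind,source="$(pwd)/docs",target=/project/docs \
    sphinx_rtd_theme:latest build
```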

11 changes: 11 additions & 0 deletions docker-entrypoint.sh
@@ -0,0 +1,11 @@
#!/bin/sh

# Install the read-only project in editable mode and make sure
# all dependencies are upgraded. The project is mounted in from the
# outside, and it is read-only on purpose!
cd /project-readonly
pip install --upgrade -e ".[dev]"

cd /project

npm run $@
36 changes: 36 additions & 0 deletions docs/contributing.rst
@@ -6,6 +6,9 @@ This project follows the Read the Docs :doc:`code of conduct
<rtd:code-of-conduct>`. If you are not familiar with our code of conduct policy,
take a minute to read the policy before starting with your first contribution.

.. tip::
   There is a new dockerized build environment; see :ref:`dockerized-build`.

Modifying the theme
===================

@@ -62,6 +65,39 @@ can be used to test built assets:
.. _Wyrm: http://www.github.com/snide/wyrm/
.. _Sphinx: http://www.sphinx-doc.org/en/stable/


.. _dockerized-build:

Dockerized development
======================

If you have Docker available on your platform, you can get started building CSS and JS artifacts a bit faster, and you won't have to worry about any of the setup spilling over into your general environment.

When building with Docker, we create an image containing the build dependencies. Some of these are quite outdated and are therefore ideal to isolate in a container. The image is tagged as ``sphinx_rtd_theme:latest``.

Inside the running container, we mount the working copy of the repository, build the artifacts, and finally copy those artifacts out into your host environment.

Use the following steps:

.. code-block:: console

   # Builds an updated version of the docker image
   $ make docker-images

   # Runs the docker environment and builds the assets. The container exits after completing the build.
   $ make docker-run command=build

   # Runs the development webserver
   $ make docker-run command=dev

   # Copies out the assets from the most recent docker-run
   $ make docker-copy-assets


Every time you change the Node or Python requirements, you will need to rebuild the image with ``make docker-images``. If you change Sass or JS sources, you will need to rebuild the assets.

If you need a different setup, refer to ``Makefile`` to see the exact method used to invoke the Docker environment.

Testing
=======

4 changes: 2 additions & 2 deletions package.json
@@ -1,7 +1,7 @@
{
"name": "sphinx_rtd_theme",
"main": "js/theme.js",
"version": "1.0.1alpha1",
"version": "1.1.0-alpha.0",
"scripts": {
"dev": "webpack-dev-server --open --config webpack.dev.js",
"build": "webpack --config webpack.prod.js",
@@ -30,7 +30,7 @@
"jquery": "^3.6.0",
"lato-font": "^3.0.0",
"mini-css-extract-plugin": "^0.6.0",
"node-sass": "^4.13.1",
"node-sass": "^4.14.1",
"optimize-css-assets-webpack-plugin": "^5.0.4",
"roboto-fontface": "^0.10.0",
"sass-loader": "^7.3.0",