Marry UI and Backend with a container
Reliability. Reproducibility. Safety.
These three qualities describe the minimum we ask of a development environment. And we can get closer to achieving all of them with the help of Docker containers.
As today's applications often have separate development environments for the frontend and backend services, configuring and orchestrating the two usually requires a bit of engineering.
The goal is to create a set-up where:
- the whole environment can be rebuilt with a single command,
- the development environment is the same as, or very similar to, the final deployment, irrespective of the development machine or its OS.
Disclaimer: we focus only on development workflow; not on production deployment.
Docker Linux Containers
Nowadays, we have multiple options when it comes to running Linux containers. The most developer-friendly option is, in my opinion, Docker. Docker has been around for years, and there are plenty of official images available. It runs on Windows, macOS and, of course, Linux. It is a workhorse.
Public vs Custom images
The Docker Hub contains ready-to-use images with sane defaults. There is already:
- an Ubuntu image,
- an OpenJDK image, and
- a Node JS image.
The official images are safe and can be used out of the box.
The benefit of creating custom images is that you have full control over the environment. Your customizations become part of the image, so you do not have to recreate them each time your application is built, which yields faster rebuild times.
The downside is that you will need to refresh the base image once in a while. Otherwise, there is a risk of running an old image without official patches and security fixes. This risk, while it cannot be dismissed, is of lesser importance when the image is used purely in a development environment.
We start with the base image of the distribution of choice. In this example, we are using Ubuntu, Focal Fossa being the most recent LTS release at the time of writing.
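A minimal base Dockerfile might look like the sketch below; the exact package selection is an assumption and can be trimmed or extended:

```dockerfile
# Base image: shared baseline for all project images
FROM ubuntu:focal

# Non-interactive frontend avoids interactive prompts (e.g. tzdata) during build
ENV DEBIAN_FRONTEND=noninteractive

# Common tooling needed by the child images
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl ca-certificates gnupg \
    && rm -rf /var/lib/apt/lists/*
```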
Now we can use the base Linux image to create purpose-specific child images. One for OpenJDK, one for Node JS.
Firstly, the Node JS image:
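A possible sketch, assuming the base image is tagged project/base and Node.js is installed from the NodeSource repository (the Node version is an assumption):

```dockerfile
# Node JS image: built on top of the shared base image (tag assumed)
FROM project/base:latest

# Install Node.js from the NodeSource repository
RUN curl -fsSL https://deb.nodesource.com/setup_14.x | bash - \
    && apt-get install -y --no-install-recommends nodejs \
    && rm -rf /var/lib/apt/lists/*
```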
And, secondly OpenJDK:
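Again a sketch, assuming the project/base tag and a headless OpenJDK 11 from the Ubuntu repositories:

```dockerfile
# OpenJDK image: built on top of the shared base image (tag assumed)
FROM project/base:latest

# Headless JDK is sufficient for building and running the service
RUN apt-get update \
    && apt-get install -y --no-install-recommends openjdk-11-jdk-headless \
    && rm -rf /var/lib/apt/lists/*
```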
With a shipyard full of Dockerfiles, we can proceed to build the images:
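Assuming the three Dockerfiles live in sibling folders named base/, node/ and jdk/, and tagging the images under an assumed project/ namespace:

```shell
# Build the shared base first, then the purpose-specific children
$ docker build -t project/base ./base
$ docker build -t project/node ./node
$ docker build -t project/jdk  ./jdk
```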
These images are now cached locally on the developer's workstation. We will re-use them in the next section when we set up our Node and Java build and development environments.
Dockerized Development Environment
The example application we will be using consists of:
- Ember.js for frontend, and
- Spring Boot for the backend.
The folders and their placement look as follows:
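A plausible layout, with folder names assumed (the compose file location matches the one used later in this article):

```
project/
├── docker-compose.yaml
├── service/          # Spring Boot application
│   ├── Dockerfile
│   └── ...
└── ui/               # Ember.js application
    ├── Dockerfile
    └── ...
```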
Dockerized Spring Boot application
The service application is a standard Spring Boot application generated at https://start.spring.io, with the addition of spring-boot-devtools to enable auto-reloading after a build. In this example, we opted for a Gradle-based build.
The base image of the application inherits from baseline OpenJDK image we created in the previous section.
We also create and switch to a non-root user. This way, our application will not run with root privileges in the container. It is advisable to give an application the least access necessary: Linux containers share the host system's kernel, which increases the risk of a bad actor leveraging the application's root access should there be a container/kernel vulnerability to exploit.
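A sketch of such a Dockerfile, with the base image tag assumed and the unprivileged user named worker, as referenced below:

```dockerfile
# Service image: inherits from the baseline OpenJDK image (tag assumed)
FROM project/jdk:latest

# Create an unprivileged user; the application will run as "worker"
RUN useradd --create-home --shell /bin/bash worker

USER worker
WORKDIR /opt/app
```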
At this point, we have an image with OpenJDK and a worker user, ready to run our application. However, running the above image will yield nothing. We are still missing an entry point script.
As we are using Gradle with Spring Boot, it is straightforward to script the application startup:
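A minimal sketch of such a script; the /opt/app path is an assumption matching the volume mapping used later:

```shell
#!/bin/bash
# server.sh — starts the Spring Boot application from the mounted project folder
cd /opt/app || exit 1

# bootRun keeps the JVM running; spring-boot-devtools restarts the application
# whenever recompiled classes appear on the classpath
./gradlew bootRun
```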
With the above entry script in place, we can attach it as the entry point of our image:
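Assuming the script is kept next to the Dockerfile, the relevant lines could be:

```dockerfile
# Ship the entry script with the image and register it as the entry point
COPY --chown=worker server.sh /opt/scripts/server.sh
ENTRYPOINT ["/opt/scripts/server.sh"]
```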
Running the image will kick off the /opt/scripts/server.sh script. The last piece of the puzzle is mapping the project development folder into the container:
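With plain Docker, before we move to docker-compose, this can be done with a bind mount; the image name and port are assumptions:

```shell
# Mount the local service project into the container at /opt/app
$ docker run --rm \
    -p 8080:8080 \
    -v "$(pwd)/service:/opt/app" \
    project/service
```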
Opening the project in an editor/IDE, then modifying and building the code, will cause the application inside the container to reload and refresh.
We have concluded the dockerization of the service part.
Dockerized Ember.js application
We follow similar steps with the Ember application, finally producing a Dockerfile which fulfils the following requirements:
- has access to Google Chrome to run automated Ember.js tests,
- runs the application without root privileges,
- configures NPM to install global packages into the non-root user's home folder,
- installs Ember CLI, and finally
- defines the default entry point script.
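A sketch satisfying those requirements; the base image tag, user name and script name are assumptions:

```dockerfile
# UI image: inherits from the baseline Node JS image (tag assumed)
FROM project/node:latest

# Google Chrome for running automated Ember.js tests
RUN curl -fsSL https://dl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb [arch=amd64] https://dl.google.com/linux/chrome/deb/ stable main" \
         > /etc/apt/sources.list.d/google-chrome.list \
    && apt-get update \
    && apt-get install -y --no-install-recommends google-chrome-stable \
    && rm -rf /var/lib/apt/lists/*

# Unprivileged user; NPM global packages go into the user's home folder
RUN useradd --create-home --shell /bin/bash worker
USER worker
ENV NPM_CONFIG_PREFIX=/home/worker/.npm-global
ENV PATH=$NPM_CONFIG_PREFIX/bin:$PATH

# Ember CLI, installed globally for the worker user
RUN npm install -g ember-cli

# Default entry point script
COPY --chown=worker ui.sh /opt/scripts/ui.sh
WORKDIR /opt/app
ENTRYPOINT ["/opt/scripts/ui.sh"]
```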
The script used as the entry point is responsible for installing the NPM dependencies defined by the application and starting the Ember app. In addition, in order to leverage the API implemented by the service hosted in the previously defined container, it passes a proxy configuration:
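A minimal sketch of that entry script, assuming the project is mounted at /opt/app:

```shell
#!/bin/bash
# ui.sh — installs dependencies and starts the Ember development server
cd /opt/app || exit 1
npm install

# Proxy API calls to the backend; $PROJECT_SERVICE holds its host and port
ember serve --proxy "http://${PROJECT_SERVICE}"
```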
The variable $PROJECT_SERVICE stores the location and port of the proxied API. This information can be passed to the container as an environment variable.
However, as we now have two containers running in tandem, it is high time to combine the two into a single development environment.
Putting Service and UI applications together
The simplest way to orchestrate two or more containers is with docker-compose. Compose simplifies building and instrumenting multiple containers.
The example configuration of ./project/docker-compose.yaml could look as follows:
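A sketch of such a configuration; service names, ports and the $PROJECT_SERVICE value are assumptions:

```yaml
version: "3.8"

services:
  service:
    build: ./service
    ports:
      - "8080:8080"          # Spring Boot endpoints on the host
    volumes:
      - ./service:/opt/app

  ui:
    build: ./ui
    ports:
      - "4200:4200"          # Ember dev server on the host
    volumes:
      - ./ui:/opt/app
    environment:
      # the "service" hostname is resolved on the compose network
      - PROJECT_SERVICE=service:8080
```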
In this configuration, we have specified the build context for both projects. Whenever we make changes to any of the Dockerfile files, all that is necessary is to run $ docker-compose build inside the ./project folder.
The composition of Docker images as defined in the example above solves the following:
- exposes default ports, so that both the Ember app and the Spring endpoints are accessible from the host machine,
- enables resolution of the service API by its internal hostname, thus allowing Ember.js to proxy requests to the Spring Boot application running in its own container,
- predefines volumes, such that both the UI and the service projects are mounted to /opt/app in their respective containers.
Starting the Docker images with Compose will result in the frontend and backend applications being bootstrapped.
The frontend can then be accessed in the browser through the port exposed by the UI container.
Creating application containers requires a bit of effort. However, that effort pays off by creating a reproducible, isolated development environment.
It is not much more work to add PostgreSQL, RabbitMQ, Kafka or other service container when needed.
The added benefit is improved security, making it a bit harder for a rogue NPM package to, for example, deploy a crypto miner script onto our development workstation.
Once everything is set up, a single command brings up the full application stack, streamlining development, testing and product iterations.