May 18, 2020


Marry UI and Backend with a container

Reliability. Reproducibility. Safety.

These three adjectives describe the minimum we ask of a development environment, and Docker containers bring us closer to achieving all three.

As today's applications often have separate development environments for the frontend and the backend service, configuring and orchestrating the two requires a bit of engineering.

The App

As an example, let's use an application with a single-page frontend developed in JavaScript, in this case an Ember.js application. The backend service is a Spring Boot based RESTful service.

The goal is to create a set-up where:

  • updating either JavaScript or Java/Kotlin code reloads the application in the development environment,
  • the whole environment can be rebuilt with a single command,
  • the development environment is the same as, or very similar to, the final deployment, irrespective of the development machine or its OS.

Disclaimer: we focus only on development workflow; not on production deployment.

Docker Linux Containers

Nowadays, we have multiple options when it comes to running Linux containers. The most developer-friendly option is, IMHO, Docker. Docker has been around for years, and there are plenty of official images available. It runs on Windows, macOS and, of course, Linux. It is a workhorse.

Public vs Custom images

The Docker Hub contains ready-to-use images with sane defaults. There is already:

  • an Ubuntu image,
  • an OpenJDK image, and
  • a Node.js image.

The official images are safe, and one can use them out of the box.

The benefit of creating custom images is that you have full control over the environment. Your customizations are part of the image, so you do not have to recreate them each time your application is built, yielding faster reload times.

The downside is that you will need to refresh the base image once in a while; otherwise, you risk running an old image without official patches and security fixes. While this risk cannot be dismissed, it is of lesser importance when the image is used purely in the development environment.
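Refreshing the baseline amounts to pulling the upstream image again and rebuilding the custom images on top of it; a sketch, assuming the image names used later in this article:

```shell
# Pull the latest upstream LTS image
docker pull ubuntu:focal

# Rebuild the custom base so it picks up the fresh image;
# --no-cache forces the apt-get upgrade layer to run again
docker build --no-cache -t company/ubuntu ./shipyard/ubuntu

# Child images then rebuild on top of the refreshed base
docker build -t company/node ./shipyard/node
docker build -t company/openjdk ./shipyard/openjdk
```

Running this occasionally (or in a scheduled CI job) keeps the development images reasonably close to the patched upstream.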

Baseline

We start with the base image of the distribution of choice. In this example, we are using Ubuntu, Focal Fossa being the most recent LTS release at the time of writing.

# ./shipyard/ubuntu/Dockerfile
FROM ubuntu:focal

# Necessary to avoid interactive prompts during installation
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get -qqy upgrade && \
    apt-get -qqy install git && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

Now we can use the base Linux image to create purpose-specific child images: one for OpenJDK, one for Node.js.

First, the Node.js image:

# ./shipyard/node/Dockerfile
FROM company/ubuntu

# Necessary to avoid interactive prompts during installation
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get update && \
    apt-get -qqy upgrade && \
    apt-get -qqy install nodejs npm && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

And, second, the OpenJDK image:

# ./shipyard/openjdk/Dockerfile
FROM company/ubuntu

# Necessary to avoid interactive prompts during installation
ENV DEBIAN_FRONTEND=noninteractive

RUN apt-get -qqy update && \
    apt-get -qqy upgrade && \
    apt-get -qqy install openjdk-11-jdk-headless && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/*

ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

With the shipyard full of Dockerfiles, we can proceed to build the images:

$ docker build -t company/ubuntu ./shipyard/ubuntu
$ docker build -t company/node ./shipyard/node
$ docker build -t company/openjdk ./shipyard/openjdk

These images are now cached locally on the developer's workstation. We will re-use them in the next section when we set up our Node and Java build and development environments.

Dockerized Development Environment

The example application uses:

  • Ember.js for frontend, and
  • Spring Boot for the backend.

The folders and their placement look as follows:

./project
  ./project-ui        // Ember.js/Node application
  ./project-service   // Spring Boot/Java

Dockerized Spring Boot application

The service application is a standard Spring Boot application generated at https://start.spring.io, with the addition of spring-boot-devtools to enable auto-reloading after a build. In this example, we opted for a Gradle-based build.
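The devtools dependency lives in the Gradle build file; a sketch of the relevant fragment, assuming the Groovy DSL that start.spring.io generates (the web starter is illustrative):

```groovy
// ./project/project-service/build.gradle (fragment)
dependencies {
    // The Initializr puts devtools in the developmentOnly configuration,
    // so it is excluded from production archives
    developmentOnly 'org.springframework.boot:spring-boot-devtools'
    implementation 'org.springframework.boot:spring-boot-starter-web'
}
```

With devtools on the classpath, the application restarts whenever the compiled classes change.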

The image of the application inherits from the baseline OpenJDK image we created in the previous section.

# ./project/project-service/Dockerfile
FROM company/openjdk

RUN useradd -ms /bin/bash worker
USER worker
WORKDIR /home/worker

We also create and switch to a non-root user. This way, our application will not run with root privileges in the container. It is advisable to give an application the least access necessary: Linux containers share the host system kernel, which increases the risk of a bad actor leveraging the application's root access should there be a container/kernel vulnerability to exploit.

At this point, we have an image with OpenJDK and the worker user ready to run our application. However, running the above image will yield nothing: we are still missing an entry point script.

As we are using Gradle with Spring Boot, it is straightforward to script the application startup:

#!/bin/bash
# ./project/project-service/scripts/server.sh
./gradlew clean bootRun

With the above script in place, we can set it as the entry point of our image:

# ./project/project-service/Dockerfile
FROM company/openjdk

COPY scripts /opt/scripts
ENV ENTRY_SCRIPTS="/opt/scripts"

RUN useradd -ms /bin/bash worker
USER worker

# This is where we expect the application project to be mounted
WORKDIR /opt/app

ENTRYPOINT ["/opt/scripts/server.sh"]

Running the image will kick off the /opt/scripts/server.sh script. The last piece of the puzzle is mounting the project development folder into the container:

$ docker build -t company/project-service ./project/project-service
$ docker run -it -v "$(pwd)/project/project-service:/opt/app:rw" company/project-service

Opening the project in an editor/IDE, then modifying and building the code, will cause the application inside the container to reload and refresh.

We have concluded the dockerization of the service part.

Dockerized Ember.js application

We follow similar steps with the Ember application, finally producing a Dockerfile which fulfils the following requirements:

  • has access to Google Chrome to run automated Ember.js tests,
  • runs the application without root privileges,
  • configures NPM to install global packages into the local home folder of the non-root user,
  • installs Ember CLI, and finally
  • defines the default entry point script.
# ./project/project-ui/Dockerfile
FROM company/node

# Install Google Chrome for testing
# (wget and gnupg are needed to fetch and register the signing key)
RUN apt-get -qqy update \
    && apt-get -qqy install wget gnupg \
    && wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \
    && echo "deb http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \
    && apt-get -qqy update \
    && apt-get -qqy install --no-install-recommends google-chrome-stable \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

COPY scripts /opt/scripts
ENV ENTRY_SCRIPTS="/opt/scripts"

RUN useradd -ms /bin/bash worker
USER worker
WORKDIR /home/worker

ENV NPM_PACKAGES="/home/worker/.npm-packages"
ENV PATH="$NPM_PACKAGES/bin:$PATH"
RUN echo "prefix=$NPM_PACKAGES" > .npmrc

RUN npm install -g yarn
RUN npm install -g ember-cli@3.17

WORKDIR /opt/app

ENTRYPOINT ["/opt/scripts/server.sh"]

The script used as the entry point is responsible for installing the NPM dependencies defined by the application and starting the Ember app. In addition, in order to leverage the API implemented by the service hosted in the previously defined container, it passes a proxy configuration:

#!/bin/bash
# ./project/project-ui/scripts/server.sh
yarn install
ember server --proxy "$PROJECT_SERVICE"

The variable $PROJECT_SERVICE stores the location and port of the API proxy. This information can be passed as an argument to the docker command.
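Passing the variable on the command line might look like this; a sketch, assuming the backend is published on port 8080 of the Docker host (host.docker.internal resolves to the host on Docker Desktop; on plain Linux, the bridge address or a shared Docker network would be used instead):

```shell
docker build -t company/project-ui ./project/project-ui
docker run -it \
  -p 4200:4200 \
  -v "$(pwd)/project/project-ui:/opt/app:rw" \
  -e PROJECT_SERVICE="http://host.docker.internal:8080" \
  company/project-ui
```

The -e flag injects the variable into the container's environment, where the entry script picks it up.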

However, as we now have two containers running in tandem, it is high time to combine the two into a single development environment.

Putting Service and UI applications together

The simplest way to orchestrate two or more containers is with docker-compose. Compose simplifies building and wiring up multiple containers.

An example configuration in ./project/docker-compose.yaml can look as follows:

version: "3.3"

services:
  project-ui:
    build:
      context: ./project-ui
    volumes:
      - ./project-ui:/opt/app:rw
    ports:
      - 4200:4200
    environment:
      PROJECT_SERVICE: "http://project-service:8080"
  project-service:
    build:
      context: ./project-service
    volumes:
      - ./project-service:/opt/app:rw
    ports:
      - 8080:8080

In this configuration, we have specified the build context for both projects. Whenever we make changes to any of the Dockerfiles, all that is necessary is to run $ docker-compose build inside the ./project directory.

The composition of Docker containers defined in the example above solves the following:

  • exposes the default ports, so that both the Ember app and the Spring endpoints are accessible through localhost,
  • enables resolution of the service API by its internal hostname, thus allowing Ember.js to proxy requests to the Spring Boot application running in the project-service container,
  • predefines volumes so that both the -ui and -service projects are mounted to /opt/app in their respective containers.

Starting the containers with Compose will bootstrap both the frontend and backend applications.

$ docker-compose up

The frontend can be accessed at http://localhost:4200.

Summary

Creating application containers requires a bit of effort. However, that effort pays off by creating a reproducible, isolated development environment.

It is not much more work to add PostgreSQL, RabbitMQ, Kafka or other service container when needed.
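As a sketch, adding a database could be as simple as one more service entry in docker-compose.yaml; the image tag and credentials below are illustrative:

```yaml
# Fragment to append under `services:` in ./project/docker-compose.yaml
  project-db:
    image: postgres:12
    environment:
      POSTGRES_USER: project       # hypothetical credentials, development only
      POSTGRES_PASSWORD: project
    ports:
      - 5432:5432
```

The backend can then reach the database by its service hostname, project-db, on the Compose network.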

The added benefit is improved security, making it a bit harder for a rogue NPM package to, for example, deploy a crypto-miner script onto our development workstation.

Once everything is set up, a single command runs the full application stack, streamlining development, testing and product iterations.

Copyright © 2017–2020, Input Objects GmbH; all rights reserved.