
Dockerization of Standalone Applications

vshosting~
Step-by-step guide to dockerizing virtually any application type and ensuring its operation in a Docker container.

Most manuals for application dockerization that you’ll find online are written for a specific language and environment. Here, however, we will look at general guidelines that apply to virtually any type of application and show you how to ensure their operation in Docker containers.

Base Image Selection

For trouble-free operation and simple future changes and upgrades, choosing a suitable (and actively maintained) base image is critical. Considering that absolutely anyone can upload an image to Docker Hub, it is advisable to take a close look at your selected image and make sure it contains no malicious software or, for example, outdated library versions with known security issues.

Images labeled as “Docker Certified” are a good choice to start with, as that status provides some assurance that the image is legitimate and regularly updated. Good examples of such images are PHP or Node.js.

Furthermore, we can recommend the collection from Bitnami, which contains a number of ready-made images of applications and development environments.
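To illustrate, a Dockerfile would simply start from one of these maintained images. The sketch below assumes a Node.js application and a specific tag; pin whichever version your project actually targets:

    # Pin a specific, maintained base image instead of relying on "latest"
    FROM node:20-bookworm-slim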

Additional Software Installation

Depending on the image you have chosen for your project, you may need to install extra software so that all prerequisites for smooth application operation are fulfilled.

The best solution is to use the package manager of the distribution the image is based on (usually Ubuntu/Debian, Alpine Linux, or CentOS). It is also very important to keep the list of installed software as narrow as possible, i.e. not to install text editors, compilers, and other development tools into the containers.
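On a Debian/Ubuntu based image, this could look roughly like the sketch below; the packages named here are purely illustrative and should be replaced with whatever your application actually needs at runtime:

    # Install only the runtime dependencies, in a single layer,
    # and remove the package index afterwards to keep the image small
    RUN apt-get update \
     && apt-get install -y --no-install-recommends ca-certificates curl \
     && rm -rf /var/lib/apt/lists/*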

Your Own Files in the Docker Image

You’ll also want to add your own files to the final image, be it configuration, source code, or the application’s binaries. In the Dockerfile, the ADD or COPY instructions are used for this; COPY is more transparent, but it does not support some advanced features such as unpacking archives into the image.
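A minimal sketch of both instructions follows; the paths and the archive name are hypothetical:

    # COPY is the transparent choice for plain files and directories
    COPY config/app.conf /etc/myapp/app.conf
    COPY dist/ /opt/myapp/

    # ADD can additionally unpack a local archive into the image
    ADD myapp-1.0.tar.gz /opt/myapp/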

Authorisation Definition

Avoid running the application in the container as the root user, even though it is the easiest way. Doing so poses many security risks and increases the chance of a container escape if the application becomes compromised or if a security flaw in third-party software you are using is exploited.
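In practice this usually means creating a dedicated user in the Dockerfile and switching to it; the user and group names below are assumptions:

    # Create an unprivileged user and run the application under it
    RUN groupadd --system myapp \
     && useradd --system --gid myapp --create-home myapp
    USER myapp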

Service Port Definition

If your application does not run as the root user and has no enhanced capabilities (such as CAP_NET_BIND_SERVICE), it cannot bind to the so-called privileged ports (below 1024). With Docker, however, that is not necessary. Have the application listen on any higher port (e.g. 8080 and 8443 instead of 80/443 for a web server) and map the ports via Docker’s parameters.
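As a sketch, a web application listening on port 8080 inside the container can still be reached on port 80 of the host; the image name is illustrative:

    # In the Dockerfile: document the port the application listens on
    EXPOSE 8080

    # On the host: map the privileged port 80 to the container's port 8080
    docker run -d -p 80:8080 myapp:latest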

Running the Application in the Container

However easy it is to run your application’s binary (or a web server, Node.js, etc.) directly, a more robust approach is to create your own so-called entrypoint: a script that performs the initial application configuration, can react to environment variables, and so on. A good example of this approach can be found in the official PostgreSQL image.
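The sketch below shows the general idea, not the PostgreSQL original; the script name, paths, and the myapp binary are assumptions:

    #!/bin/sh
    # docker-entrypoint.sh: perform one-time setup, then hand control to the app
    set -e

    # React to environment variables, e.g. generate a config file on first start
    # (the target directory must be writable by the runtime user)
    if [ ! -f /etc/myapp/app.conf ]; then
        echo "listen_port = ${APP_PORT:-8080}" > /etc/myapp/app.conf
    fi

    # Replace the shell with the application so it receives signals directly
    exec "$@"

In the Dockerfile, the script is then wired up as follows:

    COPY docker-entrypoint.sh /usr/local/bin/
    RUN chmod +x /usr/local/bin/docker-entrypoint.sh
    ENTRYPOINT ["docker-entrypoint.sh"]
    CMD ["myapp", "--config", "/etc/myapp/app.conf"]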

Configuration Methods

Most applications require correct configuration to run properly. It is certainly possible to use a configuration file directly (e.g. in a directory mounted from outside the container), but in most cases it is better to use an entrypoint script that prepares the proper configuration for the application from a template and the container’s environment variables.
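One common way to do this is to render a template with envsubst (shipped in the gettext package of most distributions) inside the entrypoint; the template path and the variables it references are illustrative:

    # Inside the entrypoint: fill the template with the container's
    # environment variables before starting the application
    envsubst < /etc/myapp/app.conf.template > /etc/myapp/app.conf
    exec "$@"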

Application Data

Avoid saving data to the container filesystem: in the standard configuration, all of it is lost when the container is recreated. Use bind mounts (a host directory mounted into the container) or Docker volumes.
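For example, with a hypothetical myapp image storing its data in /var/lib/myapp/data:

    # Bind mount: a host directory mounted into the container
    docker run -d -v /srv/myapp/data:/var/lib/myapp/data myapp:latest

    # Named volume: storage created and managed by Docker itself
    docker volume create myapp-data
    docker run -d -v myapp-data:/var/lib/myapp/data myapp:latest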

In addition, you need to figure out how to save or ship logs. The best option is certainly centralised logging for all of your applications (e.g. the ELK stack); however, even basic remote syslog does the job well enough.
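If the application writes its logs to stdout/stderr, Docker’s logging drivers make the syslog variant a one-line change; the server address below is an assumption:

    # Ship container logs to a remote syslog server instead of local JSON files
    docker run -d \
      --log-driver=syslog \
      --log-opt syslog-address=tcp://logs.example.com:514 \
      myapp:latest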

What next?

There is always room for improvement. Beyond the scope of this article are different configuration management options, the ELK stack for logging, collecting application and system metrics with Prometheus, and achieving load balancing and high availability for your application with Kubernetes, which we at vshosting~ will gladly build for you and tailor to your application’s needs :)