Introduction

Docker is open source software developed by Docker Inc. that allows you to build containers running applications within a specific environment along with their dependencies. A Docker container can then run on any host running a compatible version of the Docker engine, making it agnostic of its host. An image defines a container; an image becomes a container when it runs. A container is not meant to be eternal; on the contrary, containers offer high flexibility and make it easy to deploy applications into various environments.

Docker provides a set of images that you can load to build your containers. You can then add your source code – and many other things – and run it within the container.


Figure 1: Very simple interpretation of running an application within a Docker container

For instance, running a simple website can be done by building a container from the Nginx image and adding the Nginx server configuration and your website code to the container.
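
A minimal sketch of that scenario – the ./my-site folder and ./my-site.conf file are placeholders for your own content and configuration:

# Run Nginx, mounting our content into its default document root
docker run -d --name my-website \
    -p 8080:80 \
    -v $(pwd)/my-site:/usr/share/nginx/html:ro \
    -v $(pwd)/my-site.conf:/etc/nginx/conf.d/default.conf:ro \
    nginx:latest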

Difference between virtual machines and containers

A virtual machine is an emulation of a computer running a specific OS – Operating System – on a single piece of hardware or a pool of hardware managed by a hypervisor, whereas a container is a piece of software that is managed by a container engine – like Docker – and runs directly on the host operating system. Each VM needs its own underlying OS, whereas containers all run on the host OS. A VM virtualises the server, while a container virtualises the application runtime environment. Hence, the host for your container engine can itself be a virtual machine.

Figure 2: Difference between virtual machines and containers

Virtual machines are widely used. They became an effective way to reduce costs in data centers as server virtualisation grew more and more affordable, allowing a more adaptive resource usage.

By definition, a container does not embed virtual computer components; each container shares the host OS kernel, making it lighter and faster to start. Not loading multiple virtual computer components leaves far more resources to run additional containers/applications than you would have using virtual machines.

Quick overview

The following paragraphs are not meant to dive deep into the technology but merely to give a hint of how Docker works, and perhaps help you troubleshoot your first steps launching containerised applications.

Docker images

A Docker image is a file used by Docker to run containers. A Docker image is composed of layers. Most of the time, the base layer is a base image provided either by Docker HUB or by other sources. Every component added by a user to a base image is committed as an additional layer to that image.

By default, you don't have any images loaded on your computer; they are pulled from the default registry – Docker HUB – when you run your first containers. Alternatively, you can choose to pull images from another registry manually.
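
You can also pull an image explicitly before running it. A short sketch – the second line uses a hypothetical private registry address for illustration:

$: docker pull nginx:latest
$: docker pull registry.example.com/my-team/my-image:1.0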

You can test the process by loading Docker’s default image hello-world.

$: docker run hello-world

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:ca0eeb...
Status: Downloaded newer image for hello-world:latest

When running the hello-world image, Docker pulls the image from Docker HUB as it is not on your computer yet. You can view the images saved on your computer with:

$: docker image list
REPOSITORY TAG IMAGE ID CREATED SIZE
hello-world latest fce289e99eb9 6 weeks ago 1.84kB

When a container is created from an image, Docker adds a writable layer on top of the image layers, called the container layer, which holds the changes performed by the container at run-time. This top layer is destroyed with the container.

Figure 3: Simplified view of image’s layers
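
You can observe this writable container layer with docker diff, which lists files added (A), changed (C) or deleted (D) at run-time. A quick sketch using a throwaway Nginx container – layer-test is an arbitrary name:

$: docker run -d --name layer-test nginx
$: docker exec layer-test touch /tmp/hello
$: docker diff layer-test
C /tmp
A /tmp/hello
...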

View image layers

It is possible to see all changes performed on an image; let's inspect the hello-world image we just loaded. First, we get the image id from docker image list. Then we use the history command:

$: docker history [image id]
IMAGE CREATED CREATED BY SIZE
fce289e99eb9 6 weeks ago /bin/sh -c #(nop) CMD ["/hello"] 0B
<missing> 6 weeks ago /bin/sh -c #(nop) COPY file:f77… 1.84kB

This command outputs the different layers of the hello-world image. We can see each command used to commit a new layer. The <missing> statement is not an error; it indicates that the layer was built on another system – the image was pulled from a registry – so the intermediate image is not available locally. This article covers the topic in detail for those interested.

Docker HUB

Docker HUB is Docker's official registry and holds the most extensive library of container images. With a Docker HUB account, you can push your custom images to the registry.
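
The typical flow looks like this – your-username/my-image is a placeholder for your own repository name:

$: docker login
$: docker tag my-image:latest your-username/my-image:latest
$: docker push your-username/my-image:latest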

Dockerfiles

A dockerfile is a file containing an instruction set readable by Docker to build images. With dockerfiles, users can create automated builds. To build an image from a dockerfile, run:

docker build -f [dockerfile] [build context]

A dockerfile is a list of instructions, each followed by its arguments:

#dockerfile
FROM nginx
RUN echo 'Building image from nginx base image'

Here is a list of the main instructions, with a short description:

FROM: The only mandatory instruction of the dockerfile. It is used to specify the base image for the build. If the image does not exist locally, it is pulled from a registry – Docker HUB by default if no other is specified.

FROM image:tag

MAINTAINER: Takes any string as a valid argument; this instruction specifies the name of the author of the build. Note that it is deprecated in recent Docker versions in favour of a maintainer LABEL.

MAINTAINER author name

LABEL: It is possible to add custom metadata to a docker image by using the label instruction. A label is a key-value pair.

LABEL "Label_Key=Label_Value"

EXPOSE: Tells Docker that the running container will be listening on a port or list of ports. It does not:

  • Bind the container port on any host port
  • Alter the host port configuration

Exposing a port makes it accessible to other containers on the same Docker network.

EXPOSE 80
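
To actually bind an exposed port to a host port, you still need the -p option at run-time. For instance, the following publishes container port 80 on host port 8080 – something EXPOSE alone would not do:

docker run -d -p 8080:80 nginx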

ADD: Copies files from the host or a remote URL to the specified destination on the container. It differs from the COPY instruction in that it can also extract local compressed archives while copying them.

ADD /path/to/sources /destination/on/container

COPY: Also copies files to the specified destination on the container, but only from the host build context: it cannot fetch remote URLs and will not extract compressed files.

COPY /path/to/sources /destination/on/container

RUN: Runs a command on top of the current image at build time. Any operation performed by RUN commits a new layer to the image.

RUN command

CMD & ENTRYPOINT:

CMD command
ENTRYPOINT command

A CMD instruction executes in the container at run-time, and so does ENTRYPOINT. When running a container using docker run, it is possible to specify a command to run. When specified, the docker run arguments override the CMD instruction, whereas they are appended as arguments to the ENTRYPOINT instruction.
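
A small sketch to illustrate the difference – this image is hypothetical, not part of the workshop:

#dockerfile
FROM debian
ENTRYPOINT ["echo", "Hello"]
CMD ["world"]

Running docker run my-image prints "Hello world". Running docker run my-image Docker prints "Hello Docker": the extra argument replaces CMD but is appended to ENTRYPOINT.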

VOLUME: Used to create a volume that will be mounted on the specified path of the container. This instruction orders a volume creation but it does not:

  • Bind it to a host folder
  • Create a named volume

VOLUME /volume/path/on/container

USER: Specifies the user to use when running the container. It can also define the user's group; the root group is used if left unspecified.

USER username[:group]

WORKDIR: Sets the working directory on the container for the instructions that follow.

WORKDIR /path/on/container

ENV: Defines environment variables in the container.

ENV variable_name variable_value

ARG: Defines variables meant to be used during the image build stage, not in the running container. Values can be supplied at build time with the --build-arg option.

ARG variable_name[=default_value]

ONBUILD: Takes another dockerfile instruction to be executed when the built image is used as a base image in another build.

ONBUILD instruction arguments
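
Putting a few of these instructions together, here is a sketch of a dockerfile for the Nginx scenario described earlier – ./website is an assumed folder containing your static files:

#dockerfile
FROM nginx:latest
LABEL maintainer="you@example.com"
ENV APP_ENV production
WORKDIR /usr/share/nginx/html
COPY ./website .
EXPOSE 80

The base image already defines the CMD that starts Nginx, so we do not need to repeat it.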

Volumes

As discussed in the Docker image section, a running container manipulates data in a writable layer on top of its image. Because of this mechanism, data is not persistent within the Docker container: everything is dropped once the container is destroyed. Also, because everything happens in this writable layer, the container is tightly coupled to its host; you could not move this layer to another node. To add flexibility, Docker provides two methods to share the data written within the container:

Bind / Mount: a file or a folder stored on the host file-system is bound/mounted into the container. The container then has write permission over the host file/folder. Every modification made from the container is available on the host, and every modification made from the host is available in the container. This scenario offers the possibility to edit a container's folder content from a process other than Docker. Binding / mounting is not recommended for a production environment, as the container will rely on the host file-system to function correctly, undermining the core concepts of container autonomy and portability.

Figure 4: Bind / Mount scheme

Volume: A volume is a file or folder that is also on the host file-system but is managed exclusively by Docker. A volume should not be updated from outside the Docker engine, and it is persisted when a container is destroyed. It is the preferred way to store persistent data.

Figure 5: Volume scheme
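
For illustration, here is how both methods look with docker run – the paths and the app-data name are placeholders:

# Bind mount: the host folder ./project is visible inside the container at /app
docker run -v $(pwd)/project:/app nginx

# Named volume: created and managed by Docker, persisted across containers
docker volume create app-data
docker run -v app-data:/app nginx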

Docker Compose

Docker Compose is a tool for defining and running multi-container applications. It uses YAML syntax to describe the images and Dockerfiles used to set up containers. Docker Compose helps you build complex applications and launch them with a single, simple command line. To configure a containerised application using Docker Compose, one must write a docker-compose.yml file. Here is an example of a simple docker-compose file:

#docker-compose.yml
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "9090:80"
        volumes:
            - ./project:/app
    php:
        image: php:7-fpm
        volumes:
            - ./project:/app

The docker-compose file references a set of services that will be containerised at run-time. Because we are defining our containers, we provide a base image – here from Docker HUB, but it can be any other registry. If we look closely at every service definition, we can recognise the options of the docker run command, for instance: use a volume, specify a port configuration, and so on.

To run the previously defined docker-compose file, use:

docker-compose up

The down option, as opposed to up, stops and deletes the previously launched containers:

docker-compose down
Figure 6: docker-compose up process

Similarly to docker build, there is a docker-compose build command:

docker-compose build

The build option looks for a build scenario in the docker-compose.yml file.

#docker-compose.yml
...
services:
    php:
        image: [point to registry]
        build:
            context: .
            dockerfile: php.dockerfile
...

Here is how you define a build scenario in your docker-compose file. When running docker-compose up, the build instruction is ignored as long as the image already exists. When running docker-compose build, the image is rebuilt and tagged with the name given after the image instruction. The context specifies a folder on the host file-system, and dockerfile names the dockerfile to load from that directory. So the next time you run docker-compose up, the last image built will be loaded.
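
In practice, you rebuild and restart the application with:

docker-compose build
docker-compose up -d

Alternatively, docker-compose up --build performs both steps in one command.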

Figure 7: docker-compose build process
  • Define the app environment with a Dockerfile
  • Define the services that make up the application in docker-compose.yml so they can be run together in an isolated environment
  • Run docker-compose up for Compose to start running your application

For full documentation of docker-compose, see the official documentation.

Network modes

We defined a container as a ready-to-run piece of software that exists on our host. We now know that a container can interact with the host file-system, but what about interactions between containers? Or interactions between a container and an external system? A solution: Docker networks!

Docker defines virtual networks, each holding a specific configuration. The default network created by Docker is the bridge network, attached to the docker0 interface on the host. You can get the list of networks generated by Docker using:

$: docker network list
NETWORK ID NAME DRIVER SCOPE
14bfab05a731 bridge bridge local

The bridge is the default network. We can inspect its configuration using:

docker inspect [network-id]

This will output network information such as the network driver, IP subnet, running containers, etc. Inspecting the default network should show the docker0 configuration.

The docker0 bridge is the default network that every container launched on the host joins if not configured otherwise. What is bridge mode?

Bridge mode

Bridge is the default mode. It is a single-host configuration mode, meaning that the network only exists on the host it is running on. Every container joining the same bridge network is reachable from the others. It is possible to create a user-defined bridge that behaves a bit differently – for instance, containers on it can resolve each other by name, as sketched below.

Figure 8: Bridge mode scheme
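
A quick sketch of a user-defined bridge – the network and container names are arbitrary:

$: docker network create my-bridge
$: docker run -d --name web1 --network my-bridge nginx
$: docker run -d --name web2 --network my-bridge nginx

On a user-defined bridge, web1 and web2 can reach each other by container name, which the default bridge does not provide.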

Host mode

Host mode removes network isolation to use the host network driver directly. This option is only available on Linux so far. Every configuration made to run containers on a host-mode network will affect the host. For instance, if the container is listening on port 80, the host will be listening on port 80 as well.

Overlay mode

This mode is used when you wish to connect containers running on separate hosts. Thanks to technologies such as VXLAN it is possible to extend a local network over distributed nodes. This mode is used when running Docker in Swarm mode.

Figure 9: Overlay mode scheme

Macvlan mode

Using Macvlan mode means giving a Docker container access to the physical network. By assigning the container a MAC address, it appears as a physical device on the network.
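
A sketch of creating such a network – the subnet, gateway and parent interface are assumptions that must match your own physical network:

docker network create -d macvlan \
    --subnet=192.168.1.0/24 \
    --gateway=192.168.1.1 \
    -o parent=eth0 my-macvlan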

None

The none option tells Docker to disable the network abstraction on a container. Only the loopback interface will remain on the said container.
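
You can verify this with a throwaway container; only the loopback interface should be listed:

$: docker run --rm --network none alpine ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 ...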

WORKSHOP – Run a fresh Lumen install on Docker containers

We will now try to run a fresh Lumen install on Docker containers. The following commands are executed on an OVH VPS 1 SSD – 2.99€ – running Debian 9. Except for a simple SSH user configuration and a UFW & Git install, no operations were performed. Let's start by logging into our server over SSH with a sudoer user and running the classic:

apt-get update

Install prerequisite packages

There are a few packages to install before installing Docker; here is the command line:

apt-get install apt-transport-https ca-certificates curl gnupg2 software-properties-common 

For the curious out there, here is a short description of what you are loading:

  • apt-transport-https: enables HTTPS access to packages and metadata for the package manager via libapt-pkg.
  • ca-certificates: allows SSL-based applications to check the authenticity of SSL connections.
  • curl: installs the curl utility.
  • gnupg2: free encryption software compliant with OpenPGP; it allows encrypting and decrypting files containing sensitive data, based on a unique encryption key.
  • software-properties-common: allows you to easily manage your distribution and independent software vendor sources.

Add the Docker repository to APT sources

We will be installing Docker from the official Docker repository, which we need to add to our APT program sources. We can manage repository keys using the apt-key utility. apt-key is a program that handles the keys of repositories used for apt packages. These keys allow verification of the installed packages, to prevent fetching packages from untrustworthy sources.

Firstly, we download the official Docker repository key and pass it to apt-key, which will save it on our system.

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add - 

Now that we have the key to check for packages from the official Docker repository, we can add it to our APT sources:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable" 

Once completed, we can update the package database:

apt-get update

To make sure that you’ll be installing Docker from the Docker repository, run:

apt-cache policy docker-ce

And check that the candidate package will be loaded from the Docker repository:

Figure 10: The candidate package is 5:18.09.1~3-0~debian~stretch which is pointing to docker.com

It’s all set; we can now install Docker!

Install and run Docker

apt install docker-ce

Once completed, the Docker service should have started. You can check its status using systemctl:

systemctl status docker
Figure 11: the Docker service is running!

Install Docker Compose

We will also need Docker Compose, as we will be creating multiple containers loaded through a docker-compose file.

curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose

In order to run the docker-compose command, make sure it is executable with:

chmod +x /usr/local/bin/docker-compose

This chmod option tells your system to add the execute privilege to docker-compose. See this topic for more details.

Make sure everything works fine by typing docker-compose. This should print the command options:

Figure 12: Docker-compose’s truncated help prompt

Set the Lumen project

I created a simple Lumen project that does not do much, to be honest. We will use this project just to show how to run a basic Laravel / Lumen install on containers. The Git repository is accessible here. Once cloned, go to the project folder. A Lumen project needs to be initialised with a composer install command so that it can fetch dependencies into the /vendor folder – which is currently missing, as it is listed in the .gitignore file.

Instead of downloading Composer on our server just to run that command, let's use Docker to create a temporary container that runs Composer, and use it to install our dependencies! Such flexibility is a great example of how Docker can save time when setting up development / testing environments. Let's type our first docker command:

docker run --rm -v $(pwd):/app prooph/composer:7.2 install

If you check the folder you just cloned, you’ll find the /vendor folder filled with all of our project dependencies! How is that? Command line autopsy:

  • --rm: indicates that the container we requested should be destroyed once exited, meaning it will run only for the duration of the command execution.
  • -v [local folder]:[container folder]: indicates to the container that its /app folder should be mounted over the current directory – $(pwd) – meaning:
    • The container will find the composer.json file on the host filesystem
    • The container will load modules into /vendor, which will be persisted on the host filesystem
  • prooph/composer:7.2: specifies the image to use for container initialisation. Here we use the 7.2 version of prooph/composer since it ships PHP 7.2, which is a requirement for Lumen 5.7.
Figure 13: The temporary container loading vendor dependencies into the bound folder.

With all the dependencies loaded, we can move towards setting up our website environment. We will need the following components:

  • Nginx: for web server features, being able to serve pages at a given address on a specific port
  • php-fpm: to handle PHP execution

Creating a simple docker-compose file

Let’s start by creating a docker-compose file. You can choose to create this file anywhere on your host, as long as you can run the docker-compose command on it.

touch docker-compose.yml

We’ll begin with a simple docker-compose structure:

#docker-compose.yml
version: '3'
services:
...

The version specifies which syntax our docker-compose engine should expect. The services directive will hold our component list – Nginx and php-fpm. For now, let's try to build a simple container running a default Nginx image:

#docker-compose.yml
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "9090:80"

The above definition tells Docker to create a service called web built from the latest Nginx image. The ports directive binds host port 9090 to container port 80. Thus, we will be able to access our Nginx instance at the following address: [host URL / IP]:9090.

Figure 14: Our current setup, running an Nginx container on our host.

To launch your container using docker-compose, just run the following command line:

docker-compose up

The docker-compose command looks for a file named docker-compose.yml by default. If you wish to run a different file, use the -f option. You should be prompted with a message similar to:

Starting root_web_1 ... done
Attaching to root_web_1

If so, you can test the result on your host's port 9090:

Figure 15: Nginx default page should be displayed from host

When refreshing the page, the access logs should be visible in your host terminal. If you have any trouble during the docker-compose launch, take a look at the errors displayed in the terminal. If the container is running but you can't access port 9090, make sure your configuration allows TCP traffic on that port.

Troubleshooting: check if port 9090 is open on your host

$: apt-get install net-tools
$: netstat -tuplen

Netstat can help you investigate your host port configuration. The -tuplen combo gives a list of open ports per program name. Check if you see a docker-proxy program. If not, your docker-compose was interrupted; otherwise check the corresponding port to track down a possible typo.

$: netstat -tuplen
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State User Inode PID/Program name
...
tcp6 0 0 :::9090 :::* LISTEN 0 36510548 20209/docker-proxy
...

note: Docker should update your host configuration to open port 9090 from the outside if it is not already open. The port will be closed when the container is stopped.

If the port is open but you still can't reach your host, try a curl from the host:

curl localhost:9090

If you get a response from Nginx, it means that your container works fine, at least from localhost. Maybe your host IP address is not reachable from your local computer; make sure you can ping/curl it.

Adding a php handler

The PHP handler will be used to, well, handle PHP. We’ll be loading php-fpm for our PHP execution. For more information about PHP handlers, you can refer to this topic. Let’s update the docker-compose file with the following:

#docker-compose.yml
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "9090:80"
    php:
        image: php:7-fpm

We define another service ‘php’ that will run in a separate container built from the php image. We add the -d option to run the containers in detached mode – so you can still use your prompt:

docker-compose up -d

Again you should see the services status as:

Starting root_web_1 ... done
Creating root_php_1 ... done
Figure 16: Current setup, both containers running

No changes so far. We have another container running php-fpm that listens on port 9000 – by default – but it does not interact with our Nginx service yet.

Get the list of running containers

docker container list
Figure 17: Displaying the two containers we just created

Stop and destroy running containers

$: docker-compose down
Stopping root_php_1 ... done
Stopping root_web_1 ... done
Removing root_php_1 ... done
Removing root_web_1 ... done

The docker-compose up and down operations can be performed as many times as you want.

Mounting containers on host project directory

Now that we know how to launch containers that can run a Lumen application, let's add some code. In our scenario, the project source code should be accessible to both services: Nginx needs to access the code to have resources to serve, and to pass PHP execution to the php-fpm service. Let's rewrite our docker-compose file a bit:

#docker-compose.yml
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "9090:80"
        volumes:
            - ./:/[target directory on your container]
    php:
        image: php:7-fpm
        volumes:
            - ./:/[target directory on your container]

note: the destination folder on both containers must match; if not, the server will output a "File not found." error. Why? When Nginx calls php-fpm, it sends the script path as seen from its own filesystem, and FPM must be able to access that same path.
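
For instance, assuming we pick /app as the target directory, both services would declare:

        volumes:
            - ./:/app

and the root directive in the Nginx configuration shown later would become /app/public.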


Figure 18: Current setup, both containers running & sharing a same volume containing the Lumen application code

Warning: this method gives both containers WRITE access to the host filesystem. Any change made from a container will be saved on the host, and any change made from the host will be available to the containers.

Adding Nginx configuration file

If we run docker-compose up, the prompt tells us that both services are running, but when trying to access our web page, nothing has changed. We are missing a piece of Nginx configuration. Because the purpose of this workshop is not Nginx configuration, let's just copy-paste the following file:

#site.conf
server {

    index index.php;
    server_name [your server name];
    error_log /var/log/nginx/error.log;
    access_log /var/log/nginx/access.log;
    root /[your project directory]/public;

    # Handles routing
    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    # Example rule to disable access to files beginning with ".ht"
    location ~ /\.ht {
        deny all;
    }
}

note: this is not meant to be a reference; do not use such a file for a production environment.

note bis: the root directory in the Nginx configuration must match the target directory on the container written in the docker-compose.yml file.

note ter: I don't have a domain name pointing to my VPS, so I defined the server name in the hosts file on my local computer. You can do the same if you wish to, or simply replace this line with

server_name _;

These directives describe a root directory, a server name, and logs for access and errors. The way to pass execution to php-fpm is described in the location block: here we pass executions to the other container – which we called php in the docker-compose file – on port 9000, used by default. The other container is accessible using its hostname, thus: php:9000.

fastcgi_pass php:9000;

Updating the docker-compose.yml file to include nginx configuration:

#docker-compose.yml
version: '3'
services:
    web:
        image: nginx:latest
        ports:
            - "9090:80"
        volumes:
            - ./:/[target directory on your container]
            - ./site.conf:/etc/nginx/conf.d/default.conf
    php:
        image: php:7-fpm
        volumes:
            - ./:/[target directory on your container]

/etc/nginx/conf.d/default.conf is the default file for Nginx configuration; therefore it is where we need to place our file in order to override the default Nginx configuration.

Figure 19: Final setup

Launching our simple Lumen app using Docker

We are all set! We finally have a valid docker-compose file; the only thing left to do to start our Lumen application is to run the docker-compose up command and head to our host on port 9090:

Figure 20: Our Lumen application is served from Docker containers!

Here it is, our Lumen app is online. Now, if we wish to deploy to another computer, that computer just needs to run the Docker engine and have docker-compose installed. All environment installation & configuration is now scripted and handled by Docker.

Workshop - Troubleshooting

Check if port 9000 is open on your php-fpm container

Identify the container running your php-fpm, get its id, and open a bash shell in it with docker exec -it [container-id] bash. Then install netstat and run netstat -tuplen. You should see the port in state LISTEN.
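
The full sequence looks like this – the container id is the one returned by docker container list:

$: docker container list
$: docker exec -it [container-id] bash
:/# apt-get update && apt-get install net-tools
:/# netstat -tuplen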

Check if containers can access each other

By default, when creating an application using docker-compose, Docker creates a new bridge network configuration. Every container created from the compose file joins this network and is assigned a hostname. By default, every container in this network has access to the others. To check this out, please refer to the next section, "Test Docker container name resolution".

To do more

Here are a few examples of things you can try if you wish to manipulate Docker a bit more, based on the previous Workshop.

Test Docker container name resolution

To check that a container has access to another, we are going to log into the Nginx container, install the ping utility and try to ping the php container using its hostname 'php'. To get a terminal inside a container, first identify the container you want to log into:

$: docker container list
CONTAINER ID IMAGE
9d8531acd2f5 nginx:latest

Once you have identified the container you wish to log into, run the following command:

docker exec -it [container ID] bash

Your prompt should change, indicating that you are inside a container:

user@9d8531acd2f5:/# 

We'll need to run the ping command, which is not available yet in our container. Just install it:

apt-get update
...
apt-get install iputils-ping

First, try to ping the container itself using its hostname – here we are logged into the web container:

:/# ping web
PING web (172.20.0.2) 56(84) bytes of data.
64 bytes from 9d85 (172.20.0.2): icmp_seq=1 ttl=64 time=0.037 ms
64 bytes from 9d85 (172.20.0.2): icmp_seq=2 ttl=64 time=0.216 ms
...

Here, we can see our container's IP address, 172.20.0.2, reachable using its hostname web. Let's now try to ping the php container!

:/# ping php
PING php (172.20.0.3) 56(84) bytes of data.
64 bytes from root_php_1.root_default (172.20.0.3): icmp_seq=1 ttl=64 time=0.121 ms
64 bytes from root_php_1.root_default (172.20.0.3): icmp_seq=2 ttl=64 time=0.160 ms
...

There, we can see that the php container is accessible from the Nginx container using hostname resolution. Also, the php container's IP address is 172.20.0.3.

Inspect Docker network

We have seen IP addresses from inside the containers; let's find the same information using docker network. Firstly, we can display the list of network configurations created by Docker with:

$: docker network list
NETWORK ID NAME DRIVER SCOPE
...
4dbf2d34d81a root_default bridge local

Here we have information about the name, the driver used and the scope. You have an overview of all network modes in the first section of the article. The scope can either be local, for a configuration limited to the current host, or swarm, for a network configuration spanning a pool of Docker nodes. Using the ID, we can inspect the network configuration generated by Docker when launching our workshop:

$: docker network inspect 4dbf2
[
    {
        "Name": "root_default",
        "Id": "4dbf...",
        "Created": "2019-02-13T17:08:54.982251675+01:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.20.0.0/16",
                    "Gateway": "172.20.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": true,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0584...": {
                "Name": "root_web_1",
                "EndpointID": "91bb",
                "MacAddress": "02:42:ac:14:00:02",
                "IPv4Address": "172.20.0.2/16",
                "IPv6Address": ""
            },
            "a5f7": {
                "Name": "root_php_1",
                "EndpointID": "6f4e",
                "MacAddress": "02:42:ac:14:00:03",
                "IPv4Address": "172.20.0.3/16",
                "IPv6Address": ""
            }
        },
        "Options": {},
        "Labels": {
            "com.docker.compose.network": "default",
            "com.docker.compose.project": "root",
            "com.docker.compose.version": "1.23.2"
        }
    }
]

The network created for our Lumen app is in bridge mode, and the subnet is 172.20.0.0/16. Both containers are listed under the names root_web_1 and root_php_1, with IPs 172.20.0.2 and 172.20.0.3 respectively.
