
    Docker

    Getting our App

    Download the app's source code as a zip file and extract it into a folder on your machine.

    Building the App's Container Image

    Create a file named Dockerfile in the same folder as the app's package.json with the following contents:

    FROM node:12-alpine
    RUN apk add --no-cache python g++ make
    WORKDIR /app
    COPY . .
    RUN yarn install --production
    CMD ["node", "src/index.js"]

    Then build the container image with the docker build command:

    docker build -t getting-started .
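
    If you want to confirm the build produced an image, one quick check is to list it (docker image ls accepts a repository name to filter on):

    docker image ls getting-started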

    Starting an App Container

    docker run -dp 3000:3000 getting-started

    After a few seconds, open your web browser to http://localhost:3000. You should see our app!
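
    If you later need to stop or replace this container (for example, to start an updated build), you can look up its ID with docker ps and remove it in one step with docker rm -f, which stops and deletes the container:

    docker ps
    docker rm -f <container-id>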

    Create a Repo

    • Go to Docker Hub and log in if you need to.
    • Click the Create Repository button.
    • For the repo name, use getting-started. Make sure the Visibility is Public.
    • Click the Create button!

    Pushing our Image

    • Log in to Docker Hub.
    docker login -u YOUR-USER-NAME
    • Use the docker tag command to give the getting-started image a new name. Be sure to replace YOUR-USER-NAME with your Docker ID.
    docker tag getting-started YOUR-USER-NAME/getting-started
    • Now try the push command. If you're copying the value from Docker Hub, you can drop the tagname portion, as we didn't add a tag to the image name. If you don't specify a tag, Docker will use a tag called latest.
    docker push YOUR-USER-NAME/getting-started
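
    To check that the push worked, you can run the image straight from the registry on any machine with Docker installed (a Play with Docker instance works well for this):

    docker run -dp 3000:3000 YOUR-USER-NAME/getting-started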

    Persisting our Todo Data

    • Create a volume by using the docker volume create command.
    docker volume create todo-db
    • Start the todo app container, but add the -v flag to specify a volume mount. We will use the named volume and mount it to /etc/todos, which will capture all files created at the path.
    docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started
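
    To verify that the data really persists, add a few items in the app, remove the container, and start a new one with the same volume mount; your items should still be there:

    docker rm -f <container-id>
    docker run -dp 3000:3000 -v todo-db:/etc/todos getting-started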

    Diving into our Volume

    People frequently ask, "Where is Docker actually storing my data when I use a named volume?" If you want to know, you can use the docker volume inspect command.

    docker volume inspect todo-db
    [
        {
            "CreatedAt": "2019-09-26T02:18:36Z",
            "Driver": "local",
            "Labels": {},
            "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
            "Name": "todo-db",
            "Options": {},
            "Scope": "local"
        }
    ]

    The Mountpoint is the actual location on disk where the data is stored. Note that on most machines, you will need root access to view this directory from the host. But, that's where it is!

    Starting a Dev-Mode Container

    To run our container to support a development workflow, we will do the following:

    • Mount our source code into the container
    • Install all dependencies, including the "dev" dependencies
    • Start nodemon to watch for filesystem changes

    So, let's do it!

    1. Make sure you don't have any previous getting-started containers running.

    2. Run the following command. We'll explain what's going on afterwards:

      docker run -dp 3000:3000 \
          -w /app -v "$(pwd):/app" \
          node:12-alpine \
          sh -c "yarn install && yarn run dev"

      If you are using PowerShell then use this command.

      docker run -dp 3000:3000 `
          -w /app -v "$(pwd):/app" `
          node:12-alpine `
          sh -c "yarn install && yarn run dev"
      Breaking down that command:

      • -dp 3000:3000 - same as before. Run in detached (background) mode and create a port mapping
      • -w /app - sets the "working directory" or the current directory that the command will run from
      • -v "$(pwd):/app" - bind mount the current directory from the host into the /app directory in the container
      • node:12-alpine - the image to use. Note that this is the base image for our app from the Dockerfile
      • sh -c "yarn install && yarn run dev" - the command. We're starting a shell using sh (alpine doesn't have bash) and running yarn install to install all dependencies and then running yarn run dev. If we look in the package.json, we'll see that the dev script is starting nodemon.
    3. You can watch the logs using docker logs -f <container-id>. You'll know you're ready to go when you see this...

      docker logs -f <container-id>
      $ nodemon src/index.js
      [nodemon] 1.19.2
      [nodemon] to restart at any time, enter `rs`
      [nodemon] watching dir(s): *.*
      [nodemon] starting `node src/index.js`
      Using sqlite database at /etc/todos/todo.db
      Listening on port 3000

      When you're done watching the logs, exit out by hitting Ctrl+C.
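
    Keep in mind that this dev container runs straight from the bind-mounted source code; once you're happy with your changes, rebuild the production image so they are baked in:

    docker build -t getting-started .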

    Multi-Container Apps

    Container Networking

    Remember that containers, by default, run in isolation and don't know anything about other processes or containers on the same machine. So, how do we allow one container to talk to another? The answer is networking. You don't have to be a network engineer; the rule to remember is simple: if two containers are on the same network, they can talk to each other. If they aren't, they can't.

    Starting MySQL

    There are two ways to put a container on a network: 1) assign the network when starting the container or 2) connect an already-running container. For now, we will create the network first and attach the MySQL container at startup.

    1. Create the network.

      docker network create todo-app
    2. Start a MySQL container and attach it to the network. We're also going to define a few environment variables that MySQL will use to initialize the database (see the "Environment Variables" section in the MySQL Docker Hub listing).

      docker run -d \
          --network todo-app --network-alias mysql \
          -v todo-mysql-data:/var/lib/mysql \
          -e MYSQL_ROOT_PASSWORD=secret \
          -e MYSQL_DATABASE=todos \
          mysql:5.7

      If you are using PowerShell then use this command.

      docker run -d `
          --network todo-app --network-alias mysql `
          -v todo-mysql-data:/var/lib/mysql `
          -e MYSQL_ROOT_PASSWORD=secret `
          -e MYSQL_DATABASE=todos `
          mysql:5.7
    3. To confirm we have the database up and running, connect to it and verify the connection.

      docker exec -it <mysql-container-id> mysql -p

      When the password prompt comes up, type in secret. In the MySQL shell, list the databases and verify you see the todos database.

      mysql> SHOW DATABASES;
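
    If you're wondering how other containers will find this database, the --network-alias flag above makes the container reachable on the todo-app network under the hostname mysql. One way to see that for yourself is to start a throwaway container on the same network using the nicolaka/netshoot image (a troubleshooting image that ships with DNS tools such as dig) and resolve the alias:

    docker run -it --network todo-app nicolaka/netshoot
    dig mysql

    The dig output should include an A record for mysql pointing at the MySQL container's IP address on the network.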

    Running our App with MySQL

    The todo app supports the setting of a few environment variables to specify MySQL connection settings. They are:

    • MYSQL_HOST - the hostname for the running MySQL server
    • MYSQL_USER - the username to use for the connection
    • MYSQL_PASSWORD - the password to use for the connection
    • MYSQL_DB - the database to use once connected

    With all of that explained, let's start our dev-ready container!

    1. We'll specify each of the environment variables above, as well as connect the container to our app network.

      docker run -dp 3000:3000 \
      -w /app -v "$(pwd):/app" \
      --network todo-app \
      -e MYSQL_HOST=mysql \
      -e MYSQL_USER=root \
      -e MYSQL_PASSWORD=secret \
      -e MYSQL_DB=todos \
      node:12-alpine \
      sh -c "yarn install && yarn run dev"

      If you are using PowerShell then use this command.

      docker run -dp 3000:3000 `
      -w /app -v "$(pwd):/app" `
      --network todo-app `
      -e MYSQL_HOST=mysql `
      -e MYSQL_USER=root `
      -e MYSQL_PASSWORD=secret `
      -e MYSQL_DB=todos `
      node:12-alpine `
      sh -c "yarn install && yarn run dev"
    2. If we look at the logs for the container (docker logs <container-id>), we should see a message indicating it's using the mysql database.

      # Previous log messages omitted
      $ nodemon src/index.js
      [nodemon] 1.19.2
      [nodemon] to restart at any time, enter `rs`
      [nodemon] watching dir(s): *.*
      [nodemon] starting `node src/index.js`
      Connected to mysql db at host mysql
      Listening on port 3000
    3. Open the app in your browser and add a few items to your todo list.

      Connect to the mysql database and prove that the items are being written to the database. Remember, the password is secret.

      docker exec -it <mysql-container-id> mysql -p todos
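
      Once you're in the mysql shell, you can query the items directly. In the getting-started todo app they are stored in a table named todo_items (if your copy of the app differs, check its persistence code for the exact table name):

      mysql> select * from todo_items;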

    Using Docker Compose

    Installing Docker Compose

    If you installed Docker Desktop/Toolbox for either Windows or Mac, you already have Docker Compose! Play-with-Docker instances already have Docker Compose installed as well. If you are on a Linux machine, you will need to install Docker Compose separately; see the installation instructions in the Docker documentation.

    After installation, you should be able to run the following and see version information.

    docker-compose version

    Creating our Compose File

    1. At the root of the app project, create a file named docker-compose.yml.

    2. In the compose file, we'll start off by defining the schema version. In most cases, it's best to use the latest supported version. You can look at the Compose file reference for the current schema versions and the compatibility matrix.

      version: "3.7"
    3. Next, we'll define the list of services (or containers) we want to run as part of our application.

      version: "3.7"
      
      services:

    And now, we'll start migrating one service at a time into the compose file.

    Defining the App Service

    version: "3.7"
    
    services:
      app:
        image: node:12-alpine
        command: sh -c "yarn install && yarn run dev"
        ports:
          - 3000:3000
        working_dir: /app
        volumes:
          - ./:/app
        environment:
          MYSQL_HOST: mysql
          MYSQL_USER: root
          MYSQL_PASSWORD: secret
          MYSQL_DB: todos

    Defining the MySQL Service

    version: "3.7"
    
    services:
      app:
        # The app service definition
      mysql:
        image: mysql:5.7
        volumes:
          - todo-mysql-data:/var/lib/mysql
        environment: 
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: todos
    
    volumes:
      todo-mysql-data:
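
    Putting the two service definitions together, the complete docker-compose.yml looks like this:

    version: "3.7"

    services:
      app:
        image: node:12-alpine
        command: sh -c "yarn install && yarn run dev"
        ports:
          - 3000:3000
        working_dir: /app
        volumes:
          - ./:/app
        environment:
          MYSQL_HOST: mysql
          MYSQL_USER: root
          MYSQL_PASSWORD: secret
          MYSQL_DB: todos

      mysql:
        image: mysql:5.7
        volumes:
          - todo-mysql-data:/var/lib/mysql
        environment:
          MYSQL_ROOT_PASSWORD: secret
          MYSQL_DATABASE: todos

    volumes:
      todo-mysql-data: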

    Running our Application Stack

    Now that we have our docker-compose.yml file, we can start it up!

    1. Make sure no other copies of the app/db are running first (docker ps and docker rm -f <ids>).

    2. Start up the application stack using the docker-compose up command. We'll add the -d flag to run everything in the background.

      docker-compose up -d

      When we run this, we should see output like this:

      Creating network "app_default" with the default driver
      Creating volume "app_todo-mysql-data" with default driver
      Creating app_app_1   ... done
      Creating app_mysql_1 ... done

      You'll notice that the volume was created as well as a network! By default, Docker Compose automatically creates a network specifically for the application stack (which is why we didn't define one in the compose file).

    3. Let's look at the logs using the docker-compose logs -f command. You'll see the logs from each of the services interleaved into a single stream. This is incredibly useful when you want to watch for timing-related issues. The -f flag "follows" the log, so will give you live output as it's generated.

      After a moment, you should see output that looks like this...

      mysql_1  | 2019-10-03T03:07:16.083639Z 0 [Note] mysqld: ready for connections.
      mysql_1  | Version: '5.7.27'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
      app_1    | Connected to mysql db at host mysql
      app_1    | Listening on port 3000

      The service name is displayed at the beginning of the line (often colored) to help distinguish messages. If you want to view the logs for a specific service, you can add the service name to the end of the logs command (for example, docker-compose logs -f app).

    4. At this point, you should be able to open your app and see it running. And hey! We're down to a single command!

    Tearing it All Down

    When you're ready to tear it all down, simply run docker-compose down or hit the trash can on the Docker Dashboard for the entire app. The containers will stop and the network will be removed.

    Once torn down, you can switch to another project, run docker-compose up and be ready to contribute to that project! It really doesn't get much simpler than that!
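
    One caveat: by default, docker-compose down does not remove the named volumes defined in the compose file, so the MySQL data survives a teardown. If you also want to delete the volumes, add the --volumes flag:

    docker-compose down --volumes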

    Image Building Best Practices

    Image Layering

    Did you know that you can look at what makes up an image? Using the docker image history command, you can see the command that was used to create each layer within an image.

    1. Use the docker image history command to see the layers in the getting-started image you created earlier in the tutorial.

      docker image history getting-started
    2. You'll notice that several of the lines are truncated. If you add the --no-trunc flag, you'll get the full output.

      docker image history --no-trunc getting-started

    Layer Caching

    Let's look at the Dockerfile we were using one more time...

    FROM node:12-alpine
    RUN apk add --no-cache python g++ make
    WORKDIR /app
    COPY . .
    RUN yarn install --production
    CMD ["node", "src/index.js"]

    Going back to the image history output, we see that each command in the Dockerfile becomes a new layer in the image. You might remember that when we made a change to the image, the yarn dependencies had to be reinstalled. Is there a way to fix this? It doesn't make much sense to ship around the same dependencies every time we build, right?

    To fix this, we need to restructure our Dockerfile to support caching of the dependencies. For Node-based applications, those dependencies are defined in the package.json file. So, what if we copied only that file in first, installed the dependencies, and then copied in everything else? Then, we would only reinstall the yarn dependencies when there is a change to the package.json. Make sense?

    1. Update the Dockerfile to copy in the package.json first, install dependencies, and then copy everything else in.

      FROM node:12-alpine
      RUN apk add --no-cache python g++ make
      WORKDIR /app
      COPY package.json yarn.lock ./
      RUN yarn install --production
      COPY . .
      CMD ["node", "src/index.js"]
    2. Create a file named .dockerignore in the same folder as the Dockerfile with the following contents.

      node_modules

      .dockerignore files are an easy way to selectively copy only image-relevant files. You can read more about them in the Docker documentation. In this case, the node_modules folder should be omitted in the second COPY step because otherwise it could overwrite files that were created by the command in the RUN step. For further details on why this is recommended for Node.js applications and other best practices, have a look at the Node.js guide on Dockerizing a Node.js web app.

    3. Build a new image using docker build.

      docker build -t getting-started .
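
    To see the cache in action, edit any source file other than package.json or yarn.lock and build again; the yarn install step should be reused from the cache, so the rebuild finishes much faster. Comparing the layer history before and after the change is one way to confirm which layers were rebuilt:

    docker build -t getting-started .
    docker image history getting-started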
