# Intro

Welcome to the Docker on Windows workshop :)

---

## Agenda

The workshop starts with an overview presentation. I'll talk about Docker and Windows containers, and tailor the content to the group - it can be a Docker 101 for folks new to containers, or a comparison of Windows and Linux containers for folks already comfortable with Docker on Linux.

---

## Connect to your VM

You'll be given the connection details for your Windows Server 2016 VM.

- [Check your setup](https://github.com/sixeyed/docker-windows-workshop/blob/devsum18/setup.md)

Then we'll work together through the tasks in the workshop.

---

## Goals for the day

The morning workshop covers Parts 1-4:

- [Part 1](part-1.md) - running Docker on Windows
- [Part 2](part-2.md) - packaging an existing ASP.NET app in Docker
- [Part 3](part-3.md) - modernizing the ASP.NET app with Docker
- [Part 4](part-4.md) - preparing for production with instrumentation

The full-day workshop continues in the afternoon with Parts 5-7:

- [Part 5](part-5.md) - resilience and scalability with Docker Compose
- [Part 6](part-6.md) - containerized CI (and CD) with Docker
- [Part 7](part-7.md) - production-ready orchestration with Docker swarm mode

---

## Workshop conventions

Exercises:

.exercise[
- This is something you do yourself...

```
copy and paste this code
```
]

Optional steps:

.extra-details[
Are shown like this. We may skip over them, but you can follow through if you're running ahead.
]

---

# Part 1 - Docker on Windows

We'll start with the basics and get a feel for running Docker on Windows.

## Goals

* Learn how to run interactive, background and task containers
* Learn how to connect to containers from your Docker host
* Learn how applications run inside Windows containers
* Learn how to share your images by pushing them to Docker Hub

---

## Run a task in a Nano Server container

.exercise[
- This is the simplest kind of container to start with. In PowerShell run:

```
docker container run microsoft/nanoserver hostname
```
]

You'll see the output written from the `hostname` command.

---

## Check for running containers

.exercise[
- List all containers and you'll see your Nano Server container in the `Exited` state:

```
docker container ls --all
```
]

> Note that the container ID *is* the hostname that the container displayed.

???

Docker keeps a container running as long as the process it started inside the container is still running. In this case the `hostname` process completes when the output is written, so the container stops. The Docker platform doesn't delete resources by default, so the container still exists.

---

## About task containers

.extra-details[
Containers which do one task and then exit can be very useful. You could build a Docker image which installs the Azure PowerShell module and bundles a set of scripts to create a cloud deployment.
]

.extra-details[
Anyone can execute that task just by running the container - they don't need the scripts or the right version of the Azure module, they just need to pull the Docker image.
]

???

I use a task container to create all the VMs for this workshop, using my [azure-vm-provisioner](https://github.com/sixeyed/dockerfiles-windows/tree/master/azure-vm-provisioner) image. That image packages up a set of Terraform scripts in an image with Terraform installed, so anyone can use it to provision VMs, using their own Azure subscription details.
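
The pattern is easy to copy. Here's a minimal sketch of what a task-container Dockerfile could look like - the module name, script folder and entrypoint are all illustrative, not the actual provisioner image:

```
# sketch of a task-container image - the tooling and script names are illustrative
FROM microsoft/windowsservercore
SHELL ["powershell", "-Command"]

# install the deployment tooling into the image
RUN Install-PackageProvider NuGet -Force; Install-Module AzureRM -Force

# bundle the deployment scripts
COPY scripts/ C:/scripts/

# running the container runs the deployment
ENTRYPOINT ["powershell", "C:/scripts/provision.ps1"]
```

Anyone who runs a container from an image like this executes the whole deployment, without installing any of the tooling themselves.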
---

## Run an interactive Windows Server Core container

.exercise[
- Run this to start a Windows Server Core container and connect to it:

```
docker container run --interactive --tty --rm microsoft/windowsservercore powershell
```
]

When the container starts you'll drop into a PowerShell session with the default prompt `PS C:\>`.

???

Docker has attached to the console in the container, relaying input and output between your PowerShell window and the PowerShell session in the container. The [microsoft/windowsservercore](https://hub.docker.com/r/microsoft/windowsservercore) image is effectively a full Windows Server 2016 OS, without the UI.

---

## Explore Windows Server Core

.exercise[
- Run some commands to see how the Windows Server Core image is built:

```
ls C:\
Get-Process
Get-WindowsFeature
```
]

Now run `exit` to leave the PowerShell session, which stops the container process.

???

Using the `--rm` flag means Docker now removes that container (if you run `docker container ls --all` again, you won't see the Windows Server Core container).

---

## About interactive containers

.extra-details[
Interactive containers are useful when you are putting together your own image. You can run a container and verify all the steps you need to deploy your app, and capture them in a Dockerfile.
]

.extra-details[
You *can* [commit](https://docs.docker.com/engine/reference/commandline/commit/) a container to make an image from it - but you should avoid that wherever possible. It's much better to use a repeatable [Dockerfile](https://docs.docker.com/engine/reference/builder/) to build your image. You'll see that shortly.
]

---

## Run a background SQL Server container

Background containers are how you'll run most applications.

.exercise[
- Run SQL Server in the background as a detached container:

```
docker container run --detach --name sql `
 --env ACCEPT_EULA=Y `
 --env sa_password=DockerCon!!! `
 microsoft/mssql-server-windows-express:2016-sp1
```
]

> The workshop VM pre-loads a set of Docker images. If you don't have a local copy of an image, Docker will pull it when you first run a container.

???

This example uses another image from Microsoft - [microsoft/mssql-server-windows-express](https://hub.docker.com/r/microsoft/mssql-server-windows-express/) which builds on top of the Windows Server Core image and comes with SQL Server Express installed.

---

## Exploring SQL Server

As long as the SQL Server process keeps running, Docker keeps the container running in the background.

.exercise[
- Check what's happening by viewing the logs from the container, and seeing the process list:

```
docker container logs sql
docker container top sql
```
]

---

## Connecting to SQL Server

.extra-details[
The SQL Server instance is isolated in the container, because no ports have been made available to the host. Traffic can't get into Docker containers from the host, unless ports are explicitly published.
]

.extra-details[
You can't connect an external client - like SQL Server Management Studio - to this container (we'll see how to do that later on). Other containers in the same Docker network can access the SQL Server container, and you can run commands inside the container through Docker.
]

---

## Running SQL commands inside the container

.exercise[
- Check what the time is inside the database container:

```
docker container exec sql `
 powershell "Invoke-SqlCmd -Query 'SELECT GETDATE()' -Database Master"
```
]

> You can execute SQL statements using PowerShell cmdlets which have been packaged inside the container image.
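
.extra-details[
Any T-SQL statement works the same way. As another (purely illustrative) example, this returns the SQL Server version through the same `exec` pattern:
]

```
docker container exec sql `
 powershell "Invoke-SqlCmd -Query 'SELECT @@VERSION' -Database Master"
```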
---

## Connect to a background container

The SQL Server container is still running in the background.

.exercise[
- Connect an interactive PowerShell session to the container by running `exec`:

```
docker container exec --interactive --tty sql powershell
```
]

---

## Explore the filesystem and users in Windows containers

.exercise[
- Look at the `Program Files` directory, and then drill down into the SQL Server default file locations:

```
ls 'C:\Program Files'
cd 'C:\Program Files\Microsoft SQL Server'
ls .\MSSQL13.SQLEXPRESS\MSSQL\data
```
]

---

## Data stored in containers

.extra-details[
The `.mdf` and `.ldf` files are stored inside the container. You can run SQL statements to store data, but when you remove the container, the data is lost.
]

.extra-details[
For stateful services like databases, you'll want to run them with the data physically stored outside of the container, so you can replace the container but retain the data. I'll cover that later in the workshop.
]

---

## Processes in the SQL Server container

.exercise[
- Check the processes running in the container:

```
Get-Process
```
]

> One is `sqlservr`, which is the database engine. There are also two `powershell` processes - one is the container startup process and the other is this PowerShell session.

---

## Windows users in the SQL Server container

.exercise[
- Compare the user accounts for the SQL Server and PowerShell processes:

```
Get-Process -Name sqlservr -IncludeUserName
Get-Process -Name powershell -IncludeUserName
```
]

> The normal Windows user groups are there in the container, along with a special account for container processes.

---

## Accounts and user groups in Windows containers

.extra-details[
The SQL Server process runs under the normal `NT AUTHORITY\SYSTEM` account. All the default user groups and accounts are present in the Windows Server Core Docker image, with all the usual access permissions.
]

.extra-details[
The PowerShell processes are running as `User Manager\ContainerAdministrator`. That's the default account for processes running in Windows Docker containers, and it has admin privileges.
]

---

## Check processes on the Windows host

On Windows Server 2016, those processes are actually running in isolated environments on the host.

.exercise[
- Open **another PowerShell terminal** and list all the PowerShell processes running on the server:

```
Get-Process -Name powershell -IncludeUserName
```
]

> You'll see the PowerShell sessions from the container processes - with the same process IDs but with a blank username. The container user doesn't map to any user on the host.

???

There are two important takeaways from this:

- Windows Server container processes run natively on the host, which is why they are so efficient
- container processes run as an unknown user on the host, so a rogue container process wouldn't be able to access host files or other processes.

---

## Disconnect from the container

.exercise[
- Close the second PowerShell window, and exit the interactive Docker session in the first PowerShell window:

```
exit
```
]

The container is still running.

.exercise[
- Now clean up, by removing all containers:

```
docker container rm --force $(docker container ls --quiet --all)
```
]

---

## Package and run a custom app using Docker

Next you'll learn how to package your own Windows apps as Docker images, using a [Dockerfile](https://docs.docker.com/engine/reference/builder/).

The Dockerfile syntax is straightforward. In this task you'll walk through two Dockerfiles which package websites to run in Windows Docker containers. The first example is very simple, and the second is more involved. By the end of this task you'll have a good understanding of the main Dockerfile instructions.

---

## ASP.NET apps in Docker

Have a look at the [Dockerfile for this app](part-1/hostname-app/Dockerfile), which builds a simple ASP.NET website that displays the host name of the server. There are only two instructions:

- [FROM](https://docs.docker.com/engine/reference/builder/#from) specifies the image to use as the starting point for this image. `microsoft/aspnet` is an image owned by Microsoft, that comes with IIS and ASP.NET installed on top of Windows Server Core
- [COPY](https://docs.docker.com/engine/reference/builder/#copy) copies a file from the host into the image, at a known location. The Dockerfile copies a simple `.aspx` file into the content directory for the default IIS website.
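
Put together, that's the entire Dockerfile - in sketch form it's just this (the actual file name and paths are in [part-1/hostname-app/Dockerfile](part-1/hostname-app/Dockerfile)):

```
# sketch of the two-instruction Dockerfile - the file name is illustrative
FROM microsoft/aspnet
COPY default.aspx C:/inetpub/wwwroot
```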
---

## Build a simple website image

.exercise[
- Run `docker image build` to execute the steps in the Dockerfile and package the app:

```
cd "$env:workshop\part-1\hostname-app"
docker image build --tag "$env:dockerId/hostname-app" .
```
]

> The output shows Docker executing each instruction in the Dockerfile, and tagging the final image with your Docker ID.

---

## Run the new app

.exercise[
- Run your website in a detached container, just like you did with SQL Server, but this time publishing the HTTP port so traffic can be passed from the host into the container:

```
docker container run --detach --publish 80:80 `
 --name app "$env:dockerId/hostname-app"
```
]

> Any external traffic coming into the server on port 80 will now be directed into the container.

---

## Browse to the app

When you're connected to the host, to browse the website you can use the local (virtual) IP address of the container.

.exercise[
- Get the container IP address:

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app
```

- Open the browser at the container's IP address and see the ASP.NET site:

```
firefox "http://$ip"
```
]

???

You need to use the container IP address locally because Windows doesn't have a full loopback networking stack. You can read more about that [on my blog](https://blog.sixeyed.com/published-ports-on-windows-containers-dont-do-loopback/).

---

## Run multiple instances of the website in containers

Let's see how lightweight the containerized application is.

.exercise[
- Run a PowerShell loop which starts five containers from the same website image:

```
for ($i=0; $i -lt 5; $i++) {
  & docker container run --detach --publish-all --name "app-$i" "$Env:dockerId/hostname-app"
}
```
]

> The `--publish-all` flag publishes all the ports from the container to random ports on the host.

???

Only one process can listen on a port, and we're already using host port 80 for the previous container. Using random host ports means we can start multiple containers, each listening on its own port 80 inside the container.

---

## Check all the containers

.exercise[
- List running containers:

```
docker container ls
```
]

.exercise[
- Now this loop will fetch the IP address of each container and browse to it:

```
for ($i=0; $i -lt 5; $i++) {
  $ip = & docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' "app-$i"
  firefox "http://$ip"
}
```
]

> When the browsers have finished loading, you'll see that each site displays a different hostname, which is the container ID Docker generates.
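
.extra-details[
If you want to know which random host port Docker assigned to a container, `docker container port` shows the mapping - remember though, you can't browse to the published port from the host itself:
]

```
docker container port app-0 80
```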
---

## Check how much compute resource containers use

.extra-details[
On the host you now have six `w3wp` processes running, which are the IIS worker processes for each container. You can see the memory and CPU usage with `Get-Process`:
]

```
Get-Process -Name w3wp | select Id, Name, WorkingSet, Cpu
```

.extra-details[
On my Azure VM, the worker processes average around 50MB of RAM and 5 seconds of CPU time.
]

---

## Some issues to fix...

This is a simple ASP.NET website running in Docker, with just two lines in a Dockerfile. But there are two issues we need to fix:

- It took a few seconds for the site to load on first use
- We're not getting any IIS logs from the container

???

The cold-start issue is because the IIS service doesn't start a worker process until the first HTTP request comes in. The first website user takes the hit of starting the worker process.

---

## Logs inside containers

.extra-details[
Run `docker container logs app-0` and you'll see there are no logs from the container.
]

.extra-details[
IIS stores request logs in the container filesystem, but Docker is only listening for logs on the standard output from the startup program.
]

.extra-details[
There's no automatic relay from the log files to the console output, so there are no HTTP access log entries in the containers.
]

---

## Build and run a more complex website image

The next [Dockerfile](part-1/tweet-app/Dockerfile) is a better representation of a real-world script. These are the main features:

- it is based [FROM](https://docs.docker.com/engine/reference/builder/#from) `microsoft/iis:windowsservercore`, a clean Windows Server 2016 image with IIS already installed
- it uses the [SHELL](https://docs.docker.com/engine/reference/builder/#shell) instruction to switch to PowerShell when building the Dockerfile, so the commands to run are all in PowerShell
- it configures IIS to write all log output to a single file, using the `Set-WebConfigurationProperty` cmdlet
- it copies the [start.ps1](part-1/tweet-app/start.ps1) startup script and [index.html](part-1/tweet-app/index.html) files from the host into the image
- it specifies `start.ps1` as the [ENTRYPOINT](https://docs.docker.com/engine/reference/builder/#entrypoint) to run when containers start. The script starts the IIS Windows Service and relays the log file entries to the console
- it adds a [HEALTHCHECK](https://docs.docker.com/engine/reference/builder/#healthcheck) which makes an HTTP GET request to the site and returns whether it got a 200 response code
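
Sketched out, the shape of the Dockerfile is below. Treat this as an outline only - the IIS configuration arguments and the healthcheck logic are simplified here; the real instructions are in [part-1/tweet-app/Dockerfile](part-1/tweet-app/Dockerfile):

```
# escape=`
# outline of the tweet-app Dockerfile - details simplified
FROM microsoft/iis:windowsservercore
SHELL ["powershell", "-Command"]

# configure IIS to write all site log entries to a single file
RUN Set-WebConfigurationProperty -pspath 'MACHINE/WEBROOT/APPHOST' `
      -filter 'system.applicationHost/log' -name 'centralLogFileMode' -value 'CentralW3C'

COPY start.ps1 C:/
COPY index.html C:/inetpub/wwwroot/

# start.ps1 starts the IIS Windows Service, then relays the log file to the console
ENTRYPOINT ["powershell", "C:/start.ps1"]

# Docker marks the container unhealthy if the site stops returning HTTP 200
HEALTHCHECK CMD try { `
      $response = Invoke-WebRequest http://localhost -UseBasicParsing; `
      if ($response.StatusCode -eq 200) { exit 0 } else { exit 1 } `
    } catch { exit 1 }
```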
---

## Build the Tweet app

.exercise[
- Build an image from the Dockerfile in the `tweet-app` directory:

```
cd "$env:workshop\part-1\tweet-app"
docker image build --tag "$env:dockerId/tweet-app" .
```
]

> You'll see output on the screen as Docker runs each instruction in the Dockerfile. Once it's built you'll see a `Successfully built...` message.

???

If you repeat the `docker image build` command again, it will complete in seconds. That's because Docker caches the [image layers](https://docs.docker.com/engine/userguide/storagedriver/imagesandcontainers/) and only runs instructions if the Dockerfile has changed since the cached version.

---

## Browse to the new app

.exercise[
- When the build completes, run the new app in the same way as the last app:

```
docker container run --detach --publish 8080:80 `
 --name tweet-app "$env:dockerId/tweet-app"
```

- Find the container IP address and browse to it:

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' tweet-app
firefox "http://$ip"
```
]

> Feel free to hit the Tweet button, sign in and share your workshop progress :)

---

## List your images

.exercise[
- List the images and filter on your Docker ID - you'll see the images you've built today, with the newest at the top:

```
docker image ls -f reference="$env:dockerId/*"
```
]

> Those images are only stored in your Azure VM, and that VM will be deleted after the workshop. Next we'll push the images to a public repository, so you can run them from any Windows machine with Docker.

???

My Docker ID is `sixeyed`, so my output looks like this:

```
REPOSITORY                       TAG      IMAGE ID       CREATED              SIZE
sixeyed/tweet-app                latest   64fcfbceea4b   About a minute ago   10.8GB
sixeyed/hostname-app             latest   bf41287f7762   35 minutes ago       13.5GB
sixeyed/docker-workshop-verify   latest   dcf4c3874c4e   41 minutes ago       10.4GB
```

---

## Storing images in Docker registries

.extra-details[
Distribution is built into the Docker platform. You can build images locally and push them to a public or private [registry](https://docs.docker.com/registry/), making them available to other users.
]

.extra-details[
Anyone with access can pull that image and run a container from it. The behavior of the app in the container will be the same for everyone, because the image contains the fully-configured app - the only requirements to run it are Windows and Docker.
]

---

## Push images to Docker Hub

[Docker Hub](https://hub.docker.com) is the public registry for Docker images. You've already logged in using `docker login`, so now upload your images to the Hub:

.exercise[

```
docker image push $env:dockerId/hostname-app
docker image push $env:dockerId/tweet-app
```
]

> You'll see the upload progress for each layer in the Docker image.

???

The `hostname-app` image uploads quickly, as it only adds one small layer on top of Microsoft's ASP.NET image. The `tweet-app` image takes longer to push - there are more layers, and the configured IIS layer runs to 40MB.

---

## How big are the Docker images?

.extra-details[
The logical size of those images is over 10GB each, but the bulk of that is in the Windows Server Core base image.
]

.extra-details[
Those layers are already stored in Docker Hub, so they don't get uploaded - only the new parts of the image get pushed. And Docker shares layers between images, so every image that builds on Windows Server Core will share the cached layers for that image.
]

.extra-details[
You can browse to [Docker Hub](https://hub.docker.com), log in with your Docker ID and see your newly-pushed Docker images. These are public repositories, so anyone can pull the image - you don't even need a Docker ID to pull public images.
]
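
.extra-details[
You can see those layers for yourself with `docker image history`, which lists every layer in an image with its size - the entries at the bottom are the shared Windows Server Core layers:
]

```
docker image history "$env:dockerId/tweet-app"
```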
---

## Next Up

That's it for Part 1. Next in [Part 2](part-2.md) we'll get stuck into modernizing an old ASP.NET app, by bringing it to a modern application platform.

.exercise[
- Before we move on, let's clear up all the running containers:

```
docker container rm --force $(docker container ls --quiet --all)
```
]

---

# Part 2 - Modernizing .NET apps - the platform

In this section we have an existing app, already packaged as an MSI. We'll Dockerize a few versions of the app using different approaches, seeing how to do service updates and the benefits of Dockerfiles over MSIs.

## Goals

* Learn how to package .NET apps to run in Docker
* Learn how to use Docker Compose to manage distributed applications
* Learn how Docker uses healthchecks to test your application's status
* Learn how multi-stage Dockerfiles make your application portable

---

## Package an ASP.NET MSI as a Docker image

Version 1.0 of our demo app is ready to go - check out the [Dockerfile](part-2/web-1.0/Dockerfile).

.exercise[
- Build the image with a version number in the tag:

```
cd "$env:workshop\part-2\web-1.0"
docker image build --tag $env:dockerId/signup-web:1.0 .
```
]

???

It's easy to package an MSI into a Docker image - use `COPY` to copy the MSI into the image, and `RUN` to install the application using `msiexec`, which is already bundled in the Windows base image.
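
In sketch form, the whole Dockerfile is along these lines - the MSI file name and install switches here are illustrative; the real file is [part-2/web-1.0/Dockerfile](part-2/web-1.0/Dockerfile):

```
# sketch of an MSI-install Dockerfile - file name and switches are illustrative
FROM microsoft/aspnet
COPY SignUp.Web.Setup.msi /
RUN msiexec /i C:\SignUp.Web.Setup.msi /qn
```

The `/qn` switch runs the install unattended, which is essential - there's no UI in the container for an interactive installer.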
---

## Start the app with Docker Compose

The [version 1.0 compose file](app/docker-compose-1.0.yml) specifies the database and application containers.

.exercise[
- Start all the containers in detached mode:

```
cd "$env:workshop\app"
docker-compose -f docker-compose-1.0.yml up -d
```
]

> Compose starts the SQL Server container and then the web app container.

???

The app uses SQL Server in a Windows container, but rather than start individual containers, you'll use [Docker Compose](https://docs.docker.com/compose/) to organize all the parts of the solution. The `depends_on` entries in the compose file are used to work out the startup order.

---

## Open the app

The app is running now, and is connected to SQL Server.

.exercise[
- Get the web app's IP address and browse to it:

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip/SignUp"
```
]

> You may see an error from the ASP.NET site saying it can't access SQL Server. This is a timing issue with version 1.0 of the app - the web container may have launched before the SQL Server container is ready to open a connection. Refresh the page and it will load correctly. **We'll fix this in the next part of the workshop.**

???

The container name `app_signup-web_1` is always the same - Docker Compose computes it from the name of the service in the compose file, and the name of the folder where the file lives.

---

## Save some data

.exercise[
- Click on the 'Sign up' button and register your details

- Check the data has been saved in the SQL container:

```
docker container exec app_signup-db_1 powershell `
 "Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
```
]

> Version 1.0 has a pretty basic UI. Next you'll upgrade to a new app release.

---

## Update the ASP.NET site with a new image version

For the new app version there's a new MSI. The [Dockerfile](part-2/web-1.1/Dockerfile) is exactly the same as v1.0, just using a different MSI. This scenario is where you have a new application release, but you want to keep the same underlying Windows version.

.exercise[
- Build the new app version:

```
cd "$env:workshop\part-2\web-1.1"
docker image build --tag $env:dockerId/signup-web:1.1 .
```
]

---

## Upgrade the app with Docker Compose

[The version 1.1 compose file](app/docker-compose-1.1.yml) uses the same database definition, but updates the web app image to version 1.1.

.exercise[
- Upgrade the app - compose will replace the web container:

```
cd "$env:workshop\app"
docker-compose -f docker-compose-1.1.yml up -d
```
]

> You'll see in the output that compose compares the current state of the application resources against the desired state in the YAML file. Here the SQL Server container definition hasn't changed, so only the web application container is replaced.

???

You'll see this output from compose:

```
app_signup-db_1 is up-to-date
Recreating app_signup-web_1 ...
Recreating app_signup-web_1 ... done
```

---

## Browse to the new version

The new version of the app has the same **name** as the previous one, but it's a new container with a new IP address.

.exercise[
- Get the new container's IP address and browse to it:

```
$ip = docker inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip/SignUp"
```
]

---

## Check the data in SQL Server

.exercise[
- Sign up with another set of details

- Repeat the SQL query:

```
docker container exec app_signup-db_1 powershell `
 "Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
```
]

> You'll see that the new data is there, along with the original data.

---

## Why MSIs are bad

.extra-details[
The app is looking better, but the Dockerfile isn't very useful. It should describe everything that's needed to package the app, but most of the work is done in the MSI.
]

.extra-details[
That's opaque - it could be doing anything. To find out what actually happens in the MSI you need to trawl through this huge [WiX script](signup/src/SignUp.Web.Setup/Product.wxs).
]

.extra-details[
Next you'll see how to package the same content in a different way, and upgrade the app container to a new version of Windows at the same time.
]

---

## Use Docker to build the source and package without an MSI

The [v1.2 Dockerfile](part-2/web-1.2/Dockerfile) has two stages. Stage 1 uses a generic MSBuild image to compile and publish the web project. Stage 2 packages the published output into an application image.

.exercise[
- Build the image from the root directory, specifying the path to the Dockerfile:

```
cd "$env:workshop"
docker image build --tag $env:dockerId/signup-web:1.2 --file part-2\web-1.2\Dockerfile .
```
]

> This image gets built from the base directory, so the full `src` folder is used in the build and the source code is available to the build stage.

???

The Dockerfile approach removes the need for an MSI, or any build pre-requisites. Anyone with Docker can build and run the application - you don't need Visual Studio, MSBuild or even .NET installed on your machine. The toolchain to compile the app is built into the MSBuild image.

It's also clear from the Dockerfile exactly how the app is built and installed, and this new version has a `HEALTHCHECK`, which is good practice for production workloads.

This build will take a few minutes - it uses NuGet to restore all the packages the app uses, and then compiles and publishes the web app with MSBuild. You'll see all the output in the PowerShell window.
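
The two-stage pattern looks roughly like this - a sketch only, with illustrative image names, project paths and MSBuild arguments (the real file is [part-2/web-1.2/Dockerfile](part-2/web-1.2/Dockerfile)):

```
# sketch of the multi-stage pattern - names and paths are illustrative

# stage 1: restore packages, then compile and publish the web project
FROM microsoft/dotnet-framework-build AS builder
WORKDIR C:/src
COPY src .
RUN nuget restore SignUp.sln
RUN msbuild SignUp.Web\SignUp.Web.csproj /p:DeployOnBuild=true /p:OutDir=C:\out

# stage 2: package the published output on top of the runtime image
FROM microsoft/aspnet
COPY --from=builder C:/out/_PublishedWebsites/SignUp.Web C:/inetpub/wwwroot
```

Only the final stage ends up in the application image - the SDK tooling and the source code stay behind in the `builder` stage.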
---

## Upgrade the app to v1.2

When the build is done, you can upgrade the running application using [version 1.2](app/docker-compose-1.2.yml) of the compose file.

.exercise[
- Upgrade the web app deployment:

```
cd "$env:workshop\app"
docker-compose -f docker-compose-1.2.yml up -d
```
]

> Now the whole solution is portable, and the deployment process is much cleaner.

Anyone can build and run the app from source - all you need on your laptop or CI server is Docker. You don't need Visual Studio, MSBuild or even .NET installed.

---

## Browse to the new app

This version also upgrades Windows with the latest hotfixes and security patches, just by bumping the Windows version in the `FROM` image.

Browse to the app, and you'll see the UI is the same:

.exercise[

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip"
```
]

The UX is the same too, so when you sign up you'll see a new set of details in the SQL container.

???

Repeat the same `exec` command on the SQL container to see all the data you've saved:

```
docker container exec app_signup-db_1 powershell `
 "Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"
```

---

## Next Up

That's it for Part 2. In [Part 3](part-3.md) we'll modernize the app architecture, making use of the Docker platform to break features out of the monolith, and run them in lightweight containers.

---

# Part 3 - Modernizing .NET apps - the architecture

In this section we'll take the ASP.NET app and modernize it using Docker. We'll take a feature-driven approach to breaking up the monolithic application, splitting the functionality across multiple containers and using Docker for the plumbing.

## Steps

* [1. Fix database bottleneck by making the save asynchronous](#1)
* [2. Add self-service analytics with Elasticsearch and Kibana](#2)
* [3. Replace the home page with a refreshed design](#3)

---

## Fix a bottleneck by making the save asynchronous

The current code makes a synchronous call to SQL Server to insert a row when a user signs up. That's a bottleneck which will stop the app performing well if there's a peak in traffic.

We'll fix that by using a message queue instead - running in a Docker container. When you sign up, the web app will publish an event message on the queue, which a message handler picks up and actions. The message handler is a .NET console app running in a container.

---

## First you need to change some code

Open `$env:workshop\signup\src\SignUp.Web\SignUp.aspx.cs` in VS Code (or Notepad or whatever editor you like). Comment out the `SaveProspect` call at line 74, and uncomment the `PublishProspectSignedUpEvent` call at line 78. The section should look like this:

.exercise[

```
/* synchronous */
// SaveProspect(prospect);

/* asynchronous */
PublishProspectSignedUpEvent(prospect);
```
]

> That replaces the synchronous SQL insert with message publishing.

---

## Check out the SQL Server message handler

.extra-details[
You can see the code for the message handler which subscribes to the message in [Program.cs](signup/src/SignUp.MessageHandlers.SaveProspect/Program.cs) - it uses the exact same `SaveProspect` code lifted from the web app.
]

.extra-details[
The message handler will be packaged into a new image with this [Dockerfile](part-3/save-handler/Dockerfile).
]

---

## Build the new images

You need to build a new version of the web image, and a new message handler image.

.exercise[

```
cd "$env:workshop"
docker image build --tag $env:dockerId/signup-web:1.3 -f part-3\web-1.3\Dockerfile .
docker image build --tag $env:dockerId/signup-save-handler -f part-3\save-handler\Dockerfile .
```
]

---

## Now upgrade the application

The [v1.3 Docker Compose file](app/docker-compose-1.3.yml) replaces the web app container and creates new containers for the message queue and the handler.
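
Here's roughly what the new service definitions look like - a sketch with illustrative image tags and settings, not the actual compose file:

```
# sketch of the new services in the v1.3 compose file - tags and names are illustrative
services:
  message-queue:
    image: nats:nanoserver

  signup-save-handler:
    image: ${dockerId}/signup-save-handler
    depends_on:
      - message-queue
      - signup-db
```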
.exercise[
- Upgrade the app with Docker Compose:

```
cd "$env:workshop\app"
docker-compose -f .\docker-compose-1.3.yml up -d
```
]

> We're using the [official image](https://hub.docker.com/_/nats/) for the [NATS](https://nats.io/) message queue, which is a high-performance in-memory queue.

???

The output from compose shows new containers started for the message queue and the message handler; the app container gets updated and the database container stays the same:

```
app_signup-db_1 is up-to-date
Creating app_message-queue_1 ...
Creating app_message-queue_1 ... done
Recreating app_signup-web_1 ...
Creating app_signup-save-handler_1 ...
Recreating app_signup-web_1 ... done
Creating app_signup-save-handler_1 ... done
```

---

## Browse to the new application container

You'll see the UI and UX haven't changed:

.exercise[

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip"
```
]

When you save your details this time, they still get saved in SQL Server, but the processing is done by the message handler.

---

## Check the logs of the message handler

.exercise[

```
docker container logs app_signup-save-handler_1
```
]

> Now when there are spikes in traffic, the message queue will smooth them out. The web app won't slow down waiting for SQL Server, and SQL Server doesn't need to scale up to deal with load. We can scale up by running more web containers during peak time, and process the input by running more handler containers off-peak.

---

## Add self-service analytics

The app performs better now, but all the data is stored in SQL Server, which isn't very friendly for business users to get reports. Next we'll add self-service analytics, using more enterprise-grade open-source software from Docker Hub.

We'll be running [Elasticsearch](https://www.elastic.co/products/elasticsearch) for storage and [Kibana](https://www.elastic.co/products/kibana) to provide an accessible front-end.

To populate Elasticsearch with data when a user signs up, we just need to add another message handler, which will listen to the same messages published by the web app.

???

The code for that is in another [Program.cs](signup/src/SignUp.MessageHandlers.IndexProspect/Program.cs).

---

## Build the analytics message handler

The message handler uses a similar [Dockerfile](part-3/index-handler/Dockerfile).

.exercise[

```
cd $env:workshop
docker image build --tag $env:dockerId/signup-index-handler --file part-3\index-handler\Dockerfile .
```
]

---

## Upgrade the app to v1.4

In the [v1.4 Docker Compose file](app/docker-compose-1.4.yml), none of the existing containers get replaced - their configuration hasn't changed. Only the new containers get created:

.exercise[

```
cd "$env:workshop\app"
docker-compose -f .\docker-compose-1.4.yml up -d
```
]

---

## Refresh your browser

Go back to the sign-up page in your browser. **It's the same IP address**, because the app container hasn't been replaced here.

Add another user and you'll see the data still gets added to SQL Server, but now both message handlers have log entries showing they handled the event message.
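
.extra-details[
Earlier we said you can scale the handlers independently of the web app. Compose can do that with the `--scale` option - a sketch, assuming your Compose version supports `up --scale` (the service name comes from the compose file):
]

```
docker-compose -f .\docker-compose-1.4.yml up -d --scale signup-save-handler=3
```

.extra-details[
That would run three save-handler containers, all sharing the work from the same message queue.
]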
---

## Check the new data is stored

Check the data in SQL Server, and the logs from both message handlers:

.exercise[

```
docker container exec app_signup-db_1 powershell `
 "Invoke-SqlCmd -Query 'SELECT * FROM Prospects' -Database SignUp"

docker container logs app_signup-save-handler_1

docker container logs app_signup-index-handler_1
```
]

> You can add a few more users with different roles and countries, if you want to see a nice spread of data in Kibana.

---

## Explore the data in Kibana

Kibana is also a web app running in a container.

.exercise[
- Get the Kibana container's IP address and browse to port 5601:

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_kibana_1
firefox "http://$($ip):5601"
```
]

> The Elasticsearch index is called `prospects`, and you can navigate around the data from Kibana.

???

Kibana has a great feature set and it's easy for power users to pick up. They can do their own analytics or build dashboards for other users - no more IT requests to get reports out from SQL Server!

---

## Replace the app homepage

The last update we'll do is to replace the design of the landing page, rendering it from a dedicated container. That allows for rapid iteration from the design team - the homepage can be replaced without regression testing the whole of the app.

---

## Build the homepage image

The homepage component is just a static HTML site running on IIS in Nano Server. It's built with this [Dockerfile](part-3/homepage/Dockerfile).

.exercise[

```
cd "$env:workshop\part-3\homepage"
docker image build --tag $env:dockerId/signup-homepage .
```
]

---

## Upgrade to version 1.5

In the [v1.5 Docker Compose file](app/docker-compose-1.5.yml) there's a new environment variable for the web application. That's used as a feature switch - the app already has the code to fetch homepage content from a separate component, if this variable is set.
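
In the compose file that looks something like this - a sketch; `HOMEPAGE_URL` is a hypothetical name, and the real variable is in [the v1.5 file](app/docker-compose-1.5.yml):

```
# sketch of the feature switch - HOMEPAGE_URL is a hypothetical variable name
services:
  signup-web:
    environment:
      - HOMEPAGE_URL=http://signup-homepage
```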
.exercise[

```
cd "$env:workshop\app"
docker-compose -f .\docker-compose-1.5.yml up -d
```
]

> Because the web app configuration has changed, there will be a new web container.

---

## Check out the awesome new design

.exercise[

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip"
```
]

> You can still click through to the original sign-up page, and the details are saved in SQL Server and Elasticsearch.

---

## Next Up

That's it for Part 3. In [Part 4](part-4.md) we'll get ready for production by adding monitoring to the solution (and we'll go back to the previous homepage).

---

# Part 4 - Preparing for Production with Instrumentation

The app is ready to be promoted to production now, but we'll have problems when we run at scale. For production load you may run dozens of web containers and message handler containers, and currently the only instrumentation we have is text-based log entries.

In Docker all containers look the same, whether they're running ASP.NET WebForms apps in Windows or .NET Core console apps in Linux - and you can expose metrics from containers to give you a single dashboard for the performance of all your containers.

---

## Prometheus and Grafana

In this section we'll add metrics to the solution using [Prometheus](http://prometheus.io) - a popular open-source monitoring server - and [Grafana](https://grafana.com) - a dashboard that plugs into Prometheus. We'll run those new components in Docker Windows containers too.

## Steps

* [1. Expose custom metrics from the message handlers](#1)
* [2. Expose IIS metrics from the web application](#2)
* [3. Run the solution with Prometheus and Grafana](#3)
* [4. Import the dashboard for the solution](#4)

---

## Expose custom metrics from the message handlers

You can add instrumentation to your apps in two ways. The first is to record custom metrics in your code, which gives you clear insight into the specific events that interest you.

The message handlers already have code to record metrics when they handle messages. In this step we'll expose those metrics on an HTTP endpoint, so Prometheus can scrape them.

---

## Export metrics from the message handlers

You'll need to change the `Program.cs` files to uncomment the lines which start the metrics server.

.exercise[
- Open `.\signup\src\SignUp.MessageHandlers.IndexProspect\Program.cs`. Uncomment lines **23-25**.

- Open `.\signup\src\SignUp.MessageHandlers.SaveProspect\Program.cs`. Uncomment lines **25-27**.
]

In both cases the `Main` method should now start like this:

```
var server = new MetricServer(50505, new IOnDemandCollector[] { new DotNetStatsCollector() });
server.Start();
Console.WriteLine($"Metrics server listening on port 50505");
```

---

## Build the v2 message handlers

The Dockerfiles for the handlers haven't changed, so you can rebuild them with a version 2 tag.

.exercise[

```
cd $env:workshop
docker image build --tag $env:dockerId/signup-index-handler:2 -f part-3\index-handler\Dockerfile .
docker image build --tag $env:dockerId/signup-save-handler:2 -f part-3\save-handler\Dockerfile .
```
]

> When the v2 handlers run, they will have a Prometheus-compatible endpoint listening on port `50505`, which provides key .NET metrics as well as custom app metrics.

---

## Expose IIS metrics from the web application

The other way to add metrics to your app is to export Windows Performance Counters from the container. This gives you core information without having to change your app code, but the metrics you get are generic.

In this step you'll expose IIS performance counters from the web app container. In the [Dockerfile](part-4/web-1.4/Dockerfile) for version 1.4 of the app, there are additional steps to package a console app alongside the web application. The console app exports the performance counter values from IIS as Prometheus-formatted metrics.

---

## Build v1.4 of the web app

The new version includes the metrics exporter:

.exercise[

```
cd $env:workshop
docker image build --tag $env:dockerId/signup-web:1.4 -f part-4\web-1.4\Dockerfile .
```
]

> When the app container runs, it will also have a Prometheus-compatible endpoint listening on port `50505`, providing performance counter metrics from the IIS Windows Service hosting the app.

---

## About Prometheus

Prometheus is a metrics server. It runs a time-series database to store instrumentation data, polls configured endpoints to collect data, and provides an API (and a simple web UI) to retrieve the raw or aggregated data.

Prometheus uses a simple configuration file, listing the endpoints it should scrape for metrics. We'll use an existing Prometheus Docker image, bundled with a custom config file for our app, in [prometheus.yml](part-4/prometheus/prometheus.yml).
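
The config is just a list of scrape targets. Here's a sketch of what it could contain, assuming the compose service names and the port `50505` endpoints above - the real file is [part-4/prometheus/prometheus.yml](part-4/prometheus/prometheus.yml):

```
# sketch of a Prometheus scrape config - job names and targets are illustrative
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'signup-web'
    static_configs:
      - targets: ['signup-web:50505']

  - job_name: 'signup-save-handler'
    static_configs:
      - targets: ['signup-save-handler:50505']

  - job_name: 'signup-index-handler'
    static_configs:
      - targets: ['signup-index-handler:50505']
```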
---

## Build the Prometheus image

.exercise[

```
cd "$env:workshop\part-4\prometheus"
docker image build --tag $env:dockerId/signup-prometheus .
```
]

---

## About Grafana

Grafana is a dashboard server. It can connect to various data sources and provide rich dashboards to show the overall health of your app.

There isn't an official Windows variant of the Grafana image, but it's easy to build your own. The [Dockerfile for Grafana](part-4/grafana/Dockerfile) is a good example of how to package third-party apps to run in containers.

---

## Build the Grafana image

.exercise[

```
cd "$env:workshop\part-4\grafana"
docker image build --tag $env:dockerId/signup-grafana .
```
]

---

## Upgrade the app

Now you can deploy the updated application. Use Docker Compose to update the containers to [version 1.6](app/docker-compose-1.6.yml) of the solution:

.exercise[

```
cd "$env:workshop\app"
docker-compose -f .\docker-compose-1.6.yml up -d
```
]

---

## Use the app to record some metrics

Browse to the new application container, and send some load - refresh the homepage a few times, and then submit a form:

.exercise[

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_signup-web_1
firefox "http://$ip"
```
]

---

## Check the data in Prometheus

The web application and the message handlers are collecting metrics now, and Prometheus is scraping them. You can see the metrics data collected in the basic Prometheus UI:

.exercise[

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_prometheus_1
firefox "http://$($ip):9090"
```
]

---

## CPU metrics in Prometheus

Try looking at the `process_cpu_seconds_total` metric in Graph view. This shows the amount of CPU used by the message handlers, which is exported from a standard .NET performance counter.

The Prometheus UI is good for sanity-checking the metrics collection. Prometheus records metrics about itself too, so you can look at the `scrape_samples_scraped` metric to see how many times Prometheus has polled the container endpoints.

But the Prometheus UI isn't full-featured enough to use as a dashboard - for that we'll set up Grafana.

---

## Browse to Grafana

First browse to the Grafana container:

.exercise[

```
$ip = docker container inspect --format '{{ .NetworkSettings.Networks.nat.IPAddress }}' app_grafana_1
firefox "http://$($ip):3000"
```
]

---

## Set up the Grafana data source

- Login with credentials `admin` / `admin`
- Select _Add data source_ and configure a new Prometheus data source as follows:
  - Name: `Sign Up`
  - Type: `Prometheus`
  - Url: `http://prometheus:9090`
  - Access: `proxy`

That sets up Grafana so it can read the metrics collected by Prometheus. You can build your own dashboard to show whatever metrics you like, but I have one prepared for the workshop which you can import.

---

## Configure the Grafana dashboard

From the main menu select _Dashboards...Import_, load the `SignUp-dashboard.json` file in `C:\scm\docker-windows-workshop\part-4\grafana` and connect it to the Prometheus data source.

You'll see an overall dashboard showing the status and performance of the web application and the message handlers.

---

## Check out the dashboard

The dashboard shows how many HTTP requests are coming into the web app, and how many events the handlers have received, processed and failed.

It also shows memory and CPU usage for the apps inside the containers, so at a glance you can see how hard your containers are working and what they're doing.

---

## Next Up

For a half-day workshop, we're done!

You've seen how to run Windows apps in Docker containers, add third-party components to your solution, break features out of monoliths, and add consistent instrumentation. You've done what you need to move your own apps to Docker in production.
Next steps:

- try one of the [Docker labs on GitHub](https://github.com/docker/labs)
- follow [@EltonStoneman](https://twitter.com/EltonStoneman) and [@stefscherer](https://twitter.com/stefscherer) on Twitter
- read [Docker on Windows](https://www.amazon.co.uk/Docker-Windows-Elton-Stoneman/dp/1785281658), the book
- watch [Modernizing .NET Apps with Docker on Pluralsight](https://pluralsight.pxf.io/c/1197078/424552/7490?u=https%3A%2F%2Fwww.pluralsight.com%2Fcourses%2Fmodernizing-dotnet-framework-apps-docker), the video course (don't have Pluralsight? Ping @EltonStoneman on Twitter to get a free trial)

For a whole-day workshop, we'll continue after lunch. In [Part 5](part-5.md) you'll learn how to add resilience and scalability to your apps with Docker Compose.