Saturday, April 23, 2016


How Linn push dotNet Dockers using Github Workflow and Travis

Summary

At Linn we've adopted the Github pull-request workflow to peer review code before it is merged into master. Every commit is built and tested on a CI server; in this example we use Travis to test a .Net web application built with NancyFx and to build a Docker container tagged with the feature branch name. I'll also cover how we generate a container tagged 'latest'. 

It is assumed the reader is familiar with Docker. Information about the Github pull-request workflow and how we self-host a .Net NancyFx application can be found in the following guides: 


Skip to the end
Assuming you’ve already installed Docker, you can run the latest version of the application by pulling the bazwilliams/serial-number-service image. 

You will need to:
  • Expose the container port 8888
  • Provide a SERIAL_NUMBER_SEED environment variable

To pull and run the image:
  • docker pull bazwilliams/serial-number-service
  • docker run --rm -e SERIAL_NUMBER_SEED=42 -p 8888:8888 -ti bazwilliams/serial-number-service

You can POST to localhost:8888/serial-numbers with Content-Type application/json with the following body:


  "productName": "Sondek LP12"
}

You will get back a serial number of 42, and each subsequent request will return a number incrementing from the serial number seed you provided in the environment variable. If you provide a productType of speakers in your JSON body, you will get back a pair of serial numbers. 
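For example, with the container running as above, a request might look like this; the response shape in the comment is illustrative rather than copied from the example project:

```sh
curl -X POST http://localhost:8888/serial-numbers \
  -H 'Content-Type: application/json' \
  -d '{ "productName": "Sondek LP12" }'
# First response contains serial number 42 (the seed); subsequent requests count upwards.
```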

The example code is available at http://github.com/bazwilliams/serialnumberexample

CI on Travis
The .travis.yml script has been kept relatively build agnostic in this case to facilitate switching CI servers easily; the .travis.yml format has become something of a standard, and there is a plugin for Jenkins should you wish to use this approach on that CI server. Travis runs a build as a series of blocks, and any script which exits with a non-zero code will fail the build. We use a Makefile to do the actual build, separating the concerns of the CI server and the build tools. Make is simple and provides excellent, straightforward dependency management. 

  • We use before_install to install the tooling required by the build environment. 
  • We use before_script to log in to Dockerhub.
  • The script block is used to compile and run tests; anything that fails in this part will mark the build as failed. We also build the Docker images within this block, tagged with the Travis build number, and push them to Dockerhub. 
  • If the script was successful, the after_success block is run, where we tag the Docker images appropriately and push them to Dockerhub. We want to tag using the following conditions:
    • If the build succeeded and the branch is master: tag latest.
    • If the build succeeded and it is for a Github pull request: tag PR_<pull request number>.
    • If the build succeeded and it is for a feature branch: tag with the feature branch name. 
    • In all cases, tag with the Travis build number (see the .travis.yml sketch after this list). 
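A minimal sketch of what such a .travis.yml might look like; the Makefile target names follow the ones described in this post, and DOCKER_USERNAME/DOCKER_PASSWORD are assumed to be encrypted Travis settings rather than anything taken from the example repository:

```yaml
# Sketch only - target names match the Makefile described below.
language: csharp
sudo: required
services:
  - docker

before_install:
  - make nuget            # install an up-to-date NuGet (see Makefile section)

before_script:
  - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"

script:
  - make compile test     # compile the solution and run the NUnit tests
  - make all-the-dockers  # build images tagged with $TRAVIS_BUILD_NUMBER
  - make docker-push      # push the build-number tag to Dockerhub

after_success:
  - make docker-tag       # apply the latest / PR_<n> / branch-name tag
  - make docker-push      # push the extra tags
```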

Makefile

The Makefile has some basic entry points, clean, compile and test, which the CI is expected to use (a condensed sketch of these targets follows this list). 
  • clean removes all build artefacts; this target just deletes everything in the bin and obj folders, but could be modified to use git clean instead. 
  • nuget installs the latest version of NuGet, as the version provided by mono is version 2 and we ran into difficulties with some packages requiring NuGet version 3. 
  • nuget-restore restores all the packages required by the solution; we don't check our packages into the repository. 
  • appconfig creates an App.config file which is required for the build (more on this later). 
  • compile depends on clean, nuget-restore and appconfig and builds the entire solution with a Release configuration; we aren't interested in debug symbols being included in our Docker image. 
  • testrunner uses NuGet to install an NUnit console runner. 
  • test uses the NUnit console runner to run our unit and integration tests. At Linn we tend to have multiple test packages, all organised under a Tests/ folder with assembly names ending in .Tests. The NUnit console runner will accept a project file to find tests, but not multiple projects, so rather than run the tests for each individual project we use find to locate all the dlls under Tests/ which end in Tests.dll and aren't in an obj folder. The NUnit console runner accepts a list of assemblies and finds the test fixtures within them. 
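A condensed sketch of those entry points; the solution, folder and package names are illustrative rather than copied from the example repository, and recipe lines must be indented with tabs:

```makefile
NUGET := .nuget/nuget.exe

# Delete bin/ and obj/ folders rather than relying on git clean
clean:
	find . -type d \( -name bin -o -name obj \) -prune -exec rm -rf {} +

# The NuGet shipped with mono is v2; fetch the latest nuget.exe instead
nuget:
	mkdir -p .nuget
	curl -sSL -o $(NUGET) https://dist.nuget.org/win-x86-commandline/latest/nuget.exe

nuget-restore: nuget
	mono $(NUGET) restore SerialNumberExample.sln

# The build needs an App.config to exist; start from the template
appconfig:
	cp Service.App/App.config.template Service.App/App.config

compile: clean nuget-restore appconfig
	xbuild /p:Configuration=Release SerialNumberExample.sln

testrunner: nuget
	mono $(NUGET) install NUnit.Console -OutputDirectory testrunner

# Hand the console runner every Tests.dll found under Tests/, skipping obj/
test: testrunner
	mono testrunner/NUnit.ConsoleRunner.*/tools/nunit3-console.exe \
		$$(find Tests -name '*Tests.dll' -not -path '*/obj/*')
```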

We have some Docker-specific targets within the Makefile: all-the-dockers, docker-push and docker-tag (sketched after this list).
  • all-the-dockers doesn't have a list of tasks, but is used as a way to group all the individual Docker tasks together; at Linn we tend to build microservices, so an individual solution may produce more than one Docker container. 
  • Each Docker image to be built should be listed as a dependency of all-the-dockers and uses docker to build an image from a specific project folder, tagged with the current Travis build number. We store the Dockerfile at the root of the project folder, not the solution root. We also depend on compile here to be sure the code we are putting in is fresh; it is plausible code may have been instrumented for code coverage, so we recompile. 
  • docker-tag invokes a Makefile macro which uses the environment variables provided by Travis to apply the right tag to the named Docker image tagged with the current Travis build number. Docker tags are not allowed to contain a # symbol, which can appear in git branch names with Github as a way to associate issues, so the Makefile creates a cleansed version of the branch name and uses that to tag the Docker image. 
  • docker-push pushes the Docker images and tags to Dockerhub. 
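A sketch of those Docker-related targets; the image name and project folder are assumptions, and where the real Makefile uses a macro, plain conditionals are used here to illustrate the same tagging decision:

```makefile
IMAGE := bazwilliams/serial-number-service

# Docker tags may not contain '#', so cleanse anything unusual from the branch name
CLEAN_BRANCH := $(shell echo "$(TRAVIS_BRANCH)" | sed 's/[^a-zA-Z0-9_.-]/_/g')

# Group every image-building target under one name
all-the-dockers: serial-number-service-docker

# One target per image; depends on compile so the code going in is fresh
serial-number-service-docker: compile
	docker build -t $(IMAGE):$(TRAVIS_BUILD_NUMBER) Service.App/

docker-tag:
ifeq ($(TRAVIS_PULL_REQUEST),false)
ifeq ($(TRAVIS_BRANCH),master)
	docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):latest
else
	docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):$(CLEAN_BRANCH)
endif
else
	docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):PR_$(TRAVIS_PULL_REQUEST)
endif

# At the time, pushing without a tag pushed every local tag of the image
docker-push:
	docker push $(IMAGE)
```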

12 Factor Config
I mentioned earlier how the Makefile creates an App.config file as part of the appconfig target. This is required because we don't check in our App.config files when we are targeting Docker; the App.config files are in the .gitignore. .Net applications use App.config to set environment-specific configuration, which may include database connection strings, message broker connection details or credentials. We want to build a Docker image and run it anywhere, with any configuration; we definitely do not want to build a Docker image per environment. 

Instead we have an App.config.template file which is our regular App.config, but with any sensitive or environment-specific details replaced by a ${} marker. In the serial number example, the starting serial number is defined by ${SERIAL_NUMBER_SEED}, and when we run the Docker image the environment variable SERIAL_NUMBER_SEED is substituted in. 
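A cut-down App.config.template along these lines; the appSettings key name is illustrative, not taken from the example project:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Cut-down illustration; key names are assumptions -->
<configuration>
  <appSettings>
    <!-- replaced by envsubst from the SERIAL_NUMBER_SEED environment variable at container start -->
    <add key="serialNumberSeed" value="${SERIAL_NUMBER_SEED}" />
  </appSettings>
</configuration>
```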


App.config will not be copied into the Docker container; only the App.config.template will be added. When the container is run, the App.config file is dynamically generated from the template and the environment passed in. 
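A minimal sketch of that runtime step; the file names follow the Dockerfile section below:

```sh
# Fill the ${...} markers in the template from the container's environment,
# writing the result where .Net expects the runtime config for App.exe to live.
envsubst < App.config.template > App.exe.config
mono App.exe
```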

Dockerfile

This assumes the assembly name is App for the Service.App project, which will produce an App.exe file. The build target folder is assumed to be ./bin/Release in this example as we're building the Release configuration. 

We copy in all the required assemblies from the build target folder. 

Since this is a Nancy application, we also copy in the files Nancy requires, which may or may not include Views and any assets required for a client application, such as Content, fonts and scripts. 

We install the package gettext-base, which provides the envsubst application. 

When the container is run, we pass App.config.template into envsubst, output the result to App.exe.config, and then run our application using mono. 
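Putting those pieces together, a Dockerfile along these lines; the mono base image tag and the copied folders are assumptions rather than the exact file from the example project:

```dockerfile
# Illustrative sketch; base image tag and copied folders are assumptions.
FROM mono:4.2

# envsubst lives in the gettext-base package
RUN apt-get update && apt-get install -y gettext-base && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Compiled assemblies plus the template - App.config itself is never copied in
COPY bin/Release/ .
COPY App.config.template .

# Nancy views and client-side assets, if the service has any
COPY Views/ Views/
COPY Content/ Content/

EXPOSE 8888

# Generate App.exe.config from the template using the container's environment,
# then start the self-hosted Nancy application under mono
CMD envsubst < App.config.template > App.exe.config && mono App.exe
```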

Putting it together in a Scenario
This is a worked example in which a new requirement arrives to support a certain type of product which requires more than one consecutive serial number. 

Assuming we already have a master branch and an image pushed to Dockerhub as a result of build 8.


Prepare changes in a local branch called multiple-serial-number-request

Commit and push to a remote branch on Github

Assuming the tests pass, there will now be a Docker image pushed to Dockerhub tagged multiple-serial-number-request and tagged 9

Create Pull Request

Once happy with the results, we create a pull request from multiple-serial-number-request to the master branch

The code can be peer reviewed at this stage

We can still push more commits; Github won't allow merging without an administrator override until Travis has passed all the tests for:
  • The latest commit of the current branch (the branch build)
  • A pre-emptive merge of our branch into master (the pull request build)

At this point we will get two tagged Docker images: multiple-serial-number-request and PR_3
  • The Docker images from previous builds are still present; they just no longer carry the multiple-serial-number-request tag. 

Merge Pull Request
Once happy with the pull request, we’ll use Github to merge the Pull Request.

Travis will then be triggered into another build, this time with the master branch.

If everything passes, the Docker image resulting from that build will be tagged latest and will include our new feature. All we need to do is deploy the latest Docker image to production. 


Potential Improvements
In theory we could deploy a build-number-tagged Docker image to a real testing environment and run acceptance tests against it. 
  • At Linn we use SpecFlow for acceptance testing and I have had some success building an environment using CloudFormation, complete with users, deployment to ECS and NUnit runners to invoke our tests. 
  • We use a dedicated ECS cluster for testing (which we've named the Kobayashi Maru).
  • If this passes then we have an extremely high confidence level in what our code will do, potentially to the point where we could use Travis to auto-deploy to a staging environment. 
  • The remaining manual step would be reconfiguring load balancers, but that could technically be done by a CloudFormation template… 

Simulating CI Mono Build on Windows



Sometimes it is useful to check changes to the Makefile and run the build using make without having to commit, push and wait for the CI server. In these cases I use a solution-level Dockerfile which prepares an environment within which I can run `make compile test` to check a build. With some extra changes, we could even build the Docker image from within this Docker...
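A minimal sketch of such a solution-level Dockerfile; the base image tag, the package list and the Dockerfile.build file name are assumptions about what the Makefile needs:

```dockerfile
# Sketch of a solution-level build environment (e.g. saved as Dockerfile.build)
FROM mono:4.2

RUN apt-get update \
    && apt-get install -y make curl \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /src
COPY . .

# Run the same targets the CI server would run
CMD ["make", "compile", "test"]
```

Built with `docker build -f Dockerfile.build -t serial-number-build .` and run with `docker run --rm serial-number-build`, this gives roughly the same mono build the CI server performs, without the commit-and-wait round trip.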
