Saturday, April 23, 2016


How Linn push dotNet Dockers using Github Workflow and Travis

Summary

At Linn we've adopted the Github pull-request workflow to peer review code before it is merged into master. Every commit is built and tested on a CI server; in this example we use Travis to test a .Net web application built with NancyFx and to build a Docker image tagged with the feature branch name. I'll also cover how we generate an image tagged 'latest'. 

It is assumed the reader is familiar with Docker. Information about the Github pull-request workflow and how we self-host a .Net NancyFx application can be found in the following guides: 


Skip to the end
Assuming you’ve already installed Docker, you can run the latest version of the application by pulling the bazwilliams/serial-number-service image. 

You will need to:
  • Expose the container port 8888
  • Provide a SERIAL_NUMBER_SEED environment variable
docker pull bazwilliams/serial-number-service
docker run --rm -e SERIAL_NUMBER_SEED=42 -p 8888:8888 -ti bazwilliams/serial-number-service

You can POST to localhost:8888/serial-numbers with a Content-Type of application/json and the following body:


  "productName": "Sondek LP12"
}

You will get back a serial number of 42, and each subsequent request will return the next number counting up from the serial number seed you provided in the environment variable. If you provide a productType of speakers in your JSON body, you will get back a pair of serial numbers. 
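For example, a request might look like this from the command line (a sketch; the productType field is optional as described above):

curl -X POST http://localhost:8888/serial-numbers \
  -H "Content-Type: application/json" \
  -d '{ "productName": "Sondek LP12", "productType": "speakers" }'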

The example code is available at http://github.com/bazwilliams/serialnumberexample

CI on Travis
The .travis.yml script has been kept relatively build agnostic in this case to facilitate switching CI servers easily; the .travis.yml format seems to be a de facto standard, and there is a plugin for Jenkins should you wish to use this approach on that CI server. Travis runs a build as a series of blocks, and any script which exits with a non-zero code will fail the build. We use a Makefile to do the actual build and to keep the concerns of the CI server and the build tools separate. Make is simple and provides excellent, straightforward dependency management. A minimal sketch of the layout follows the list below. 

  • We use before_install to install the tooling required by the build environment. 
  • We use before_script to log in to Dockerhub. 
  • The script block is used to compile and run tests; anything that fails in this part will mark the build as failed. We also build the Docker images within this block, tagged with the Travis build number, and push them to Dockerhub. 
  • If the script was successful, the after_success block is run, where we tag the Docker images appropriately and push them to Dockerhub. We want to tag using the following conditions:
    • If successful and the branch is master, tag latest. 
    • If successful and the build is for a Github pull request, tag PR_ followed by the pull request number. 
    • If successful and the build is for a feature branch, tag with the feature branch name. 
    • In all cases, tag with the Travis build number. 
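A minimal sketch of how such a .travis.yml might be laid out (the Docker credential variable names are assumptions; the Makefile targets are described in the next section):

language: csharp
sudo: required
services:
  - docker
before_install:
  - make nuget
before_script:
  - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
script:
  - make compile test
  - make all-the-dockers docker-push
after_success:
  - make docker-tag docker-push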

Makefile

The Makefile has some basic entry points, clean, compile and test, which are expected to be used by the CI server. A simplified sketch follows the list below. 
  • clean removes all build artefacts; this just deletes everything in the bin and obj folders, but could be modified to use git clean instead. 
  • nuget installs the latest version of NuGet, as the version provided by Mono is version 2 and we ran into difficulties with some packages requiring NuGet version 3. 
  • nuget-restore restores all the packages required by the solution; we don't check our packages into the repository. 
  • appconfig creates an App.config file which is required for the build (more on this later). 
  • compile depends on clean, nuget-restore and appconfig and builds the entire solution with the Release configuration; we aren't interested in debug symbols being included in our Docker image. 
  • testrunner uses NuGet to install an NUnit console runner. 
  • test uses the NUnit console runner to run our unit and integration tests. At Linn we tend to have multiple test packages, which are all organised under a Tests/ folder and have an assembly name ending in .Tests. The NUnit console runner will accept a project file to find tests, but not multiple projects, so rather than run the tests for each individual project we use find to locate all the dlls under Tests/ which end in .Tests.dll and aren't in an obj folder. The NUnit console runner will accept a list of assemblies and find the test fixtures within them. 
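A simplified sketch of those targets (the solution name, paths and exact tool invocations are assumptions; recipe lines must be indented with tabs):

NUGET = tools/nuget.exe

clean:
	find . -type d \( -name bin -o -name obj \) -prune -exec rm -rf {} +

nuget:
	mkdir -p tools
	curl -sSL -o $(NUGET) https://dist.nuget.org/win-x86-commandline/latest/nuget.exe

nuget-restore: nuget
	mono $(NUGET) restore SerialNumberExample.sln

# Placeholder App.config so the solution builds; the real config is generated at run time
appconfig:
	cp Service.App/App.config.template Service.App/App.config

compile: clean nuget-restore appconfig
	xbuild /p:Configuration=Release SerialNumberExample.sln

testrunner: nuget
	mono $(NUGET) install NUnit.ConsoleRunner -OutputDirectory tools

test: testrunner
	mono tools/NUnit.ConsoleRunner.*/tools/nunit3-console.exe \
		$$(find Tests -name '*.Tests.dll' -not -path '*/obj/*')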

We have some Docker specific targets within the Makefile: all-the-dockers, docker-tag and docker-push. A sketch of these follows the list below.
  • all-the-dockers doesn't have a list of tasks, but is used as a way to group all the individual Docker tasks together; at Linn we tend to build microservices, so an individual solution may produce more than one Docker container. 
  • Each Docker image to be built should be listed as a dependency of all-the-dockers and will use docker to build an image from a specific project folder, tagged with the current Travis build number. We store the Dockerfile at the root of the project folder, not the solution root. We also depend on compile here to be sure the code we are putting in is fresh - it is plausible the code may have been instrumented for code coverage, so we recompile. 
  • docker-tag invokes a Makefile macro which uses the environment variables provided by Travis to apply the right tag to the named Docker image already tagged with the current Travis build number. Docker tags are not allowed to contain a # symbol, which can be used in git branch names on Github as a way to associate issues, so this Makefile creates a cleansed version of the branch name and uses that to tag the Docker image. 
  • docker-push pushes the Docker image and its tags to Dockerhub. 
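A sketch of those targets (the project folder is an assumption, and the macro is written here as plain conditionals; the tagging rules follow the list in the Travis section):

IMAGE = bazwilliams/serial-number-service
# Replace any # in the branch name so it is a legal Docker tag
SAFE_BRANCH = $(subst \#,_,$(TRAVIS_BRANCH))

all-the-dockers: serial-number-service-docker

serial-number-service-docker: compile
	docker build -t $(IMAGE):$(TRAVIS_BUILD_NUMBER) Service.App/

docker-tag:
ifeq ($(TRAVIS_PULL_REQUEST),false)
  ifeq ($(TRAVIS_BRANCH),master)
	docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):latest
  else
	docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):$(SAFE_BRANCH)
  endif
else
	docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):PR_$(TRAVIS_PULL_REQUEST)
endif

docker-push:
	docker push $(IMAGE)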

12 Factor Config
I mentioned earlier that the Makefile creates an App.config file as part of the appconfig target. This is required because we don't check in our App.config files if we are targeting Docker; the App.config files are in the .gitignore. .Net applications use App.config to set environment-specific configuration, which may include database connection strings, message broker connection details or credentials. We want to build a Docker image and run it anywhere, with any configuration - we definitely do not want to build a Docker image per environment. 

Instead we have an App.config.template file which is our regular App.config, but with any sensitive or environment-specific details replaced by a ${} marker. In the serial number example, the starting serial number is defined by a ${SERIAL_NUMBER_SEED} marker, and when we run the Docker image the environment variable SERIAL_NUMBER_SEED will be substituted in. 
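A hypothetical fragment of that template (the appSettings key name is an assumption; the ${} marker is as described):

<configuration>
  <appSettings>
    <add key="SerialNumberSeed" value="${SERIAL_NUMBER_SEED}" />
  </appSettings>
</configuration>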


App.config will not be copied into the Docker image; only the App.config.template will be added. When the container is run, the App.config file is dynamically generated from the template and the environment variables passed in. 

Dockerfile

This assumes the assembly name for the Service.App project is App, which will produce an App.exe file. The build output folder is assumed to be ./bin/Release in this example, as we're building the Release configuration. 

We copy in all the required assemblies from the build output folder. 

Since this is a Nancy application we also copy in the files required by Nancy, which may include Views, and any data required by a client application such as Content, fonts and scripts. 

We install the gettext-base package, which provides the envsubst utility. 

When the container is run, we pipe App.config.template through envsubst, write the result to App.exe.config and then run our application using mono. A sketch of the whole Dockerfile follows. 
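A sketch of such a Dockerfile, under the assumptions above (the base image tag and the exact content folders are assumptions):

FROM mono:4.2

# envsubst comes from the gettext-base package
RUN apt-get update && apt-get install -y gettext-base && rm -rf /var/lib/apt/lists/*

WORKDIR /app

# Compiled assemblies from the Release build
COPY bin/Release/ /app/

# Nancy views and static content for any client application (plus fonts, scripts and so on as required)
COPY Views/ /app/Views/
COPY Content/ /app/Content/

# Configuration template; App.exe.config is generated when the container starts
COPY App.config.template /app/

EXPOSE 8888

CMD envsubst < /app/App.config.template > /app/App.exe.config && mono /app/App.exe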

Putting it together in a Scenario
This is a contrived example in which a new requirement arrives to support a certain type of product that needs more than one consecutive serial number. 

Assuming we already have a master branch and an image pushed to Dockerhub as a result of build 8.


Prepare the changes in a local branch called multiple-serial-number-request

Commit and push to a remote branch on Github
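From the command line those two steps might look like this (a sketch; the commit message is made up):

git checkout -b multiple-serial-number-request
# ...make and test the changes locally, then:
git add .
git commit -m "Support multiple consecutive serial numbers"
git push -u origin multiple-serial-number-request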

Assuming the tests passed, there will now be a Docker image pushed to Dockerhub with a tag of multiple-serial-numbers and a tag of 9

Create Pull Request

Once happy with the results, we create a pull request from multiple-serial-number-request to the master branch

Code can be peer reviewed at this stage

We can still push more commits - Github won't allow merging (except by an administrator) until Travis has passed all the tests using:
  • Latest commit of current branch (the branch build)
  • Preemptive merge of our branch into master (the pull request build)

At this point we will get two tagged Docker images: 'multiple-serial-numbers' and PR_3
  • The Docker images from the previous build are still present; they just no longer have the multiple-serial-numbers tag. 

Merge Pull Request
Once happy with the pull request, we'll use Github to merge it.

Travis will then be triggered into another build, this time with the master branch.

If everything passes, the Docker image resulting from that build will be tagged latest and will have our new feature included. All we need to do is deploy the latest image to production. 


Potential Improvements
In theory we could deploy a build-number-tagged Docker image to a real testing environment and run acceptance tests against it. 
  • At Linn we use SpecFlow for acceptance testing, and I have had some success building an environment using CloudFormation, complete with users and deployment to ECS, and using NUnit runners to invoke our tests. 
  • We use a dedicated ECS cluster for testing (which we've named the Kobayashi Maru).
  • If this passes we have an extremely high level of confidence in what our code will do, potentially to the point where we could use Travis to auto-deploy to a staging environment. 
  • The remaining manual step would be reconfiguring load balancers, but that could technically be done by a CloudFormation template… 

Simulating CI Mono Build on Windows



Sometimes it is useful to check changes to the Makefile and run the build locally without having to commit, push and wait for the CI server. In these cases I use a solution-level Dockerfile which prepares an environment within which I can run `make compile test` to check a build; a hypothetical sketch follows. With some extra changes, we could even build the Docker images from inside this Docker container...
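A hypothetical solution-level Dockerfile for this (the base image and package list are assumptions):

FROM mono:4.2

# Tools needed by the Makefile targets
RUN apt-get update && apt-get install -y make curl && rm -rf /var/lib/apt/lists/*

WORKDIR /src
COPY . /src

# Run the same targets the CI server would
CMD ["make", "compile", "test"]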

Friday, January 29, 2016

12 Factor Config for .Net on Mono in a Docker

Recently I've been thinking about how to run a .Net application using Mono inside a Docker container; along with this we'd like to use Twelve Factor config.

Whilst it is possible to use a library to access environment variables instead of using appSettings, this becomes difficult for database and message broker URLs, which other libraries may expect to find within the App.config or Web.config file.

The solution I came up with follows:

Build the application without an App.config, moving the existing configuration file aside and renaming it App.config.template.

Mark any values you want substituted from environment variables with ${}. E.g.:

<amqp>
  <connections>
    <connection name="rabbit" server="${RABBITMQ_URL}" password="${RABBITMQ_PASSWORD}" />
  </connections>
</amqp>

Include within the Docker image a script named sub.sh similar to this and add it to /conf:

#!/bin/sh
# Expand any ${VAR} references in the template file given as the first argument.
# Quotes are escaped first so that the XML survives the eval.
while IFS= read -r line
do
    escaped=$(printf '%s' "$line" | sed 's/["\\]/\\&/g')
    eval "echo \"$escaped\""
done < "$1"
Add your config template to /conf

Then in the Dockerfile, if your CMD is:

CMD ["mono", "/app/foo.exe" ]
Change it to:

CMD /conf/sub.sh /conf/App.config.template > /app/foo.exe.config; mono /app/foo.exe
Now when you run your Docker image, pass in your environment variables as usual; Mono will pick up the generated foo.exe.config alongside foo.exe. In this case, I'd set the RABBITMQ_URL and the RABBITMQ_PASSWORD variables.

You can also put these in a file and pass it to the docker run command using --env-file.
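For example (a sketch; the image name, URL, password and file name are placeholders):

docker run -e RABBITMQ_URL=amqp://rabbit.example.com -e RABBITMQ_PASSWORD=secret my-app

# or, with the same variables listed as KEY=value lines in a file:
docker run --env-file production.env my-app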

Thursday, January 21, 2016

Howto: Node-RED Change Hue Lights Temporarily On Door Open

We fitted a LightwaveRF magnetic sensor to a door to detect whether it is open or closed. The main driver behind this was to increase the level of the hallway lights for a predetermined period of time whenever the door was opened. Since we're using coloured lights, it makes sense to put the light back to how it was once the time period has elapsed. 

Node-RED does not offer a node specifically for storing state, but it does offer function nodes which have access to a context object that can be used to store state between flows. 

I've created a flow which listens to the `domoticz/out` MQTT topic, filters messages based on their Domoticz IDX value and further filters on the door switch's `nvalue`, where 1 indicates door open and 0 indicates door closed. 
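One way to express that filtering is a single function node with two outputs; a hedged sketch (the IDX value 123 is a placeholder, and the real flow may use switch nodes instead):

// Route door open messages to output 1 and door closed messages to output 2
var data = JSON.parse(msg.payload);
if (data.idx !== 123) {
    return [null, null];   // not the door sensor; drop the message
}
msg.payload = data;
if (data.nvalue === 1) {
    return [msg, null];    // door open
}
return [null, msg];        // door closed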


On Door Open:
  1. Retrieve the state of the current light and, if nothing is already stored, store the current bulb state in the context. 
  2. Change the state of the bulb to a bright white. 
  3. Reset any timer in place from an earlier door close event whilst the light is still bright. This would occur if the door is opened again whilst the light is in a brightened state. By not replacing the stored light state, we also avoid overwriting the original colour with the temporary brighter state. 

On Door Close:
  1. Instruct the 'Record or Despatch' node to restore the original state, which sends the original light colour to the timer. 
  2. Once the timer has expired, the original state is sent on to the hallway bulb. 
  3. A reset context message is sent to the 'Record or Despatch' node to clear any saved light state. 

The ‘Record or Despatch’ node is a function with the following code:

if (msg.restore) {
    // Despatch: send the previously stored bulb state on to the timer
    node.send({payload: context.previousState});
} else if (msg.reset) {
    // Clear the saved state once the original colour has been restored
    delete context.previousState;
} else {
    // Record: store the incoming bulb state, but don't overwrite one already saved
    if (!context.previousState) {
        context.previousState = msg.payload;
    }
}


Feel free to download the full flow here: 

Howto: Node Red Subflow for Hue Bulbs

When writing some flows for Node-RED to control my Philips Hue bulbs, I wanted to read from a bulb, do some work, then write that same colour back to the bulb. When you perform a GET on a bulb you get all of its information; whilst useful, this makes it a bit inconvenient to PUT straight back into a bulb, as that API has a slightly different resource shape where the colour mode is determined by which values are submitted.

After using a group of nodes in one flow which did exactly this, I needed the same block in another flow. So I created a subflow by selecting those nodes, clicking the hamburger menu in the top right and choosing 'Subflows > Selection to Subflow'. This replaces the selected nodes with a single node which you can edit via the 'subflows' group in the left pane.

Download this selection and make your own subflow:
https://dl.dropboxusercontent.com/u/174818/HueInputToOutput.json?dl=1

Wednesday, January 20, 2016

Automatic Kitchen Lights



We recently purchased Hue bulbs for our kitchen and they're great, but since we already have an RFXCom 433 unit (as per my blog on automating hifi), we wanted to automate the lights and invested in a LightwaveRF PIR motion sensor. 


I downloaded Domoticz [http://domoticz.com/] onto an SD card for my Raspberry Pi and also installed Node-RED [http://nodered.org/], a tool for filtering and processing MQTT messages. 

Domoticz is used to store the state of the motion sensor and is configured to emit an MQTT message whenever the state changes. 

We also wanted the lights to have a different colour depending on the time of day we enter the kitchen; for example, during the evening we might want a relaxing light, at meal times a white light above the dining table and during the day some multicoloured lights, but not change them if they're already on. 

For each time slot of the day, a template has been created containing the settings for the main and dining lights, and these are sent on to the relevant light bulbs. 

When motion is triggered, the node-red flow decides which template to use and as an additional requirement, switches the lights off after a configurable delay once no motion has been detected. 

The flow is pictured below (click to download):



Download the flow!

No Motion:

The PIR will send an 'off' signal when it detects no motion; a switch on the back can be used to configure this delay and I've set it to 2 seconds. When Domoticz receives the 'off' signal, an MQTT message is published and all messages on 'domoticz/out' are received by this flow. The messages are filtered by the IDX value of the switch in Domoticz and then by payload.nvalue, which indicates motion. When the nvalue is 0, the message is directed to a trigger node which will delay it for 5 minutes. After 5 minutes, the trigger node will emit the same message to a 'turn light off' template, the result of which is sent to both the dining table and main kitchen lights.

Motion:

The PIR will send an 'on' signal when it detects motion. When Domoticz receives the 'on' signal, an MQTT message is published and received by this flow, filtered by the IDX value of the kitchen sensor in Domoticz and then by payload.nvalue, which indicates motion. When the nvalue is 1, this does two things.

1: The message is directed to the 'Reset Timer' template node, which generates a new message with a 'reset' attribute set to true; this is passed into the trigger node, cancelling any timeout in progress. This is useful if someone leaves the kitchen: the timer will start a countdown to switching the lights off after 5 minutes, but that countdown is cancelled if someone walks back in before the lights have switched off. 

2: The message is also directed to the 'Scene Chooser', which runs a function that evaluates the current time and emits a message on the applicable output to template nodes, which generate the JSON to drive the relevant dummy switches in Domoticz. The function has 5 outputs, one for each scene we want; a sketch of this function follows the list below:

00 - 06: Evening
06 - 09: Breakfast
09 - 12: Coloured
12 - 14: Lunch
14 - 17: Coloured
17 - 20: Dinner
20 - 22: Evening
22 - 00: Coloured
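A sketch of what that Scene Chooser function might look like (the output order is an assumption: 1=Evening, 2=Breakfast, 3=Coloured, 4=Lunch, 5=Dinner):

// Choose one of the five scene outputs based on the current hour
var hour = new Date().getHours();
var output;
if (hour < 6) { output = 0; }        // 00 - 06: Evening
else if (hour < 9) { output = 1; }   // 06 - 09: Breakfast
else if (hour < 12) { output = 2; }  // 09 - 12: Coloured
else if (hour < 14) { output = 3; }  // 12 - 14: Lunch
else if (hour < 17) { output = 2; }  // 14 - 17: Coloured
else if (hour < 20) { output = 4; }  // 17 - 20: Dinner
else if (hour < 22) { output = 0; }  // 20 - 22: Evening
else { output = 2; }                 // 22 - 00: Coloured

// Emit the incoming message on the chosen output only
var messages = [null, null, null, null, null];
messages[output] = msg;
return messages;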

I've made the flow available to download from here:
https://dl.dropboxusercontent.com/u/174818/TimedKitchenLights.json

Resources: