Saturday, April 23, 2016

How Linn push dotNet Dockers using Github Workflow and Travis


At Linn we've adopted the Github pull-request workflow to peer review code before it is merged into master. Every commit is built and tested on a CI server; in this example we use Travis to test a .Net web application using NancyFx and to build a Docker container tagged with the feature branch name. I'll also cover how we generate a container tagged 'latest'. 

It is assumed the reader is familiar with Docker. Information about the Github pull-request workflow and how we self-host a .Net NancyFx application can be found in the following guides: 

Skip to the end
Assuming you’ve already installed Docker, you can run the latest version of the application by pulling the bazwilliams/serial-number-service image. 

You will need to:
  • Expose the container port 8888
  • Provide a SERIAL_NUMBER_SEED environment variable

    docker pull bazwilliams/serial-number-service
    docker run --rm -e SERIAL_NUMBER_SEED=42 -p 8888:8888 -ti bazwilliams/serial-number-service

You can POST to localhost:8888/serial-numbers with Content-Type application/json and the following body:

    { "productName": "Sondek LP12" }

You will get back a serial number of 42, and each subsequent request will return a number incremented from the serial number seed you provided in the environment variable. If you provide a productType of speakers in your JSON body, you will get back a pair of serial numbers. 
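
For example, using curl:

    curl -X POST -H 'Content-Type: application/json' -d '{ "productName": "Sondek LP12" }' http://localhost:8888/serial-numbers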

The example code is available at

CI on Travis
The .travis.yml script has been kept relatively build agnostic in this case to facilitate switching CI servers easily; the .travis.yml format has become something of a standard and there is a plugin for Jenkins should you wish to use this approach on that CI server. Travis runs a build as a series of blocks and any script which exits with a non-zero code will fail the build. We use a Makefile to do the actual build and keep the concerns of the CI server and the build tools separate. Make is simple and provides excellent, straightforward dependency management. A sketch of such a .travis.yml follows the list below. 

  • We use before_install to install necessary tooling required by the build environment. 
  • We use the before_script to log in to Dockerhub.
  • The script block is used to compile and run tests; anything that fails in this part will mark the build as failed. We also build the Docker images within this block, tagged with the Travis build number, and push them to Dockerhub. 
  • If the script was successful, the after_success block is run, where we tag the Docker images appropriately and push them to Dockerhub. We want to tag using the following conditions:
    • If successful and the branch is master; tag latest
    • If successful and the build is for a Github pull request; tag PR_ followed by the pull request number
    • If successful and the build is a feature branch; tag with the feature branch name
    • In all cases, tag with the Travis build number. 
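
A minimal sketch of a .travis.yml along these lines (the Makefile targets are the ones described below; the Dockerhub credential variable names are assumptions, set as secure environment variables in Travis):

    sudo: required
    services:
      - docker
    language: csharp
    mono: latest
    before_install:
      - make nuget
    before_script:
      - docker login -u "$DOCKER_USERNAME" -p "$DOCKER_PASSWORD"
    script:
      - make compile test
      - make all-the-dockers docker-push
    after_success:
      - make docker-tag docker-push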


The Makefile has some basic entry points (clean, compile and test) which are expected to be used by the CI. 
  • clean removes all build artefacts; this target just deletes everything in the bin or obj folders, but could be modified to use git clean instead. 
  • nuget installs the latest version of NuGet, as the version provided by Mono is version 2 and we ran into difficulties with some packages requiring NuGet version 3. 
  • nuget-restore restores all the packages required by the solution; we don't check our packages into the repository. 
  • appconfig creates an App.config file which is required for the build (more on this later). 
  • compile depends on clean, nuget-restore and appconfig and uses build to build the entire solution with a Release configuration; we aren't interested in debug symbols being included in our Docker image. 
  • testrunner will use NuGet to install an NUnit console runner. 
  • test will use the NUnit console runner to run our unit and integration tests. At Linn we tend to have multiple test packages, all organised under a Tests/ folder and with assembly names ending in .Tests. The NUnit console runner will accept a project file to find tests, but not multiple projects, so rather than run the tests for each individual project we use find to locate all the dlls under Tests/ which end in Tests.dll and aren't in an obj folder. The runner then accepts that list of assemblies and finds the test fixtures within (see the sketch after this list). 
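
A sketch of the testrunner and test targets (the runner version and package paths are my assumptions; recipe lines must be tab-indented):

    NUNIT_CONSOLE = packages/NUnit.ConsoleRunner.3.2.1/tools/nunit3-console.exe

    testrunner:
        nuget install NUnit.ConsoleRunner -Version 3.2.1 -OutputDirectory packages

    test: testrunner
        mono $(NUNIT_CONSOLE) $$(find Tests/ -name '*Tests.dll' -not -path '*/obj/*')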

We have some Docker specific targets within the Makefile: all-the-dockers, docker-push and docker-tag.
  • all-the-dockers doesn't have a list of tasks, but is used as a way to group all the individual docker tasks together; at Linn we tend to build microservices, so an individual solution may produce more than one Docker container. 
  • Each docker to be built should be listed as a dependency of all-the-dockers and will use docker to build an image in a specific project folder, tagged with the current Travis build number. We store the Dockerfile at the root of the project folder, not the solution root. We also depend on compile here to be sure the code we are putting in is fresh; it is plausible the code may have been instrumented for code coverage, so we recompile. 
  • docker-tag will invoke a Makefile macro which uses the environment variables provided by Travis to apply the right tag to the named Docker image tagged with the current Travis build number. Docker tags are not allowed to contain a # symbol, which can be used in git branch names with Github as a way to associate issues, so this Makefile creates a cleansed version of the branch name and uses that to tag the Docker image. 
  • docker-push will push the Docker image and tags to Dockerhub. A sketch of these targets follows. 
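
A sketch of these targets for the single image in this example (the project folder and conditional layout are illustrative; TRAVIS_BUILD_NUMBER, TRAVIS_BRANCH and TRAVIS_PULL_REQUEST are provided by Travis, the last being 'false' or the pull request number; recipe lines must be tab-indented):

    IMAGE = bazwilliams/serial-number-service

    all-the-dockers: serial-number-service-docker

    serial-number-service-docker: compile
        docker build -t $(IMAGE):$(TRAVIS_BUILD_NUMBER) Service.App/

    docker-tag:
    ifeq ($(TRAVIS_PULL_REQUEST),false)
    ifeq ($(TRAVIS_BRANCH),master)
        docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):latest
    else
        # cleanse the branch name; Docker tags may not contain '#'
        docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):$$(echo "$(TRAVIS_BRANCH)" | tr -d '#')
    endif
    else
        docker tag $(IMAGE):$(TRAVIS_BUILD_NUMBER) $(IMAGE):PR_$(TRAVIS_PULL_REQUEST)
    endif

    docker-push:
        docker push $(IMAGE)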

12 Factor Config
I mentioned earlier how the Makefile will create App.config files as part of the appconfig target. This is required because we don't check in our App.config files if we are targeting Docker; they are in the .gitignore. .Net applications use App.config to set environment specific configuration, which may include database connection strings, message broker connection details or credentials. We want to build a Docker image and run it anywhere, with any configuration; we definitely do not want to build a Docker image per environment. 

Instead we have an App.config.template file which is our regular App.config, but with any sensitive or environment specific details replaced by a ${} marker. In the serial number example, the starting serial number is defined by a ${SERIAL_NUMBER_SEED} marker, and when we run the Docker image the environment variable SERIAL_NUMBER_SEED will be substituted in. 
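
In the App.config.template, that entry might look something like this (the key name here is a guess for illustration):

    <appSettings>
      <add key="SerialNumberSeed" value="${SERIAL_NUMBER_SEED}" />
    </appSettings>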

App.config will not be copied into the Docker container; only the App.config.template will be added. When the Docker is run, the App.config file will be dynamically generated from the template using the environment passed in. 


This assumes the assembly name is App for the Service.App project, which will produce an App.exe file. The build target folder is assumed to be ./bin/Release in this example as we're building for the Release configuration. 

We copy in all the required assemblies from the build target folder. 

Since this is a Nancy application we will also copy in the files required by Nancy, which may or may not include Views, and any data required for a client application such as Content, fonts and scripts. 

We install the package gettext-base, which provides the envsubst application. 

When run, we pass App.config.template into envsubst, output the result to App.exe.config and then run our application using mono. A sketch of such a Dockerfile follows. 
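
Pulling those steps together, a sketch of the kind of Dockerfile described above (the base image, paths and exposed port are assumptions based on this example):

    FROM mono:4

    # envsubst is provided by the gettext-base package
    RUN apt-get update && apt-get install -y gettext-base && rm -rf /var/lib/apt/lists/*

    WORKDIR /app

    # the compiled Release output, Nancy views/content and the config template
    COPY Service.App/bin/Release/ /app/
    COPY Service.App/App.config.template /app/

    EXPOSE 8888

    # generate App.exe.config from the template using the runtime environment, then start the app
    CMD envsubst < App.config.template > App.exe.config && mono App.exe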

Putting it together in a Scenario
This is a fake example in which a new requirement arrives: support for a certain type of product which requires more than one consecutive serial number. 

Assuming we already have a master branch and an image pushed to Dockerhub as a result of build 8.

Prepare changes in local branch called multiple-serial-number-request

Commit and push to a remote branch on Github

Assuming the tests passed, there will now be a Docker image pushed to Dockerhub tagged multiple-serial-number-request and tagged 9

Create Pull Request

Once happy with the results, we create a Pull Request from multiple-serial-number-request to the master branch

Code can be peer reviewed at this stage

We can still push more commits; Github won't allow merging (without an admin override) until Travis has passed all the tests using:
  • Latest commit of current branch (the branch build)
  • Preemptive merge of our branch into master (the pull request build)

At this point we will get two tagged Docker images: 'multiple-serial-number-request' and 'PR_3'.
  • The Docker images from the previous build are still present; they just no longer have the multiple-serial-number-request tag. 

Merge Pull Request
Once happy with the pull request, we’ll use Github to merge the Pull Request.

Travis will then be triggered into another build, this time with the master branch.

If everything passes, the Docker image resulting from that build will be tagged latest and will include our new feature. All we need to do is deploy the latest Docker image to production. 

Potential Improvements
In theory we could deploy a build-number-tagged Docker image to a real testing environment and run acceptance tests against it. 
  • At Linn we use SpecFlow for acceptance testing and I have had some success building an environment using CloudFormation, complete with users, deployment to ECS and using NUnit runners to invoke our tests. 
  • We use a dedicated ECS cluster for testing (which we've named the Kobayashi Maru).
  • If this passed then we would have an extremely high confidence level in what our code will do, potentially to the point where we could use Travis to auto deploy to a staging environment. 
  • The remaining manual step would be reconfiguring load balancers, but that could technically be done by a CloudFormation file… 

Simulating CI Mono Build on Windows

  • Sometimes it is useful to check changes to the Makefile and run the build using `build` without having to commit, push and wait for the CI server. In these cases I use a solution level Dockerfile which prepares an environment within which I can run `make compile test` to check a build (a sketch follows). With some extra changes, we could even build the Docker from this Docker...
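
A sketch of such a solution level Dockerfile (the base image and package list are assumptions):

    FROM mono:4

    RUN apt-get update && apt-get install -y make

    WORKDIR /src
    COPY . /src

    # run the same targets the CI would
    CMD ["make", "compile", "test"]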

Friday, January 29, 2016

12 Factor Config for .Net on Mono in a Docker

Recently I've been thinking about how to run a .Net application using Mono inside a Docker, along with this we'd like to use Twelve Factor config.

Whilst it is possible to use some library to access environment variables instead of using appSettings it becomes difficult for database and message broker URLs which other libraries may expect to find within the App.config or Web.config file.

The solution I came up with follows:

Build the application without an App.config, moving the existing configuration file and renaming it App.config.template.

Substitute any values you want to come from environment variables with ${} markers. E.g.:

<amqp>
  <connections>
    <connection name="rabbit" server="${RABBITMQ_URL}" password="${RABBITMQ_PASSWORD}" />
  </connections>
</amqp>

Include within the Docker image a script similar to this and add it to /conf (I'll assume it is saved as substitute.sh):

#!/bin/sh
while read line; do
    eval echo "$line"
done < "$1"
Add your config template to /conf

Then in the Dockerfile, if your CMD is:

CMD ["mono", "/app/foo.exe" ]
Change it to:

CMD /conf/substitute.sh /conf/App.config.template > /app/App.config; mono /app/foo.exe
Now when you run your Docker image, pass in your environment variables as usual. In this case, I'd set RABBITMQ_URL and RABBITMQ_PASSWORD.
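
For example (hypothetical values and image name):

    docker run -e RABBITMQ_URL=amqp://rabbit.internal -e RABBITMQ_PASSWORD=secret my-app-image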

You can also configure these in a file and pass it into the docker run command using --env-file.

Thursday, January 21, 2016

Howto: Node-RED Change Hue Lights Temporarily On Door Open

We fitted a LightwaveRF magnetic sensor to a door to detect whether it is open or closed. The main driver behind this was to increase the level of the hallway lights for a predetermined period of time whenever the door was opened. Since we're using coloured lights, it makes sense to put the light back to how it was once the time period has elapsed. 

Node-RED does not offer a node for storing state, but it does offer function nodes which have access to a context object that can be used to store state between flows. 

I've created a flow which listens to the `domoticz/out` MQTT topic, filters messages based on their Domoticz IDX value and further filters on the door switch's nvalue, where 1 indicates door open and 0 indicates door closed. 

On Door Open:
  1. Retrieve state of current light and if nothing already stored, store the current bulb state in the context. 
  2. Change the state of the bulb to a bright white. 
  3. Reset any timer put in place by an earlier door close event. This would occur if the door is opened again whilst the light is in a brightened state. By not replacing the stored light state, we also avoid overwriting the original colour with the temporary brighter state. 

On Door Close:
  1. Instruct the ‘Record or Despatch’ node to restore the original state, which sends the original light colour to the timer. 
  2. When the timer expires, the original state is sent on to the hallway bulb. 
  3. A reset context message is sent to the ‘Record or Despatch’ node to clear any saved light state. 

The ‘Record or Despatch’ node is a function with the following code:

if (msg.restore) {
    node.send({payload: context.previousState});
} else if (msg.reset) {
    delete context.previousState;
} else if (!context.previousState) {
    context.previousState = msg.payload;
}
return null;

Feel free to download the full flow here: 

Howto: Node Red Subflow for Hue Bulbs

When writing some flows for Node-RED controlling my Philips Hue bulbs, I wanted to read from a bulb, do some work, then write that same colour back to the bulb. When you perform a GET on a bulb you get all of its information; whilst useful, this makes it a bit more inconvenient to PUT back into a bulb, as that API has a slightly different resource shape where the colour mode is determined by which values are submitted.
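
The heart of the subflow is a function node doing that reshaping; a minimal sketch (the field names follow the Hue API, but the handling is simplified compared to the real subflow):

// msg.payload is a GET /lights/<id> response; build a PUT /lights/<id>/state body
var state = msg.payload.state;
var put = { on: state.on, bri: state.bri };
if (state.colormode === 'xy') {
    put.xy = state.xy;        // CIE xy colour mode
} else if (state.colormode === 'ct') {
    put.ct = state.ct;        // colour temperature mode
} else {
    put.hue = state.hue;      // hue/saturation mode
    put.sat = state.sat;
}
msg.payload = put;
return msg;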

After using a group of nodes in one flow which did exactly this, I needed the same block in another flow. So I created a subflow by selecting those nodes, clicking the hamburger menu in the top right and choosing 'Subflows > Selection to Subflow'. This replaces the selected nodes with a single node which you can edit via a 'subflows' menu group in the left pane.

Download this selection and make your own subflow:

Wednesday, January 20, 2016

Automatic Kitchen Lights

We recently purchased Hue bulbs for our kitchen. They're great, but since we already have an RFXCom 433 unit (as per my blog on automating hifi), we wanted to automate the lights and invested in a Lightwave PIR motion sensor. 

I downloaded Domoticz onto an SD card for my Raspberry Pi and also installed Node-RED, a tool for filtering and processing MQTT messages. 

Domoticz is used to store the state of the motion sensor and is configured to emit an MQTT message whenever the state changes. 

We also wanted the lights to have a different colour depending on the time of day we enter the kitchen; for example, during the evening we might want a relaxing light, at meal times a white light above the dining table and during the day some multi coloured lights, but not to change them if they're already on. 

For each of the time slots of the day, a template has been created containing the settings of each of the main and dining lights and will send these on to the relevant light bulbs. 

When motion is triggered, the node-red flow decides which template to use and as an additional requirement, switches the lights off after a configurable delay once no motion has been detected. 

The flow is pictured below (click to download):

Download the flow!

No Motion:

The PIR will send an 'off' signal when it detects no motion; a switch on the back can be used to configure this delay and I've set it to 2 seconds. If Domoticz receives an 'off' signal, an MQTT message is sent, and all messages received on ‘domoticz/out’ are received by this flow. The messages are filtered by the IDX value of the switch in Domoticz and by a filter detecting motion indicated by the payload.nvalue. When the nvalue is 0, the message is directed to a trigger node which will delay it for 5 minutes. After 5 minutes, the trigger node will emit the same message to a 'turn light off' template, the result of which is sent to both the dining table and main kitchen lights.


Motion:

The PIR will send an 'on' signal when it detects motion. If Domoticz receives an 'on' signal, an MQTT message is sent and received by this flow, filtered by the IDX value of the kitchen sensor in Domoticz and by a filter detecting motion indicated by the payload.nvalue. When the nvalue is 1, this does 2 things.

1: The message is directed to the 'Reset Timer' template node, which generates a new message with a 'reset' attribute set to true; this is passed into the trigger node, cancelling any timeout in progress. This is useful if someone leaves the kitchen: the timer will start a countdown to switching the lights off after 5 minutes, but that countdown is cancelled if someone walks back in before the lights have switched off. 

2: The message is also directed to the 'Scene Chooser', which runs a function evaluating the current time and emitting a message on the applicable channel to template nodes which generate some JSON to control the relevant dummy switches in Domoticz, switching them on. The function has 5 outputs representing each scene we want:

00 - 06: Evening
06 - 09: Breakfast
09 - 12: Coloured
12 - 14: Lunch
14 - 17: Coloured
17 - 20: Dinner
20 - 22: Evening
22 - 00: Coloured
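
The 'Scene Chooser' might be implemented along these lines (the output order is my assumption; a Node-RED function with multiple outputs returns an array, with the message placed in the slot to emit on):

// outputs: [Evening, Breakfast, Coloured, Lunch, Dinner]
var hour = new Date().getHours();
var out = [null, null, null, null, null];
var slot;
if (hour < 6) slot = 0;        // 00 - 06: Evening
else if (hour < 9) slot = 1;   // 06 - 09: Breakfast
else if (hour < 12) slot = 2;  // 09 - 12: Coloured
else if (hour < 14) slot = 3;  // 12 - 14: Lunch
else if (hour < 17) slot = 2;  // 14 - 17: Coloured
else if (hour < 20) slot = 4;  // 17 - 20: Dinner
else if (hour < 22) slot = 0;  // 20 - 22: Evening
else slot = 2;                 // 22 - 00: Coloured
out[slot] = msg;
return out;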

I've made the flow available to download from here:


Thursday, February 26, 2015

Smart Watches

The announcement of the new Pebble Time reminded me to post a blog about my LG G Watch experience since I purchased it in September. Something I now find invaluable.

At the time, I'd switched off many of my phone apps' notifications because I found them distracting and would often venture off into other apps on the phone when acting on them. However, with the watch I could see the notification and be presented with a very narrow set of options. Somehow I found this much less distracting, and as a result I've since increased the number of notifications apps can emit on my phone. I find even when coding, my phone beeps and I briefly look at my watch, swipe it away and carry on. If I do need to act, I can save the notification for later.

I still find the voice part a little embarrassing to use in public, but in the car or by myself I find dictating to the watch very useful. I've dictated at least half the text messages I've sent over the past few months. I also quite liked the default replies selectable via the watch. Usually it gets the dictation spot on.

A great recent feature is Google Music integration, having synched my music to Google it's nice to be able to ask Google to 'play some Mogwai' for example. Though I've never found the ability to store music on the watch useful as I've always got my phone with me.

The watch ties in very well if you use Google Keep or Google Inbox where you can create reminders for certain locations. I.e. an email for collection at a specific store or simply a photograph/note of something to buy next time you're at a shop.

There is a variety of watch faces available, I've settled on Weather Watchface which shows me the current weather, temperature with highs and lows. It also provides useful feedback if I walk away from my phone.

Navigation is pretty cool too, getting a vibration alert on your wrist to tell you a junction is coming up is useful, however the display is useless if it's a complicated junction like a double roundabout or 'left then right' type junction. You might be shown a 'turn right' arrow at a junction or roundabout, but you're never entirely sure as the screen doesn't reflect your current direction or show you a local map view.

Over time many of my initial niggles have disappeared, there has been a stream of system updates. Some apps still don't take full advantage such as Facebook only ever showing 'open on phone' instead of providing a bit more information.

The battery appears to have improved, lasting about 2.5 days now, though on one or two occasions I've still forgotten to charge my watch or hadn't seated it properly in the dock and worn a brick on my wrist for the day. In fact, one day I wore my old 'traditional' watch and found myself looking at that watch when my phone made noises!

I didn't like the watch strap that came with the LG G Watch, but since it has a 22mm watch strap fitting I could fit a Milanese Mesh watch strap instead.

One of the main features I bought the watch for was Google Now's 'time to leave' card, but this is actually a bit hit and miss. Sometimes it forgets to notify you until it is too late and other times it notifies you way too early, leaving me with a dilemma between dismissing the notification for a few hours or having to remember to look at it again.

Many of the other Google Now cards aren't supported and don't appear which is fair enough, but means I tend not to see them at all as I'll rarely access the feature on the phone.

Answering calls on the watch is very useful, I get to see the caller and options to abort the call or answer, but unfortunately there is no option to answer on speaker making the feature more limited than it could be. Thankfully Wear Speaker comes to the rescue by putting a big button on the watch when a call is connected to enable speaker.

So after Pebble's Time announcement I thought I'd write a list of the things I've come to really like about Android Wear:

  • I like dictating text messages and creating reminders for myself. I particularly like the geo based reminders. E.g. "Ok Google, remind me to write this blog post when I get home"
  • I like the 'time to leave' functionality
  • I like seeing who called me on my watch. 
  • I like reading SMS messages on my watch. 
  • I like seeing email message summaries on my watch. 
  • Replaceable 22mm watch strap. 

All of the above should be possible on the new Pebble Time and probably in a more useful format using the time line function.

How about the features I'd lose:

  • The touchscreen is attractive and seems great for things like Google Keep lists, but I've never used it for that. It's more of a hindrance as you must look at the screen to ensure you touch the right place. 
  • The OLED screen looks fantastic, but it spends most of its time switched off to save battery power and the watch face I've chosen doesn't have fancy graphics. 
  • Having to charge my watch every 2 days. 

I've pledged for two Pebble Time watches and look forward to trying one out alongside my LG G Watch for a week in May. I wonder which will go on eBay after?

Saturday, January 17, 2015

Linn DS Homebrew RCU with Playlist Support

After installing a hidden Linn DS for my bathroom, I found controlling it tricky; whilst using the phone is okay, I'd still like the convenience of a remote to switch off the music on the way out.

I worked around this problem by extending the alarm clock functionality already in use for my kitchen to include a 'set standby' option. This would place the bathroom DS into standby mode at known times. Not ideal, but more useful.

The Linn solution to this is a control panel on the wall which offers volume, power and a remote IR sensor. Whilst this would give me the ability to switch off on the way out, it would mean having a remote control in the bathroom to change source, which is not something I'd like. Since I'd added functionality to schedule stored playlists to start as well as the radio, I convinced myself I also wanted the ability to preload a known selection of music at the press of a button. I'm a coder, and after watching the RI Christmas lectures on 'hacking your home' I felt this was something I should do myself.

After some online digging around I was going to go for a home grown button panel solution hooked up to a Raspberry Pi in the loft, until a work colleague @kylegordon suggested a home automation solution might be an alternative. After discovering the RFXCom433 and the LightwaveRF Mood Switch I realised I could make this happen. Kyle also recommended a Node.js library which had support for everything I needed and would advertise the LightwaveRF button clicks as events.

I ordered an RFXcom433 and a LightwaveRF Mood Switch/LightwaveRF inline relay bundle; the original idea was to replace the bathroom light switch tying the 'on' and 'off' buttons of the Mood switch to trigger the in-line relay (switching the bathroom light).

On Friday I received all the bits.

I initially installed the Windows drivers and upgraded the firmware for the RFXcom, although it already had support for everything I wanted. I had a few issues getting the node-rfxcom package installed on Windows due to the serialport dependency's node-gyp build, which required Python. Updating the package.json for node-rfxcom to a newer version of serialport (~1.4) solved this problem on Windows.

I then wrote a quick node.js app using the node-rfxcom library to send a LightwaveRF switch command on initialisation so I could pair the lightswitch, then display a console message on each button click afterwards.

Once paired and working I transferred the RFXcom to my Debian Linux Microserver, where I found I needed no drivers for the RFXcom transceiver. The node.js app was updated to send HTTP calls to the DS playlist service to toggle standby and preload 1 of 3 named playlists.

The code I wrote is on Github

After trying it out, I realised I didn't want to just toggle standby, I wanted to override the standby button to play radio if it was already off.

'GroupOff' = Switch Off, or Switch On and Play Radio
'Mood 1' = Play 'preset1' playlist
'Mood 2' = Play 'preset2' playlist
'Mood 3' = Play 'preset3' playlist

After installing this on the wall next to the lightswitch, I realised the top buttons could be refashioned as volume controls and I could leave the existing 'manual' light switch. I added some new endpoints to the upnp-playlist-service to increase or decrease the volume of a UPnP device via a RESTful HTTP call and reconfigured the home-automation app to call these when the buttons are pressed.

After just a day, I've got a fully functional control pad on the wall controlling my bathroom DS, giving me power, volume, radio and preset playlist support!

I've also got a spare radio controlled relay burning a hole in my pocket for installation somewhere - I'm thinking my outdoor light could do with some geofence based control!

Monday, May 19, 2014

Qobuz One Week On

After 1 week of use, I have decided not to pay £20 per month for Qobuz HiFi streaming.

The audio quality is superb and long may this type of service continue! But Qobuz do not offer a DLNA solution, which means I can't easily integrate it with my existing HiFi; as a result I can either stream exclusively from Qobuz or exclusively from my own collection of music. Since the Qobuz catalogue isn't as complete as it could be, that makes the exclusively Qobuz experience less desirable, even less so where I own studio master albums. In many cases I could purchase the music directly from Qobuz and thereby get the rights to stream it, but this would mean repurchasing many albums, which would be crazy! It would be good to be able to upload my existing CD rips or FLAC purchases to Qobuz (Google Play Music style) for me to listen to exclusively through their app.

Their desktop app is okay; I covered most of it in my previous post. The playlist management is very poor: I found moving tracks about in the playlist almost impossible and in some cases managed to duplicate playlist items when removing tracks.

Building on the lack of DLNA and the incomplete music library: for the price of £20 per month I can buy quite a few CDs, so another question is how much new music I actually discover each month. The answer is quite little; one album a week would be very generous!

Therefore on the basis of cost, lack of integration with my existing music system and incomplete audio collection, I have decided their paid for HiFi streaming service is not for me.