GUI testing with TestStack.White

Monday, March 31, 2014 | Posted in Regression testing UI test TestStack.White Apollo

In the previous post I explained how I created the regression tests for a script-running application that belongs to the Apollo project. In this post I will explain how I created the regression tests for an Apollo application with a graphical user interface (GUI).

My initial attempt at a regression test suite for the GUI application used NUnit and ScriptCs, the idea being that NUnit would provide the test execution and validation methods while ScriptCs would provide an easy way to write the test scripts without requiring a complete IDE. After some trial and error it became clear that this approach was not the most suitable solution, for the following reasons:

  • Using NUnit, or any unit test framework for that matter, complicated things because there is no way to take control of the order in which the regression tests are executed. While the ordering of unit tests is irrelevant, the ordering of regression tests often matters: a regression run typically consists of multiple, ordered tests executed against one single activation of the application under test (AUT).
  • A secondary drawback of using a unit test framework is that there is normally no way to re-run a test case if it fails; re-running on failure is more acceptable for a regression test than for a unit test.
  • The test code got rather complicated due to the many helper methods, script files etc. While ScriptCs coped beautifully with this, it turned out to be too hard for me to work with without the organizing features of a complete IDE.

In the end I wrote a custom (console) application that handles the ordering and execution of the different test cases in the way that makes sense for my current requirements.

Testing application

The testing application executes the tests, collects the results and logs all the output. Each test is executed a maximum of three times before it is marked as a failure. The following code is used by the application to execute the test steps and to keep track of the number of times a test step is executed.
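The retry loop can be sketched as follows (Python for brevity; the actual testing application is a C# console program, and the names below are my own, not the original ones):

```python
# Maximum number of attempts before a test step is marked as a failure.
MAX_ATTEMPTS = 3

def execute_with_retry(test_step, log):
    """Run a test step up to MAX_ATTEMPTS times; return True on success."""
    for attempt in range(1, MAX_ATTEMPTS + 1):
        log("Executing '%s', attempt %d of %d"
            % (test_step.name, attempt, MAX_ATTEMPTS))
        try:
            if test_step.execute():
                return True
        except Exception as error:
            # A crashing step counts as a failed attempt.
            log("Attempt %d failed: %s" % (attempt, error))
    return False
```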

The reason for allowing multiple attempts to complete a test is that GUI automation testing is by nature not completely consistent. By allowing a test to fail twice it is possible to work around the inconsistent handling of controls.

The second part of the application keeps track of the test results while the tests are running. The application will always try to execute all the tests, irrespective of their final success or failure. That way the user will always have a complete report of the state of all the tests.

Finally the application provides utility methods for logging which should be used liberally by both the application and the test steps.

Tests

In order for the tests to interact with the GUI I chose to use the TestStack.White library, mainly because it is a mature open source library with a number of active contributors. One thing to keep in mind when selecting a GUI automation library is that the underlying technology has some tricky corners that cannot be completely hidden by the automation library. One example is that all controls can be found based on their automation ID but windows cannot, even if the window in question has an automation ID.

The main piece of advice for writing UI automation tests is to always write helper methods that hide the underlying complexity of the UI access technology. In my case I took the following steps:

  • Control very carefully how your helper methods return information to the caller, either through a return value or through exceptions. In my case if a helper method returns a value it should either return the requested object or null if it fails to get the requested object. The calling code should then always check for null and handle the case of a null return value, e.g. by retrying the method call. The only time the method will throw an exception is if the assumed conditions for the method are wrong, e.g. the application under test (AUT) has crashed. If the helper method has no return value then it may throw if it fails to execute.
  • All actions that get a control will try to get the control several times if they fail. Due to the nature of UI automation it is possible that the control is not available the first time the code tries to get hold of it. This may be due to the fact that the automation tests are fast enough to ask for the control in the fraction of a second before it becomes available. Note that this also makes for some very tricky debugging, which requires a decent amount of logging.
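The two points above combine into a generic "get a control, retrying on failure" helper, sketched here in Python for brevity (with TestStack.White the lookup itself would be something like `window.Get<Button>(SearchCriteria.ByAutomationId("..."))` in C#; the names below are illustrative, not the actual API):

```python
import time

def get_with_retries(find_control, attempts=5, delay_in_seconds=0.2,
                     log=lambda message: None):
    """Call find_control until it returns a control or attempts run out.

    Returns the control, or None when it cannot be found; the caller is
    expected to check for None and decide how to proceed, e.g. by retrying.
    """
    for attempt in range(1, attempts + 1):
        try:
            control = find_control()
            if control is not None:
                return control
            log("Attempt %d: control not found" % attempt)
        except Exception as error:
            # The control may simply not exist yet; log and try again.
            log("Attempt %d failed: %s" % (attempt, error))
        time.sleep(delay_in_seconds)
    return None
```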

Note that automation tests are usually linked strongly to a specific version of the software because the tests assume the availability of certain automation IDs and controls. Some parts are implicitly linked and others can be, but don't have to be, explicitly linked. Examples are:

  • The application, product and company names can be shared through a configuration or shared code file:
  • Automation IDs can be shared via a shared code file. Note that automation ID names should point to a given area / functionality, not to specific controls (also note that in my case that's not always done the right way):
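A sketch of such a shared automation-ID file (in the actual project this would be a C# constants class compiled into both the application and the test code; the IDs below are hypothetical examples):

```python
class AutomationIds:
    # Good: named after the area of functionality the control belongs to.
    PROJECT_NEW = "Project.New"
    PROJECT_CLOSE = "Project.Close"
    SCRIPT_RUN = "Script.Run"

    # Less good: named after the control type, which becomes misleading
    # when the control is replaced by a different kind of control.
    OK_BUTTON = "OkButton"
```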

When executing the tests it is a good idea to reset your application state either at the end of each test or at the start of each test, or better yet both, to make sure that you always start a test from the same state irrespective of the way the previous test ended, e.g. with an application crash.

Finally, do not give up if you find it hard to make the tests reliable. By continually improving the robustness of the test code the tests will eventually start to behave in the way you expect them to. Some tricks and hacks that I found necessary are:

  • Time-outs while getting controls, setting values on controls or getting values from controls. While these are ugly they are sometimes necessary.
  • For each test restart the application if at all possible so that you end up with a known application state. This will improve the repeatability of the tests. If the application is slow to start then you can either improve its start-up performance or, if that is not possible, carefully combine tests, but be aware that combined tests may fail due to polluted application state.
  • Note that some application state survives restarts, e.g. user and application settings. It would be good to have a relatively sure-fire way of resetting that state as well; this is something I have not implemented yet though.

A final comment: the development of these tests has been much more complicated than I thought it would be. The approach I used requires some coding skills, so it may or may not be suitable in other situations depending on the skill sets of the testers or developers.

Regression testing of a console application

Tuesday, February 18, 2014 | Posted in Regression testing Console Apollo Scripting

After setting up Sherlock you will need to create some tests that you can execute with Sherlock. In this post I will describe how I created the regression tests for the command line application of Apollo.

For this case the application-under-test (AUT) is the Apollo console application which provides the user with a way to control the capabilities of Apollo, e.g. running a fluid dynamics simulation, by executing a Python script. The goal of the regression test for the console application is to execute a large part of the API which is used by scripts to interact with the different parts of Apollo.

In order to test the scripting API I wrote several scripts that will be executed during the test. Each script exercises several parts of the API and checks after each action that the state of the application is as expected. An example of a test script is given in the following code segment:
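A minimal sketch of such a script follows. The `projects` object stands in for Apollo's actual scripting API, whose real names are not reproduced here, so a tiny stub is included to make the sketch self-contained:

```python
import sys

# Hypothetical stand-ins for the objects Apollo injects into a script.
class _ProjectStub:
    def __init__(self):
        self.name = ""

class _ProjectHubStub:
    def __init__(self):
        self.active_project = None
    def has_active_project(self):
        return self.active_project is not None
    def new_project(self):
        self.active_project = _ProjectStub()
        return self.active_project

projects = _ProjectHubStub()  # provided by the application in a real run

def run_test():
    """Return 0 on success, 1 on failure; failures go to the error stream."""
    # 1. No project may exist at start-up.
    if projects.has_active_project():
        sys.stderr.write("FAIL: a project already exists at start-up\n")
        return 1

    # 2. Create a new project and obtain a reference to it.
    project = projects.new_project()
    if project is None or not projects.has_active_project():
        sys.stderr.write("FAIL: could not create a new project\n")
        return 1

    # 3. Rename the project and verify that the new name was stored.
    project.name = "Regression test project"
    if project.name != "Regression test project":
        sys.stderr.write("FAIL: the project name was not stored\n")
        return 1

    return 0

# In a real run the script would end with: sys.exit(run_test())
```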

This test script:

  1. Verifies that no project exists at start-up. If one exists the test is considered failed and the script forces the application to exit with a non-zero exit code.
  2. Creates a new project and obtains a reference to the new project. If either step fails then once again the application is forced to exit with a non-zero exit code.
  3. Changes the name of the project and verifies that the new name has been stored.

Obviously the script given here is rather trivial and somewhat naive, but it does provide an idea of what a test script should do.

In order to inform Sherlock that the script execution has failed the script can either write to the error stream or it can force the application to exit with a non-zero error code. To provide data for fault analysis a test script can write to both the standard output stream and the error data stream. Information gathered from either stream will be placed in the Sherlock log, which can be copied back to the report location by setting the correct switches. Additionally any data on the error stream will be written to the test report.

It is important to note that the test script should be robust enough to handle any kind of problem it encounters, because there is no guarantee that the application will behave in the appropriate manner; after all, the application is being tested to see if it is fit for purpose.

Once the test scripts are written you can create a test configuration for the different test steps. An example is given below:
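Since the original configuration gist is not reproduced here, the fragment below is a plausible reconstruction of the script test step: the element and attribute names are inferred from the surrounding descriptions (the includesystemlog attribute is documented elsewhere on this site), not copied from the actual Sherlock schema.

```xml
<!-- Sketch only: element and attribute names are assumptions. -->
<script steporder="1" environment="Client" onfailure="Continue">
  <file>c:\regression\scripts\verify-projects.py</file>
  <includeinreport includesystemlog="true">
    <!-- Log files written by the console application on the test environment -->
    <file>c:\temp\apollo\logs</file>
  </includeinreport>
</script>
```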

The important things to note in this configuration are:

  • Each test step copies back the system log files which include:
    • The Sherlock log file written on the test environment.
    • Any MSI install log files.
  • The test script executing test step also copies the log files written by the console application.

PG2 - Site number five

Friday, January 31, 2014 | Posted in PG2

Two weeks ago I managed to fulfil another criterion for my PG2 license by flying at Mangawhai Heads for my fifth site.

Unlike the other locations the launch site at Mangawhai Heads is rather small, just big enough for the glider. Because of the conditions and the amount of space at the launch location I elected to do a forward launch with some assistance from the instructor. That turned out to be a good decision because as soon as the glider was above my head it caught the wind stream and I rocketed off the ground. Unfortunately, with all the pre-launch action going on I had neglected to completely check my glider, which meant I ended up with a small line tangle in the top C-D line that slowed my glider down a bit during flight.

After about 20 minutes of flying I started to notice that my ground speed was reducing more and more, at which point I decided it was better to be on the ground wishing I was in the sky than being in the sky wishing I was on the ground and so I landed on the beach.

The learning points of this flight are:

  • Reducing the seat depth of my harness has made flying a lot more comfortable, at least for shorter flights. I guess I will see how comfortable it is during longer flights.
  • The addition of a stirrup makes getting into the harness much easier. All I have to do now is stand up on the stirrup and I'm in the seat. I'm still not used to staying on the stirrup during flight though, so I will have to work out what to do with that, especially during turns.
  • I really should start practising my asymmetrical launches so that I can use those during high wind launch situations to reduce the pressures on the glider and myself during the initial part of the launch.

PG2 - Christmas holiday antics

Tuesday, January 21, 2014 | Posted in PG2

During the Christmas holidays a few weeks ago I was planning to get lots of flying done; after all I had two weeks off and the weather had been amazing while I was in the office, so what could possibly go wrong? Unfortunately it turned out that the weather gods had other plans and provided generous helpings of wind and rain. That meant I only managed to fly on two different days.

However that doesn't mean that the Christmas holidays were lacking paragliding activities. You see my new harness arrived just before Christmas. It was ordered three or so months ago but it was completely worth the wait. The build quality is very high and the harness felt comfortable straight away. Mind you there is still some set up to be done. There are a lot of different straps on this harness which subtly change the position and shape of the harness. I suspect it will be a while before I find all the right settings. The biggest difference so far compared to the school harnesses is that it feels like the harness reacts more to changes in the glider, which at the moment is unsettling but I guess I'll get used to it. And the second difference is that the harness is harder to get into.

Given that I was already spending money I figured I should spend some more ... Actually it was more like there were a few other things that were either legally required or very sensible to have. And so I also bought a vario and a reserve. The combination was not cheap but I do feel a lot safer now that I have a way to tell how high I am and a way to come down in case everything goes to custard.

Finally I also did my PG2 theory exam and scored 97% which I was pretty stoked with. The minimum score to pass is 90% so I managed to get that in the bag pretty easily. There is one more theoretical exam to do which is the VFR exam. For that I am trying to learn all the abbreviations at the moment which might take a bit of time.

All in all the Christmas holidays turned out to have a decent amount of paragliding related activities.

Sherlock configuration - Build server integration

Monday, January 13, 2014 | Posted in Sherlock Jenkins TeamCity MsBuild

Once the configuration of Sherlock is complete the last step needed to make use of automatic regression testing is to integrate with a build server. In this post I will explain which steps need to be taken to integrate Sherlock with a build server. Examples will be given for Jenkins, which is used at my work, and TeamCity, which I use personally.

In order to integrate with a build server you will need to:

  • Create at least one test configuration
  • Write the build scripts necessary to register a test, wait for the completion of the test and process the results.
  • Set up a job on the build server to execute the test.

Test configurations

In order to execute a test the first thing you need to decide on is the test configuration which will be executed. In Sherlock the test configuration is described by an XML file similar to the following gist:
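As the original gist is not reproduced here, the following is a plausible reconstruction: the element and attribute names are inferred from the field descriptions that follow, not copied from the actual Sherlock schema, so treat it as a sketch of the shape rather than a usable file.

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Sketch only: element and attribute names are assumptions. -->
<sherlock configurationversion="1.2">
  <description>
    <product>${PRODUCT_NAME}$</product>
    <version>${VERSION}$</version>
    <summary>${TEST_DESCRIPTION}$</summary>
  </description>
  <environments>
    <environment name="Client">
      <operatingsystem name="${OS_NAME}$" />
    </environment>
  </environments>
  <teststeps>
    <msi steporder="0" environment="Client" onfailure="Stop">
      <file>${PATH_TO_INSTALLER}$</file>
    </msi>
    <script steporder="1" environment="Client" onfailure="Continue">
      <file>${PATH_TO_TEST_SCRIPT}$</file>
      <includeinreport includesystemlog="true" />
    </script>
  </teststeps>
  <completednotification>
    <file>${PATH_TO_REPORT_DIRECTORY}$</file>
  </completednotification>
</sherlock>
```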

In this configuration file replace all instances of ${SOME_TEXT}$ with the appropriate information. Note that some configuration settings should be templated so that those values can be supplied by the build system, e.g. the application version number:

  • Provide an application name and version for use in the test report.
  • Provide a compact description. This will be used as the title of the test report.
  • Provide each environment with a name. It doesn't matter what that name is as long as it is consistently used throughout the configuration file. All test steps which should be executed on the same environment should be linked to the same environment name. Note that while it is possible to request multiple environments to be started in a test, it is not possible to synchronize those environments. In other words each environment will exit as soon as it completes the test steps assigned to it, no environment will wait for other environments to complete their work.
  • For each test step that needs to be executed provide the order (an integer starting at 0, incrementing for each step), the name of the environment in which the step should be executed, the failure mode (either Stop or Continue) and the correct file paths. Note that the test steps use the following definitions for their file paths:
    • msi: file is the absolute path to the MSI file as on the machine that is used to register the test (i.e. the origin).
    • x-copy: destination is the absolute path to the directory that will hold all the x-copy results on the test environment.
    • x-copy: base is the absolute path to the directory that holds the files / directories to be x-copied on the origin.
    • x-copy: paths contains the absolute paths to the files / directories that should be x-copied. It is expected that these all reside in the base directory at some level.
    • script: file is the absolute path to the script file on the origin.
    • console: exe is the absolute path to the executable that should be executed on the test environment. Note that this path also refers to the test environment; hence the console test step is the only test step that doesn't copy files.
    • All test steps: All files and directories that should be included in the report are absolute paths on the test environment.
    • notification: The absolute path where the final report should be placed. This path should be accessible to both Sherlock (the master controller) and the application that will process the test results.
  • It makes sense to always copy back any logs that are written during the test. If you don't need them for test failure diagnosis then it is easy to delete them later; however, you can't copy them from the test environment after the test has completed because the test environment will be reset to its original state upon shut-down.

If you need tests to run in different environments or with different test steps then you will need to create a configuration file for each environment / set of test steps.

Build scripts

The easiest way to execute the test from a build server is to create a set of build scripts that can do the work for you. In this example I will be using MsBuild as the scripting language.

In order to create the configuration file from the template it is necessary to replace all the templated configuration settings with their current values. The following gist shows an inline MsBuild task that does exactly that:
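A sketch of such an inline task, using the MSBuild 4.0 CodeTaskFactory; the task and parameter names are my own, not necessarily those of the original gist:

```xml
<UsingTask TaskName="TemplateFile"
           TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
  <ParameterGroup>
    <Template ParameterType="System.String" Required="true" />
    <OutputFileName ParameterType="System.String" Required="true" />
    <Tokens ParameterType="Microsoft.Build.Framework.ITaskItem[]" Required="true" />
  </ParameterGroup>
  <Task>
    <Code Type="Fragment" Language="cs">
      <![CDATA[
        // Replace each ${NAME}$ token with the value stored in the
        // ReplacementValue metadata of the matching item.
        var text = System.IO.File.ReadAllText(Template);
        foreach (var token in Tokens)
        {
            text = text.Replace(
                "${" + token.ItemSpec + "}$",
                token.GetMetadata("ReplacementValue"));
        }
        System.IO.File.WriteAllText(OutputFileName, text);
      ]]>
    </Code>
  </Task>
</UsingTask>
```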

In order to use this task create an ItemGroup with the identifiers of the settings and their replacement values. For example:
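For example (the item name, metadata name and values are hypothetical; the metadata name must match whatever your replacement task reads):

```xml
<ItemGroup>
  <TemplateTokens Include="VERSION">
    <ReplacementValue>$(BuildVersion)</ReplacementValue>
  </TemplateTokens>
  <TemplateTokens Include="PRODUCT_NAME">
    <ReplacementValue>Apollo</ReplacementValue>
  </TemplateTokens>
</ItemGroup>
```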

If this example is used on the following template file:

Then the outcome is the following output file:
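For instance, with the hypothetical values VERSION = 1.2.0.0 and PRODUCT_NAME = Apollo, a template fragment and its replaced output would look like:

```xml
<!-- Template (before): -->
<application name="${PRODUCT_NAME}$" version="${VERSION}$" />

<!-- Output (after replacement): -->
<application name="Apollo" version="1.2.0.0" />
```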

The next step will be to create a build script that can register the test with the Sherlock system. This can either be done with a call to the exec task or with the following inline MsBuild task:
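For the exec-task variant a single call suffices; note that the console executable name and the command-line switch below are assumptions, so check what your Sherlock version actually accepts:

```xml
<Exec Command="&quot;$(SherlockConsoleDirectory)\Sherlock.Console.exe&quot; --configuration=&quot;$(TestConfigurationFile)&quot;" />
```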

The next step is a little tricky in that it is now necessary to wait for Sherlock to execute the test and create the test report. The tricky bit is due to the fact that Sherlock will only execute the tests when a suitable test environment is available. That means that tests could be executed almost immediately if an environment is directly available, or not for a long time if all environments are busy. On top of that, MsBuild does not have the ability to watch for specific files. The following inline task allows you to wait for certain files to be created. Note that you need to provide a time-out which determines how long the task will wait for the report files. How long this time-out should be depends on:

  • How long it takes for the tests to execute.
  • How many tests will be executed on the same test environments. Tests executing on different test environments can be run in parallel provided the hardware will stand up to it.
  • How many other tests may potentially be executing at the same time.
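An inline task along these lines can do the waiting (a sketch with my own task and parameter names, polling every five seconds until either all files exist or the time-out expires):

```xml
<UsingTask TaskName="WaitForFiles"
           TaskFactory="CodeTaskFactory"
           AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
  <ParameterGroup>
    <Files ParameterType="Microsoft.Build.Framework.ITaskItem[]" Required="true" />
    <TimeOutInSeconds ParameterType="System.Int32" Required="true" />
  </ParameterGroup>
  <Task>
    <Code Type="Fragment" Language="cs">
      <![CDATA[
        var deadline = System.DateTime.Now.AddSeconds(TimeOutInSeconds);
        var allExist = false;
        while (!allExist && System.DateTime.Now < deadline)
        {
            allExist = true;
            foreach (var file in Files)
            {
                if (!System.IO.File.Exists(file.ItemSpec))
                {
                    allExist = false;
                    break;
                }
            }
            if (!allExist)
            {
                System.Threading.Thread.Sleep(5000);
            }
        }
        if (!allExist)
        {
            // Logging an error marks the task, and hence the build, as failed.
            Log.LogError("Timed out waiting for the report files.");
        }
      ]]>
    </Code>
  </Task>
</UsingTask>
```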

Finally the report files need to be checked for success or failure. For each test Sherlock produces an HTML and an XML report. The easiest way to find the outcome of a test is to parse the XML report and search for the result element. The value of this element will be either Passed or Failed. The following inline task will accomplish this goal:
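The check itself boils down to the following (sketched in Python for brevity rather than as an inline MsBuild task; the element name "result" and the Passed/Failed values come from the text above, while the rest of the report layout is an assumption):

```python
import xml.etree.ElementTree as ET

def result_is_passed(report_xml):
    """Return True when the XML report contains a result element whose
    value is 'Passed', and False in every other case."""
    root = ET.fromstring(report_xml)
    element = root if root.tag == "result" else root.find(".//result")
    return element is not None and (element.text or "").strip() == "Passed"
```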

Build jobs

The final task is to set up one or more build jobs for the build server to execute. While you can combine the testing steps with the build steps, it is in general advisable to have a separate job for the testing steps. The main reason is that the testing steps can take quite a while to execute, ranging from several minutes to hours, which slows down the continuous integration feedback cycle considerably. Based on the idea that the tests should be contained in their own build job the following set-up is proposed.

  • Define three different build jobs for the build, test and deploy stages of the process. This means that each job will be responsible for a specific task. Individual jobs will use the artefacts produced by the other jobs.
  • Set up dependencies between the different builds where required. For instance the test job will depend on the artefacts produced by the build job.
  • The build job can be run both as continuous integration build, i.e. execute the build job each time a commit to the source control system is detected, and as pre-requisite to the test job.
  • Similarly the test job could be run nightly to provide relatively quick feedback on regression problems while also acting as pre-requisite to the deploy job.
  • If the version number includes either the build counter of the build job or the revision number of the source control commit then special care needs to be taken for the test and deploy jobs in order to ensure that they get the same numbers as the build job did.
  • In a similar fashion it will be necessary to control the version control settings so that all builds pull in the same revision. One solution would be to store the revision index in the first job in the chain and then transfer that to the other jobs.

For the two build systems I have specifically worked with the following additional notes can be made:

  • Jenkins
    • In addition to the three standard jobs you will probably need a Build Flow job to trigger the different standard jobs in the right order. This has to be done because at the moment it does not seem possible in Jenkins to trigger pre-requisite builds without having the downstream project take up one of the build executors. The build flow jobs live outside the actual build executors, which makes it possible to optimize the use of the executors.
    • In order to deal with the build number problems as specified above the simplest way is to write the number to a file in the upstream job, archive that and then pull it out in the downstream jobs. The same goes for the version control information.
  • TeamCity
    • Downstream jobs have both a snapshot dependency and an artefact dependency on the upstream job. The snapshot dependency takes care of the synchronization of the version control revision. Note that you should create different VCS roots for each project, otherwise the projects all share a single check-out directory. By sharing a single directory it is possible that upon starting the second job the directory is cleaned which will likely remove the artefacts from the first job.
    • The build number can be copied easily by synchronizing the build numbers.

Sherlock release - V0.4.8.0

Friday, December 27, 2013 | Posted in Sherlock

Version V0.4.8.0 of the Sherlock regression testing application has been released. This release addresses the following issues:

  • #13 - Provide setup verification
  • #14 - Add license and readme to all downloadable files
  • #15 - Include test purpose in final test report
  • #22 - Add NuGet symbols package for easy upload to a symbol server
  • #23 - OutOfMemoryException in Sherlock.Service.Master
  • #24 - Hyper-V: If a virtual machine is running when a new test is started terminate the current run
  • #26 - Hyper-V: Handle state change result code

Sherlock configuration - Verification

Tuesday, December 24, 2013 | Posted in Sherlock

After the configuration of the host machine and the virtual machines the final step in the configuration of Sherlock is to verify that everything is configured correctly. This can be done by using the verification package that comes with each release of Sherlock. In order to use the verification package take the following steps:

  • Unpack the verification.zip package to a directory somewhere (e.g. c:\temp\sherlock).
  • Update the configuration file Sherlock.VerificationConfiguration.msbuild with the following settings:
    • ConfigurationReportDirectory: The directory where you want the final report to be placed, normally the report directory that was created on the host machine. Note that both the user who requests the test and the user which is used to run the Sherlock services need to have access to this directory.
    • ConfigurationServerUrl: The URL of the web service, e.g. http://myhostmachine/sherlock.api
    • ConfigurationOperatingSystem: The operating system name that you want to test with. This needs to match one of the operating systems that has been registered with the management website and which is installed on at least one of your test environments.
    • ConfigurationOperatingSystemServicePack: Can be left empty if the operating system has no service pack. If the operating system has a service pack then it needs to match the service pack name of the operating system as registered with the management web site.
    • ConfigurationOperatingSystemCulture: The culture as defined for the registered operating system.
    • ConfigurationOperatingSystemPointerSize: The 'bitness' or pointer size for the operating system, either 32 or 64 bits. Should again match the value that was provided when the operating system was registered.
    • ConfigurationRemotePcWorkingPath: A path where the test files will be placed on the test environment, e.g. c:\temp.
    • ConfigurationSherlockConsoleDirectory: The directory where the console application is installed, this may be a UNC path or a normal directory, e.g. \\MyHostMachine\console.
  • Execute the Sherlock.ExecuteVerificationTests.msbuild script. This script will prepare the files to be sent across (a test application and some test scripts), start the console application, register the test and then wait for the reports to be produced.

Note that the reports will come back with some errors. The verification was designed this way on purpose to make it possible to test the error handling. In the end there should be three reports, one for each known test configuration version, i.e. v1.0, v1.1 and v1.2. The reports should show the following errors:

  • The script execution that executes c:\temp\Sherlock.Verification.Console.exe with the -f command line argument. The -f parameter indicates that the process should 'fail' and exit with an exit code of 1.
  • The script execution for c:\temp\Sherlock.Verification.Console.exe with parameter -c. The -c parameter indicates that the process should exit with an unhandled exception. The exit code for the process should be -532462766.
  • The application execution c:\temp\Sherlock.Verification.Console.exe with the -f command line argument.
  • The application execution for c:\temp\Sherlock.Verification.Console.exe with parameter -c.

Troubleshooting

If there are any problems during the verification then the first thing to look at are the debug logs which are written by the different applications. The logs are generally found in c:\ProgramData\Sherlock\<APP_NAME>\<VERSION>\logs. Notes:

  1. If there is not enough information in the logs then you can change the log level in the application configuration file under the DefaultLogLevel configuration option. The available levels are (from most verbose to least verbose):
    • Trace
    • Debug
    • Info
    • Warn
    • Error
    • Fatal
    • None
  2. The applications running in the test environment also log, but those logs may be removed if the test environment is a virtual machine. However, version 1.2 of the test configuration file makes it possible to copy the logs back to the host machine for inclusion in the test report. For this to work set the includesystemlog attribute of the includeinreport element to true.

Some standard problems that may occur are mentioned below.

Console and web service

  • If the console application fails to connect to the web service because it cannot find the web service then verify that the web service can be reached. The two main problems that could be the root cause of this are either the web service is not running or the user which started the console application has no access rights to the web service.
  • If the console application crashes due to a URL problem (see the log file) then it is most likely that you supplied a partial URL, e.g. mycoolserver\sherlock.api instead of http://mycoolserver/sherlock.api.
  • If the console successfully manages to go through nearly all the test registration steps but fails on the transfer of the test files then there is quite likely a problem with the web service permissions to write to the App_Data directory.
  • If the web service fails on the first call then there is quite likely a problem with the database connection. This is most likely either a problem with the connection string or a problem with the permissions. In case of a permission issue check if the user which is running the web service can access the stored procedures in the database.

Update service

  • If the update service is unable to verify the update packages then it is most likely unable to access the public key file. Make sure the configuration file points to the public key file and that this file is reachable by the service.

Master controller

  • If the master controller fails to connect to the database this could point to either a problem with the connection string or a problem with the permissions. In case of a permissions problem again check that the user running the master controller can access the stored procedures in the database.
  • If the master controller is unable to start a virtual machine then the SherlockUser may not have permissions to start the virtual machines. The log will show this as an error in the environment loading, probably a security exception.
  • If the controller is blocked by the firewall, or the firewall on the test environment is blocking, then there will be no communication between the test environment and the host. The log shows this by the host waiting an excessive amount of time for the remote 'endpoint' to connect; a normal start-up (test environment start, operating system load and Sherlock load) takes around 2 minutes.

nAnicitus release - V0.1.4.0

Sunday, December 22, 2013 | Posted in nAnicitus

Version V0.1.4.0 of the nAnicitus symbol store application has been released. This release fixes issue #1. If a package is locked by the operating system, e.g. because the file write has not completed yet, then nAnicitus will try to load the package 3 times, waiting 5 seconds between each try. On top of that, if the loading of a package fails then nAnicitus will retry loading the package 3 times before giving up.

Nuclei release - V0.7.1.0

Saturday, December 21, 2013 | Posted in Nuclei

Version V0.7.1.0 of the Nuclei library has been released. This release fixes issue #9.

Sherlock configuration - Virtual machines

Wednesday, December 11, 2013 | Posted in Sherlock

After setting up the host machine the next step in the configuration of Sherlock is to set up one or more virtual machines. Each virtual machine will have the following applications installed:

  • The update service which handles the installation of the latest available Sherlock binaries.
  • The executor service which handles the communication with the master controller, handles the download and upload of test files and test reports and controls the test execution.
  • The application which executes the test steps. This has been separated from the service so that:
    • It is possible to run the application on the desktop while the executor service is running as a Windows service.
    • Any fatal errors in the test application don't affect the service, thereby maintaining communication with the master controller and thus allowing error reports to be sent.
    • It is possible to run multiple applications under different accounts (planned in future releases).

The configuration process for a virtual machine consists of the following steps:

  • Create a new virtual machine and install the operating system
  • Configure the operating system on the virtual machine
  • Install and configure the Sherlock applications on the virtual machine
  • Store information about the virtual machine in the host database

Preparing the virtual machine

The first step is to create a new virtual machine and install the operating system. This should be straightforward, but there are a few things to look out for:

  • The operating system must (currently) support .NET 4.5 because that is what Sherlock needs to run.
  • The network for the virtual machine must be set up so that it can at the very least see the host machine and the other virtual machines stored on the same host machine.
    • Note: Windows networks can deny access to the test virtual machines after they have been used for a while (usually 30 days). Because the test machine is reset after each test it cannot handle the renewal of network identification keys correctly.

Preparing the operating system

After the creation of the virtual machine and the installation of the operating system it is necessary to configure the operating system. Some of the changes are necessary for Sherlock to function while others are necessary only from a test perspective.

Let's start with the changes required for Sherlock to function.

  • If .NET 4.5 is not already installed, then install it. Without it Sherlock won't run at all.
  • Create a new user that will be used to execute the tests. This user needs to be an administrator so that the execution of an MSI install can succeed without the need for elevation (which is a bit tricky from an application). The user can be either a machine local administrator or a domain user which is granted administration rights on the machine.
    • Note that there might be a difference between a domain user with administrative rights on the computer and a local administrator. If you turn UAC off then, in general, a local administrator runs all applications elevated, while applications started by the domain user run without elevation and are only silently elevated when they request it. Unfortunately this means that the domain user will not be able to run a silent install from a command line unless that command line is explicitly elevated (which is not possible from an application).
  • Disable User Account Control (UAC) so that Sherlock can execute MSI installations. Note: In some cases corporate network controls and/or group policies override local settings; in those cases talk to the local IT people regarding UAC. Also note that on Windows 8 it is not possible to turn off UAC completely by moving the 'UAC slider' all the way down, because Metro applications require some form of UAC in order to work.

Finally, before installing Sherlock, a few tweaks should be made to the operating system configuration so that the automated tests can proceed. Note that for all the following changes there may be group policies in place, put there by your local IT service. It may be necessary to talk to the IT people about the settings you want to change and the reason you need to change them.

  • Interactive user: If you need to run interactive tests, e.g. UI tests, then you need to make sure a user is logged in. Use Autologon to automatically log your user in when the computer starts while still maintaining password security.
  • Windows 8 switch to desktop: If you are on Windows 8 you need to make sure that the desktop is available for interactive applications to execute on.
  • Screen saver: In order to prevent any interactive tests from failing due to a screensaver or the automatic locking of the desktop both those features need to be turned off.
  • Windows error reporting: Any tests that run on the desktop may hang indefinitely if the application under test fails and displays the Windows Error Reporting (WER) dialog. In order to prevent this from happening WER should be disabled. The most efficient way of doing this is to use the group policy controls to disable the following elements:
    • In Computer Configuration\Administrative Templates\System\Internet Communication Management\Internet Communication settings enable the setting: Turn off Windows Error Reporting.
    • In Computer Configuration\Administrative Templates\Windows Components\Windows Error Reporting enable the settings Disable Windows error reporting and Prevent display of the user interface for critical errors, and disable the setting Display error notification.
  • Automatic updates: Automatic updates can be disabled because they will never be deployed due to the fact that the machine is reset after each test.
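On machines where group policy is not available, WER can also be suppressed per machine through the registry. The key below is the Windows Error Reporting switch as I know it; verify it against your Windows version before relying on it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\Windows Error Reporting]
; 1 = disable Windows Error Reporting entirely
"Disabled"=dword:00000001
```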

Configuring Sherlock

Once the operating system is configured Sherlock can be installed and configured.

  • Unzip the service.zip package into the c:\sherlock directory.
  • Copy the XML file containing the public key used to sign the manifests to the virtual machine and place it in the c:\sherlock directory.
  • Open the configuration file (Sherlock.Service.exe.config) and update the following settings:
    • ApplicationName: The name of the application for which updates should be tracked, in this case that is: Sherlock.Service.Executor.exe.
    • UpdateManifestUri: The URL of the manifest file, in this case: http://myhostmachine/appupdate/executorservice.manifest
    • ManifestPublicKeyFile: The path to the XML file containing the public key section of the manifest signing key, e.g. C:\sherlock\manifestsigningkey.public.xml
  • Set the update service to start automatically when the computer starts. This can be done as an actual service (in case no interactive tests will be executed) or by automatically starting the application when the user logs on to Windows (in case interactive tests need to be executed).
  • The last step is to let the executor controller through the firewall. For this create an inbound rule that allows c:\ProgramData\Sherlock\Sherlock.Service.Executor\{VERSION}\Sherlock.Service.Executor.exe to connect to all types of protocols. Normally only Domain and private networks should be sufficient.

Testing

Once the operating system and Sherlock have been configured it is sensible to test the configurations to make sure it all works. In order to do this take the following steps:

  • Restart the virtual machine. Once the machine starts up it should (depending on your configuration)
    • Automatically log on the test user
    • On Windows 8, switch to the desktop
    • Start the Sherlock update service. The service should download the latest version of the executor controller and start it. You can verify this by looking at the logs which can be found in c:\programdata\sherlock\sherlock.service\{VERSION}\logs and c:\programdata\sherlock\sherlock.service.executor\{VERSION}\logs for the update service and the executor controller respectively.
  • After both applications have started stop them both by stopping the update service (Sherlock.Service).
  • Once the applications have been stopped remove the data from the directory c:\ProgramData\Sherlock.
  • Shut down the machine.
  • Take a snapshot of the current state of the virtual machine and give it a sensible name.

Host configuration

The last step in the configuration of a new test environment is to register the environment with Sherlock through the management website.

Sherlock configuration - Server side

Tuesday, December 10, 2013 | Posted in Sherlock

In this post I will explain how to configure the Sherlock host services, which handle test registration, and the selection and control of the test environments for a test. The set-up consists of the following steps:

  • Preparing the host machine which includes installation of the OS and the required services.
  • Installation of the database.
  • Installation of the web parts.
  • Creation of the update files and the update manifests.
  • Installation of the services.
  • Configuration of the firewall.
  • Verification of the configuration.

Preparing the host machine

The first step in the configuration of the host services is to prepare the machine on which the Hyper-V service will be running. Note that Sherlock does not require that all services run on this machine, but for the purposes of this post I will assume that this is the case. The configuration of the host machine consists of:

  1. Install the host operating system on the machine. Both the machine and the operating system need to support Hyper-V. On top of that, at the very least the master controller service has to be installed on the host machine, which means it is not possible to use the core Hyper-V version of Windows.
  2. Create or associate a user that you will use to run the Sherlock services. It is strongly recommended not to run the Sherlock services as a local administrator, for security reasons; it is more suitable to provide a separate user to run the services. Note that this user will need permissions to run services, but doesn't need installation permissions etc. For the remainder of this post let's call this user the SherlockUser. Note: If you have a specific user that is used for your build server then it makes sense to use that user, although that is not required.
  3. Install the Hyper-V role on the host machine.
  4. Grant the SherlockUser permissions to start, stop and reset Hyper-V virtual machines.
  5. Install the IIS role on the host machine. Note that in theory (i.e. this has not been tested) IIS can be installed on a different machine than the host machine, as long as it will be able to reach the database. The IIS install will need to have the following parts installed as a minimum:
    • ASP .NET 4.5
    • Basic authentication
    • Windows Authentication
    • Management tools
    • HTTP logging and tracing
  6. Install MSSQL Express 2012 on the host machine. Note: As with IIS it is again possible to install the database on any machine as long as both IIS and the Hyper-V host can connect to it.
  7. Create a directory that will hold all the files related to Sherlock, e.g. c:\testing. In that directory create the following sub-directories:
    • appupdate - Contains the application update files and manifests.
    • console - Contains the binary files for the console application which is used by the user to register a test.
    • service - Contains the binaries for the windows service that will run the controller application.
    • web.api - Contains the binaries for the web service that interacts with the database on test registration.
    • web.intranet - Contains the binaries for the management web site.
    • reports - The location where tests can place their generated reports. Note that Sherlock allows placing test reports in any location as long as the service has access to that directory.
    • temp - The temporary directory used for report generation etc.
  8. Share the reports and console directories so that they can be accessed over the network. The console directory will only need read access, the reports directory will need read and write access.
  9. Install the nAdoni manifest signing tool. This application will be used to sign the update manifest files.
  10. Generate the update keys by executing the following command line (assuming that you installed nadoni in c:\tools\nadoni):
    c:\tools\nadoni\keygenerator\nadoni.keygenerator.exe --private=<PATH_TO_PRIVATE_KEY_FILE> --public=<PATH_TO_PUBLIC_KEY_FILE>
    • Where <PATH_TO_PRIVATE_KEY_FILE> points to the XML file that will contain the private and public parts of the manifest signing key and <PATH_TO_PUBLIC_KEY_FILE> points to the XML file that will contain the public part of the manifest signing key.

Database

The second step in the installation process is to create the Sherlock database.

  • Unpack the sql.zip file that is part of the Sherlock release.
  • Create a database called Sherlock.
  • Apply all the SQL scripts from the sql.zip file, starting at the V1 script through to the latest upgrade script Sherlock_Upgrade_Vm_To_Vn.sql.
  • Provide permissions for the user that will be connecting to the database. You can either use the SherlockUser or you can create an SQL user. Grant this user the following access:

    • db_datareader
    • db_datawriter
    • Grant access to the stored procedures. This can be done via: GRANT EXEC ON <STORED_PROCEDURE_NAME> TO <SQL_USER>. Given that Sherlock only accesses the database through stored procedures (there is no direct table access), there are quite a few of them, so it makes sense to generate a script that grants access to all the stored procedures. The following SQL script can be used to generate such an access script for the user:
      SELECT  'GRANT EXEC ON '+ SCHEMA_NAME(schema_id) + '.' + name + ' TO <SQL_USER>'
      from sys.procedures
      ORDER BY 1
  • Set up a backup for the Sherlock database. Note: The test data is only useful to Sherlock, and then only while tests are either running or haven't been run yet; in other words there is no useful information for the user describing test results etc. However, the information describing the available test environments should be backed up.

IIS

Step number three for the installation of Sherlock is to configure the web service, web page and the application update location. We'll start with the application update location:

  • In IIS Manager create a new virtual directory under the default web site using the following settings
    • Alias: AppUpdate
    • Path: c:\testing\appupdate
    • Connect as: SherlockUser. Make sure the SherlockUser has read permissions for the directory.
  • Allow directory browsing through the IIS -> Directory Browsing Feature.
  • Finally, test access to this directory via a browser by going to <HOST_ADDRESS>/AppUpdate. This should display the contents of the directory.

Before configuring the web site and the web service create a new application pool with the following settings:

  • Name: SherlockAppPool
  • .NET Framework version: V4.0.30319
  • Managed pipeline mode: Integrated

The next project to configure is the web site that will be used to manage the testing environments.

  • Unzip Sherlock.Web.Intranet.zip to the c:\testing\web.intranet directory.
  • In IIS Manager create a new application under the default web site using the following settings:
    • Alias: Sherlock.Intranet
    • Application pool: SherlockAppPool
    • Path: c:\testing\web.intranet
    • Connect as: SherlockUser. Again make sure that the SherlockUser has access to the directory containing the web site binaries.
  • Set the authentication for the web site to be anonymous only.
  • Define the connection string to be as given below:
  • Remove the following sections from the configuration file
    • System.Web -> Authentication
    • System.Web -> Authorization
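The connection string referenced above was not reproduced in this copy of the post. A typical MSSQL Express connection string section for this setup might look like the following; the server instance, the connection string name and the use of integrated security are my assumptions, so adjust them to your environment:

```xml
<connectionStrings>
  <!-- Assumed: SQL Express on the host machine, database named Sherlock,
       connecting as the identity of the application pool (SherlockUser). -->
  <add name="Sherlock"
       connectionString="Data Source=.\SQLEXPRESS;Initial Catalog=Sherlock;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```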

The last web project to configure is the web service that will be used to add information about the new tests to the database.

  • Unzip Sherlock.Web.Api.zip to the c:\testing\web.api directory.
  • In IIS Manager create a new application under the default web site using the following settings:
    • Alias: Sherlock.Api
    • Application pool: SherlockAppPool
    • Path: c:\testing\web.api
    • Connect as: SherlockUser. Again make sure that the SherlockUser has access to the directory containing the web service binaries.
  • Define the connection string to be the same as given for the management website.
  • Set the authentication for the web site to be anonymous only.

Update manifests and files

The two main services for Sherlock, the master controller and the executor controller, are not directly installed on either the host machine or the test environments. Instead the Sherlock service probes the AppUpdate directory for the binaries of these services. This makes upgrading Sherlock much easier, because an upgrade does not require any changes to the test environments.

To create the necessary upgrade files for the Sherlock services it will be necessary to update the configuration file of the master controller which takes the following steps:

  • Unzip Service.Master.zip into a temporary directory.
  • Update the configuration file (Sherlock.Service.Master.exe.config) with the following settings:
    • TestDataDirectory: c:\testing\web.api\App_Data
    • TestReportFilesDir: c:\testing\temp
  • Define the connection string to be the same as given for the management website.
  • Repackage the binaries into a ZIP archive with the name Service.Master.zip.

The executor controller does not need any changes to the configuration file and can thus be left untouched.

The next step is to create the manifest files that are used by the update service to determine which ZIP archive to use. In order to create the manifests the nAdoni application is used. This can be done with the following command lines:

For the master controller:

C:\tools\nadoni\manifestbuilder\nAdoni.ManifestBuilder.exe -v="{VERSION}" -n="Sherlock.Service.Master.exe" -f="c:\testing\appupdate\service.master.zip" -u="http://myhostmachine/appupdate/service.master.zip" -k="C:\mykeydirectory\manifestsigningkey.private.xml" -o="c:\testing\appupdate\masterservice.manifest"

And for the executor controller:

C:\tools\nadoni\manifestbuilder\nAdoni.ManifestBuilder.exe -v="{VERSION}" -n="Sherlock.Service.Executor.exe" -f="c:\testing\appupdate\service.executor.zip" -u="http://myhostmachine/appupdate/service.executor.zip" -k="C:\mykeydirectory\manifestsigningkey.private.xml" -o="c:\testing\appupdate\executorservice.manifest"

Another way to create the manifest files is through the use of the following Msbuild script:
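The MSBuild script itself is not reproduced in this copy of the post. A sketch of what such a script could look like, simply wrapping the two nAdoni command lines shown above (the target and property names here are my own invention):

```xml
<Project ToolsVersion="4.0" DefaultTargets="Build"
         xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <NAdoniDir>${PATH_TO_NADONI_DIRECTORY}$</NAdoniDir>
    <AppUpdateUrl>${URL_TO_APP_UPDATE}$</AppUpdateUrl>
    <KeyFile>${MANIFEST_SIGNING_KEY_PATH}$</KeyFile>
  </PropertyGroup>

  <Target Name="Build">
    <!-- The version must be supplied on the command line, e.g. /p:Version=1.0.0.0 -->
    <Error Condition=" '$(Version)' == '' " Text="Specify the version via /p:Version=..." />
    <Exec Command="&quot;$(NAdoniDir)\manifestbuilder\nAdoni.ManifestBuilder.exe&quot; -v=&quot;$(Version)&quot; -n=&quot;Sherlock.Service.Master.exe&quot; -f=&quot;c:\testing\appupdate\service.master.zip&quot; -u=&quot;$(AppUpdateUrl)/service.master.zip&quot; -k=&quot;$(KeyFile)&quot; -o=&quot;c:\testing\appupdate\masterservice.manifest&quot;" />
    <Exec Command="&quot;$(NAdoniDir)\manifestbuilder\nAdoni.ManifestBuilder.exe&quot; -v=&quot;$(Version)&quot; -n=&quot;Sherlock.Service.Executor.exe&quot; -f=&quot;c:\testing\appupdate\service.executor.zip&quot; -u=&quot;$(AppUpdateUrl)/service.executor.zip&quot; -k=&quot;$(KeyFile)&quot; -o=&quot;c:\testing\appupdate\executorservice.manifest&quot;" />
  </Target>
</Project>
```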

In this case the following changes must be made to the file:

  • ${PATH_TO_NADONI_DIRECTORY}$ - The path to the directory in which the nAdoni binaries have been placed.
  • ${URL_TO_APP_UPDATE}$ - The URL to the AppUpdate directory, e.g. http://myhostmachine/appupdate.
  • ${MANIFEST_SIGNING_KEY_PATH}$ - The path to the XML file that contains the private key information.

And the following properties must be specified on the command line:

  • Version: The version of Sherlock for which the manifest files are generated.

Services

Finally on the host machine two applications need to be configured. These are the console application that will be executed by the user to register a test and the update service that will control the master controller application.

To configure the console application take the following steps:

  • Unzip the console.zip package into the c:\testing\console directory.
  • Open the configuration file (Sherlock.Console.exe.config) and update the following settings:
    • WebServiceUrl: Point this to the Sherlock.Api site, e.g. http://myhostmachine/sherlock.api.
  • Note that the console needs to be accessed from other machines, e.g. from the build server, so make sure that the directory is shared with read rights for all users that will need access.

To configure the update service take the following steps:

  • Unzip the service.zip package into the c:\testing\service directory.
  • Open the configuration file (Sherlock.Service.exe.config) and update the following settings:
    • ApplicationName: The name of the application for which updates should be tracked, in this case that is: Sherlock.Service.Master.exe.
    • UpdateManifestUri: The URL of the manifest file, in this case: http://myhostmachine/appupdate/masterservice.manifest
    • ManifestPublicKeyFile: The path to the XML file containing the public key section of the manifest signing key, e.g. C:\mykeydirectory\manifestsigningkey.public.xml
  • Install the application as a service by opening an elevated command line and navigating to the c:\testing\service directory. Then execute the following command line:

      sherlock.service.exe install -username:SherlockUser -password:SherlockPassword --delayed -servicename:Sherlock
  • Go to the Windows services control and start the service. After a short while this will grab the latest version of the master controller binaries, drop them in C:\ProgramData\Sherlock\Sherlock.Service.Master and then start the master controller application. You can verify that all is well by checking the log files which are located in C:\ProgramData\Sherlock\Sherlock.Service\{VERSION}\logs and C:\ProgramData\Sherlock\Sherlock.Service.Master\{VERSION}\logs for the service and the master controller respectively.

Firewall

The last step is to let the master controller through the firewall. For this create an inbound rule that allows Sherlock.Service.Master.exe to connect to all types of protocols. Normally only Domain and private networks should be sufficient.

And with all that done the host machine configuration is completed.

PG2 - Top landings?

Thursday, December 05, 2013 | Posted in Paragliding PG2

Two weeks ago the weather was once again favourable for learning new skills on my way to my PG2 license. When we arrived at Maioro the wind was just enough to soar. Given that the forecast had indicated an increase in wind strength over the afternoon we decided to practise some ground handling first. After about an hour of practising reverse launches and keeping the glider stable above our heads it was time to fly. The wind was still a bit light but with a lot of scratching I still managed to get 40 - 45 minutes of flying in on my first flight, my longest flight until then.

As the afternoon progressed the wind strength slowly increased and staying up became easier which resulted in another 45 minute flight.

Later in the afternoon the instructor told me it was possible to get a set of short flights with some top landings in. For my first try I launched, gained some height and then turned in for the top landing and completely stuffed it up. As I started moving over to the landing spot everything accelerated and I tried to slow down and turn at the same time, which meant I was not actually doing either of those things, so when I got close to the ground I was still moving at a decent speed and hence I face planted. Fortunately there was no actual damage to either me or the glider. After recovering from the jitters for a bit I had a second attempt at a top landing, which took a lot longer to set up and descend but ended in a perfect touch down. The conclusion of all of this is that I need to focus more on flying the glider till the last moment, staying in the harness to maintain the weight shift and watching the direction of flight. Oh so much more to learn.

The fifth and final flight of the day was flying back to the car park which was quite easy as I only had to fly along the ridges to maintain more than enough height to get to the car park.

Nuclei release - V0.7.0.0

Wednesday, December 04, 2013 | Posted in Nuclei

Version V0.7.0.0 of the Nuclei library has been released. This release adds support for grouping timing results based on their logical area.

var group1 = new TimingGroup(); 
using (Profiler.Measure(group1, "My first timing group"))
{
    var group2 = new TimingGroup();
    using (Profiler.Measure(
        group2, 
        "This timing is not a child of the first one"))
    {
        // Do stuff here ...
    }

    using (Profiler.Measure(
        group1, 
        "This timing is a child of the first one"))
    {
        // Do stuff here ...
    }
}

Regression testing with Sherlock

Wednesday, December 04, 2013 | Posted in Sherlock Regression test

Over a series of posts I hope to explain how to set up and use the Sherlock test environment system. The idea is to follow the set-up procedure I used both at home for the Apollo project and for the proof of concept I am working on at my workplace. But first let's start with a short explanation of what Sherlock is and what it does.

What is Sherlock

Sherlock consists of a set of applications and services that provide test environment organisation and automatic execution of a set of tests on one or more test environments. The organisation part consists of:

  • Keeping track of which environments exist, which are available for testing and which are not, e.g. for maintenance reasons.
  • What operating system is loaded onto each environment.
  • Which applications are pre-loaded onto each environment.
  • Relationships between virtual machines and their (physical) host machine.

The test execution part consists of:

  • Selection of the most suitable environment for a test based on the desired combination of operating system and available applications.
  • Preparing the environment for the test. This includes loading test environments, i.e. waking up physical test environments through Wake-on-LAN and starting virtual machines, and sending over the test data, e.g. installer files.
  • Triggering test execution on the active test environment.
  • Processing test status and test report information.
  • Post test environment clean-up. This currently only includes resetting virtual machine disks back to the pre-test state.
  • Accumulation of test events and generation of the final test report.

In general the Sherlock system will consist of at least one Hyper-V host machine and a set of one or more Hyper-V virtual machines. In this arrangement the host machine handles the test organisation and part of the test execution, while the virtual machines serve as test environments.

While it is possible for Sherlock to execute tests on physical machines it is advised to only execute on virtual machines because of the inability to reset a physical machine back to the pre-test state. Virtual machines on the other hand can easily be restored to the pre-test state through the use of snapshots.

The life of a test

When executing a test with Sherlock the following steps take place:

  1. The user creates a test configuration file which describes all requirements for the test environment, which steps should be executed and where the report should be placed.
  2. The user registers the test with Sherlock via the console application.
  3. Some time after the registration of the test has completed, the Sherlock host service loads the test information and selects one or more suitable environments to execute the test steps on. The time between the completion of the test registration and the start of the test execution depends on whether suitable test environments are available and how busy they are.
  4. The selected test environments are prepared for the test execution. This preparation includes activating the environment and transferring the test data for each environment (e.g. installer files etc.).
  5. Each environment executes the desired test steps and reports back on the success or failure of each test step.
  6. Once an environment has completed its test steps it will report the completion of the test, upon which the host service will terminate the environment and restore it to its original state.
  7. Once all environments have completed their test steps the test is marked as completed and the test report is compiled and placed in the predetermined location.
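To make step 1 concrete, a test configuration file carries roughly the information below. This shape is purely illustrative: the element and attribute names here are invented for this sketch and are not Sherlock's actual schema, so consult the real documentation for the correct format:

```xml
<!-- Hypothetical sketch only; Sherlock's real configuration schema differs. -->
<test>
  <environment operatingsystem="Windows 8" />
  <steps>
    <msi file="myproduct.msi" />
    <script file="verify.ps1" />
  </steps>
  <report destination="\\myhostmachine\reports\myproduct" />
</test>
```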

Pre-requisites

The first pre-requisite in the configuration is the latest release of Sherlock. A release consists of a number of ZIP packages including:

  • console.zip - Contains the binaries and configuration files for the console application which is used to register a test.
  • sherlock.web.api.zip - The web service that will store information about a new test in the database.
  • sherlock.web.intranet.zip - The management web site that can be used to add or remove test environments.
  • service.zip - The windows service that is used to run the master controller or the executor controller.
  • service.master.zip - The master controller application which handles the scheduling of tests, loading and unloading of test environments and processing of the test reports.
  • service.executor.zip - The executor controller application which controls the execution of a test on a test environment.
  • sql.zip - The SQL change scripts for the database.

The second pre-requisite is the availability of a physical machine on which a Windows version with Hyper-V can be installed.

Planned posts:

The following posts will describe:

  1. How to set up the Hyper-V host machine.
  2. How to prepare a virtual machine for use as a testing environment.
  3. How to verify that all the environments have been configured correctly.
  4. How to integrate with a build server. This will discuss build jobs, build scripts and test configuration.
  5. A description of how I used Sherlock to perform integration tests on the console application of Apollo.
  6. And a description of how I used Sherlock to perform integration tests on a WPF application of Apollo.

Setting up nAnicitus

Tuesday, December 03, 2013 | Posted in nAnicitus Symbol server SymStore PDB

In this post I will explain how to install and configure nAnicitus, a Windows service that acts as a gatekeeper for the SymStore application. SymStore provides a relatively simple way to create a local / private symbol server.

This statement should bring up a few questions, cunning questions like:

  1. Why would a development team even need a symbol server?
    • The PDB files which are produced as the result of a build are just as unique as assemblies or executables. Each assembly has one specific PDB file to which it is linked via a GUID. Each time an assembly is built a new GUID is embedded, even if the source code has not changed. This means that in order to debug a given assembly for which you don't have the source code, which can happen if you debug other people's libraries or if you are debugging a crash dump, you will need the linked PDB file. Hence in order to enable debugging of releases one has to either place the PDB in the same location as the binaries or store the PDB files in a symbol server.
  2. Why would a development team not use SymStore directly?
    • The disadvantage of SymStore is that it is not capable of processing multiple PDB files at the same time, i.e. it should really only be called by one user at a time. By providing a Windows service that synchronizes access to the SymStore application it is possible for multiple users to add symbols to the symbol server without having to worry about the integrity of the symbol server files.
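The synchronization idea in that answer is generic: funnel all submissions through a single worker so only one store operation runs at a time. A minimal Python sketch of the pattern (not nAnicitus's actual implementation; `store_one` stands in for invoking SymStore):

```python
import queue
import threading

def start_store_worker(store_one):
    """Start a single worker thread that drains a queue of symbol packages,
    calling `store_one` for one package at a time, so concurrent submitters
    never overlap. Returns the queue and the worker thread."""
    jobs = queue.Queue()

    def worker():
        while True:
            package = jobs.get()
            if package is None:  # sentinel: shut the worker down
                break
            store_one(package)  # only ever runs in this one thread
            jobs.task_done()

    thread = threading.Thread(target=worker, daemon=True)
    thread.start()
    return jobs, thread
```

Callers simply `jobs.put(path_to_package)` from any thread; pushing `None` stops the worker once the queue has drained.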

Before we continue with the installation of nAnicitus let's have a quick look at how nAnicitus works.

  1. The user creates a NuGet symbol package and places that in a pre-configured upload directory.
  2. nAnicitus unpacks the symbol package and extracts the PDB files and the source files.
  3. For each PDB file it is determined which source files were used to create it. Based on that information a symbol stream is created and inserted into the PDB file which points to the source server path for the source files belonging to the PDB file.
  4. The PDB files are uploaded via SymStore.
  5. The source files are uploaded to the source server path.

Before installing nAnicitus it is necessary to install the Debugging Tools for Windows. Make sure to install the complete set of tools so that you get all the symbol server tools as well. Once the debugging tools are installed you can download and unzip the latest version of nAnicitus.

The next step is to update the configuration file with the directory and UNC paths:

  • DebuggingToolsDirectory: The debugging tools directory (e.g. c:\Program Files (x86)\Windows Kits\8.0\Debuggers\x64). This path may be left out if it is in the default location (as given here).
  • SourceIndexUncPath: The UNC path to the directory where the indexed sources will be placed (e.g. \\MyServer\sources).
  • SymbolsIndexUncPath: The UNC path to the directory where the indexed symbols (i.e. PDBs) will be placed (e.g. \\MyServer\symbols).
  • ProcessedPackagesPath: The directory where the NuGet symbol packages will be dropped after they are processed. The NuGet symbol packages are saved in this directory so that it is possible to reprocess the packages if for instance the location of the source path changes, e.g. after switching to a new server.
  • UploadPath: The directory where the NuGet symbol packages are placed for nAnicitus to process.
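As an illustration, the settings above could end up in the service configuration looking something like the sketch below. Note that the layout, element names and example values here are assumptions based purely on the setting names listed above; check the configuration file that ships with nAnicitus for the authoritative format.

```xml
<!-- Hypothetical sketch only; the real nAnicitus configuration layout may differ. -->
<appSettings>
  <add key="DebuggingToolsDirectory"
       value="c:\Program Files (x86)\Windows Kits\8.0\Debuggers\x64" />
  <add key="SourceIndexUncPath" value="\\MyServer\sources" />
  <add key="SymbolsIndexUncPath" value="\\MyServer\symbols" />
  <add key="ProcessedPackagesPath" value="c:\nanicitus\processed" />
  <add key="UploadPath" value="c:\nanicitus\upload" />
</appSettings>
```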

Finally, to install the application as a Windows service, open a command line window with administrative permissions, navigate to the nAnicitus install directory and execute the following command:

    Nanicitus.Service install

Once the service is installed use the standard Windows Services control panel to change the properties of the service.

And now you should have a working symbol server!

PG2 - Bumpy air, smooth air

Tuesday, November 19, 2013 | Posted in Paragliding PG2

Once more the PG2 group gathered for some flying over the weekend. The action this time was at Maioro with a southerly wind providing for a very bumpy ride. Because of the angle between the wind and the cliff the lift band was very narrow and so one had to work hard to stay in the air. Over the course of the day I managed to get three long flights in but unfortunately I only managed to top land one of them, on the other two I bombed out and had to do a decent bit of walking to get back to the start place. Overall it was a good learning experience flying in very bumpy air with not much lift.

As a little treat for myself I had taken the Monday off from work so that I could go flying if the weather proved to be suitable. The weather forecast for Monday showed great promise for some good flying.

On Monday a convergence was hanging around over the city, so our instructor decided that with some luck the wind would turn northerly, which would make North Head an excellent place to fly. After some para-waiting the weather gods decided that the southerly wind would break through the convergence, which ended our hopes of flying at North Head. Fortunately there are quite a few places around Auckland where flying is possible, so our instructor suggested going to Maori Bay. When we got to Maori Bay it turned out that the wind was smooth and from a suitable direction, so indeed the site was flyable. My first flight was a tandem flight with my instructor. While flying he showed me how to pick the right soaring lines and what to look out for. In the 30 or so minutes that we flew the tandem I learned heaps, and I did most of the flying, except for the start and the landing.

After the tandem flight I managed to get two shorter flights in with my own glider. The launches were a bit more tricky than I was used to because the start place was on angled ground with the wind coming in slightly off angle. However I managed to get the glider to launch without too much trouble in both situations and I did some amazing flying in yet another beautiful New Zealand beach location.

nAdoni release - V0.2.1.0

Wednesday, November 13, 2013 | Posted in nAdoni

In my haste to release the new KeyGenerator functionality I completely forgot to add the packaging of said KeyGenerator. Hence I hereby present Version V0.2.1.0 of the nAdoni update manifest builder project which adds the ZIP archive containing the nAdoni.KeyGenerator console application.

nAdoni release - V0.2.0.0

Wednesday, November 13, 2013 | Posted in nAdoni

Version V0.2.0.0 of the nAdoni update manifest builder project has been released. This release adds the nAdoni.KeyGenerator console application to the project. This application can be used to generate the RSA key files which are necessary in order to build the manifest files.

PG2 - Up and down the coast

Monday, November 11, 2013 | Posted in Paragliding PG2

Last weekend once again the wind was suitable for flying at Kario and so more flying was had. The weather forecast indicated that the wind was going to be pretty strong and as we arrived on the beach that was confirmed by the general feel and the wind socks.

The first thing I had to deal with was the fact that launching was tricky due to the strong winds. The first launch went off without problems but on the second one I got picked up as the glider rose above my head, causing me to fall on my back (yay for back protectors) and the glider to overshoot and collapse. The conclusion was too much brake and not enough moving towards the glider. At that point Eva taught me how to do a sideways launch, where you lay the glider out at an angle to the wind, pull one end up first and then use brake on the high side to stabilize the glider above your head. Using this method the glider generates less pull and is relatively easy to control during the launching phase. Unlike the reverse launch, however, you need to be much more aggressive in pulling up the glider, because if you are not, you give the low side time to catch up, which defeats the entire purpose of using this method. In fact you want the whole thing to come up on an angle so that you can control the inflation speed. It took me several attempts to get the sideways launch right but in the end I got the hang of it well enough to use it as my launch technique for the day.

Because the wind was so strong it was easy to get up high, which made it possible for me to practise big ears again without having to worry about being forced to land. I also got to try a 360 degree turn, which looks strange when you look at the ground track, because while you are turning through 360 degrees the ground track makes a rather deformed curl shape. At the end of the turn it even feels like you snap into place.

Finally, at the end of the day, we flew back to the car park from the launch site, a flight of about 6 or so kilometres. All in all once again a good day's flying.

nAnicitus release - V0.1.3.0

Thursday, October 31, 2013 | Posted in nAnicitus

Version V0.1.3.0 of the nAnicitus symbol store application has been released. This release adds additional logging and improves the handling of unexpected situations during processing of symbol packages.

Sherlock release - V0.4.7.0

Wednesday, October 30, 2013 | Posted in Sherlock

Version V0.4.7.0 of the Sherlock regression testing application has been released. This release provides one bug fix:

  • The Windows job object created by the Sherlock service to ensure termination of the Sherlock.Service.Master and Sherlock.Service.Executor applications is now set to only include direct children (i.e. Sherlock.Service.Master / Sherlock.Service.Executor) in the job and not the indirect children. This allows the application under test to create its own job objects even on Windows 7 (which does not allow nesting of job objects) (issue #11).

Weekend at Waipapa - Drying tomatoes

Monday, October 28, 2013 | Posted in Climbing Waipapa

On the long weekend my friend Chris organised a climbing trip to Lake Waipapa, a climbing area with many delicate slab climbs.

We spent both days at the main wall working on some of the climbs. For Chris the climb of the weekend was 'Millennium madness', an amazing corner trad climb which is rated 18. On the trad gear side you have to make sure that your rack has several finger-sized pieces of gear so that you can use some at the start of the climb and some near the middle. The crux of the climb is the first 5 or so metres, where you have to figure out how to switch between the different foot placements that are available. The climber starts out bridging between a set of small ledges, then transitions to using the crack to foot-jam and then to using some small indentations on the main face. The tricky bit is getting the transition between the different types of foot placements correct.

On Saturday Chris had two attempts at cleaning 'Millennium madness' but didn't quite get it right. It seems that the first time climbing in Waipapa after winter is always tricky, as you are trying to re-acquire the correct climbing techniques for the delicate Waipapa climbs.

For me the plan was to work my long-time project called 'Sun dried tomatoes', a delicate sport 22. For this climb the main focus is balance and, once again, foot placements; however, unlike 'Millennium madness', where there are many features that can serve as hand holds, 'Sun dried tomatoes' has sections where hand holds are very sparse.

Over the last two years or so I've semi-seriously tried this climb a bunch of times, but until this weekend I never found the motivation to put a continuous focus on the climb. During both days I put a top-rope on the climb and worked the moves a number of times. This helped me figure out where the correct foot placements were, where the clipping holds were and in which locations a rest would be possible. In the end I climbed it on top rope four times, eventually working out the moves, and I ended each day with a lead attempt. On Saturday I failed to clean it because my head space wasn't in the right spot, but on Sunday I managed to push through the hard bits and got to the top clean. So that's another tick in the books and another sweet climb done.

On to other projects it is now ...

PG2 - At the top of the stack

Friday, October 25, 2013 | Posted in Paragliding PG2

Last weekend I thought I could repeat the full weekend flying trick that I pulled a few weeks back. The weather forecast was for sunny and windy, but not too windy. So on Saturday morning we drove out to the beach but when we got to the launch site it became clear that it would be touch-and-go due to the high winds. In the end I only got a little hop while practising my launches. After that I practised my ground handling with a Gin Nano which is a fun little glider but it did feel more like a kite than a real glider.

The next day was a different story. The wind died down just enough and the sun stayed; all in all a perfect day for flying. Because there were a large number of people the group was split into two: the absolute beginners and the people who had flown before. The latter group immediately set up for their first flights while the former group started with ground handling exercises. I was in the latter group, so I got to prepare for my first flight immediately.

Over the day I managed to get three flights in, all with varying levels of success. On the first flight of the day I rushed too much and stuffed several things up. First of all I was too enthusiastic with the initial launch, which led to the glider pulling me off my feet and then overflying me. We both ended up lying on the ground. Fortunately only my ego was damaged, so another attempt was made. On the second start I was more controlled but still kind of jittery during the launch and the run. To top it off I managed to twist the brake line around the risers. So in the end I just flew a circuit down to the ground and landed. No harm done and lots of lessons learned.

The second flight was a lot better, much more controlled, although my launch still needs work. I'm not checking the canopy before commencing my launch run and during the run I need to remember to have no pressure on the brakes. During the flight I practised using the speed bar, which is a lot harder to push than I thought, and flying with big ears and speed bar. Turns out on the Atlas the big ears go in very easily (make sure you are really only pulling the outer A's) but it takes a bit of time for them to come out.

While the first two flights provided plenty of good (and safe) learning moments, neither flight was very special. The third flight, however, was one of those magic flights. After checking that the airspace was empty I started my launch, which went smoothly, but then somebody flew too close to the start place for me to continue. The instructor told me to keep my position with the glider above my head while the other paraglider flew by, but my limited skill was no match for my glider's strong desire to fly and slowly we drifted towards the edge of the cliff. A little bit of left brake allowed me to fly away from the other paraglider and then the soaring started. The Atlas climbs and climbs and keeps climbing, going higher and higher. Eventually, mostly thanks to the amazing Atlas, I ended up at the top of the stack, looking down onto the start field with its tiny people and a whole bunch of other gliders, and looking over the hills to the neighbouring towns. Later on I determined, from looking at the height topos online, that I must have been somewhere between 100m and 120m above the beach! That to me was a magic moment.

Unfortunately all magic moments must end, otherwise they wouldn't be magic anymore, and in this case it was ended by my instructor who thought it would be wise for me to come down while the wind was still behaving so on with the big ears and the speed bar and down I went.

Sherlock release - V0.4.6.0

Thursday, October 24, 2013 | Posted in Sherlock

Version V0.4.6.0 of the Sherlock regression testing application has been released. This release provides two bug fixes:

  • The MSI installer now verifies that all directories exist before writing to them (issue #9)
  • The X-copy installer now verifies that the destination directory exists before copying files to it (issue #10)

Sherlock release - V0.4.5.0

Wednesday, October 23, 2013 | Posted in Sherlock

Version V0.4.5.0 of the Sherlock regression testing application has been released. This release provides one improvement and one bug fix:

  • The test report can now include files generated during the test (issue #5)
  • Sherlock.Service no longer includes a manifest signing key, instead the key is obtained via a configuration setting (issue #8)

Nuclei release - V0.6.7.0

Wednesday, October 23, 2013 | Posted in Nuclei

Version V0.6.7.0 of the Nuclei library has been released. This release fixes a bug in the processing of ICommandSet return tasks which caused the processor to throw an exception if the return task was a continuation task. This bug fix means it is now possible to chain tasks and return the final task from an ICommandSet object. An example of this behaviour is given in the following section of code.

public interface IMyCommandSet : ICommandSet
{
    Task DoSomethingAwesome();
}

public sealed class MyCommand : IMyCommandSet
{
    public Task DoSomethingAwesome()
    {
        // Start the work on a background thread ...
        var firstTask = Task.Factory.StartNew(
            () => Thread.Sleep(1000));

        // ... and return the continuation, which completes once the
        // first task has finished and the message has been written.
        var secondTask = firstTask.ContinueWith(
            t => Console.WriteLine("Awesome sauce"));
        return secondTask;
    }
}

PG2 - Spending a weekend in the air

Sunday, October 13, 2013 | Posted in Paragliding PG2

Last weekend another episode in my PG2 course took place, with flights happening on both Saturday and Sunday. Both days started with an offshore breeze, but as the day progressed the sea breeze brought six to seven knots of wind from the south-west and we were good to go.

This time we flew from the high launch at Kariotahi, which has its lowest point about 90m above the beach. The walk up follows a narrow, relatively steep, sandy and muddy path, which increased everybody's desire to top land. Fortunately for us our excellent instructor always managed to get us into a position where top landings were possible, in spite of the sometimes lacking wind strength.

Cruising around on my Gin Atlas

The main learning point for this weekend was the reverse launch. The technique I am being taught has, in some places, been called the Mitsos reverse launch. The reverse launch method provides a lot more control over the glider while it is on the ground than the forward launch method, not in the least because one can actually see what the glider is doing.

For me the hardest part of the reverse launch method so far is controlling the glider as it rises above my head. The launch will go well as long as the glider rises evenly, but as soon as it slides to the right or left side it is hard to get it back to the middle. The second area to focus on is coordinating the turn and the drive when actually launching. Normally I can execute the turn without too much trouble but I don't always drive correctly so a lot more practise is in order. Fortunately the instructors have given me an old harness and an old glider that I can use to practise my ground handling on days that we're not flying. In fact I would be doing that right now if there wasn't a near gale force wind outside ...

Over the course of the two days I managed to do:

  • Six flights of differing lengths with one flight of 30 minutes and others between five and ten minutes long.
  • Four top landings and two landings near the car park. In all cases the instructor guided me on my landing spot selection and approach paths.
  • One forwards launch and five reverse launches which all went well except for one reverse launch where I bounced up and down due to a lacking acceleration run.

And finally I have to say that the Gin Atlas is an amazing glider. I am by no means a good pilot, after all I'm just learning to fly, but the Atlas consistently lets me fly higher than nearly all other gliders, save for some of the super high performance ones. During the weekend all the other gliders had to, at one point or another, work quite hard to stay at the height of the launch site, or even to scratch their way back up to launch height. The Atlas only dropped down to the height of the launch a few times, and each time it did a minor puff of wind allowed it to regain more than sufficient height.

Nuclei release - V0.6.6.0

Saturday, October 12, 2013 | Posted in Nuclei

Version V0.6.6.0 of the Nuclei library has been released. This release adds the ability to specify the name and version of the application in the log file. If no application name or version is specified then the information from Assembly.GetEntryAssembly() is used.

The application name and version can be specified when creating a new logger via the LoggerBuilder like so:

IConfiguration configuration = new MyConfiguration();
var logger = LoggerBuilder.ForFile(
    @"c:\mylogfile.txt",
    new DebugLogTemplate(configuration, () => DateTimeOffset.Now),
    applicationName,
    applicationVersion);

Nuclei release - V0.6.5.0

Thursday, September 26, 2013 | Posted in Nuclei

Version V0.6.5.0 of the Nuclei library has been released. This release adds an option to allow the user to select which types of channels the communication layer is allowed to use. The communication layer can be allowed to use TCP/IP channels, named pipe channels or both.

In previous versions the constructor of the CommunicationModule needed to be provided with a list of communication subjects and a flag indicating if channel discovery was allowed.

var builder = new ContainerBuilder(); 
builder.RegisterModule(
    new CommunicationModule(
        new List<CommunicationSubject>
            {
                CommunicationSubjects.Dataset,
            },
        false));

In V0.6.5.0 an extra parameter needs to be provided which indicates which types of channel the communication layer is allowed to use to communicate with other instances. The allowable options are ChannelType.NamedPipe and ChannelType.TcpIP.

var builder = new ContainerBuilder();
builder.RegisterModule(
    new CommunicationModule(
        new List<CommunicationSubject>
            {
                CommunicationSubjects.Dataset,
            },
        new[]
            {
                ChannelType.NamedPipe,
                ChannelType.TcpIP,
            }, 
        false));

nRefs release - V0.1.0.0

Wednesday, September 25, 2013 | Posted in nRefs

The first release of the nRefs assembly reference extraction application is now available.

How this blog was created

Wednesday, September 25, 2013 | Posted in Blog DocPad GitHub

A long time ago, in a location far, far, far away there was a hero in search for a quest ...

And he searched ...

And he searched ....

And .. oh never mind, our hero is not the subject of this post, so we shall leave him in some weird limbo looking for his quest ... oh well. The topic of this post is how this blog / website was built. It turns out that building anything in HTML, CSS, Node.js and all the other cool web technologies isn't really my strong suit. However I'm hoping that the description of how I built this site may be useful to somebody at some point in time, if only to have a good laugh at my lack of CSS skills.

As much as I love writing code, the idea of writing my content in an angle bracket language didn't really attract me, so the plan was hatched (it came out of a purple egg, by the way) to write (nearly) all content in Markdown using Docpad. And given that I've recently started using GitHub for my open source projects, why not use it for version control and GitHub Pages to serve my website.

I won't bore you with the details of how to install Docpad; the Docpad website has an excellent introduction to all those things. Once the initial install is done Docpad allows you to create a skeleton website by executing the following command:

docpad run

At which point Docpad presents a list of possible skeletons to use. In my case I selected the HTML5 boilerplate template. In order to get a nice layout quickly I also used the Open tools template.

Once I had the basic site layout sorted it was time to focus on creating my own sections. Fortunately Docpad has a large number of plugins that provide useful additions to the generation process. For the generation of this site the following plugins are used:

  • datefromfilename - Extracts the date of a post from the file name.
  • gist - Adds GitHub gists to a page.
  • highlightjs - Provides syntax highlighting for code samples. My current plan is to use this for the smaller samples and use GitHub gists for the larger ones.
  • navlinks - Adds navigation links to the bottom of each post, pointing at the previous and next post.
  • related - Allows you to find all related documents based on a given set of tags.
  • tagging - Generates the tag cloud for the sidebar.

With all the toys sorted it was time to integrate them in the templates.

The top level menu is based on a collection which contains all pages that have their isPage flag set to true. Note that we also assume that these pages have the default layout. One thing to note is that the docpad.coffee configuration file seems to be sensitive to the type of whitespace you use. Either use spaces or tabs, but don't mix them, otherwise you may get some weird errors.

Once the collection has been filled with pages it is used to create an unordered list, with an additional highlight placed on the currently active page.
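As a rough sketch, a collection along those lines can be declared in docpad.coffee more or less as follows. The collection name and the sort order are made up for illustration; they are not taken from this site's actual configuration.

```coffeescript
# Hypothetical docpad.coffee fragment; names and ordering are illustrative.
docpadConfig =
  collections:
    # Every document that sets `isPage: true` in its metadata becomes
    # part of the top level menu, sorted here by title.
    pages: (database) ->
      database.findAllLive({isPage: true}, [title: 1])

module.exports = docpadConfig
```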

The sidebar contains three different sections. The first section is the about section, which is nothing special. The recent and tags sections are more interesting from a website construction perspective. The recent section is based on the frontpage collection, which grabs all items from the posts and projects subdirectories and then sorts them by date. The recent section only displays the last 10 items in that collection.

The tags section is based on a collection provided by the tagging plugin. The font-size of the tag is based on the weight assigned to the tag by the tagging plugin.

Each tag links to a page that contains all the posts that contain that tag.

Projects, tag indexes and posts

The project pages all have a shared layout which provides them with a title and a section that lists all the posts tagged with the name of the project.

Posts are displayed in three different places:

  • The home page
  • The tag index page, containing all posts with a given tag
  • The post page

All pages use the same partial layout file in order to display the post.

The only difference between the post page and the other pages is that the post page will display all the comments. The comment system is based on comments on a GitHub issue and uses a small amount of Javascript to pull the comments across into the comment list.
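The comment script itself isn't shown in the post, but the core of such a script can be sketched as below. The helper is hypothetical (the function name and the HTML shape are mine); it only illustrates turning the JSON that the GitHub issue comments API returns (objects with a user.login and a body field) into list items that can be appended to the comment list.

```javascript
// Hypothetical sketch: convert GitHub issue comment JSON into HTML list items.
// In the real page the array would come from a request to the issue
// comments endpoint of the GitHub API for the issue linked to the post.
function renderComments(comments) {
  return comments
    .map(function (comment) {
      // Each comment object carries the author (user.login) and the text (body).
      return '<li><strong>' + comment.user.login + '</strong>: ' +
        comment.body + '</li>';
    })
    .join('');
}
```

A small piece of script would then fetch the comments for the post's linked issue and insert the resulting markup into the comment list.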

Acknowledgements:

Finally, there are some people who have provided descriptions of how their own blogs were created with Docpad (or other static generators). Below are the posts that I used to assemble my own blog. A big thank you to the original posters!

  • Ivan Zuzak for his post describing how to use GitHub issues as a way to provide comments on a blog post.
  • takitapart provided an excellent post describing how to organise content in a Docpad website.
  • Luca's forge provided a good start to using Docpad in his migration post. This site also provides some very nice ideas around the way posts and tags should look.
  • Ben Delarre provided a nice summary of his own Docpad setup.

Sherlock release - V0.4.4.0

Saturday, September 21, 2013 | Posted in Sherlock

Version V0.4.4.0 of the Sherlock regression testing application has been released. This release provides two improvements:

  • The test output now includes an XML report with all the messages and an indication of test success or failure.
  • Removed the Test data directory settings from the web service because test packages are now always dropped in the App_Data directory

PG2 - Second day

Monday, September 16, 2013 | Posted in Paragliding PG2

On Sunday the weather was good enough for some flying on the west coast. So at 11.30am about 8 of us gathered at Kario and the second day of my PG2 had started.

Paraglider soaring high over beautiful west coast

The first thing to notice about the west coast is that it looks amazing, rugged and empty. It seems that with paragliding besides the flying I also get to look out over the amazing landscapes New Zealand has. Double win!

But before I get to fly as a qualified pilot a lot more training has to be had, and that we did. The day started off with some ground handling exercises on the really old gliders. My first taste of reverse launching, except without the actual turning around and flying bit. It turns out reverse launching isn't as simple as I thought it would be. Having flown kites for many years I thought I would get the reverse launch down pretty easily, but when the gliders get larger and the controls are reversed things get difficult. More practise is required ...

After the ground handling a few short flights down the slope were made with a larger glider to practise the forward starts and then it was time to get some actual flights done. I managed to get three short flights in where I got to practise forward starts, turning and then landing. Some observations (which are probably obvious to those that have flown paragliders before):

  • If there is a distinct edge at the end of the start field then when you fly out over that edge the glider will climb rapidly due to the wind blasting straight up the edge. This is nothing to worry about, although it feels slightly disconcerting at first.
  • A paraglider moves around more than you would think, even in a nice constant sea breeze
  • If you land at even a slight angle to the wind it gets really hard to put the glider on its back, instead of its side. I got only one landing totally right.

Flying my new Gin Atlas

And then it was time for the unexpected highlight of the day. When I paid for the PG2 course I paid for a combination package, which includes the PG2 course, a glider and a harness. Now I still haven't selected a harness but I have selected the glider, a bright red Gin Atlas. I didn't expect to be able to fly it so soon in my course but after my initial three flights both instructors gave me the go ahead to fly it.

I managed a further two flights with it. The first one was a nice long soaring flight where I got to get a feel for the glider. At the end of the flight I managed to gain quite a bit of height. To top that flight off the instructor guided me in for a top landing, upon which I nearly broke my glider by stuffing up the landing. Fortunately no gliders or pilots were hurt.

The last flight of the day was back down to the beach to pack our equipment and drive home. I set myself up for a landing approach and promptly overshot my selected landing spot by about 100 meters or so. I guess selecting and landing in the right spot is yet another skill I have to work on.

All in all it was an amazing day of flying, especially the nice soaring flight on my shiny new glider.

Sherlock release - V0.4.3.0

Saturday, September 14, 2013 | Posted in Sherlock

Version V0.4.3.0 of the Sherlock regression testing application has been released. This release provides several bug fixes for:

  • The console application: Changed the way the REST URLs are constructed so that they work if the web service is hosted on a sub-path of a domain, e.g. http://myserver/sherlock now works
  • The web service: Added Trace.Write statements for ease of debugging
  • The Windows service: Fixed a dead-lock bug in the shut-down process and linked all processes via a Windows job object to ensure that all child processes are terminated if the service is terminated.

Sherlock release - V0.4.2.0

Wednesday, September 11, 2013 | Posted in Sherlock

Version V0.4.2.0 of the Sherlock regression testing application has been released. This release adds one bug fix and one improvement.

The bug fix is a change to the IsHyperVMachine and IsPhysicalMachine stored procedures in order to make them easier to use correctly.

The improvement is the addition of build steps that package the web projects into a ZIP file for easier deployment. These ZIP files can now also be retrieved from the release notes.

Sherlock release - V0.4.0.1

Sunday, September 08, 2013 | Posted in Sherlock

Version V0.4.0.1 of the Sherlock regression testing application has been released. This release adds two new features and one improvement.

The new features are:

  • The ability to continue with a test sequence after a test step has failed. The test step should indicate that its failure mode is 'continue'; the other option is 'stop'.
  • Added a test step that allows executing a console application with a set of input parameters

The following gist gives an overview of what the configuration for the new console test step looks like.

The improvement is in the HTML output report, which has been overhauled to improve the layout and readability.

First time paragliding

Tuesday, September 03, 2013 | Posted in Paragliding PG2

A little while ago I signed up for a PG2 paragliding course with Wings and Waves. Unfortunately I signed up just before winter crept in and made it impossible to fly. As we are moving towards spring the weather has been getting better and last Sunday the weather was good enough for us (my girlfriend and me) to have our first shot at flying a paraglider.

Casey ground handling the old glider

So on Sunday morning we met up with the Wings and Waves crew and drove over to a nice, if somewhat soggy, hill somewhere around Pukemore, south of the Bombay hills, for the first steps of our training.

The first part consisted of learning how to ground handle a paraglider. Ground handling is the art of controlling the paraglider while on the ground. This turns out to be harder than it first looks. In the air the pilot will normally be centred underneath the paraglider. On the ground however the pilot may not be directly below the paraglider which means the pilot needs to actively move around to keep the paraglider above them.

Me doing a forward start with the old glider

Once we got the hang of the ground handling we started learning forwards starts, or alpine starts. With this method of starting you face forward with the glider behind you. Once you have sorted out the lines you move forwards until the lines are tensioned and then you run at a constant pace. The first glider we used was the really old one we used in ground handling. Given that it was not very large it was pretty easy to run with, and running we did, lots of it.

Once we got a few good runs in with the old glider the instructor found us a more modern paraglider to play with. This one was considerably larger and far more interesting. Over the remainder of the day we slowly worked our way up the hill, starting from a higher position each time, and being rewarded with a little more flight time and a little more height each time.

Casey having a little flight with the newer glider

In the beginning both my girlfriend and I got to use the same glider, but it turns out I've got a big fat ass and so I had to upgrade to bigger gliders a few more times, each time being reminded that bigger gliders also mean more backwards pull on launch and higher flights.

In all it was a good day of flying. I still have a lot of work to do on all parts of my launching and flying skills (obviously, given that this was my first time), but the one thing I really noticed I need to work on is distance estimation and flare timing. I got that wrong a few times, and it was fortunate that the landing area was soggy because it made for nice, if somewhat splashy, landings even when I miscalculated my flare timing.

All in all a good time was had by all and we are definitely keen for more flying. Hopefully with less running and more flying next time.

Why this blog was created

Friday, August 30, 2013 | Posted in Blog Writing

A couple of years ago I had a blog on Blogger but that never got further than a few posts, mainly because I ran out of interesting topics to write about. That and my studies started interfering with life in general ... it seems research does that to one's life. In any case the old blog stalled and no more posts were written.

Recently I have been wanting to improve my writing skills. You see, as a software engineer I'm reasonably good at writing instructions for a three year old, eh I mean a computer, but I am not particularly good at writing concise, human-friendly text. I have written two dissertations, one for my master's degree and one for my PhD, but neither of those qualifies as either concise or human-friendly.

Given that the only way to get better at writing is to practise, practise and practise some more, I thought writing posts about things that interest me would be a good way to improve my writing skills. On top of that it may help me improve my skills in explaining technical constructs. Double win.

The only thing left to answer is what I shall be writing about. Well, obviously those things that interest me, mostly being:

  • My open source projects. All of them could do with some additional documentation and maybe even some examples. I guess having a blog is just an additional reason to finally write those.
  • All the (exciting) things I get up to when rock climbing or paragliding.
  • And if things progress nicely I might write some articles describing some work I am doing at the moment that is a continuation of the software development I did for my PhD.