
Build Agent Infrastructure Testing in GoCD

In this post I describe a simple technique for reducing the waiting time and stress caused by build agent environment volatility when using Continuous Integration / Continuous Delivery tools such as GoCD: infrastructure testing.

The Problem

Given a modern CI server, such as GoCD, and a set of dedicated build machines (agents), it is possible to improve software development agility. Automated build/test/deploy pipelines, built to reflect the value stream, bring transparency and focus into the software delivery activities.

CI automation is software itself, and is thus susceptible to errors. Configuration management can streamline the set-up of the environment in which the build agents run. However, when computational resources are added to a CI infrastructure, e.g. to parallelize the build and thus reduce feedback times, a missing environment dependency can cause exactly the stress and pain that CI is trying to eliminate.

Consider a pipeline where a complete cycle (e.g. with slow integration tests followed by a reporting step at the end, or long check-outs at the beginning) takes a significant amount of time. If one of the last tasks fails due to a configuration or environment issue, the whole stage fails. The computational resources have been wasted just to find out that a compiler is missing. This can easily happen when there is variation in the capabilities of the build agents.

Improvement idea: fail fast! Don’t wait for a long build to reveal environment or infrastructure mismatches

GoCD Resources as Requirements and Capabilities

If a step in a build pipeline requires a certain compiler or a particular environment, this can be conveniently expressed in the configuration of GoCD as a build agent resource. A resource can be seen as a requirement of a pipeline step that is fulfilled by a corresponding capability of a build agent.
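As a sketch of what this looks like in GoCD’s cruise-config.xml (the job, file and resource names here are illustrative, not taken from the set-up below):

<!-- a job states its requirement... -->
<job name="process-text">
  <resources>
    <resource>python2</resource>
  </resources>
  <tasks>
    <exec command="python2">
      <arg>process_text.py</arg>
    </exec>
  </tasks>
</job>

<!-- ...and an agent (in the agents section, or via the UI)
     advertises the matching capability -->
<agent hostname="linux-agent-1" ipaddress="10.0.0.2" uuid="...">
  <resources>
    <resource>python2</resource>
  </resources>
</agent>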

Consider the following set-up with two build agents: one running on Windows, the other on Linux. Some tasks, such as text processing, could be completely platform-independent and thus could potentially be performed on any machine with the required interpreter installed.

[Image: agents]

Build Agent is the Culprit

We have set up our environment, and have successfully tested our first commit, but the second one fails:

[Image: another_agent]

The code is the same, so why did the second build fail? The builds ran on different agents, but I expected them to behave similarly…

[Image: build_fails]

Oh, that’s embarrassing. Yes, the script is platform-independent, but there is no executable named python2 on the Windows agent’s path.

With a one-file repository and a simple print statement this failure did not cause much damage, but as mentioned earlier, real-life builds failing due to a missing executable might be costly.

Unhappy Picture

[Image: unhappy]

Infrastructure Test Pipeline

In order to fail fast when new agents are added to a CI infrastructure, or when their environment is volatile, I propose using a single independent pipeline that checks the assumptions the longer builds depend upon.

If a build step requires python in the path, there should be a test for it that gives this feedback in seconds without much additional waiting time. This can be as easy as calling python --version, which will fail with a non-zero return code if the binary is missing. More fine-grained assertions are possible, but should still remain fast.
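In GoCD terms, such a check can be a minimal job guarded by the same resource; a hedged sketch (the job name is illustrative):

<job name="check-python2">
  <resources>
    <resource>python2</resource>
  </resources>
  <tasks>
    <!-- fails fast if the binary cannot be found on the agent -->
    <exec command="python2">
      <arg>--version</arg>
    </exec>
  </tasks>
</job>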

If a certain binary should not be in the path, this can be asserted as well. The same goes for environment variables and file existence. Dedicated infrastructure testing tools, such as Serverspec, could also be used, but in my view, keeping the response time under a minute is crucial.

Run on All Agents

In order to validate the consistency of the CI infrastructure, the validation tasks should run on all agents that advertise a corresponding resource. This is where, in my view, the real power of GoCD shows: its concepts fit together in the right places.

GoCD will run the test tasks on all agents that fulfill all resource requirements for the task.
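In the XML config this boils down to a single job attribute; a sketch, reusing the hypothetical check job from above:

<!-- run the same check on every agent that advertises python2 -->
<job name="check-python2" runOnAllAgents="true">
  <!-- same resources and version-check task as in the sketch above -->
</job>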

[Image: run_on_all_g]

Test fails

Now that we have all the tests, running them gives quick and precise feedback:

[Image: infrastructure_test_fails]

Checking out the job run details reveals the offending agent. Note the test duration: under 1 second.

[Image: infrastructure_test_agent]

Fixing the Infrastructure

[Image: resources_modified]

Whatever the resolution of the infrastructure problem, when the infrastructure test provides good coverage of a pipeline’s prerequisites, adding new agents to the CI infrastructure should become as much fun as TDD: write an infrastructure test, see it fail, fix the infrastructure, feel the good hormones. Add new build agents for speed, see that everything still works: great!

[Image: infrastructure_test_passes]

Note how the tasks requiring resources that are available on only one machine run only on that corresponding machine.

 

Happy Pictures and Developers

[Image: happy pipeline]

When to Test

It is an open question when to test the infrastructure. With the system composed of the CI server and its agents, the tests should probably run on any global state change, such as:

  • added/removed/reconfigured agents
  • automatic OS updates (controversial)
  • restarts
  • network topology changes

It is also possible to schedule a regular environment check. Having the environment test pipeline as an upstream dependency of the other pipelines unfortunately will not help in the following sequence of events:

  • environment tests pass
  • faulty agents are added
  • downstream pipeline is triggered
  • an environment failure causes the downstream pipeline to fail

In any case, there is a REST API available for the GoCD server should automating the automation become a necessity.
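For example, the current set of agents and their resources can be inspected with a single call (a sketch; the server URL and credentials are illustrative, and the exact Accept version header depends on the GoCD release):

# list all agents, their resources and their states
curl -u admin:secret \
     -H 'Accept: application/vnd.go.cd.v4+json' \
     'http://go-server:8153/go/api/agents'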

Acknowledgments

I would like to thank all the great minds, authors and developers who have worked and are working to make the lives of developers and software users better. Tools and ideas that work and provide value are indispensable. The articles and the software linked in this blog entry are examples of knowledge that moves the software industry forward. I am also very grateful to my current employer for letting me learn, grow and make a positive impact.

 

 

Why wait forever for the tests? Fast tests of slow software.

Time is volatile

Imagine writing cron-like functionality that should produce some side effect, such as a cleanup. The intervals between such actions might be quite long. How does one test that? One can surely reason about the software, but given a certain complexity, tests should be written, proving that the important scenarios work as intended.

It’s common for software to depend on the flow of physical time, as reflected by some clock provider. However, resetting the clock to a year ahead won’t make the CPU work faster and perform all the computations it should have performed within that year. A clock is also a volatile component that can be manipulated; thus, if time is an issue, it’s probably a good idea not to depend on it directly, following the Stable Dependencies Principle and the Dependency Inversion Principle.

Luckily, there is an abstraction for time, at least in Reactive Extensions (Rx): the Scheduler.

Slow non-tests

Here’s a slow Groovy non-test, waiting for some output on the console using RxGroovy:

import rx.*
import java.util.concurrent.TimeUnit

// the "test": emits a single value after a real five-second delay
def observable = Observable
	.just(1)
	.delay(5, TimeUnit.SECONDS)

observable.subscribe { println 'ah, OK, done! Or not?' }

// a heartbeat, ticking on the console once per second while we wait
Observable
	.interval(1, TimeUnit.SECONDS)
	.subscribe { println 'still waiting...' }

println 'starting to wait for the test to complete ...'

// block the main thread until the delayed observable completes
observable.toBlocking().last()

Running it produces the following slow-ticking output:

[Image: oldnontest] [1. Captured with the wonderful pragmatic tool LICEcap by the Reaper developers]

Interpreting such tests without color can be somewhat challenging [2. Here, the ‘still waiting’ subscription is terminated after the first subscription ends. Try exchanging the order of the subscribe calls.].

Fast tests

Now let’s test something ridiculous, such as waiting for a hundred days, using Spock. Luckily, RxJava & RxGroovy also implement a test scheduler, thus enabling fast tests in virtual time:

import spock.lang.Specification

import rx.Observable
import rx.schedulers.TestScheduler
import java.util.concurrent.TimeUnit


class DontWaitForever extends Specification {
    def "why wait?"() {
        setup:
            def scheduler = new TestScheduler()

            // system under test: will tick once after a hundred days
            def observable = Observable
                .just(1)
                .delay(100, TimeUnit.DAYS, scheduler)
            def done = false

        when:
            observable.subscribe {
                done = true
            }

        then:
            // still in the initial state (asserted here; a bare condition
            // in a when: block would not be checked by Spock)
            done == false

        when:
            scheduler.advanceTimeBy 100, TimeUnit.DAYS

        then:
            done == true
    }
}

[Image: fasttest] [3. Built with Gradle]

Just checking: advancing the time by only 99 days results in a failure:

[Image: just_checking]

Delightful, groovy colors!

Source

github.com/d-led/dont_wait_forever_for_the_tests

ReactiveUI 6 and ViewModel Testing

Testability and ReactiveUI

[Image: ReactiveUI XAML example]

In the previous articles about ReactiveUI I claimed, without offering any evidence, that writing ViewModels using ReactiveObjects brings about testability. While the aspects of testing Rx and ReactiveUI have been treated at length in the respective authors’ blogs linked herein, this post is intended as a quick glance, for the impatient online surfer, at the hello-world testing code, which has been written “post-mortem” [1. as in, not within TDD] as a follow-up to the previous articles.

An update to ReactiveUI 6

Paul Betts and contributors have been busy simplifying and extending the library [2. which now also targets Xamarin, Windows Phone 8 and Windows Store Apps]. There are now extension methods that help create observables from properties and turn observables into properties. In the example ViewModel from the previous articles, there’s an observable stream of strings that is simply transformed into a property defined as follows:

ObservableAsPropertyHelper<string> _BackgroundTicker;
public string BackgroundTicker
{
    get { return _BackgroundTicker.Value; }
}

In the constructor, the helper is now initialized without using strings:

public WordCounterModel(IObservable<string> someBackgroundTicker)
{
    someBackgroundTicker
        .ToProperty(this, x => x.BackgroundTicker, out _BackgroundTicker);
    ...
}

instead of the string-based, error-prone version:

_BackgroundTicker = new ObservableAsPropertyHelper<string>(
	someBackgroundTicker, _ => raisePropertyChanged("BackgroundTicker")
);
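The other direction, creating an observable from a property, is covered by WhenAnyValue. A hedged sketch of how the word count could be derived (the splitting logic and the _WordCount helper field are illustrative, analogous to _BackgroundTicker above, and not necessarily the original example’s code):

// sketch: derive WordCount from the TextInput property
this.WhenAnyValue(x => x.TextInput)
    .Select(text => (text ?? string.Empty)
        .Split(new[] { ' ', ',', '.', '!' }, StringSplitOptions.RemoveEmptyEntries)
        .Length)
    .ToProperty(this, x => x.WordCount, out _WordCount);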

For the actual changes in ReactiveUI, consult Paul Betts’ insightful log.

A simple test

Since the tests have been written after the example code, I went searching for the “Generate Unit Test” context menu in Visual Studio 2013. The context menu is not there, but luckily some enthusiasts have partly recreated the functionality: → Unit Test Generator.

After the initial set-up and

[Image: failedtest]

here’s the simple test of the word count property:

[TestMethod]
public void WordCounterModelTest()
{
    var mock = new Mock<IObservable<string>>();
    var vm = new WordCounterModel(mock.Object);

    vm.WordCount.Should().Be(0);

    vm.TextInput = "bla!";
    vm.WordCount.Should().Be(1);

    vm.TextInput = "bla, bla!!";
    vm.WordCount.Should().Be(2);
}

In it, one can observe the use of Moq to create a dummy observable and FluentAssertions for beautifully readable Spec/BDD-style assertions [3. I originally intended to use SpecFlow, but the specs refused to flow frictionlessly].

So, there’s no UI involved, and the UI is dogmatically bound from XAML with almost no code-behind.

Testing time series

The hello world example program simulated a dependency on some timed series of strings, ticking every second. While this is not specific to ReactiveUI, let’s make use of the test scheduler [4. see Intro to Rx]. For that, the time series should optionally depend on an injected IScheduler:

public class BackgroundTicker
{
    IScheduler scheduler = Scheduler.Default;

    public BackgroundTicker(IScheduler other_scheduler = null)
    {
        if (other_scheduler != null)
            scheduler = other_scheduler;
    }

    public IObservable<string> Ticker
    {
        get
        {
            return Observable
                .Interval(TimeSpan.FromSeconds(1), scheduler)
                .Select(_ => DateTime.Now.ToLongTimeString());
        }
    }
}

The test instantiates a test scheduler, which is then advanced to make deterministic assertions. The code should speak for itself:

[TestMethod]
public void BackgroundTickerTest()
{
    (new TestScheduler()).With(scheduler =>
    {
        var ticker = new BackgroundTicker(scheduler);

        int count = 0;
        ticker.Ticker.Subscribe(_ => count++);
        count.Should().Be(0);

        scheduler.AdvanceByMs(1000);
        count.Should().Be(1);

        scheduler.AdvanceByMs(2000);
        count.Should().Be(3);
    });
}

Summary

[Image: passedtest]

Code: https://github.com/d-led/reactiveexamples

Previous article: The WPF + ReactiveUI Refactored Version of the Responsive UI Hello World.

See also: the C++ version.