April 24, 2014 Posted in speaking  |  programming

BDD all the way down

When I started doing TDD a few years ago, I often felt an inexplicable gap between the functionality described in the requirements and the tests I was writing to drive my implementation. BDD turned out to be the answer.

TDD makes sense and improves the quality of your code. I don’t think anybody could argue against this simple fact. Is TDD flawless? Well, that’s a whole different discussion.

Turtles all the way down

But let’s back up for a moment.

Picture this: you’ve just joined a team tasked with developing some kind of software which you know nothing about. The only thing you know is that it’s an implementation of Conway’s Game of Life as a web API. Your first assignment is to develop a feature in the system:

In order to display the current state of the Universe
As a Game of Life client
I want to get the next generation from underpopulated cells

You want to implement this feature by following the principles of TDD, so you know that the first thing you should do is start writing a failing test. But, what should you test? How would you express this requirement in code? What’s even a cell?

If you’re looking at TDD for some guidance, well, I’m sorry to tell you that you’ll find none. TDD has one rule. Do you remember what the first rule of TDD is? (No, it’s not you don’t talk about TDD):

Thou shalt not write a single line of production code without a failing test.

And that’s basically it. TDD doesn’t tell you what to test, how to name your tests, or even how to understand why they fail in the first place.

As a programmer, the only thing you can think of, at this point, is writing a test that checks for nulls. Arguably, that’s the equivalent of trying to start a car by emptying the ashtrays.

Let the behavior be your guide

What if we stopped worrying about writing tests for the sake of, well, writing tests, and instead focused on verifying what the system is supposed to do in a certain situation? That’s called behavior and a lot of good things may come out of letting it be the driving force when developing software. Dan North noticed this during his research, which ultimately led him to the formalization of Behavior-driven Development:

I started using the word “behavior” in place of “test” in my dealings with TDD and found that not only did it seem to fit but also that a whole category of coaching questions magically dissolved. I now had answers to some of those TDD questions. What to call your test is easy – it’s a sentence describing the next behavior in which you are interested. How much to test becomes moot – you can only describe so much behavior in a single sentence. When a test fails, simply work through the process described above – either you introduced a bug, the behavior moved, or the test is no longer relevant.

Focusing on behavior has other advantages, too. It forces you to understand the context in which the feature you’re developing is going to work. It makes you see the value it brings to its users. And, last but not least, it forces you to ask questions about the concepts mentioned in the requirements.

So, what’s the first thing you should do? Well, let’s start by understanding what a generation of cells is and then we can move on to the concept of underpopulation:

Any live cell with fewer than two live neighbors dies, as if caused by underpopulation.

Acceptance tests

At this point we’re ready to write our first test. But, since there’s still no code for the feature, what exactly are we supposed to test? The answer to that question is simpler than you might expect: the system itself.

We’re implementing a requirement for our Game of Life web API. The user is supposed to make an HTTP request to a certain URL, sending a list of cells formatted as JSON, and get back a response containing the same list of cells after the rule of underpopulation has been applied. We’ll know we’ve fulfilled the requirement when the system does exactly that. That is, in other words, the requirement’s acceptance criterion. It sure sounds like a good place to start writing a test.

Let’s express it in the format formalized by Dan North:

Scenario: Death by underpopulation
    Given a live cell has fewer than 2 live neighbors
    When I ask for the next generation of cells
    Then I should get back a new generation
    And it should have the same number of cells
    And the cell should be dead

Here, we call the acceptance criterion a scenario and use the Given-When-Then syntax to express its premises, action and expected outcome. Having a common language like this for expressing software requirements is one of the greatest innovations brought by BDD.

So, we said we were going to test the system itself. In practice, that means we must put ourselves in the user’s shoes and let the test interact with the system at its outermost boundaries. In the case of a web API, that translates into sending HTTP requests and asserting on the responses.
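Concretely, the exchange we’re driving toward could look something like this — the URL and the JSON shape are placeholders at this point, since the API doesn’t exist yet:

```http
POST /api/generation HTTP/1.1
Content-Type: application/json

[{ "Alive": true, "Neighbors": 1 }]

HTTP/1.1 200 OK
Content-Type: application/json

[{ "Alive": false, "Neighbors": 1 }]
```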

In order to turn our scenario into an executable test, we need some kind of framework that can map the Given-When-Then sentences to methods. In the realm of .NET, that framework is called SpecFlow. Here’s how we could use it together with C# to implement our test:

IEnumerable<dynamic> generation;
HttpResponseMessage response;
IEnumerable<dynamic> nextGeneration;

[Given]
public void Given_a_live_cell_has_fewer_than_COUNT_live_neighbors(int count)
{
    generation = new[] { new { Alive = true, Neighbors = --count } };
}

[When]
public void When_I_ask_for_the_next_generation_of_cells()
{
    response = WebClient.PostAsJson("api/generation", generation);
}

[Then]
public void Then_I_should_get_back_a_new_generation()
{
    Assert.That((int)response.StatusCode, Is.InRange(200, 299));
    nextGeneration = ParseGenerationFromResponse();
}

[Then]
public void Then_it_should_have_the_same_number_of_cells()
{
    Assert.That(nextGeneration.Count(), Is.EqualTo(generation.Count()));
}

[Then]
public void Then_the_cell_should_be_dead()
{
    var isAlive = (bool)nextGeneration.Single().Alive;
    Assert.That(isAlive, Is.False);
}

IEnumerable<dynamic> ParseGenerationFromResponse()
{
    return response.ReadContentAs<IEnumerable<dynamic>>();
}
As you can see, each portion of the scenario is mapped directly to a method by matching the words used within the sentences separated by underscores.

At this point, we can finally run our acceptance test and watch it fail:

Given a live cell has fewer than 2 live neighbors
-> done.
When I ask for the next generation of cells
-> done.
Then I should get back a new generation
-> error: Expected: in range (200 OK, 299)
          But was:  404 NotFound

As expected, the test fails on the first assertion with an HTTP 404 response. That’s correct, since there’s nothing listening at that URL on the other end yet. Now that we understand why the test fails, we can officially start implementing the feature.

The TDD cycle

We’re about to get our hands dirty (albeit keeping the code clean) and dive into the system. At this point we follow the normal rules of Test-driven development with its Red-Green-Refactor cycle. However, we’re faced with the exact same problem we had at the boundaries of the system. What should be our first test? Even in this case, we’ll let the behavior be our guiding light.

If you have experience writing unit tests, my guess is that you’re used to writing one test class for every production class. For example, given SomeClass you’d write your tests in SomeClassTests. This is fine, but I’m here to tell you that there’s a better way. If we focus on how a class should behave in a given situation, wouldn’t it be more natural to have one test class per scenario?

Consider this:

public class When_getting_the_next_generation_with_underpopulation
    : ForSubject<GenerationController>

This class will contain all the tests related to how the GenerationController class behaves when asked to get the next generation of cells given that one of them is underpopulated.

But, wait a minute. Wouldn’t that create a lot of tiny test classes?

Yes. One per scenario to be precise. That way, you’ll know exactly which class to look at when you’re working with a certain feature. Besides, having many small cohesive classes is better than having giant ones with all kinds of tests in them, don’t you think?

Let’s get back to our test. Since we’re testing one single scenario, we can structure it in much the same way as we would define an acceptance test:

For each scenario there's exactly one context to set up, one action and one or more assertions on the outcome.

As always, readability is king, so we’d like to express our test in a way that gets us as close as possible to human language. In other words, our tests should read like specifications:

public class When_getting_the_next_generation_with_underpopulation
    : ForSubject<GenerationController>
{
    static Cell solitaryCell;
    static IEnumerable<Cell> currentGen;
    static IEnumerable<Cell> nextGen;

    Establish context = () =>
    {
        solitaryCell = new Cell { Alive = true, Neighbors = 1 };
        currentGen = AFewCellsIncluding(solitaryCell);
    };

    Because of = () =>
        nextGen = Subject.GetNextGeneration(currentGen);

    It should_return_the_next_generation_of_cells = () =>
        nextGen.ShouldNotBeNull();

    It should_include_the_solitary_cell_in_the_next_generation = () =>
        nextGen.ShouldContain(solitaryCell);

    It should_only_include_the_original_cells_in_the_next_generation = () =>
        nextGen.ShouldContainOnly(currentGen);

    It should_mark_the_solitary_cell_as_dead = () =>
        solitaryCell.Alive.ShouldBeFalse();
}
In this case I’m using a test framework for .NET called Machine.Specifications, or MSpec. MSpec belongs to a category of frameworks called BDD-style testing frameworks. There are at least a few of them for almost every language known to man (including Haskell). What they all have in common is a strong focus on allowing you to express your tests in a way that resembles requirements.

Speaking of readability, see all those underscores, static variables and lambdas? Those are just the tricks MSpec has to pull on the C# compiler, in order to give us a domain-specific language to express requirements while still producing runnable code. Other frameworks have different techniques to get as close as possible to human language without angering the compiler. Which one you choose is largely a matter of preference.

Wrapping it up

I’ll leave the implementation of the GenerationController class as an exercise for the reader, since it’s outside the scope of this article. If you like, you can find mine over at GitHub.

What’s important here is that, after a few rounds of Red-Green-Refactor, we’ll finally be able to run our initial acceptance test and see it pass. At that point, we’ll know that we have successfully implemented our feature.

Let’s recap our entire process with a picture:

The Outside-in development cycle

This approach to developing software is called Outside-in development and is described beautifully in Steve Freeman and Nat Pryce’s excellent book Growing Object-Oriented Software, Guided by Tests.

In our little exercise, we grew a feature in our Game of Life web API from the outside-in, following the principles of Behavior-driven Development.


You can see a complete recording of the talk I gave at Foo Café last year about this topic. The presentation pretty much covers the material described in this article and expands a bit on the programming aspect of BDD. I hope you’ll find it useful. If you have any questions, please feel free to contact me directly or write in the comments.

Here’s the abstract:

In this session I’ll show how to apply the Behavior-driven Development (BDD) cycle when developing a feature from top to bottom in a fictitious .NET web application. Building on the concepts of Test-driven Development, I’ll show you how BDD helps you produce tests that read like specifications, by forcing you to focus on what the system is supposed to do, rather than how it’s implemented.

Starting from the acceptance tests seen from the user’s perspective, we’ll work our way through the system implementing the necessary components at each tier, guided by unit tests. In the process, I’ll cover how to write BDD-style tests both in plain English with SpecFlow and in C# with MSpec. In other words, it’ll be BDD all the way down.

April 16, 2013 Posted in programming  |  autofixture

General-purpose customizations with AutoFixture

If you’ve been using AutoFixture in your tests for a while, chances are you’ve already come across the concept of customizations. If you’re not familiar with it, let me give you a quick introduction:

A customization is a group of settings that, when applied to a given Fixture object, control the way AutoFixture will create instances for the types requested through that Fixture.

At this point you might find yourself feeling an irresistible urge to know everything there is to know about customizations. If that’s the case, don’t worry. There are a few resources online where you can learn more about them. For example, I wrote about how to take advantage of customizations to group together test data related to specific scenarios.

In this post I’m going to talk about something different which, in a sense, is quite the opposite of that: how to write general-purpose customizations.

A (user) story about cooking

It’s hard to talk about test data without a bit of context. So, for the sake of this post, I thought we would pretend to be working on a somewhat realistic project. The system we’re going to build is an online catalogue of food recipes. The domain, at the very basic level, consists of three concepts:

  • Cookbook
  • Recipes
  • Ingredients

Basic domain model for a recipe catalogue.

Now, let’s imagine that in our backlog of requirements we have one where the user wishes to be able to search for recipes that contain a specific set of ingredients. Or, in other words:

As a foodie, I want to know which recipes I can prepare with the ingredients I have, so that I can get the best value for my groceries.

From the tests…

As usual, we start out by translating the requirement at hand into a set of acceptance tests. In order to do that, we need to tell AutoFixture how we’d like the test data for our domain model to be generated.

For this particular scenario, we need every Ingredient created in the test fixture to be randomly chosen from a fixed pool of objects. That way we can ensure that all recipes in the cookbook will be made up of the same set of ingredients.

Here’s what such a customization would look like:

public class RandomIngredientsFromFixedSequence : ICustomization
{
    private readonly Random randomizer = new Random();
    private IEnumerable<Ingredient> sequence;

    public void Customize(IFixture fixture)
    {
        InitializeIngredientSequence(fixture);
        fixture.Register(() => PickRandomIngredientFromSequence());
    }

    private void InitializeIngredientSequence(IFixture fixture)
    {
        // Materialize the sequence so that every pick comes from the same fixed pool.
        this.sequence = fixture.CreateMany<Ingredient>().ToList();
    }

    private Ingredient PickRandomIngredientFromSequence()
    {
        var randomIndex = this.randomizer.Next(0, sequence.Count());
        return sequence.ElementAt(randomIndex);
    }
}

Here we’re creating a pool of ingredients and, by using the Fixture.Register method, telling AutoFixture to randomly pick one of them every time it needs to create an Ingredient object.

Since we’ll be using xUnit.net as our test runner, we can take advantage of the AutoFixture Data Theories to keep our tests succinct by using AutoFixture in a declarative fashion. In order to do so, we need to write an xUnit Data Theory attribute that tells AutoFixture to use our new customization:

public class CookbookAutoDataAttribute : AutoDataAttribute
{
    public CookbookAutoDataAttribute()
        : base(new Fixture().Customize(
                   new RandomIngredientsFromFixedSequence()))
    {
    }
}

If you prefer to use AutoFixture directly in your tests, the imperative equivalent of the above is:

var fixture = new Fixture();
fixture.Customize(new RandomIngredientsFromFixedSequence());

At this point, we can finally start writing the acceptance tests to satisfy our original requirement:

public class When_searching_for_recipies_by_ingredients
{
    [Theory, CookbookAutoData]
    public void Should_only_return_recipes_with_a_specific_ingredient(
        Cookbook sut,
        Ingredient ingredient)
    {
        // When
        var recipes = sut.FindRecipies(ingredient);
        // Then
        Assert.True(recipes.All(r => r.Ingredients.Contains(ingredient)));
    }

    [Theory, CookbookAutoData]
    public void Should_include_new_recipes_with_a_specific_ingredient(
        Cookbook sut,
        Ingredient ingredient,
        Recipe recipeWithIngredient)
    {
        // Given
        sut.AddRecipe(recipeWithIngredient);
        // When
        var recipes = sut.FindRecipies(ingredient);
        // Then
        Assert.Contains(recipeWithIngredient, recipes);
    }
}

Notice that during these tests AutoFixture will have to create Ingredient objects in a couple of different ways:

  • indirectly when constructing Recipe objects associated to a Cookbook
  • directly when providing arguments for the test parameters

As far as AutoFixture is concerned, it doesn’t really matter which code path leads to the creation of ingredients. The algorithm provided by the RandomIngredientsFromFixedSequence customization will apply in all situations.

…to the implementation

After a couple of Red-Green-Refactor cycles spawned from the above tests, it’s not unlikely that we’ll end up with production code similar to this:

// Cookbook.cs
public class Cookbook
{
    private readonly ICollection<Recipe> recipes;

    public Cookbook(IEnumerable<Recipe> recipes)
    {
        this.recipes = new List<Recipe>(recipes);
    }

    public IEnumerable<Recipe> FindRecipies(params Ingredient[] ingredients)
    {
        return recipes.Where(r => r.Ingredients.Intersect(ingredients).Any());
    }

    public void AddRecipe(Recipe recipe)
    {
        recipes.Add(recipe);
    }
}

// Recipe.cs
public class Recipe
{
    public readonly IEnumerable<Ingredient> Ingredients;

    public Recipe(IEnumerable<Ingredient> ingredients)
    {
        this.Ingredients = ingredients;
    }
}

// Ingredient.cs
public class Ingredient
{
    public readonly string Name;

    public Ingredient(string name)
    {
        this.Name = name;
    }
}

Nice and simple. But let’s not stop here. It’s time to take it a bit further.

An opportunity for generalization

Given the fact that we started working from a very concrete requirement, it’s only natural that the RandomIngredientsFromFixedSequence customization we came up with encapsulates a behavior that is specific to the scenario at hand. However, if we take a closer look, we might notice the following:

The only part of the algorithm that is specific to the original scenario is the type of the objects being created. The rest can easily be applied whenever you want to create objects that are picked at random from a predefined pool.

An opportunity for writing a general-purpose customization has just presented itself. We can’t let it slip.

Let’s see what happens if we extract the Ingredient type into a generic argument and remove all references to the word “ingredient”:

public class RandomFromFixedSequence<T> : ICustomization
{
    private readonly Random randomizer = new Random();
    private IEnumerable<T> sequence;

    public void Customize(IFixture fixture)
    {
        InitializeSequence(fixture);
        fixture.Register(() => PickRandomItemFromSequence());
    }

    private void InitializeSequence(IFixture fixture)
    {
        // Materialize the sequence so that every pick comes from the same fixed pool.
        this.sequence = fixture.CreateMany<T>().ToList();
    }

    private T PickRandomItemFromSequence()
    {
        var randomIndex = this.randomizer.Next(0, sequence.Count());
        return sequence.ElementAt(randomIndex);
    }
}

Voilà. We just turned our scenario-specific customization into a pluggable algorithm that changes the way objects of any type are going to be generated by AutoFixture. In this case the algorithm will create items by picking them at random from a fixed sequence of T.

The CookbookAutoDataAttribute can easily be changed to use the general-purpose version of the customization by closing the generic argument with the Ingredient type:

public class CookbookAutoDataAttribute : AutoDataAttribute
{
    public CookbookAutoDataAttribute()
        : base(new Fixture().Customize(
                   new RandomFromFixedSequence<Ingredient>()))
    {
    }
}

The same is true if you’re using AutoFixture imperatively:

var fixture = new Fixture();
fixture.Customize(new RandomFromFixedSequence<Ingredient>());

Wrapping up

As I said before, customizations are a great way to set up test data for a specific scenario. Sometimes these configurations turn out to be useful in more than just one situation.

When such an opportunity arises, it’s often a good idea to separate out the parts that are specific to a particular context and turn them into parameters. This allows the customization to become a reusable strategy for controlling AutoFixture’s behavior across entire test suites.

February 26, 2013 Posted in technology  |  programming

Adventures in overclocking a Raspberry Pi

This article sums up my experience overclocking a Raspberry Pi. It doesn’t provide a step-by-step guide on how to do the actual overclocking, since that kind of resource can easily be found elsewhere on the Internet. Instead, it gathers the pieces of information I found most interesting during my research, while diving deeper into some exquisitely geeky details along the way.

A little background

For the past couple of months I’ve been running a Raspberry Pi as my primary NAS at home. It wasn’t something I had planned. On the contrary, it all started by chance when I received a Raspberry Pi as a conference gift at last year’s Leetspeak. But why use it as a NAS when there are much better products on the market, you might ask. Well, because I happen to have a small network closet at home, and the Pi is a pretty good fit for a device that’s capable of acting like a NAS while taking up very little space.

My Raspberry Pi sitting in the network closet.

Much like its size, the setup itself also strikes you with its simplicity: I plugged in an external 1 TB WD Elements USB drive that I had lying around (the black box sitting above the Pi in the picture on the right), installed Raspbian on an SD memory card and went to town. Once booted, the little Pi exposes the storage space on the local network through two channels:

  • As a Windows file share on top of the SMB protocol, through Samba
  • As an Apple file share on top of the AFP protocol, through Netatalk

On top of that, it also runs a headless version of the CrashPlan Linux client to back up the contents of the external drive to the cloud. So the Pi not only works as central storage for all my files, but also manages to fool Mac OS X into thinking it’s a Time Capsule. Not too bad for a tiny ARM11 700 MHz processor and 256 MB of RAM.
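On the Samba side, exposing the drive takes little more than a share definition in /etc/samba/smb.conf. A minimal one could look like this — the share name and mount point are illustrative, not taken from my actual setup:

```
[storage]
    comment = Raspberry Pi NAS
    path = /mnt/elements
    browseable = yes
    read only = no
    guest ok = no
```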

The need for (more) speed

A Raspberry Pi needs 5 volts to function. On top of that, depending on the number and kind of devices you connect to it, it’ll draw approximately between 700 and 1400 milliamperes (mA) of current. This gives an average consumption of roughly 5 watts, which makes it ideal for an appliance that’s on 24/7. However, as impressive as all of this might be, it’s not all sunshine and rainbows. In fact, as the expectations get higher, the Pi’s limited hardware resources quickly become a frustrating bottleneck.

Luckily for us, the folks at the Raspberry Pi Foundation have made it fairly easy to squeeze more power out of the little piece of silicon by officially allowing overclocking. Now, there are a few different combinations of frequencies that you can use to boost the CPU and GPU in your Raspberry Pi.

The amount of stable overclocking that you’ll be able to achieve, however, depends on a number of physical factors, such as the quality of the soldering on the board and the amount of output that’s supported by the power supply in use. In other words, YMMV.

There are also at least a couple of different ways to go about overclocking a Raspberry Pi. I found that the most suitable one for me is to manually edit the configuration file found at /boot/config.txt. This not only gives you fine-grained control over which parts of the board to overclock, but also allows you to change other aspects of the process, such as voltage and temperature thresholds.

In my case, I managed to work my way up from the stock 700 MHz to 1 GHz through a number of small incremental steps. Here’s the final configuration I ended up with:
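(The values below are a representative reconstruction rather than my exact file — they are the stock 1 GHz “Turbo” preset numbers that match the figures discussed in this post; the settings your board runs stably with may differ.)

```
arm_freq=1000
core_freq=500
sdram_freq=600
over_voltage=6
force_turbo=0
```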


One thing to notice is the force_turbo option that’s currently turned off. It’s there because, until September of last year, modifying the CPU frequencies of the Raspberry Pi would set a permanent bit inside the chip that voided the warranty.

However, having recognized the widespread interest in overclocking, the Raspberry Pi Foundation decided to give it their blessing by building a feature into their own version of the Linux kernel called Turbo Mode. This allows the operating system to automatically increase and decrease the speed and voltage of the CPU based on how much load is put on the system, thus reducing the impact on the hardware’s lifetime to effectively zero.

Setting the force_turbo option to 1 will cause the CPU to run at its full speed all the time and will apparently also contribute to setting the dreaded warranty bit in some configurations.

Entering Turbo Mode

When Turbo Mode is enabled, the CPU speed and voltage will switch between two values, a minimum and a maximum, both of which are configurable. When it comes to speed, the default minimum is the stock 700 MHz; the default voltage is 1.20 V. During my overclocking experiments I wanted to keep a close eye on these parameters, so I wrote a simple Bash script that fetches the current state of the CPU from different sources within the system and displays a brief summary. Here’s what it looks like when the system is idle:

Output of my cpustatus script when the CPU is idle.

See how the current speed is equal to the minimum one? Now, take a look at how things change on full blast with the Turbo mode kicked in:

Output of my cpustatus script with Turbo Mode enabled.

As you can see, the CPU is running hot at the maximum speed of 1 GHz, fed with 0.15 extra volts.
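The script itself isn’t reproduced in full here, but a minimal sketch of it could look like the following — it reads the standard cpufreq sysfs entries, and the sysfs root is parameterized (an addition of mine, not in the original) so the summary logic can be exercised on any machine:

```shell
#!/bin/sh
# Sketch of a cpustatus-style script (not the original): summarizes the
# CPU frequency state from the cpufreq sysfs entries. The sysfs root can
# be overridden via $1, e.g. for testing on machines without cpufreq.
root="${1:-/sys/devices/system/cpu/cpu0/cpufreq}"

mhz() {
  # Print a kHz reading from the given sysfs file as MHz, or n/a.
  if [ -r "$root/$1" ]; then
    awk '{ printf "%d MHz", $1 / 1000 }' "$root/$1"
  else
    printf "n/a"
  fi
}

echo "minimum speed: $(mhz scaling_min_freq)"
echo "current speed: $(mhz scaling_cur_freq)"
echo "maximum speed: $(mhz scaling_max_freq)"
echo "governor:      $(cat "$root/scaling_governor" 2>/dev/null || echo n/a)"
```

On a Pi it prints the four lines shown in the screenshots; elsewhere the missing entries simply come out as n/a.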

The last line shows the governor, a component of the Linux kernel’s cpufreq driver that’s responsible for adjusting the speed of the processor. The governor is the strategy that regulates exactly when and by how much the CPU frequency will be scaled up and down. The one currently in use is called ondemand, and it’s the foundation upon which the entire Turbo Mode is built.

It’s interesting to notice that the choice of governor, contrary to what you might expect, isn’t fixed. The cpufreq driver can, in fact, be told to switch to a different governor at runtime simply by writing to a file on disk. For example, changing from the ondemand governor to the one called powersave would lock the CPU speed to its minimum value, effectively disabling Turbo Mode:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Prints ondemand

echo "powersave" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Prints powersave

Here’s a list of available governors as seen in Raspbian:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# Prints conservative ondemand userspace powersave performance

If you’re interested in seeing how they work, I encourage you to check out the cpufreq source code on GitHub. It’s very well written.

Fine tuning

I’ve managed to get a pretty decent performance boost out of my Raspberry Pi just by applying the settings shown above. However, there are still a couple of knobs left to tweak before we can settle.

The ondemand governor used in the Raspberry Pi will increase the CPU speed to the maximum configured value whenever it finds the CPU to be busy more than 95% of the time. That sounds fair enough for most cases, but if you’d like that extra speed bump even when the system is performing somewhat lighter tasks, you’ll have to lower the load threshold. This, too, is easily done by writing an integer value to a file:

echo 60 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

Here we’re saying that we’d like to have the Turbo Mode kick in when the CPU is busy at least 60% of the time. That is enough to make the Pi feel a little snappier during general use.

Wrap up

I have to say that I’ve been positively surprised by the capabilities of the Raspberry Pi. Its exceptional form factor and low power consumption make it ideal for working as a NAS in very restricted spaces, like my network closet. Add to that the flexibility that comes from running Linux, and the possibilities become truly endless. In fact, the more stuff I add to my Raspberry Pi, the more I’d like it to do. What’s next, a Node.js server?

January 22, 2013 Posted in speaking

Grokking Git by seeing it

When I first started getting into Git a couple of years ago, one of the things I found most frustrating about the learning experience was the complete lack of guidance on how to interpret the myriad of commands and switches found in the documentation. On second thought, calling it frustrating is actually an understatement. Utterly painful would be a better way to describe it.

What I was looking for was a way to represent the state of a Git repository in some sort of graphical format. In my mind, if only I could have visualized how the different combinations of commands and switches impacted my repo, I would have had a much better shot at actually understanding their meaning.

After a bit of research on the Git forums, I noticed that many people were using a simple text-based notation to describe the state of their repos. The actual symbols varied a bit, but they all essentially came down to something like this:

                               C4--C5 (feature)
                C1--C2--C3--C4'--C5'--C6 (master)

where the symbols mean:

  • Cn represents a single commit
  • Cn’ represents a commit that has been moved from another location in history, i.e. it has been rebased
  • (branch) represents a branch name
  • ^ indicates the commit referenced by HEAD

This form of graphical DSL proved extremely useful not only as a learning tool but also as a universal Git language, handy for documentation as well as for communication during problem solving.
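Incidentally, stock Git can render a similar ASCII graph all by itself. Here’s a self-contained demo — the scratch repository, identity and commit names are throwaway values made up for illustration:

```shell
# Build a tiny scratch repository and let Git draw its own history graph.
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email "demo@example.com"  # throwaway identity for the demo
git config user.name "Demo"
git commit -q --allow-empty -m "C1"
git commit -q --allow-empty -m "C2"
git commit -q --allow-empty -m "C3"
git checkout -q -b feature
git commit -q --allow-empty -m "C4"

# One line per commit, with branch labels, drawn as ASCII art.
git log --graph --oneline --decorate --all
```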

Now, keeping this idea in mind, imagine having a tool that is able to draw a similar diagram automatically. Sounds interesting? Well, let me introduce SeeGit.

SeeGit is a Windows application that, given the path to a Git repository on disk, will generate a diagram of its commits and references. Once done, it will keep watching the directory for changes and automatically update the diagram accordingly.

This is where the idea for my Grokking Git by seeing it session came from. The goal is to illustrate the meaning behind different Git operations by going through a series of demos, with the command line running on one half of the screen and SeeGit on the other. As I type away in the console, you can see the Git history unfold in front of you, giving you insight into how things work under the covers.

In other words, something like this:

SeeGit session in progress.

So, this is just to give you a little background. Here you’ll find the session’s abstract, slides and demos. There’s also a recording from when I presented this talk at LeetSpeak in Malmö, Sweden back in October 2012. I hope you find it useful.


In this session I’ll teach you the Git zen from the inside out. Working out of real world scenarios, I’ll walk you through Git’s fundamental building blocks and common use cases, working our way up to more advanced features. And I’ll do it by showing you graphically what happens under the covers, as we fire different Git commands.

You may already have been using Git for a while to collaborate on some open source project or even at work. You know how to commit files, create branches and merge your work with others. If that’s the case, believe me, you’ve only scratched the surface. I firmly believe that a deep understanding of Git’s inner workings is the key to unlock its true power allowing you, as a developer, to take full control of your codebase’s history.

August 30, 2012 Posted in speaking

Make your system administrator friendly with PowerShell

I know I’ve said it before, but I love the command line. And being a command line junkie, I’m naturally attracted to all kinds of tools that involve a bright blinking square on a black canvas. Historically, I’ve always been a huge fan of the mighty Bash. PowerShell, however, came to change that.


Since PowerShell made its first appearance under the codename “Monad” back in 2005, it has proposed itself as more than just a regular command prompt. It brought, in fact, something completely new to the table: it combined the flexibility of a Unix-style console, such as Bash, with the richness of the .NET Framework and an object-oriented pipeline, which in itself was totally unheard of. With such a compelling story, it soon became apparent that PowerShell was aiming to become the official command line tool for Windows, replacing both the ancient Command Prompt and the often criticized Windows Script Host. And so it has been.

Seven years have passed since “Monad” was officially released as PowerShell, and its presence is as pervasive as ever. Nowadays you can expect to find PowerShell in just about all of Microsoft’s major server products, from Exchange to SQL Server. It’s even become part of Visual Studio through the NuGet Package Manager Console. Not only that, but tools such as posh-git make PowerShell a very nice, and arguably more natural, alternative to Git Bash when using Git on Windows.

Following up on my interest in PowerShell, I’ve found myself talking a fair deal about it at both conferences and user groups. In particular, during the last year or so, I’ve been giving a presentation about how to integrate PowerShell into your own applications.

The idea is to leverage the PowerShell programming model to provide a rich set of administrative tools that will (hopefully) improve the often stormy relationship between devs and admins.

Since I’m often asked about where to get the slides and the code samples from the talk, I thought I would make them all available here in one place for future reference.

So here it goes, I hope you find it useful.


Have you ever been in a software project where the IT staff who would run the system in production were accounted for right from the start? My guess is not very often. In fact, it’s far too rare to see functionality being built into software systems specifically to make the job of the IT administrator easier. It’s a pity, because pulling that off doesn’t require as much time and effort as you might think with tools like PowerShell.

In this session I’ll show you how to enhance an existing .NET web application with a set of administrative tools, built using the PowerShell programming model. Once that is in place, I’ll demonstrate how common maintenance tasks can either be performed manually using a traditional GUI or be fully automated through PowerShell scripts using the same code base.

Over the last few years, Microsoft itself has committed to making all of its major server products fully administrable both through traditional GUI-based tools and through PowerShell. If you’re building a server application on the .NET platform, you will soon be expected to do the same.