May 09, 2014 Posted in mechanical-keyboards

A gentle introduction to mechanical keyboards

The past few years have seen a renaissance of mechanical keyboards, especially among two large groups of computer enthusiasts: programmers and gamers. I, too, share a passion for these wonderful pieces of hardware, one that goes way back. A quick search on the Internet reveals a wealth of web sites entirely dedicated to the subject. Unfortunately, most of the information found there is too "nerdy" for someone approaching the topic for the first time. In this post I'd like to give a gentle introduction to the world of mechanical keyboards for the curious but uninitiated.

I love typing. I still remember how I, as a kid, used to borrow my mother's Diplomat typewriter from 1964 and start hammering away on it, pretending to type some long text:

The legendary Diplomat typewriter

“It sure sounds like you’re a fast typist!”

she would say when she heard the racket coming from my room. Little did she know that what came out on the sheet of paper was actually gibberish. Or maybe she was just being kind.

Anyway, I enjoyed the feeling of pressing those stiff plastic buttons and hearing the sound made by the metallic arm every time it punched a letter.

Today, I continue to enjoy a similar experience on my computer by using a mechanical keyboard.

So, what’s a mechanical keyboard?

First and foremost, let’s get the terminology straight by answering that burning question.

Wikipedia’s definition gives us a good starting point, although it’s still pretty vague:

Mechanical-switch keyboards use real switches underneath every key. Depending on the construction of the switch, such keyboards have varying response and travel times.

That leaves us wondering: what do they mean by real switches? Well, in this case real means physical, in the sense that under each key there’s a metal spring that compresses every time you press a key. The spring is housed inside a plastic device that makes sure the key is registered by the computer at a specific point in time while the key is being pressed. That device is called a switch.

A mechanical switch

So, let’s reformulate the definition of a mechanical keyboard:

A keyboard is considered mechanical if it uses metal springs underneath each key.

As the Wikipedia definition hinted, there are different types of mechanical switches, each with its own special characteristics. The kind of switch used in a mechanical keyboard determines the overall feel you'll get when you type on it.

A little bit of history

At this point you might rightfully wonder:

Wait a minute. Aren’t all keyboards mechanical?

No, they are not. At least not anymore. The keyboards sold with the first personal computers back in the late 70's, however, were indeed mechanical.

Back then, computer keyboards were designed to mimic the feeling and behavior of typewriters. These were robust pieces of hardware, built with quality in mind. They used solid materials, like metal plates and thick ABS plastic frames. The keys were stiff enough to allow the typist to rest her fingers on the home row without accidentally pressing any key.

If you’re in your 30’s, chances are you remember how it felt to type on one of these keyboards, maybe in front of a Macintosh or a Commodore 64. All throughout the 80’s and early 90’s, every keyboard was mechanical. And heavy.

A Commodore 64 computer. Photo by Shane Doucette on Flickr

But then things started to change. When PCs became a mass-market product, the additional cost of producing quality mechanical keyboards to go along with them was no longer justifiable. Consumers didn't seem to care, either. As graphical user interfaces (GUI) grew in popularity during the mid 80's, the keyboard, which had been the primary way to interact with computers up to that point, had to make room for another kind of input device: the mouse.

Keyboards, like computers, became a commodity sold in large volumes. In a continuous effort to bring production costs down, new designs and materials became mainstream. Focus shifted from quality and durability to cost effectiveness.

Consequently, the heavy metal plates disappeared in favor of cheaper silicone membranes placed on top of electrical matrices. Metal springs gave way to air pads under each key. This keyboard technology is called rubber dome or membrane, and to this day it's the most common type of keyboard sold with desktop computers.

Rubber dome switches. Source: codinghorror.com

For laptops, a new kind of low-profile rubber dome switch began to appear. They were made out of thin plastic parts and were called scissor switches because of their collapsible X shape.

The scissor switch

While today’s mainstream keyboards certainly serve their purpose, they have a limited lifetime.

With time and usage, the rubber bubbles eventually wear out, which means you have to press harder on the keys to activate them. The letters printed on the keys fade or peel off. The cheap plastic parts loosen up or bend. And by the time you’re finally ready to throw those suckers in the bin, they’ve managed to drive you nuts.

Mechanical keyboards, on the other hand, are built to last.

As a testament to their extreme durability, I can tell you that the oldest one in my collection is an Apple Extended Keyboard II dating all the way back to 1989, and it still works like a charm.

My glorious Apple Extended Keyboard II from 1989

That's a 25-year-old piece of hardware still in perfectly good condition. So much so, in fact, that I use it daily as my primary keyboard at home.

It’s all about the feeling

However, believe it or not, the biggest problem with rubber dome keyboards isn’t their fragility. It’s the fact that they’re an ergonomic nightmare.

We said that the switches are responsible for telling the computer which key is being pressed. The precise moment when that happens is called the actuation point. The distance between a key's rest position and its fully pressed position is called key travel.

Now, in rubber dome keyboards, the only way to actuate a key is to press it all the way to the bottom (also known as bottoming out). That's when the silicone air bubble beneath it gets squashed, thus sending an electrical signal to the computer. This means that your fingers have to travel a longer distance when typing, which puts a strain on your hands and wrists.

Mechanical keys, on the other hand, are actuated before they bottom out. For most types of switches, that happens about half way through the key travel.

What does this mean in practice? Well, it means that you don’t have to press as hard on your keyboard when you type. Your fingers don’t have to travel as much and, consequently, you don’t get as fatigued. It may even help you type faster.

To each his own switch

There are at least a dozen different kinds of mechanical switches available on the market today. That number doubles if you count the vintage ones.

Each and every one of them has its own way of physically actuating a key, which gives it its own distinctive feel.

The buckling spring switch patented by IBM in 1971

The buckling spring switch patented by IBM in 1971, for example, actuates a key by having the internal spring buckle outwards as it collapses. That activates a plastic hammer located beneath the spring, which, upon hitting a contact, closes an electrical circuit. As the spring buckles, it hits the inner wall of its plastic housing producing a loud “click” sound. That feedback signals the typist that the key has been registered.

The Cherry MX Blue switch

The modern Cherry MX Blue switch, on the other hand, uses a leaf spring pushed by a plastic slider (the grey one in the picture) as it passes by on its way down. When the slider is halfway through, the leaf spring no longer encounters resistance from the slider, thus closing a circuit that signals the computer about the key press. As the grey slider hits the bottom of the switch, it emits a high-pitched "click" sound similar to that of a mouse button.

Don't worry, I won't get into the nuts and bolts of all the different switches. The Internet will happily provide you with more information than you can possibly ask for in that regard.

The important thing to remember is that the ergonomic properties of a mechanical keyboard are determined by three fundamental characteristics of its switches:

  • Tactility
  • Force
  • Sound

Let’s look at each of them briefly.

Tactility

A switch is called tactile if it gives some kind of feedback about when a key is actuated. This is usually done by a noticeable drop in the key's resistance the instant the key press gets registered. Some switches even accompany that with a "click" sound and are therefore referred to as "clicky".

Force

The force indicates how hard you have to press down a key in order to actuate it. This is measured in centinewtons (cN) or, more commonly, grams-force (gf). In practice they're interchangeable, since 1 cN is roughly equivalent to 1 gf (more precisely, 1 cN ≈ 1.02 gf).

Switches get progressively stiffer the further down you press a key, which, again, helps you avoid bottoming out. A soft switch requires about 45 cN to actuate. A stiff one is about twice that, around 80 cN.

The force graph of the stiff tactile buckling spring switch

The force graph of a soft tactile Cherry switch

These are called force graphs and indicate how much force in cN (Y axis) is required to press a key during its travel in mm (X axis) from rest position to bottom. Did you notice the sudden drop in force about halfway through the travel? That's how tactile switches announce that the key has been actuated, as shown by the little black dot.

Sound

As I mentioned earlier, certain switches emit a “click” sound when they get actuated. This extra piece of feedback can either be pleasurable (even nostalgic if you will) or totally annoying. I strongly advise you to investigate how the people who will be sitting next to you feel about clicky keyboards before buying one. And telling them that all keyboards sounded like that back in the 80’s no longer cuts it.

Wrapping it up

I know it sounds crazy, but I barely scratched the surface of what there is to know about mechanical keyboards. However, the purpose of this post was to give you an idea about what they are and where they come from, and I hope I’ve succeeded in that.

If you’re eager to know more, fear not. I intend to write more about this topic. Next time, I’ll start reviewing some of my favorite mechanical keyboards. In the meantime, take a look at the official guide to mechanical keyboards put together by this growing community of keyboard enthusiasts.


April 24, 2014 Posted in speaking  |  programming

BDD all the way down

When I started doing TDD a few years ago, I often felt an inexplicable gap between the functionality described in the requirements and the tests I was writing to drive my implementation. BDD turned out to be the answer.

TDD makes sense and improves the quality of your code. I don't think anybody could argue against this simple fact. Is TDD flawless? Well, that's a whole different discussion.

Turtles all the way down

But let’s back up for a moment.

Picture this: you’ve just joined a team tasked with developing some kind of software which you know nothing about. The only thing you know is that it’s an implementation of Conway’s Game of Life as a web API. Your first assignment is to develop a feature in the system:

In order to display the current state of the Universe
As a Game of Life client
I want to get the next generation from underpopulated cells

You want to implement this feature by following the principles of TDD, so you know that the first thing you should do is start writing a failing test. But, what should you test? How would you express this requirement in code? What’s even a cell?

If you’re looking at TDD for some guidance, well, I’m sorry to tell you that you’ll find none. TDD has one rule. Do you remember what the first rule of TDD is? (No, it’s not you don’t talk about TDD):

Thou shalt not write a single line of production code without a failing test.

And that's basically it. TDD doesn't tell you what to test, how you should name your tests, or even how to understand why they fail in the first place.

As a programmer, the only thing you can think of, at this point, is writing a test that checks for nulls. Arguably, that’s the equivalent of trying to start a car by emptying the ashtrays.

Let the behavior be your guide

What if we stopped worrying about writing tests for the sake of, well, writing tests, and instead focused on verifying what the system is supposed to do in a certain situation? That’s called behavior and a lot of good things may come out of letting it be the driving force when developing software. Dan North noticed this during his research, which ultimately led him to the formalization of Behavior-driven Development:

I started using the word “behavior” in place of “test” in my dealings with TDD and found that not only did it seem to fit but also that a whole category of coaching questions magically dissolved. I now had answers to some of those TDD questions. What to call your test is easy – it’s a sentence describing the next behavior in which you are interested. How much to test becomes moot – you can only describe so much behavior in a single sentence. When a test fails, simply work through the process described above – either you introduced a bug, the behavior moved, or the test is no longer relevant.

Focusing on behavior has other advantages, too. It forces you to understand the context in which the feature you're developing is going to work. It makes you see the value that it brings to the users. And, last but not least, it forces you to ask questions about the concepts mentioned in the requirements.

So, what’s the first thing you should do? Well, let’s start by understanding what a generation of cells is and then we can move on to the concept of underpopulation:

Any live cell with fewer than two live neighbors dies, as if caused by underpopulation.
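
Translated into code, the rule boils down to a simple predicate. Here's a minimal sketch, assuming a Cell with Alive and Neighbors properties like the one we'll use in the tests further down:

public class Cell
{
    public bool Alive { get; set; }
    public int Neighbors { get; set; }
}

public static class UnderpopulationRule
{
    // Death by underpopulation: a live cell with fewer than two live neighbors dies.
    public static bool Applies(Cell cell)
    {
        return cell.Alive && cell.Neighbors < 2;
    }
}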

Acceptance tests

At this point we’re ready to write our first test. But, since there’s still no code for the feature, what exactly are we supposed to test? The answer to that question is simpler than you might expect: the system itself.

We're implementing a requirement for our Game of Life web API. The user is supposed to make an HTTP request to a certain URL, sending a list of cells formatted as JSON, and get back a response containing the same list of cells after the rule of underpopulation has been applied. We'll know we've fulfilled the requirement when the system does exactly that. That, in other words, is the requirement's acceptance criteria. It sure sounds like a good place to start writing a test.

Let’s express it the way formalized by Dan North:

Scenario: Death by underpopulation
    Given a live cell has fewer than 2 live neighbors
    When I ask for the next generation of cells
    Then I should get back a new generation
    And it should have the same number of cells
    And the cell should be dead

Here, we call the acceptance criteria a scenario and use the Given-When-Then syntax to express its premises, action and expected outcome. Having a common language like this for expressing software requirements is one of the greatest innovations brought by BDD.

So, we said we were going to test the system itself. In practice, that means we must put ourselves in the user's shoes and let the test interact with the system at its outermost boundaries. In the case of a web API, that translates into sending and receiving HTTP requests.

In order to turn our scenario into an executable test, we need some kind of framework that can map the Given-When-Then sentences to methods. In the realm of .NET, that framework is called SpecFlow. Here’s how we could use it together with C# to implement our test:

IEnumerable<dynamic> generation;
HttpResponseMessage response;
IEnumerable<dynamic> nextGeneration;

[Given]
public void Given_a_live_cell_has_fewer_than_COUNT_live_neighbors(int count)
{
    generation = new[]
                 {
                     new { Alive = true, Neighbors = --count }
                 };
}

[When]
public void When_I_ask_for_the_next_generation_of_cells()
{
    response = WebClient.PostAsJson("api/generation", generation);
}

[Then]
public void Then_I_should_get_back_a_new_generation()
{
    response.ShouldBeSuccessful();
    nextGeneration = ParseGenerationFromResponse();
    nextGeneration.ShouldNot(Be.Null);
}

[Then]
public void Then_it_should_have_the_same_number_of_cells()
{
    nextGeneration.Should(Have.Count.EqualTo(1));
}

[Then]
public void Then_the_cell_should_be_dead()
{
    var isAlive = (bool)nextGeneration.Single().Alive;
    isAlive.ShouldBeFalse();
}

IEnumerable<dynamic> ParseGenerationFromResponse()
{
    return response.ReadContentAs<IEnumerable<dynamic>>();
}

As you can see, each step of the scenario is mapped directly to a method by matching the words in the sentence, separated by underscores. Note how the all-caps COUNT placeholder captures the number from the step ("fewer than 2 live neighbors") and passes it to the method as the count parameter.

At this point, we can finally run our acceptance test and watch it fail:

Given a live cell has fewer than 2 live neighbors
-> done.
When I ask for the next generation of cells
-> done.
Then I should get back a new generation
-> error: Expected: in range (200 OK, 299)
          But was:  404 NotFound

As expected, the test fails on the first assertion with an HTTP 404 response. We know this is correct, since there's nothing listening at that URL on the other end. Now that we understand why the test fails, we can officially start implementing the feature.

The TDD cycle

We’re about to get our hands dirty (albeit keeping the code clean) and dive into the system. At this point we follow the normal rules of Test-driven development with its Red-Green-Refactor cycle. However, we’re faced with the exact same problem we had at the boundaries of the system. What should be our first test? Even in this case, we’ll let the behavior be our guiding light.

If you have experience writing unit tests, my guess is that you're used to writing one test class for every production code class. For example, given SomeClass you'd write your tests in SomeClassTests. This is fine, but I'm here to tell you that there's a better way. If we focus on how a class should behave in a given situation, wouldn't it be more natural to have one test class per scenario?

Consider this:

public class When_getting_the_next_generation_with_underpopulation
    : ForSubject<GenerationController>
{
}

This class will contain all the tests related to how the GenerationController class behaves when asked to get the next generation of cells given that one of them is underpopulated.

But, wait a minute. Wouldn’t that create a lot of tiny test classes?

Yes. One per scenario to be precise. That way, you’ll know exactly which class to look at when you’re working with a certain feature. Besides, having many small cohesive classes is better than having giant ones with all kinds of tests in them, don’t you think?

Let's get back to our test. Since we're testing one single scenario, we can structure it much in the same way as we would define an acceptance test:

For each scenario there's exactly one context to set up, one action and one or more assertions on the outcome.

As always, readability is king, so we’d like to express our test in a way that gets us as close as possible to human language. In other words, our tests should read like specifications:

public class When_getting_the_next_generation_with_underpopulation
    : ForSubject<GenerationController>
{
    static Cell solitaryCell;
    static IEnumerable<Cell> currentGen;
    static IEnumerable<Cell> nextGen;

    Establish context = () =>
    {
        solitaryCell = new Cell { Alive = true, Neighbors = 1 };
        currentGen = AFewCellsIncluding(solitaryCell);
    };

    Because of = () =>
        nextGen = Subject.GetNextGeneration(currentGen);

    It should_return_the_next_generation_of_cells = () =>
        nextGen.ShouldNotBeEmpty();

    It should_include_the_solitary_cell_in_the_next_generation = () =>
        nextGen.ShouldContain(solitaryCell);

    It should_only_include_the_original_cells_in_the_next_generation = () =>
        nextGen.ShouldContainOnly(currentGen);

    It should_mark_the_solitary_cell_as_dead = () =>
        solitaryCell.Alive.ShouldBeFalse();
}

In this case I'm using a test framework for .NET called Machine.Specifications, or MSpec. MSpec belongs to a category of frameworks called BDD-style testing frameworks. There are at least a few of them for almost every language known to man (including Haskell). What they all have in common is a strong focus on allowing you to express your tests in a way that resembles requirements.

Speaking of readability, see all those underscores, static variables and lambdas? Those are just the tricks MSpec has to pull on the C# compiler in order to give us a domain-specific language for expressing requirements while still producing runnable code. Other frameworks have different techniques to get as close as possible to human language without angering the compiler. Which one you choose is largely a matter of preference.
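
If you're curious about how that works under the hood, here's a simplified sketch of the idea (not MSpec's actual source code): the keywords are just delegate types, and each step of the specification is a field of one of those types assigned a lambda, which the runner then discovers through reflection and invokes in order.

// A simplified sketch of the idea behind MSpec-style keywords
// (illustrative only, not the framework's real implementation).
public delegate void Establish();
public delegate void Because();
public delegate void It();

public class When_something_interesting_happens
{
    // A reflection-based runner finds fields of these delegate types and invokes
    // them in order: context setup first, then the action, then every assertion.
    Establish context = () => { /* arrange the scenario */ };

    Because of = () => { /* perform the single action under test */ };

    It should_have_some_observable_outcome = () => { /* assert on the outcome */ };
}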

Wrapping it up

I’ll leave the implementation of the GenerationController class as an exercise for the Reader, since it’s outside the scope of this article. If you like, you can find mine over at GitHub.
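
That said, just to give you an idea of the general shape it might take, here's a bare-bones sketch of one possible implementation. Treat it as an illustration only: the [HttpPost] attribute and routing details are assumptions about the Web API setup, and it's not necessarily what you'll find in the repository.

// A bare-bones sketch of a possible GenerationController (ASP.NET Web API).
// It assumes the Cell type with Alive and Neighbors properties used by the specs above.
public class GenerationController : ApiController
{
    [HttpPost]
    public IEnumerable<Cell> GetNextGeneration(IEnumerable<Cell> currentGeneration)
    {
        var nextGeneration = currentGeneration.ToList();

        foreach (var cell in nextGeneration)
        {
            // Death by underpopulation.
            if (cell.Alive && cell.Neighbors < 2)
            {
                cell.Alive = false;
            }
        }

        return nextGeneration;
    }
}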

What's important here is that after a few rounds of Red-Green-Refactor we'll finally be able to run our initial acceptance test and see it pass. At that point, we'll know that we have successfully implemented our feature.

Let’s recap our entire process with a picture:

The Outside-in development cycle

This approach to developing software is called Outside-in development and is described beautifully in Steve Freeman and Nat Pryce's excellent book Growing Object-Oriented Software, Guided by Tests.

In our little exercise, we grew a feature in our Game of Life web API from the outside-in, following the principles of Behavior-driven Development.

Presentation

You can see a complete recording of the talk I gave at Foo Café last year about this topic. The presentation pretty much covers the material described in this article and expands a bit on the programming aspect of BDD. I hope you'll find it useful. If you have any questions, please feel free to contact me directly or write in the comments.

Here’s the abstract:

In this session I'll show how to apply the Behavior Driven Development (BDD) cycle when developing a feature from top to bottom in a fictitious .NET web application. Building on the concepts of Test-driven Development, I'll show you how BDD helps you produce tests that read like specifications, by forcing you to focus on what the system is supposed to do, rather than how it's implemented.

Starting from the acceptance tests seen from the user’s perspective, we’ll work our way through the system implementing the necessary components at each tier, guided by unit tests. In the process, I’ll cover how to write BDD-style tests both in plain English with SpecFlow and in C# with MSpec. In other words, it’ll be BDD all the way down.


April 16, 2013 Posted in programming  |  autofixture

General-purpose customizations with AutoFixture

If you've been using AutoFixture in your tests for a while, chances are you've already come across the concept of customizations. If you're not familiar with it, let me give you a quick introduction:

A customization is a group of settings that, when applied to a given Fixture object, control the way AutoFixture will create instances for the types requested through that Fixture.
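
In code terms, a customization is just a class implementing AutoFixture's ICustomization interface, which is essentially this single-method contract:

// The customization contract, shown here for reference.
public interface ICustomization
{
    // Receives the fixture being configured so the implementation can
    // register its own object-creation rules on it.
    void Customize(IFixture fixture);
}

You apply one by passing an instance of it to the Fixture.Customize method, as we'll do shortly.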

At this point you might find yourself feeling an irresistible urge to know everything there's to know about customizations. If that's the case, don't worry. There are a few resources online where you can learn more about them. For example, I wrote about how to take advantage of customizations to group together test data related to specific scenarios.

In this post I’m going to talk about something different which, in a sense, is quite the opposite of that: how to write general-purpose customizations.

A (user) story about cooking

It's hard to talk about test data without a bit of context. So, for the sake of this post, I thought we would pretend to be working on a somewhat realistic project. The system we're going to build is an online catalogue of food recipes. The domain, at the very basic level, consists of three concepts:

  • Cookbook
  • Recipes
  • Ingredients

Basic domain model for a recipe catalogue.

Now, let's imagine that in our backlog of requirements we have one where the user wishes to be able to search for recipes that contain a specific set of ingredients. Or in other words:

As a foodie, I want to know which recipes I can prepare with the ingredients I have, so that I can get the best value for my groceries.

From the tests…

As usual, we start out by translating the requirement at hand into a set of acceptance tests. In order to do that, we need to tell AutoFixture how we'd like the test data for our domain model to be generated.

For this particular scenario, we need every Ingredient created in the test fixture to be randomly chosen from a fixed pool of objects. That way we can ensure that all recipes in the cookbook will be made up of the same set of ingredients.

Here's what such a customization would look like:

public class RandomIngredientsFromFixedSequence : ICustomization
{
    private readonly Random randomizer = new Random();
    private IEnumerable<Ingredient> sequence;

    public void Customize(IFixture fixture)
    {
        InitializeIngredientSequence(fixture);
        fixture.Register(PickRandomIngredientFromSequence);
    }

    private void InitializeIngredientSequence(IFixture fixture)
    {
        this.sequence = fixture.CreateMany<Ingredient>();
    }

    private Ingredient PickRandomIngredientFromSequence()
    {
        var randomIndex = this.randomizer.Next(0, sequence.Count());
        return sequence.ElementAt(randomIndex);
    }
}

Here we’re creating a pool of ingredients and telling AutoFixture to randomly pick one of those every time it needs to create an Ingredient object by using the Fixture.Register method.
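
In case you haven't come across Fixture.Register before, it essentially maps a type to a factory function that AutoFixture will invoke whenever that type is requested, directly or as part of a larger object graph. A minimal stand-alone example (the ingredient name is made up purely for illustration):

var fixture = new Fixture();

// From now on, every Ingredient the fixture needs -- whether requested directly
// or while building a larger object graph -- comes from this factory.
fixture.Register(() => new Ingredient("Basil"));

var ingredients = fixture.CreateMany<Ingredient>(); // all named "Basil"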

Since we'll be using xUnit as our test runner, we can take advantage of the AutoFixture Data Theories to keep our tests succinct by using AutoFixture in a declarative fashion. In order to do so, we need to write an xUnit Data Theory attribute that tells AutoFixture to use our new customization:

public class CookbookAutoDataAttribute : AutoDataAttribute
{
    public CookbookAutoDataAttribute()
        : base(new Fixture().Customize(
                   new RandomIngredientsFromFixedSequence()))
    {
    }
}

If you prefer to use AutoFixture directly in your tests, the imperative equivalent of the above is:

var fixture = new Fixture();
fixture.Customize(new RandomIngredientsFromFixedSequence());

At this point, we can finally start writing the acceptance tests to satisfy our original requirement:

public class When_searching_for_recipies_by_ingredients
{
    [Theory, CookbookAutoData]
    public void Should_only_return_recipes_with_a_specific_ingredient(
        Cookbook sut,
        Ingredient ingredient)
    {
        // When
        var recipes = sut.FindRecipies(ingredient);
        // Then
        Assert.True(recipes.All(r => r.Ingredients.Contains(ingredient)));
    }

    [Theory, CookbookAutoData]
    public void Should_include_new_recipes_with_a_specific_ingredient(
        Cookbook sut,
        Ingredient ingredient,
        Recipe recipeWithIngredient)
    {
        // Given
        sut.AddRecipe(recipeWithIngredient);
        // When
        var recipes = sut.FindRecipies(ingredient);
        // Then
        Assert.Contains(recipeWithIngredient, recipes);
    }
}

Notice that during these tests AutoFixture will have to create Ingredient objects in a couple of different ways:

  • indirectly when constructing Recipe objects associated to a Cookbook
  • directly when providing arguments for the test parameters

As far as AutoFixture is concerned, it doesn’t really matter which code path leads to the creation of ingredients. The algorithm provided by the RandomIngredientsFromFixedSequence customization will apply in all situations.
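
To make that concrete, here's a small imperative sketch (purely illustrative) showing both code paths drawing from the same fixed pool:

var fixture = new Fixture();
fixture.Customize(new RandomIngredientsFromFixedSequence());

// Directly created ingredients come from the fixed pool...
var someIngredients = fixture.CreateMany<Ingredient>();

// ...and so do the ones AutoFixture generates indirectly while building
// Recipe objects (and, in turn, the Cookbook that contains them).
var someRecipes = fixture.CreateMany<Recipe>();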

…to the implementation

After a couple of Red-Green-Refactor cycles spawned from the above tests, it’s not completely unlikely that we might end up with some production code similar to this:

// Cookbook.cs
public class Cookbook
{
    private readonly ICollection<Recipe> recipes;

    public Cookbook(IEnumerable<Recipe> recipes)
    {
        this.recipes = new List<Recipe>(recipes);
    }

    public IEnumerable<Recipe> FindRecipies(params Ingredient[] ingredients)
    {
        return recipes.Where(r => r.Ingredients.Intersect(ingredients).Any());
    }

    public void AddRecipe(Recipe recipe)
    {
        this.recipes.Add(recipe);
    }
}

// Recipe.cs
public class Recipe
{
    public readonly IEnumerable<Ingredient> Ingredients;

    public Recipe(IEnumerable<Ingredient> ingredients)
    {
        this.Ingredients = ingredients;
    }
}

// Ingredient.cs
public class Ingredient
{
    public readonly string Name;

    public Ingredient(string name)
    {
        this.Name = name;
    }
}

Nice and simple. But let’s not stop here. It’s time to take it a bit further.

An opportunity for generalization

Given the fact that we started working from a very concrete requirement, it's only natural that the RandomIngredientsFromFixedSequence customization we came up with encapsulates a behavior that is specific to the scenario at hand. However, if we take a closer look we might notice the following:

The only part of the algorithm that is specific to the original scenario is the type of the objects being created. The rest can easily be applied whenever you want to create objects that are picked at random from a predefined pool.

An opportunity for writing a general-purpose customization has just presented itself. We can’t let it slip.

Let’s see what happens if we extract the Ingredient type into a generic argument and remove all references to the word “ingredient”:

public class RandomFromFixedSequence<T> : ICustomization
{
    private readonly Random randomizer = new Random();
    private IEnumerable<T> sequence;

    public void Customize(IFixture fixture)
    {
        InitializeSequence(fixture);
        fixture.Register(PickRandomItemFromSequence);
    }

    private void InitializeSequence(IFixture fixture)
    {
        this.sequence = fixture.CreateMany<T>();
    }

    private T PickRandomItemFromSequence()
    {
        var randomIndex = this.randomizer.Next(0, sequence.Count());
        return sequence.ElementAt(randomIndex);
    }
}

Voilà. We just turned our scenario-specific customization into a pluggable algorithm that changes the way objects of any type are going to be generated by AutoFixture. In this case the algorithm will create items by picking them at random from a fixed sequence of T.

The CookbookAutoDataAttribute can easily be changed to use the general-purpose version of the customization by closing the generic argument with the Ingredient type:

public class CookbookAutoDataAttribute : AutoDataAttribute
{
    public CookbookAutoDataAttribute()
        : base(new Fixture().Customize(
                   new RandomFromFixedSequence<Ingredient>()))
    {
    }
}

The same is true if you’re using AutoFixture imperatively:

var fixture = new Fixture();
fixture.Customize(new RandomFromFixedSequence<Ingredient>());
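
And because RandomFromFixedSequence<T> no longer knows anything about cooking, nothing stops us from reusing it for completely unrelated types in other test suites. A purely illustrative example:

var otherFixture = new Fixture();

// The same strategy applied to an unrelated type: strings will now be
// picked at random from a fixed pool, just like the ingredients were.
otherFixture.Customize(new RandomFromFixedSequence<string>());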

Wrapping up

As I said before, customizations are a great way to set up test data for a specific scenario. Sometimes these configurations turn out to be useful in more than just one situation.

When such opportunity arises, it’s often a good idea to separate out the parts that are specific to a particular context and turn them into parameters. This allows the customization to become a reusable strategy for controlling AutoFixture’s behavior across entire test suites.