August 25, 2016 Posted in programming

Git Undo

Tell me if you recognize this scenario: you’re in the middle of rewriting your local commits when you suddenly realize that you have gone too far and, after one too many rebases, you are left with a history that looks nothing like what you wanted. No? Well, I certainly do. And when that happens, I wish I could just CTRL+Z my way back to where I started. Of course, it’s never that simple — not even in a GUI.

It was in one of those moments of despair that I finally decided to set out to create my own git undo command. Here’s what I came up with and how I got there.

The Reflog

My story of undoing things in Git starts with the reflog. What’s the reflog, you might ask. Well, I’m here to tell you: every time a branch reference moves1, Git records its previous value in a sort of local journal. This journal is called the reference log — or reflog for short.

In a repository there is a reflog for each branch as well as a separate one for the HEAD reference.

Getting the list of entries in a branch’s reflog is as easy as saying git reflog followed by the name of the branch:

git reflog master

shows the reflog entries for the master branch:

Output of git-reflog for the master branch

If you instead wanted to look at HEAD’s own reflog, you would simply omit the argument and say:

git reflog

which yields the same output, only for the HEAD reference:

Output of git-reflog for the HEAD reference

What isn’t immediately obvious is that the entries in the reflog are stored in reverse chronological order with the most recent one on top.

What is obvious, on the other hand, is that each entry has its own index. This turns out to be extremely useful, because we can use that index to directly reference the commit associated with a certain reflog entry. But more on that later. For now, suffice it to say that in order to reference a reflog entry, we have to use the syntax:

reference@{index}
where the two parts separated by the @ sign are:

  • reference, which can either be the name of a branch or HEAD
  • index, which is the entry’s position in the reflog2

For example, let’s say that we wanted to look at the commit HEAD was referencing two positions ago. To do that, we could use the git show command followed by HEAD@{2}:

git show HEAD@{2}

If we, instead, wanted to look at the commit master was referencing just before the latest one, we would say:

git show master@{1}

The Undo Alias

Here’s my point: the reflog keeps track of the history of commits referenced by a branch, just like a web browser keeps track of the history of URLs we visit.

This means that the commit referenced by @{1} is always the commit that was referenced just before the current one.

If we were to combine the reflog with the git reset command like this:

git reset --hard master@{1}

we would suddenly have a way to move HEAD, the index and the working directory to the previous commit referenced by a branch. This is essentially the same as pressing the back button in our web browser!

At this point, we have everything we need to implement our own git undo command, which we do in the form of an alias. Here it is:

git config --global alias.undo '!f() { \
    git reset --hard $(git rev-parse --abbrev-ref HEAD)@{${1-1}}; \
}; f'

I realize it’s quite a mouthful, so let’s break it down piece by piece:

  1. !f() { ... }; f
    Here, we’re defining the alias as a shell function named f, which is then invoked immediately.

  2. $(git rev-parse --abbrev-ref HEAD)@{...}
    We use the git rev-parse command followed by the --abbrev-ref option to get the name of the current branch, which we then concatenate with @{...} to form the reference to a previous position in the reflog (e.g. master@{1}).

  3. ${1-1}
    We specify the position in the reflog as the first parameter $1 with a default value of 1. This is the whole reason why we defined the alias as a shell function: to be able to provide a default value for the parameter using the standard Bash syntax.
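
If the ${1-1} expansion looks cryptic, here’s a tiny sketch you can paste into any Bash shell to see the default-value syntax in isolation (the function name f is just a stand-in, as in the alias itself):

f() { echo "undoing ${1-1} step(s)"; }
f       # prints: undoing 1 step(s)
f 3     # prints: undoing 3 step(s)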

The beauty of using an optional parameter like this is that it allows us to undo any number of operations. At the same time, if we don’t specify anything, it’s going to undo just the latest one.

Trying It Out

Let’s say that we have a history that looks like this:3

History before the rewrite

We have two branches — master and feature — that have diverged at commit C. For the sake of our example, let’s also assume that we wanted to remove the latest commit in master — that is commit F — and then merge the feature branch:

git reset --hard HEAD^
git merge feature

At this point, we would end up with a history looking like this:

History after the rewrite

As you can see, everything went fine — but we’re still not happy. For some reason, we want to go back to the way history was before. In practice, this means we need to undo our latest two operations: the merge and the reset. Time to whip out that undo alias:

git undo 2

This moves the master branch (and HEAD along with it) to the commit referenced by master@{2} — that is, the commit master was pointing to 2 reflog entries ago. Let’s go ahead and check our history again:

History restored with the undo alias

And everything is back the way it was. \o/

But what if we wanted to undo the undo? Easy. Since git undo itself creates an entry in the reflog, it’s enough to say:

git undo

which, without an argument, is the equivalent of saying git undo 1.
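
If you’re curious about why this works, take a peek at the reflog right after the first undo. The top entries would look roughly like this (illustrative rather than verbatim output; the exact wording varies between Git versions):

git reflog master
# master@{0}: reset: moving to master@{2}
# master@{1}: merge feature: Merge made by the 'recursive' strategy.
# ...

The undo itself sits at position 0, which means master@{1} is exactly where the branch pointed before it. That’s where the next git undo will take you.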

Did you find this useful? If you're interested in learning other techniques like the one described in this article, I wrote down a few more in my Pluralsight course Advanced Git Tips and Tricks.

  1. That is, it’s modified to point to a different commit than it did before.

  2. You can also use dates here. Try for example master@{yesterday} or HEAD@{2.days.ago} — pretty amazing, don’t you think?

  3. I like my history succinct and colorful. For this reason, I never use the plain git log; instead, I define an alias called lg where I use the --pretty option to customize its output. If you want to know more, I wrote about this a while ago when talking about the importance of a good-looking history.

June 20, 2016 Posted in programming

On Being a Good .NET Developer

While reading Rob Ashton’s thought-provoking piece titled “Why you can’t be a good .NET developer” over my morning cappuccino the other day, I found myself nodding in agreement for the first few paragraphs.

Having been a consultant for the past fifteen years, I’ve certainly come across more than a few teams where the “lowest common denominator” was without a doubt the driving force behind every decision. This isn’t in any way unique to .NET, though. I have seen the exact same thing happen on other platforms as well: Java, JavaScript and — to some degree — even C and C++1.

What they all have in common is a humongous active user base.

You see, it’s simply a matter of statistics: the more popular the platform2, the higher the number of beginners. The two variables are directly proportional — some might even argue the relationship is exponential. If you’re looking for a concrete example, consider the number of novice JavaScript developers brought in by the popularity of jQuery.

The problem is not that .NET has an unusually high number of “lowest common denominators”. That number is simply higher than on platforms with a narrower, mostly self-selected audience.

The problem — and this is where I disagree with the underlying message in that article — is condemning a platform based on the number of inexperienced programmers who work with it.

I also don’t think that fleeing is the right way to handle the situation. I don’t know about you, but I like to apply the Boy Scout Rule to more than just code; when I join a team, I want to leave it in better shape than I found it. This means that if I join a team that is dominated by inexperienced programmers, I don’t see it as an excuse to hold back on quality. Quite the opposite: I feel compelled to introduce the team to new ways of doing things, new perspectives. Note that I don’t force anything on anyone; instead, I try to lead by example.

For instance, if I see that the team is stuck using TFS, I will still use Git on my machine and add a bridge like git-tfs to collaborate. Sooner or later, without fail, someone is going to wonder why I do that. Driven by curiosity, they’ll ask me to explain how Git is better than TFS and I’ll be more than happy to tell them all about it. After a while, that same person — or someone else on the team — is going to start using Git on their own machine and, soon enough, the entire team will be sitting in a console firing off Git commands like there’s no tomorrow, wondering why they hadn’t learned it earlier.

I never compromise on excellence. It’s just that with some teams, the way to get there is longer than with others.

To me the solution isn’t to run away from beginners. It’s to inspire and mentor them so that they won’t stay beginners forever and instead go on to do the same for other people. That applies as much to .NET as it does to any other platform or language.

If you aren’t the type of person who has the time or the interest to raise the lowest common denominator, that’s perfectly fine. I do believe you’re better off moving somewhere else where your ambitions aren’t being held back by inexperienced team members. As for myself, I’ll stay behind — teaching.

  1. C and C++ have a steep learning curve which forces programmers to move past the beginner stage far more quickly than with other languages in order to get anything done. So, while C and C++ are immensely widespread, the number of novices who work with them tends to stay relatively low.

  2. Just to be clear, by “platform” I mean a programming language together with its ecosystem of libraries, frameworks and tools.

August 14, 2014 Posted in programming

The importance of a good-looking history

“Study the past if you would define the future.” ~Confucius

Since the dawn of civilization, common sense has taught us that the way forward starts by knowing how we got here in the first place. While this powerful principle applies to practically all aspects of life, it’s especially true when developing software.

For us programmers, the rear-view mirror through which we look at the history of a code base before we go on to shape its future is version control. Among all the information captured by a version control tool, the most critical pieces are the commit messages.

Git’s view of history

When we’re trying to understand how a piece of software has evolved over time, the first thing we tend to do is look at the trails of messages left by the programmers who came before us. Those sentences hold the key to understanding the choices that molded the software into what it is today.

In other words, what you write in these messages is crucial, and you should put extra effort into making them as loud and clear as possible.

This is true regardless of what version control system you happen to be using. However, it is especially true for Git. Why? Because Git simply holds the history of your code to a higher standard.

As Linus Torvalds explained in his excellent Tech Talk at Google back in 2007, Git evolved out of the need to manage the development of the Linux kernel, a humongous open source project with a 20-year history and hundreds of contributors from all around the world.

If there ever was a software project where source code history plays a critical role, the Linux kernel is it.

Torvalds’ attention to history is also reflected in the features he built into his own distributed version control tool. To put it in his own words:

I want clean history, but that really means (a) clean and (b) history.

Regarding the “clean” part, he goes on to elaborate:

Keep your own history readable.

Some people do this by just working things out in their head first, and not making mistakes. But that’s very rare, and for the rest of us, we use “git rebase” etc. while we work on our problems.

Don’t expose your crap.

When it comes to “history”, he says:

People can (and probably should) rebase their private trees (their own work). That’s a cleanup. But never other people’s code. That’s a “destroy history”

You see, Git grants you all the tools you need to go back in time and rewrite your own commits (for example by changing their order, contents and messages) because having a clear history of the code matters. It matters to the sanity of whoever is working on it, present or future.

A legacy of e-mails

Having talked about the importance of keeping your history clean, let’s take the concept one step further.

When you use Git, you should pay attention not only to the contents of your commit messages, but also to how they're formatted.

There’s a reason for that. As Torvalds himself stated in his Google talk, for a long period of time the history of the Linux kernel was captured in e-mail threads with patches attached:

“For the first 10 years of kernel maintenance we literally used tarballs and patches.” ~Linus Torvalds

Even in the early days of Git, e-mail was still used as a way to send patches among collaborators of the Linux project.

If you look closely, you’ll notice that the concept of e-mail is pretty pervasive throughout Git. Here’s some evidence off the top of my head:

  • Every user has to have an e-mail address which is always part of the commit’s metadata
  • The git format-patch and git am commands are specifically designed to convert commits into e-mails with patches as attachments
  • Both git blame and git shortlog have special options to display the committers’ e-mail addresses instead of their names
  • The git log command has dedicated placeholders to indicate a commit message’s subject and body
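
To make the first three of those concrete, here’s roughly what they look like on the command line (the patch file name below is just the kind of name git format-patch generates, not a real one):

git format-patch -1 HEAD          # turn the latest commit into an e-mail-style patch file
git am 0001-fix-the-thing.patch   # apply such a patch file as a new commit
git shortlog -e                   # summarize commits by author, with e-mail addresses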

The last one is particularly interesting. Git seems to assume that a commit message is divided in two parts:

  1. A short one-sentence summary
  2. An optional longer description defined in its own paragraph separated by an empty line

A “well-formed” Git commit message would then look like this:

A short summary, possibly under 50 characters.

A longer description of the change and the reasoning
behind it for the future generations to know.
Even better if it's wrapped at 80 characters so that
it will look good in the console.

If you follow this simple convention, Git will reward you by going out of its way to show you your history in the prettiest way possible. And that’s a good thing.

Formatting matters

Once you fall into the habit of keeping your commit messages under 50 characters and relegate any longer description to a separate paragraph, you can start pretty-printing your history in almost any way you like.

For example, you could choose to only display the commits’ summaries by using the %s placeholder in the --format option of git log:

Simple example of pretty-printing the commit history
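
For reference, the command behind that kind of output is simply something like:

git log --format="%s"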

Or you could go crazy with all kinds of colors and indentation:

Gorgeous-looking commit history

The format string I used in this particular example can be broken down as:

"%C(cyan)%s%Creset %C(dim white)(%ar)%Creset%n%w(72,4,4)%b"


  • %C(cyan) colors the following text in cyan
  • %s shows the commit summary
  • %Creset restores the default color for the text
  • %C(dim white) colors the following text in grey
  • %ar shows the time of the commit relative to now
  • %n adds a newline character
  • %w(72,4,4) wraps the following text at 72 characters and indents both the first line and the remaining ones with 4 spaces
  • %b shows the long description of the commit, if any
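
For completeness, here’s one way to bake that exact format string into an alias of your own (the alias name lg is just a suggestion):

git config --global alias.lg 'log --format="%C(cyan)%s%Creset %C(dim white)(%ar)%Creset%n%w(72,4,4)%b"'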

GitHub itself follows this convention when showing the commit history of a project. In fact, they will only show you the summary of each commit by default. If there’s a longer description available, they allow you to expand it with the press of a button.

Pretty-printed commit message on GitHub

Enforcing the rule

Of course, this all works best if everyone on the project agrees to follow the convention.

But how do you ensure that the team sticks to the golden rule of pretty commits™?

Well, you give your peers a gentle nudge at exactly the right moment: just when they’re about to make a commit. This is what Jeff Atwood calls the “Just In Time” theory:

You do it by showing them:

  • the minimum helpful reminder
  • at exactly the right time

GitHub does this already, both on the Web:

Commit message being validated in the GitHub web UI

and in its desktop clients:

Commit message being validated in GitHub for Mac...

...and in GitHub for Windows

But what if you prefer to use Git from the command line, the way it should be?

Easy. You write a shell script that gets triggered by Git’s client-side hooks every time you’re about to make a commit. In that script, you make sure the message is formatted according to the rules.

Here’s my version of it:

#!/bin/bash
# A hook script that checks the length of the commit message.
# Called by "git commit" with one argument, the name of the file
# that has the commit message. The hook should exit with non-zero
# status after issuing an appropriate message if it wants to stop the
# commit. The hook is allowed to edit the commit message file.

# ANSI escape codes used to color the warning message
YELLOW="\033[1;33m"
DEFAULT="\033[0m"

# The path of the file containing the commit message,
# passed by Git as the first argument
messageFilePath=$1

function printWarning {
    message=$1
    printf >&2 "${YELLOW}$message${DEFAULT}\n"
}

function printNewline {
    printf "\n"
}

function captureUserInput {
    # Assigns stdin to the keyboard
    exec < /dev/tty
}

function confirm {
    question=$1
    read -p "$question [y/n]"$'\n' -n 1 -r
}

message=$(cat "$messageFilePath")
firstLine=$(printf "$message" | sed -n 1p)
firstLineLength=${#firstLine}

test $firstLineLength -lt 51 || {
    printWarning "Tip: the first line of the commit message shouldn't be longer than 50 characters and yours was $firstLineLength."
    captureUserInput
    confirm "Do you want to modify the message in your editor or just commit it?"
    printNewline

    if [[ $REPLY =~ ^[Yy]$ ]]; then
        $EDITOR "$messageFilePath"
    fi

    exit 0
}
In order to use it in your local repo, you’ll have to manually copy the script file into the .git/hooks directory and call it commit-msg. Finally, you’ll have to grant execute rights to the file in order to make it runnable:

cp commit-msg somerepo/.git/hooks
chmod +x somerepo/.git/hooks/commit-msg

From that point forward, every time you attempt to create a commit that doesn’t follow the rules, you’ll get a chance to do the right thing:

The commit-msg shell script in action

If you choose to press y, the commit message will open up in your default text editor from which you can rewrite it properly. Pressing n, on the other hand, will override the rule altogether and commit the message as it is.

Not that you’d ever want to do that.

June 25, 2014 Posted in gaming

Leveraging the cloud for fun and games

The story of a group of programmers who one day decided to have a Counter-Strike: Global Offensive deathmatch, but didn't have the hardware to host their own game. So they took it to the cloud, learning a few lessons along the way.

Friday night, February 21, 2014. That’s when the tretton37 Counter-Strike: Global Offensive fragfest was set to start. Avid gamers looking to spill virtual blood were eager to join in from our offices in Lund and Stockholm. A few more would play over the Internet.

CS:GO icon by griddark

The time and place were set. Pizzas were ordered. Everything was ready to go. Except for one thing:

We didn’t have a dedicated Counter-Strike server to host the game on.

Finding a spare machine to dedicate for that one night wasn’t an easy task, given our requirements:

  • The host should be reachable from the Internet
  • The machine should have enough hardware to handle a CS:GO game with 15+ players
  • The machine should be able to scale up as more players join the game
  • The whole thing should be a breeze to set up

For days I pondered my options when, suddenly, it hit me:

Where's the place to find commodity hardware that's available for rent, is on the Internet and can scale at will?

The cloud, of course! This realization fell on my head like the proverbial apple from the tree.

Step 1: Getting a machine in the cloud

Valve puts out their Source Dedicated Server software both for Windows and Linux. The Windows version has a GUI and is generally what you’d call “user friendly”. The Linux version, on the other hand, is lean & mean and is managed entirely from the command line. Programmers being programmers, I decided to go for the Linux version.

Now, having established that I needed a Linux box, the next question was: which of the available clouds was I going to entrust with our gaming night? Since tretton37 is mainly a Microsoft shop, it felt natural to go for Microsoft Azure. However, I wasn’t holding out much hope that they would allow me to install Linux on one of their virtual machines.

As it turned out, I had to eat my hat on that one. Azure does, in fact, offer pre-installed Linux virtual machines ready to go. To me, this is proof that the cloud division at Microsoft totally gets how things are supposed to work in the 21st century. Kudos to them.

Creating a Linux VM on Azure

After literally 2 minutes, I had an Ubuntu Server machine with root access via SSH running in the cloud.

Creating a Linux VM on Azure

If I hadn’t already eaten my hat, I would take it off for Azure.

Step 2: Installing the Steam Console Client

Hosting a CS:GO server means setting up a so-called Source Dedicated Server, also known as SRCDS. That’s Valve’s server software used to run all their games that are based on the Source Engine. The list includes Half-Life 2, Team Fortress, Counter-Strike and so on.

A SRCDS is easily installed through the Steam Console Client, or SteamCMD. The easiest way to get it on Linux is to download and unpack it from a tarball. But first things first.

It’s probably a good idea to run the Source server with a dedicated user account that doesn’t have root privileges. So I went ahead and created a steam user, switched to it and headed to its home directory:

adduser steam
su steam
cd ~

Next, I needed to install a few libraries that SteamCMD depends on, like the GNU C compiler and its friends. That’s where I hit the first roadblock.

steamcmd: error while loading shared libraries: cannot open shared object file: No such file or directory

Uh? A quick search on the Internet revealed that SteamCMD doesn’t like to run on a 64-bit OS. In fact:

SteamCMD is a 32-bit binary, so it needs 32-bit libraries.

On the other hand:

The prepackaged Linux VMs available in Azure come in 64 bit only.

Ouch. Luckily, the issue was easily solved by installing the 32-bit version of libgcc:

apt-get install lib32gcc1

Finally, I was ready to download the SteamCMD binaries and unpack them:

tar xzvf steamcmd_linux.tar.gz
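
(The download step itself isn’t shown above. It’s a single wget of the steamcmd_linux.tar.gz link from the SteamCMD page on the Valve Developer Wiki; if I recall correctly, something like:

wget https://steamcdn-a.akamaihd.net/client/installer/steamcmd_linux.tar.gz

though the exact URL may have changed since.)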

The client itself was kicked off by a Bash script:

cd ./steamcmd
./steamcmd.sh

That brought down the necessary updates to the client tools and started an interactive prompt from which I could install any of Valve’s Source game servers.

At this point, I could have continued down the same route and installed the CS:GO Dedicated Server (CSGO DS) using SteamCMD.

However, a few intricate problems would be waiting further down the road. So, I decided to back out and find a better solution.

Steam> quit

Step 3: Installing the CS:GO Dedicated Server

Remember that thing about SteamCMD being a 32-bit binary and the Linux VM on Azure being only available in 64-bit?

Well, that turned out to be a bigger issue than I thought. Even after having successfully installed the CS:GO server, getting it to run became a nightmare. The server was constantly complaining about the wrong version of some obscure libraries. Files and directories were missing. Everything was a mess.

Salvation came in the form of a meticulously crafted script, designed to take care of those nitty-gritty details for me.

Thanks to Daniel Gibbs' hard work, I could use his fabulous csgoserver script to install, configure and, above all, manage our CS:GO Dedicated Server without pain.

You can find a detailed description of how to use csgoserver up on his site, so I’m just gonna report how I configured it to suit our deathmatch needs.

The status of the CS:GO Dedicated server as reported by csgoserver

Step 4: Configuration

The CS:GO server can be configured in a few different ways and it’s all done in the server.cfg configuration file. In it, you can set up things like the game mode (Arms Race, Classic and Competitive, to name a few), the maximum number of players and so on.

Here’s how I configured it for the tretton37 deathmatch:

sv_password "secret" # Requires a password to join the server
sv_cheats 0 # Disables hacks and cheat codes
sv_lan 0 # Disables LAN mode
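
The game mode itself is chosen with a separate pair of variables, game_type and game_mode. If I remember correctly, a classic deathmatch corresponds to the values below, and they’re commonly passed as launch options rather than set in server.cfg; treat the exact numbers as my assumption and check Valve’s documentation for the authoritative list:

game_type 1 # Selects the base game type (an assumption on my part)
game_mode 2 # Combined with game_type 1, selects Deathmatch (likewise an assumption)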

Step 5: Gold plating

The final touch was to provide an appropriate Message of the Day (or MOTD) for the occasion. That would be the screen that greets the players as they join the game, setting the right tone.

Once again, the whole thing was done by simply editing a text file. In this case, the file contained some HTML markup and a few stylesheets and was located in /home/steam/csgo/motd.txt.

Here’s how it looked in action:

The tretton37 Message of the Day in action

Step 6: Deathmatch!

This article is primarily meant as a reference on how to configure a dedicated CS:GO server on a Linux box hosted on Microsoft Azure. Nonetheless, I figured it would be interesting to follow up with some information on how the server itself held up during that glorious game night.

The CS:GO Dedicated server stats while running on Azure during game night

Here are a few stats, taken both from the Azure Dashboard and from the operating system itself. Note that the server was running on a Large VM sporting a quad-core 1.6 GHz CPU and 7 GB of RAM:

  • Number of simultaneous players: 16
  • Average CPU load: 15 %
  • Memory usage: 2.8 GB
  • Total outbound network traffic served: 1.18 GB

In retrospect, that configuration was probably a little overkill for the job. A Medium VM with a dual-core 1.6 GHz CPU and 3.5 GB of RAM would probably have sufficed. But hey, elastic scaling is exactly what the cloud is for.

One final thought

Oddly enough, this experience opened my eyes to the great potential of cloud computing.

The CS:GO server was only intended to run for the duration of the event, which would last for a few hours. During that short period of time, I needed it to be as fast and responsive as possible. Hence, I went all out on the hardware.

As soon as the game night was over, I immediately shut down the virtual machine. The total cost for borrowing that awesome hardware for a few hours? Literally peanuts.


May 09, 2014 Posted in mechanical-keyboards

A gentle introduction to mechanical keyboards

The past few years have seen a renaissance for mechanical keyboards, especially among two large groups of computer enthusiasts: programmers and gamers. I, too, share a passion for these wonderful pieces of hardware that goes way back. A quick search on the Internet reveals a wealth of web sites entirely dedicated to the subject. Unfortunately, most of the information in them is too "nerdy" for someone who's approaching it for the first time. In this post I'd like to give a gentle introduction to the world of mechanical keyboards for the curious, but uninitiated.

I love typing. I still remember how, as a kid, I used to borrow my mother’s Diplomat typewriter from 1964 and hammer away on it, pretending to type some long text:

The legendary Diplomat typewriter

“It sure sounds like you’re a fast typist!”

she would say when she heard the racket coming from my room. Little did she know that what came out on the sheet of paper was actually gibberish. Or maybe she was just being kind.

Anyway, I enjoyed the feeling of pressing those stiff plastic buttons and hearing the sound made by the metallic arm every time it punched a letter.

Today, I continue to enjoy a similar experience on my computer by using a mechanical keyboard.

So, what’s a mechanical keyboard?

First and foremost, let’s get the terminology straight by answering that burning question.

Wikipedia’s definition gives us a good starting point, although it’s still pretty vague:

Mechanical-switch keyboards use real switches underneath every key. Depending on the construction of the switch, such keyboards have varying response and travel times.

That leaves us wondering: what do they mean by real switches? Well, in this case real means physical, in the sense that under each key there’s a metal spring that compresses every time you press it. The spring is housed inside a plastic device that makes sure the key press is registered by the computer at a specific point in time while the key is being pressed. That device is called a switch.

A mechanical switch

So, let’s reformulate the definition of a mechanical keyboard:

A keyboard is considered mechanical if it uses metal springs underneath each key.

It’s still true that there are different types of mechanical switches, each with their own special characteristics. The kind of switch used in a mechanical keyboard determines the overall feeling you’ll get when you type on it.

A little bit of history

At this point you might rightfully wonder:

Wait a minute. Aren’t all keyboards mechanical?

No, they are not. At least not anymore. The keyboards sold with the first personal computers back in the late 70’s, however, were indeed mechanical.

Back then, computer keyboards were designed to mimic the feeling and behavior of typewriters. These were robust pieces of hardware, built with quality in mind. They used solid materials, like metal plates and thick ABS plastic frames. The keys were stiff enough to allow the typist to rest her fingers on the home row without accidentally pressing any key.

If you’re in your 30’s, chances are you remember how it felt to type on one of these keyboards, maybe in front of a Macintosh or a Commodore 64. All throughout the 80’s and early 90’s, every keyboard was mechanical. And heavy.

A Commodore 64 computer. Photo by Shane Doucette on Flickr

But then things started to change. When PCs became a mass-market product, the additional cost of producing quality mechanical keyboards to go along with them was no longer justifiable. Consumers didn’t seem to care, either. As graphical user interfaces (GUIs) grew in popularity during the mid 80’s, the keyboard, which had been the primary way to interact with computers up to that point, had to make room for another kind of input device: the mouse.

Keyboards, like computers, became a commodity sold in large volumes. In an effort to continuously try to bring the production costs down, new designs and materials became mainstream. Focus shifted from quality and durability to cost effectiveness.

Consequently, the heavy metal plates disappeared in favor of cheaper silicone membranes laid on top of electrical matrices. Metal springs gave way to air pads under each key. This keyboard technique is called rubber dome or membrane, and it is to this day the most common type of keyboard sold with desktop computers.

Rubber dome switches

For laptops, a new kind of low-profile rubber dome switch began to appear. They were made out of thin plastic parts and were called scissor switches because of their collapsible X shape.

The scissor switch

While today’s mainstream keyboards certainly serve their purpose, they have a limited lifetime.

With time and usage, the rubber bubbles eventually wear out, which means you have to press harder on the keys to activate them. The letters printed on the keys fade or peel off. The cheap plastic parts loosen up or bend. And by the time you’re finally ready to throw those suckers in the bin, they’ve managed to drive you nuts.

Mechanical keyboards, on the other hand, are built to last.

As a testament to their extreme durability, I can tell you that the oldest one in my collection is an Apple Extended Keyboard II dating all the way back to 1989, and it still works like a charm.

My glorious Apple Extended Keyboard II from 1989

That’s a 25-year-old piece of hardware still in perfectly good condition. So much so, in fact, that I use it daily as my primary keyboard at home.

It’s all about the feeling

However, believe it or not, the biggest problem with rubber dome keyboards isn’t their fragility. It’s the fact that they’re an ergonomic nightmare.

We said that the switches are responsible for telling the computer which key is being pressed. The precise moment when that happens is called the actuation point. The distance between a key’s rest position and its fully pressed position is called key travel.

Now, in rubber dome keyboards, the only way to actuate a key is to press it all the way to the bottom (also known as bottoming out). That’s when the silicone air bubble beneath it gets squashed, thus sending an electrical signal to the computer. This means that your fingers have to travel a longer distance when typing, which puts a strain on your hands and wrists.

Mechanical keys, on the other hand, are actuated before they bottom out. For most types of switches, that happens about halfway through the key travel.

What does this mean in practice? Well, it means that you don’t have to press as hard on your keyboard when you type. Your fingers don’t have to travel as much and, consequently, you don’t get as fatigued. It may even help you type faster.

To each his own switch

There are at least a dozen different kinds of mechanical switches available on the market today. That number doubles if you count the vintage ones.

Each and every one of them has its own way to physically actuate a key, which gives it its own distinctive feel.

The buckling spring switch patented by IBM in 1971

The buckling spring switch patented by IBM in 1971, for example, actuates a key by having the internal spring buckle outwards as it collapses. That activates a plastic hammer located beneath the spring, which, upon hitting a contact, closes an electrical circuit. As the spring buckles, it hits the inner wall of its plastic housing producing a loud “click” sound. That feedback signals the typist that the key has been registered.

The Cherry MX Blue switch

The modern Cherry MX Blue switch, on the other hand, uses a leaf spring pushed by a plastic slider (the grey one in the picture) as it passes by on its way down. When the slider is halfway through, the leaf spring encounters no more resistance from the slider, thus closing a circuit that signals the computer about the key press. As the grey slider hits the bottom of the switch, it emits a high-pitched “click” sound similar to that of a mouse button.

Don’t worry, I won’t get into all the nuts and bolts of all the different switches. The Internet will happily provide you more information than you can possibly ask for in that regard.

The important thing to remember is that the ergonomic properties of a mechanical keyboard are determined by three fundamental characteristics of its switches:

  • Tactility
  • Force
  • Sound

Let’s look at each of them briefly.

Tactility

A switch is called tactile if it gives some kind of feedback about when a key is actuated. This is usually done by significantly dropping the key’s resistance the instant it gets registered. Some switches even accompany that with a “click” sound and are therefore referred to as “clicky”.

Force

The force indicates how hard you have to press down a key in order to actuate it. This is measured in centinewtons (cN) or, more commonly, grams-force (gf). In practice they’re interchangeable, since 1 cN is roughly equivalent to 1 gf (more precisely, 1 cN = 1.02 gf).

Switches get progressively stiffer the further down you press a key, which, again, helps you avoid bottoming out. A soft switch requires about 45 cN to actuate. A stiff one requires almost twice that, around 80 cN.

The force graph of the stiff tactile buckling spring switch

The force graph of a soft tactile Cherry switch

These are called force graphs and indicate how much force in cN (Y axis) is required to press a key during its travel in mm (X axis) from rest position to bottom. Did you notice the sudden drop in force about half way through the travel? That’s how tactile switches announce that the key has been actuated, as shown by the little black dot.

Sound

As I mentioned earlier, certain switches emit a “click” sound when they get actuated. This extra piece of feedback can either be pleasurable (even nostalgic if you will) or totally annoying. I strongly advise you to investigate how the people who will be sitting next to you feel about clicky keyboards before buying one. And telling them that all keyboards sounded like that back in the 80’s no longer cuts it.

Wrapping it up

I know it sounds crazy, but I’ve barely scratched the surface of what there is to know about mechanical keyboards. However, the purpose of this post was to give you an idea of what they are and where they come from, and I hope I’ve succeeded in that.

If you’re eager to know more, fear not. I intend to write more about this topic. Next time, I’ll start reviewing some of my favorite mechanical keyboards. In the meantime, take a look at the official guide to mechanical keyboards put together by this growing community of keyboard enthusiasts.