February 26, 2013 Posted in technology  |  programming

Adventures in overclocking a Raspberry Pi

This article sums up my experience overclocking a Raspberry Pi computer. It doesn’t provide a step-by-step guide on how to do the actual overclocking, since that kind of resource can easily be found elsewhere on the Internet. Instead, it gathers the pieces of information that I found most interesting during my research, while diving deeper into some exquisitely geeky details along the way.

A little background

For the past couple of months I’ve been running a Raspberry Pi as my primary NAS at home. It wasn’t something that I had planned. On the contrary, it all started by chance when I received a Raspberry Pi as a conference gift at last year’s Leetspeak. But why use it as a NAS when there are much better products on the market, you might ask. Well, because I happen to have a small network closet at home and the Pi is a pretty good fit for a device that’s capable of acting like a NAS while at the same time taking very little space.

My Raspberry Pi sitting in the network closet.

The setup is as simple as the device is small: I plugged in an external 1 TB WD Elements USB drive that I had lying around (the black box sitting above the Pi in the picture on the right), installed Raspbian on an SD memory card and went to town. Once booted, the little Pi exposes the storage space on the local network through two channels:

  • As a Windows file share on top of the SMB protocol, through Samba
  • As an Apple file share on top of the AFP protocol, through Netatalk

On top of that it also runs a headless version of the CrashPlan Linux client to back up the contents of the external drive to the cloud. So, the Pi not only works as a central storage for all my files, but it also manages to fool Mac OS X into thinking that it’s a Time Capsule. Not too bad for a tiny ARM11 700 MHz processor and 256 MB of RAM.

The need for (more) speed

A Raspberry Pi needs a 5-volt power supply to function. On top of that, depending on the number and kind of devices that you connect to it, it’ll draw approximately between 700 and 1400 milliamperes (mA) of current. At 5 volts, that works out to between 3.5 and 7 watts, or roughly 5 watts on average, which makes it ideal to work as an appliance that’s on 24/7. However, as impressive as all of this might be, it’s not all sunshine and rainbows. In fact, as the expectations get higher, the Pi’s limited hardware resources quickly become a frustrating bottleneck.

Luckily for us, the folks at the Raspberry Pi Foundation have made it fairly easy to squeeze more power out of the little piece of silicon by officially allowing overclocking. Now, there are a few different combinations of frequencies that you can use to boost the CPU and GPU in your Raspberry Pi.

The amount of stable overclocking that you’ll be able to achieve, however, depends on a number of physical factors, such as the quality of the soldering on the board and the amount of output that’s supported by the power supply in use. In other words, YMMV.

There are also at least a couple of different ways to go about overclocking a Raspberry Pi. I found that the most suitable one for me is to manually edit the configuration file found at /boot/config.txt. This will not only give you fine-grained control over which parts of the board to overclock, but it will also allow you to change other aspects of the process, such as voltage, temperature thresholds and so on.

In my case, I managed to work my way up from the stock 700 MHz to 1 GHz through a number of small incremental steps. Here’s the final configuration I ended up with:

arm_freq=1000    # ARM CPU frequency in MHz (up from the stock 700)
core_freq=500    # GPU core frequency in MHz
sdram_freq=500   # SDRAM frequency in MHz
over_voltage=6   # raise the core voltage by 6 steps of 0.025 V (+0.15 V)
force_turbo=0    # 0 = let the kernel scale frequency and voltage on demand
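
These values are read by the firmware at boot, so a reboot is needed for them to take effect. Once the system is back up, one way to confirm that the new clock is actually being applied is the vcgencmd utility that ships with Raspbian (the exact output format may vary between firmware versions):

vcgencmd measure_clock arm
# Prints something like frequency(45)=1000000000 under full load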

One thing to notice is the force_turbo option that’s currently turned off. It’s there because, until September of last year, modifying the CPU frequencies of the Raspberry Pi would set a permanent bit inside the chip that voided the warranty.

However, having recognized the widespread interest in overclocking, the Raspberry Pi Foundation decided to give it their blessing by building a feature into their own version of the Linux kernel called Turbo Mode. This allows the operating system to automatically increase and decrease the speed and voltage of the CPU based on how much load is put on the system, thus reducing the impact on the hardware’s lifetime to effectively zero.

Setting the force_turbo option to 1 will cause the CPU to run at its full speed all the time and will apparently also contribute to setting the dreaded warranty bit in some configurations.

Entering Turbo Mode

When Turbo Mode is enabled, the CPU speed and voltage will switch between two values, a minimum one and a maximum one, both of which are configurable. When it comes to speed, the default minimum is the stock 700 MHz. The default voltage is 1.20 V. During my overclocking experiments I wanted to keep a close eye on these parameters, so I wrote a simple Bash script that fetches the current state of the CPU from different sources within the system and displays a brief summary.
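
The real script is a bit longer, but the gist of it is something like this (a minimal sketch, assuming the stock Raspbian sysfs paths and the vcgencmd utility; the exact output formats may vary):

#!/bin/bash
# cpustatus: print a brief summary of the CPU's current state.

# The cpufreq driver exposes its state through sysfs, in kHz.
cpu=/sys/devices/system/cpu/cpu0/cpufreq

echo "current speed: $(($(cat $cpu/scaling_cur_freq) / 1000)) MHz"
echo "minimum speed: $(($(cat $cpu/scaling_min_freq) / 1000)) MHz"
echo "maximum speed: $(($(cat $cpu/scaling_max_freq) / 1000)) MHz"

# Voltage and temperature are reported by the VideoCore firmware.
echo "core voltage:  $(vcgencmd measure_volts core)"
echo "temperature:   $(vcgencmd measure_temp)"

echo "governor:      $(cat $cpu/scaling_governor)"

Here’s what the output looks like when the system is idle: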

Output of my cpustatus script when the CPU is idle.

See how the current speed is equal to the minimum one? Now, take a look at how things change on full blast with Turbo Mode kicked in:

Output of my cpustatus script with Turbo Mode enabled.

As you can see, the CPU is running hot at the maximum speed of 1 GHz, fed with 0.15 extra volts.

The last line shows the governor, a component of the Linux kernel driver called cpufreq that’s responsible for adjusting the speed of the processor. The governor is the strategy that regulates exactly when and by how much the CPU frequency will be scaled up and down. The one that’s currently in use is called ondemand, and it’s the foundation upon which the entire Turbo Mode is built.

It’s interesting to note that the choice of governor, contrary to what you might expect, isn’t fixed. The cpufreq driver can, in fact, be configured to use a different governor at runtime simply by writing to a file on disk. For example, changing from the ondemand governor to the one called powersave would lock the CPU speed at its minimum value, effectively disabling Turbo Mode:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Prints ondemand

echo "powersave" | /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
# Prints powersave
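
A governor selected this way only lasts until the next reboot. To make the choice permanent, one option (a sketch, assuming a stock Raspbian where /etc/rc.local runs as root at the end of the boot sequence) is to add the same command there:

# In /etc/rc.local, before the final exit 0:
echo "powersave" > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

Since rc.local already runs as root, the plain output redirection works there without sudo.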

Here’s a list of available governors as seen in Raspbian:

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors
# Prints conservative ondemand userspace powersave performance

If you’re interested in seeing how they work, I encourage you to check out the cpufreq source code on GitHub. It’s very well written.

Fine tuning

I’ve managed to get a pretty decent performance boost out of my Raspberry Pi just by applying the settings shown above. However, there are still a couple of knobs left to tweak before we can settle.

The ondemand governor used in the Raspberry Pi will increase the CPU speed to the maximum configured value whenever it finds it to be busy more than 95% of the time. That sounds fair enough for most cases, but if you’d like that extra speed bump even when the system is performing somewhat lighter tasks, you’ll have to lower the load threshold. This is also easily done by writing an integer value to a file:

echo 60 | sudo tee /sys/devices/system/cpu/cpufreq/ondemand/up_threshold

Here we’re saying that we’d like to have the Turbo Mode kick in when the CPU is busy at least 60% of the time. That is enough to make the Pi feel a little snappier during general use.
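
Like the governor, this tunable resets to its default value at every boot, so you may want to verify the change:

cat /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
# Prints 60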

Wrap up

I have to say that I’ve been positively surprised by the capabilities of the Raspberry Pi. Its exceptional form factor and low power consumption make it ideal to work as a NAS in very restricted spaces, like my network closet. Add to that the flexibility that comes from running Linux and the possibilities become truly endless. In fact, the more stuff I add to my Raspberry Pi, the more I’d like it to do. What’s next, a Node.js server?


January 22, 2013 Posted in git

Grokking Git by seeing it

When I first started getting into Git a couple of years ago, one of the things I found most frustrating about the learning experience was the complete lack of guidance on how to interpret the myriad of commands and switches found in the documentation. On second thought, calling it frustrating is actually an understatement. Utterly painful would be a better way to describe it. What I was looking for was a way to represent the state of a Git repository in some sort of graphical format. In my mind, if only I could have visualized how the different combinations of commands and switches impacted my repo, I would have had a much better shot at actually understanding their meaning.

After a bit of research on the Git forums, I noticed that many people were using a simple text-based notation to describe the state of their repo. The actual symbols varied a bit, but they all essentially came down to something like this:


                               C4--C5 (feature)
                              /
                C1--C2--C3--C4'--C5'--C6 (master)
                                       ^

where the symbols mean:

  • Cn represents a single commit
  • Cn’ represents a commit that has been moved from another location in history, i.e. it has been rebased
  • (branch) represents a branch name
  • ^ indicates the commit referenced by HEAD
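
To make the notation concrete, here’s one command sequence that ends with primed (rebased) commits; the branch names are just examples:

git checkout -b feature   # branch off master, which is at C3
# ...make two commits on feature: C4 and C5...
git checkout master
# ...make one commit on master: C6...
git checkout feature
git rebase master         # replays C4 and C5 on top of C6 as C4' and C5'

which leaves the history looking like this:

C1--C2--C3--C6 (master)
              \
               C4'--C5' (feature)
                     ^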

This form of graphical DSL proved extremely useful not only as a learning tool but also as a universal Git language, handy for documentation as well as for communication during problem solving.

Now, keeping this idea in mind, imagine having a tool that is able to draw a similar diagram automatically. Sound interesting? Well, let me introduce SeeGit.

SeeGit is a Windows application that, given the path to a Git repository on disk, will generate a diagram of its commits and references. Once done, it will keep watching that directory for changes and automatically update the diagram accordingly.

This is where the idea for my Grokking Git by seeing it session came from. The goal is to illustrate the meaning behind different Git operations by going through a series of demos, while having the command line running on one half of the screen and SeeGit on the other. As I type away in the console, you can see the Git history unfold in front of you, giving you an insight into how things work under the covers.

In other words, something like this:

SeeGit session in progress.

So, this is just to give you a little background. Here you’ll find the session’s abstract, slides and demos. There’s also a recording from when I presented this talk at LeetSpeak in Malmö, Sweden back in October 2012. I hope you find it useful.

Abstract

In this session I’ll teach you the Git zen from the inside out. Working from real-world scenarios, I’ll walk you through Git’s fundamental building blocks and common use cases, working our way up to more advanced features. And I’ll do it by showing you graphically what happens under the covers as we fire different Git commands.

You may already have been using Git for a while to collaborate on some open source project or even at work. You know how to commit files, create branches and merge your work with others. If that’s the case, believe me, you’ve only scratched the surface. I firmly believe that a deep understanding of Git’s inner workings is the key to unlock its true power allowing you, as a developer, to take full control of your codebase’s history.


August 30, 2012 Posted in speaking

Make your system administrator friendly with PowerShell

I know I’ve said it before, but I love the command line. And being a command line junkie, I’m naturally attracted to all kinds of tools that involve a bright blinking square on a black canvas. Historically, I’ve always been a huge fan of the mighty Bash. PowerShell, however, came to change that.

PowerShell

Since PowerShell made its first appearance under the codename “Monad” back in 2005, it proposed itself as more than just a regular command prompt. It brought, in fact, something completely new to the table: it combined the flexibility of a Unix-style console, such as Bash, with the richness of the .NET Framework and an object-oriented pipeline, which in itself was totally unheard of. With such a compelling story, it soon became apparent that PowerShell was aiming to become the official command line tool for Windows, replacing both the ancient Command Prompt and the often criticized Windows Script Host. And so it has been.

Seven years have passed since “Monad” was officially released as PowerShell, and its presence is as pervasive as ever. Nowadays you can expect to find PowerShell in just about all of Microsoft’s major server products, from Exchange to SQL Server. It’s even become part of Visual Studio through the NuGet Package Manager Console. Not only that, but tools such as posh-git make PowerShell a very nice, and arguably more natural, alternative to Git Bash when using Git on Windows.

Following up on my interest in PowerShell, I’ve found myself talking a fair deal about it both at conferences and user groups. In particular, during the last year or so, I’ve been giving a presentation about how to integrate PowerShell into your own applications.

The idea is to leverage the PowerShell programming model to provide a rich set of administrative tools that will (hopefully) improve the often stormy relationship between devs and admins.

Since I’m often asked about where to get the slides and the code samples from the talk, I thought I would make them all available here in one place for future reference.

So here they are. I hope you find them useful.

Abstract

Have you ever been in a software project where the IT staff who would run the system in production was accounted for right from the start? My guess is not very often. In fact, it’s far too rare to see functionality being built into software systems specifically to make the job of the IT administrator easier. It’s a pity, because pulling that off doesn’t require as much time and effort as you might think with tools like PowerShell.

In this session I’ll show you how to enhance an existing .NET web application with a set of administrative tools, built using the PowerShell programming model. Once that is in place, I’ll demonstrate how common maintenance tasks can either be performed manually using a traditional GUI or be fully automated through PowerShell scripts using the same code base.

Over the last few years, Microsoft itself has committed to making all of its major server products fully administrable both through traditional GUI-based tools and through PowerShell. If you’re building a server application on the .NET platform, you will soon be expected to do the same.