October 22, 2008 Posted in programming

Microsoft Oslo and the evolution of programming

The other day I was reading about one of the most talked-about future Microsoft technologies, code-named “Oslo”. The details of what “Oslo” is are still pretty vague, at least until PDC 2008. In short, it’s a set of technologies aimed at creating Domain Specific Languages (DSL) that can then be used to build executable programs. Yes, I know that still sounds vague, but that’s really all Microsoft has revealed about the whole project so far.

According to a recent interview on .NET Rocks! with Don Box and Doug Purdy, two of the main architects behind “Oslo”, Microsoft’s goal is:

enabling people with knowledge on a specific domain to create software in order to solve a problem in that domain, without having to know the technical details of software construction.

To me, “Oslo” represents the next big step in the evolution of programming styles.

Imperative Programming

Traditionally, computer programming has been about “telling” the machine what to do by specifying a set of instructions to execute in a particular order. This is known as imperative programming. The way programmers express these instructions has evolved over time, from commands directly interpreted by the CPU to higher-level structured languages that use abstract concepts instead of processor-specific instructions and let another program, the compiler, produce the appropriate code executable by the machine. The goal of programming languages has always been to give programmers new metaphors to interact with the computer in a more intuitive and natural way. Here are a few important milestones in this evolution:

  • Structured Programming: introduced natural language-like constructs to control the execution flow of a program, like selection (“IF”, “ELSE”) and iteration (“WHILE”, “FOR”). Before that, programmers constructed programs by explicitly pointing to the next instruction to execute (“GOTO”).
  • Procedural Programming: introduced the concept of a “routine”, an ordered set of instructions that can be executed as a group by referring to them by name (a procedure call). Routines can take input data through parameters and output results through return values. Related routines can be grouped into modules to better organize them.
  • Object-Oriented Programming: focuses on the shape of the data being processed in a program. It introduced the concept of “objects”, data structures that combine data and the functions (methods) that operate on that data in a single unit. Objects and the interactions among them are modeled to represent real-world entities inside a program, which lets programmers think about a problem in a more natural way, as the short sketch below illustrates.
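
To make the imperative style concrete, here is a minimal, purely illustrative C# sketch (the Order class and the method are hypothetical, not part of “Oslo” or any specific product). It spells out, step by step, how to compute a total, using the structured constructs and objects described above:

using System.Collections.Generic;

// Imperative/object-oriented style: describe *how* to compute the result,
// one instruction at a time, mutating intermediate state along the way.
public class Order
{
    public decimal Amount { get; set; }
}

public static class ImperativeExample
{
    public static decimal TotalAbove(List<Order> orders, decimal threshold)
    {
        decimal total = 0;
        foreach (Order order in orders)       // iteration ("FOR"/"WHILE")
        {
            if (order.Amount > threshold)     // selection ("IF")
            {
                total += order.Amount;        // update intermediate state
            }
        }
        return total;
    }
}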

Declarative Programming

At the same time another style of programming has evolved over the years, known as declarative programming. Instead of telling the computer what to do, declarative programming focuses on telling it what results are expected, and letting the computer figure out which steps it has to go through to obtain those results. Declarative programming expresses intent without issuing commands. Key milestones in this evolution include:

  • Functional Programming: describes a program as a set of mathematical functions that operate on immutable data. Functions can take data or other functions as their input and return the result of the computation (see the C# LINQ sketch after this list).
  • Logic Programming: describes a program as a set of mathematical logic expressions. These expressions often take the form of premises and conclusions composed of logical statements (for example, “IF A AND B THEN C”, where A, B, and C are logical statements). The program output is produced by evaluating these expressions on a set of data called facts.
  • Language Oriented Programming: focuses on constructing a domain-specific language to describe the problem in the terms of its domain, and then using that language to describe a program that solves that problem.
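
For contrast, here is the same hypothetical computation expressed in a more declarative, functional style using C#’s LINQ operators (reusing the illustrative Order class from the previous sketch). The code states what result is expected and leaves the iteration details to the runtime:

using System.Collections.Generic;
using System.Linq;

// Declarative/functional style: describe *what* is wanted,
// not the individual steps to obtain it.
public static class DeclarativeExample
{
    public static decimal TotalAbove(IEnumerable<Order> orders, decimal threshold)
    {
        return orders
            .Where(order => order.Amount > threshold)  // which orders matter
            .Sum(order => order.Amount);               // what we want from them
    }
}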

Microsoft “Oslo”

The technologies delivered with “Oslo” fall clearly into this last category. In “Oslo” a compiler will translate a program expressed in a domain-specific language into an imperative general-purpose programming language such as C# or Visual Basic, which in turn gets compiled into executable machine code. This way programmers can think using even more natural metaphors, and let the computer take care of the details of how to translate their intent into running software.

/Enrico


October 21, 2008 Posted in technology

Stack Overflow

If you are interested in technology-agnostic software development practices, then you should definitely check out Jeff Atwood’s blog “Coding Horror”. I’ve been a reader for years and, even though I don’t necessarily agree with all of his points, it’s very interesting to have a discussion about the art and science of building high-quality software without focusing on particular technologies or programming languages.

Now Jeff Atwood and Joel Spolsky, author of another popular blog, inspired by the way these kinds of discussions are driven on the Web, decided to go a step further and build a whole community around them. They called it Stack Overflow, and it took concrete shape as a web site where people can freely ask and answer questions related to computer programming. What’s so special about that, you might think? Well, what they came up with is not quite the usual forum you are used to. It’s actually much more than that.

The basic idea is to build a community by programmers, for programmers, with a desire to share their knowledge and expertise, independently of their technology or programming language of choice. It is also a collaborative effort, much like a Wiki, where anyone can edit the questions and answers that are posted. There are other elements to it, like a voting system to rate questions and answers, but the main concept is that this is a self-sustaining community. You can read more about the details on their about page.

I have been active on Stack Overflow for a little more than a week and I have to say I really like it. The site has a nice, clean design and makes extensive use of AJAX to improve responsiveness. I highly recommend you check it out.

/Enrico


October 09, 2008 Posted in .net

Visual Studio Code Coverage reports with MSBuild

Recently I had a project where I was using Microsoft Visual Studio 2008 Team System for development and CruiseControl.NET for continuous integration. I usually care about code coverage when running unit tests, so I decided to integrate the code profiling tool included in Visual Studio Team System into my build process, in order to produce a code coverage report with each build.

I figured that wouldn’t be too hard: all I had to do in my build script was invoke Visual Studio’s test runner executable (usually found in C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe) with code coverage profiling enabled, and grab the output in a file that CruiseControl.NET would later use to produce the build report.
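
As a rough sketch (not the exact script from this project), such an invocation from MSBuild might look something like the following. The test assembly name and the MSTestPath property are placeholders, and code coverage itself is enabled in the .testrunconfig file passed via the /runconfig option:

<!-- Hypothetical example: run MSTest with a test run configuration
     that has code coverage enabled -->
<PropertyGroup>
  <MSTestPath>C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\MSTest.exe</MSTestPath>
</PropertyGroup>

<Target Name="RunTests">
  <Exec Command="&quot;$(MSTestPath)&quot; /testcontainer:$(OutputPath)\MyProject.Tests.dll /runconfig:$(TestConfigName).testrunconfig /resultsfile:$(OutputPath)\TestResults.trx" />
</Target>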

However, I quickly found out that Visual Studio’s test runner produces code coverage output in a proprietary binary format, while CruiseControl.NET uses XML to generate its reports. Ouch!

Luckily, Microsoft distributes a .NET API that can be used to convert the content of code coverage files produced by Visual Studio into XML. Pheeew!

The library is contained in the Microsoft.VisualStudio.Coverage.Analysis.dll assembly, which can be found in the Visual Studio 2008 Team System installation folder (usually in C:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies). So all I had to do was to add an extra step in the build process to invoke that library and do the conversion.

Since I am using MSBuild to run the build, I encapsulated the code in an MSBuild task, which you can find over here at the MSDN Code Gallery. There isn’t really much to it: the actual conversion is easily done in a couple of lines of code:

// CoverageInfoManager, CoverageInfo and CoverageDS come from the
// Microsoft.VisualStudio.Coverage.Analysis.dll assembly mentioned above

// You need to specify the directory containing
// the binaries that have been profiled by MSTest
CoverageInfoManager.SymPath = symbolsDirPath;
CoverageInfoManager.ExePath = symbolsDirPath;

// The input file is the binary output produced by MSTest
CoverageInfo info = CoverageInfoManager.CreateInfoFromFile(inputFilePath);
CoverageDS dataSet = info.BuildDataSet(null);

// Write the coverage data as XML for CruiseControl.NET to consume
dataSet.WriteXml(outputFilePath);

Then, the task can be invoked from the MSBuild script with:

<!-- Imports the task from the assembly -->
<UsingTask TaskName="ConvertVSCoverageToXml" AssemblyFile="CI.MSBuild.Tasks.dll" />

<!-- The values of the 'OutputPath' and 'TestConfigName' variables
     must be the same as the arguments passed to MSTest.exe
     with the /resultsfile and /runconfig options -->
<ConvertVSCoverageToXml
    CoverageFiles="$(OutputPath)\$(TestConfigName)\In\$(ComputerName)\data.coverage"
    SymbolsDirectory="$(OutputPath)\$(TestConfigName)\Out"
    OutputDirectory="$(OutputPath)" />

This could easily be achieved in much the same way with an NAnt task, if that’s your build tool of choice.

/Enrico