I’ve recently picked up reading (again) my all-time favorite book about software development: Code Complete by Steve McConnell. The book was first published in 1993 and has been a timeless guide for anyone in this industry about the science and art of software development. A craftsman’s handbook, if you will.
In 2004 McConnell published Code Complete, Second Edition, updated to cover the advancements in software development practices that have happened since the first release, like object-oriented programming, agile methods and test-driven development, to name a few. All the code examples have also been updated to modern languages like C++ and Visual Basic, which replace C and Pascal. However, the main topic of the book remains unchanged, and that’s the whole point really. Software development concepts and best practices are a foundation whose principles are independent of specific technologies and programming languages.
I read the original Code Complete some time ago, and now I am going through the Second Edition. My plan is to write here about some of the gems of wisdom kept in these tomes, based on the opinion that it is extremely valuable to keep track of and share precious knowledge.
Last night I read about the fundamental concepts of developing software systems, and I was fascinated by one of them in particular. McConnell calls it “The Primary Technical Imperative”, and in order to explain it he refers to a famous essay by Fred Brooks called “No Silver Bullet – Essence and Accidents of Software Engineering” (Computer, April 1987). In his paper Brooks argues that everything in the world has two kinds of characteristics, accidental and essential. Accidental characteristics are those that can be attributed to a thing but do not define it. For example, a car could have a V8 engine and 15-inch tires. These characteristics belong to that specific car, but if it had a different engine or tires it would still be a car. Essential characteristics, on the contrary, are those that are distinctive of a thing and cannot be eliminated. A car must have an engine and four wheels in order to be a car; otherwise it would be something else.
In the same way, software development has both accidental and essential difficulties. The accidental difficulties are related to the specific technologies and programming tools used to develop the software, and can be eased through their evolution. For example, the difficulty of programming in assembly language has been addressed by creating higher-level programming languages and compilers. Also, programmers no longer have to write code in basic text editors, but can instead take advantage of integrated development environments with aids like syntax highlighting and statement auto-completion. The essential difficulty in software development, however, comes from the goal of software itself, which is to solve real-world problems with the aid of the computer. In order to do that, we need to represent reality in a way that is manageable by a computer, and this is exactly where the essential difficulty lies:
To define and constrain the unpredictable and approximate interactions that happen among things in the real world, in order to make them fit inside a predictable and exact computer program.
Developing software is therefore a matter of managing the complexity that arises when bringing order and structure to the chaos of reality. With this principle in mind, it becomes obvious why software development practices have traditionally aimed at achieving control and predictability: their goal is to prevent complexity from getting out of hand. Modern practices like agile development take a different approach. They recognize that change is an essential characteristic of reality and therefore, instead of fighting it with limits and constraints, they adjust the process of developing software to cope with change by simply becoming more flexible. But that’s a whole other story.
/Enrico
From October 27 to 30 I will be attending the Microsoft Professional Developers Conference (PDC) 2008 in Los Angeles, USA.
PDC is considered one of the most important events in the IT industry, as it is usually the place where Microsoft makes its big announcements. Because of its nature, the conference is not held every year, but only in the face of a big new technology wave.
The last PDC was held in 2005, where Microsoft announced and previewed the next version of Windows, code-named “Longhorn” (originally meant as Vista’s codename, though it later also led to the development of Windows Server 2008).
Before that, at PDC 2000 the world was introduced to a new development platform called the .NET Framework and a new web programming model called ASP+ (later to be known as ASP.NET), and got a sneak peek at the next version of Windows, code-named “Whistler”, which later became Windows XP.
This year’s PDC focuses on the Software + Services model. This is a new style of building applications that consume data stored on the Internet, where it is exposed through a series of services. These applications leverage the ubiquity of the Internet to give users access to their data, whether it be work documents or family pictures, from anywhere. Of course, this is a big shift from the model we have been used to for the last 20 years, where applications store data locally on the computer.
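Just to make the idea concrete, here is a minimal sketch of the difference between the two models. The service endpoint, the JSON format and the function names are purely hypothetical assumptions of mine, and I’m using Python only because it keeps the example short; this is not how any specific Microsoft technology works.

```python
# Contrast between the traditional local-data model and the
# Software + Services idea: the same documents, fetched either
# from the local disk or from a (hypothetical) service on the Internet.
import json
import urllib.request

SERVICE_URL = "https://example.com/api/documents"  # hypothetical endpoint


def load_documents_locally(path: str) -> list[dict]:
    """The traditional model: the data lives on this computer."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)


def load_documents_from_service(user_id: str) -> list[dict]:
    """The Software + Services model: the data lives behind a web
    service and is reachable from any device connected to the Internet."""
    with urllib.request.urlopen(f"{SERVICE_URL}?user={user_id}") as response:
        return json.loads(response.read().decode("utf-8"))


if __name__ == "__main__":
    # The same application code works wherever the user happens to be,
    # because the documents are no longer tied to a single machine.
    for doc in load_documents_from_service("some-user"):
        print(doc["title"])
```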
Another big topic is of course, as is tradition, the next release of Windows, this time simply code-named “Windows 7”. Rumors say it will be a “minor upgrade” to Windows Vista, but at the same time Microsoft is talking about big innovations like support for multi-touch computing. Mixed signals, then; we will just have to wait a little longer to find out what it is really all about.
I will be posting my impressions and whatever interesting piece of information I can get during the conference.
/Enrico
In a previous post I wrote about how Mozilla Firefox slowly grew in popularity among Internet users over the last four years, until it finally became a threat to Microsoft, who decided to refresh good old Internet Explorer by releasing version 7 in late 2006.
Internet Explorer 7 was a “catch-up” release, which finally included many of the usability features that had made Firefox so popular, like tabbed browsing, support for RSS feeds, stronger security and better compliance with web standards. However, Microsoft clearly wasn’t aiming for the stars with IE7, since Firefox was still far superior in terms of page rendering and JavaScript performance, as well as remaining the browser of choice for web designers to check their sites’ compatibility with W3C standards.
Obviously Firefox wasn’t that much of a threat after all, since Internet Explorer has managed to maintain the biggest slice of the browser market to date. Most likely for the same reason that put Microsoft in trouble in the antitrust case brought by the US Justice Department in 1998: bundling with the world’s most popular operating system.
However, when Google releases a browser, it’s a completely different story. Google’s solid position in the Internet market, combined with its business model of free services, gives the company great marketing potential. That potential can be used as a highway to mass-distribute software, much in the same way Windows served to put Internet Explorer on the majority of PCs in the world during the ’90s. And Microsoft must have sensed that threat.
It doesn’t come as a surprise, then, that yet another version of Internet Explorer is about to hit the market, this time quicker than ever. In fact, while it took Microsoft roughly five years to put Internet Explorer 7 out the door (from late 2001 to late 2006), the first public release of Windows Internet Explorer 8 was made available a mere year and a half later. The browser is currently in its beta 2 phase and is available for download here. Microsoft says that with this release it has focused on increasing JavaScript performance and providing full compliance with web standards, especially CSS.
I am currently giving IE8 a spin, and I will soon post my impressions. In the meantime, has anyone already tried it? What’s your opinion? I’m curious to hear.
/Enrico