Modern Software Development, Analyzed
Back in 2006, Jeff Atwood of Coding Horror made a legendary post in which he described what it means to do modern development. It’s now a decade later, and it’s worth revisiting the items he was revisiting, which were themselves about a decade old by then. Like him, I’ll be referencing The Joel Test by Joel Spolsky and Software’s Ten Essentials by Steve McConnell.
What they included
I’m going to list these items with their position in each author’s list, even though the original ordering was likely arbitrary.
Specification — #7 for Joel, #1 for Steve
What we mean by specification or spec has shifted over the years. Originally, it meant that the software’s deliverables were documented in their entirety, usually in a contract or an attached functional spec. In either case, they were fixed and binding.
With the advent of the Agile approach to software in 2001, we as an industry shunned fixed requirements and instead “welcome[d] changing requirements, even late in development”. As developers, we recognized the reality that a contract that takes three months to negotiate in order to accomplish three months of work is probably mostly a waste of time.
We further realized that we can deliver working software to our clients and the business sooner if we skip most of that long, difficult requirements-gathering and documentation step up front and instead gather the requirements along the way.
But if your experience is similar to mine, what Agile means in practice is that the client or the business thinks they can change their requirements as they want, but cost and timeline will remain fixed because “you agreed to it”.
So Agile and Scrum changed what the spec meant, and we’re all now somewhat weary of it. Even Andy Hunt, one of the original authors of the Agile Manifesto, has realized that the Agile movement has, well, failed.
So this item is in sorry shape. We’ve lost our way on it, and we can’t agree if it’s a case of “bad idea” or “you’re doing it wrong”. In either case, we need to figure this one out still.
Source Control — #1 for Joel, #8 for Steve
Both authors included this; Steve called it “software configuration management”. Whatever the name, if you’re not using source control at this point, you simply cannot continue to live this way. Here’s a list of reasons why.
At this point, using source control is the bare minimum. Using source control well is the real trick now. To do that, start by following a sensible branching strategy and consistent naming conventions. Of course, you should only do what’s needed and no more: if you have only two devs on a project, you probably only need master and develop branches. (And to be honest, if it’s really super small, you might just get away with each of them having a fork.)
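For illustration, here’s a rough sketch of what that two-branch setup might look like in plain git commands (the branch and feature names are placeholders, not a prescription):

    # Create a long-lived develop branch off master and share it
    git checkout master
    git checkout -b develop
    git push -u origin develop

    # Day-to-day work happens on short-lived branches cut from develop
    git checkout -b feature/login-form develop
    # ...commit work, then fold it back in...
    git checkout develop
    git merge --no-ff feature/login-form
    git branch -d feature/login-form

The --no-ff merge is a taste choice; it keeps the feature’s commits grouped together in the history instead of fast-forwarding them away.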
As a community, we’ve done a good job on this one overall. So let’s put it in the win column.
Daily Builds — #3 for Joel, #10 for Steve
For those in the compiled languages universe, this is an obvious point. But if you use Python, Ruby, or JavaScript, then this may seem like an easy, “Yeah, this doesn’t apply to us.”
You’re wrong.
The point of the build is to make sure your code runs. Just because it was applied historically to a compiler doesn’t mean we get to opt out now.
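As a concrete sketch of what that can mean for an interpreted language, here’s a minimal Python “build” script: it byte-compiles the source (catching syntax errors) and runs the tests, nothing more. The src/ and tests/ directories and the build_check.py name are assumptions made for this example.

    # build_check.py -- a minimal "does the code even run?" gate for a Python project.
    # Assumes source lives in src/ and tests live in tests/; adjust to your layout.
    import sys
    import compileall
    import pytest

    # Byte-compile every module; this catches syntax errors without executing anything.
    compiled_ok = compileall.compile_dir("src", quiet=1)

    # Run the unit tests; pytest.main returns 0 when everything passes.
    tests_ok = pytest.main(["tests"]) == 0

    sys.exit(0 if (compiled_ok and tests_ok) else 1)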
Now the most appropriate name for this is Continuous Integration (CI). If you use Jenkins, Travis, or whatever to do a regular build — even without tests — then you’re doing this. Keep it up.
If you’re not using it, start. If you are using it, but you’re not doing builds on every commit, start.
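Wiring a check like that into CI is not much more work. A .travis.yml along these lines (assuming the hypothetical build_check.py above and a requirements.txt that includes pytest) would run it on every push:

    language: python
    install:
      - pip install -r requirements.txt
    script:
      - python build_check.py

Jenkins or any other CI server can do the same job; the important part is that the check runs automatically on every commit, not which service runs it.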
From what I’ve seen, I can’t say we’re doing very well on this one, and I’m not entirely sure why.
Curious Omissions
Reviewing these lists, I was surprised to see three things missing:
Unit tests
Multiple environments
Code reviews
At the time these were written, compiled desktop development was the bread and butter of most of the people writing and thinking about these issues. Also, for any kind of web site, service, or portal, having more than one environment may have been seen as a ridiculous luxury. I further suspect that source control was neither broadly enough adopted nor mature enough for even rudimentary CI to be feasible, and that code review was likewise hampered by the lack of a good centralized code base.
But as I was not doing professional work at the time, I simply don’t know why these ideas weren’t common back then. In retrospect they seem obvious, yet even now many companies don’t practice them.