Extreme Integration: The future of software development?
What will the daily experience of software development look like, say, five years from now? Have our current processes reached their peak, or will the world continue to change? Alan Kay said that “the best way to predict the future is to invent it.” Here are some ideas about the future I want to invent: I hope it will be dramatically better than what we do today.
[caption id="" align=“alignright” width=“240” caption=“Steel pipes (by monkeyc.net)”][/caption]
The term “Continuous Integration” first came into wide discussion when Extreme Programming was starting to garner interest in the late 90s. Since then, it has gone from being a manual process used by top-notch teams to being an automated, nearly ubiquitous one. The tools have evolved from home-grown scripts, through demanding tools like CruiseControl, to user-friendly tools like Hudson. After Hudson, is there still any radical change in store for us?
Somewhat independently of the evolution of Continuous Integration tools, four trends have developed over the last few years:
- Continuous testing: Tools like autotest for Ruby and Kent Beck’s JUnit Max for Java execute your tests after every change you make to the source code. Autotest is widely used within the Rails community, and even though JUnit Max did not take off the way Kent was hoping, I think there’s still great potential in this area. I’ve used both tools, and they transform the way I work for the better. (A minimal sketch of the idea follows this list.)
- Distributed source control: This greatly increases our flexibility in how we handle multiple sources and stages of source code. Git in particular has seen growing interest over the last two years, and GitHub is quickly becoming one of the largest project hosting providers.
- Continuous deployment: Organizations have started pushing the result of their continuous integration process further towards production. In the last three years, I’ve worked on two large projects, both of which deploy every build to a test server. The company IMVU, with its large customer base, deploys automatically into production roughly 50 times per day.
- Smaller checkins: In the latest episode of The Agile Toolkit, George (no last name given in the podcast or notes) suggests checking in every time your build is green. I’ve never worked on a project like that, but I’ve experienced a gradual increase in how frequently we check in.
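To make the continuous testing trend concrete, here is a minimal sketch of the idea behind tools like autotest and JUnit Max: watch the source tree and re-run the test suite whenever a file changes. It is written in Python purely for illustration; the file pattern, the tests/ directory and the polling interval are assumptions, not details of any particular tool.

```python
# continuous_test.py - a bare-bones illustration of continuous testing:
# poll the source tree and re-run the tests whenever a file changes.
import os
import subprocess
import time


def snapshot(root="."):
    """Map every Python source file to its last-modified time."""
    times = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".py"):
                path = os.path.join(dirpath, name)
                times[path] = os.path.getmtime(path)
    return times


def watch(interval=1.0):
    """Re-run the (fast) test suite every time a source file changes."""
    previous = snapshot()
    while True:
        time.sleep(interval)
        current = snapshot()
        if current != previous:
            previous = current
            subprocess.run(["python", "-m", "unittest", "discover", "tests"])


if __name__ == "__main__":
    watch()
```

Real tools are smarter about selecting and ordering the tests to re-run, but the feedback loop is the same: save a file, see the result.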
[caption id="" align=“alignleft” width=“240” caption=“Complexity (by nerovivo)”][/caption]
If we extrapolate from these trends, where do they lead? Here is what I think will be the development experience of advanced teams in the future:
- Whenever I save a file, my (fast running) tests are run in the background.
- When all the tests pass, my changes are pushed up to my personal clone of the repository. A first-stage continuous integration server listens for changes from all the developers’ repositories. When it has verified the tests, it pushes the changes to the integrated repository. (A sketch of this step follows the list.)
- Every few minutes, my workspace is updated to reflect new changes from other developers in the integrated repository.
- From the integrated repository, similar build processes propagate code changes through slower, and possibly even manual, tests. The verified result is stored in the staging repository.
- At the push of a button, I can roll the code from the staging repository into any test or production environment.
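The first two steps could be wired together with something as simple as a Git hook. The sketch below is hypothetical, not a prescription: it assumes the fast tests live under tests/ and that the remote named origin is the developer’s personal clone; the real plumbing would depend on the team’s setup.

```python
#!/usr/bin/env python
# .git/hooks/post-commit - a sketch of "a green build pushes itself".
# Assumptions: the fast test suite lives under ./tests, and the remote
# "origin" is the developer's personal clone of the repository.
import subprocess
import sys

# Run only the fast tests; slower suites belong to later pipeline stages.
tests = subprocess.run(["python", "-m", "unittest", "discover", "tests"])
if tests.returncode != 0:
    print("Tests failed - keeping the commit local.")
    sys.exit(0)

# Green: push to the personal clone. The first-stage CI server watches
# these clones and promotes verified changes to the integrated repository.
subprocess.run(["git", "push", "origin", "HEAD"])
```

The point is not the hook itself, but that each stage of verification can promote the code to the next repository without anyone lifting a finger.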
Sounds far-fetched? Vincent Massol wrote about unbreakable builds five years ago. Distributed version control is being adopted quickly and will greatly simplify the implementation of such processes. Despite Kent Beck’s regrettable decision to stop active development of JUnit Max, I believe the time for continuous testing is near. The process I outline can include as many verification steps as it takes to make the organization comfortable. As the trend of improving test quality continues, this process will gradually become more automated.
The strange thing is that we’ve almost come full circle: before the widespread use of revision control, many developers would edit the code directly in their production environment. Extreme Integration will feel almost like this, but with enough non-intrusive verification to make even the most paranoid test manager happy.
Thanks to Martin Eggen for digging up the information on IMVU’s Continuous Deployment. Thanks to Sarah Brodwall, Trond Pedersen and Finn-Robert Kristensen for helpful comments.
Comments:
[gavinclarkeuk] - Aug 18, 2009
Much as I like it, this approach still allows people to easily push code out to production without writing tests for it first (as already mentioned). I wonder if there is a way to automate forcing people to write the tests first?
You could check code coverage before checking in, but I don’t think that goes far enough. As we all know, coverage != tested.
I’m thinking you might be able to write an issue tracking system which requires acceptance tests to be defined for each issue, and then do some validation when you check in code that ensures the code you check in is covered by the relevant acceptance tests (linking with an issue id in the checkin comment). It would need to allow changes to existing acceptance tests too.
This doesn’t work for refactorings though, as you are relying on the existing tests, not new ones. Perhaps we also need a tool to do some automated invariance testing - just auto-generate tests that run on the old and new version of a class/module and check no behaviour has changed between revisions (remember artifactory?). Any API or behaviour changes would have to have a new or modified acceptance test anyway.
The more I look at this space the more I think developers need tools which force them to do the right thing, not just allow them to do it.
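[Editor’s aside: the coverage gate mentioned above could be wired into a Git pre-commit hook along these lines. This is a hypothetical sketch, not something from the discussion: it assumes coverage.py is installed, the tests live under tests/, and the 80% threshold is an arbitrary example (and, as noted, coverage != tested).]

```python
#!/usr/bin/env python
# .git/hooks/pre-commit - hypothetical coverage gate. Assumes coverage.py,
# a ./tests directory, and an arbitrary 80% threshold.
import subprocess
import sys

# Run the test suite under coverage measurement.
if subprocess.run(["coverage", "run", "-m", "unittest", "discover", "tests"]).returncode != 0:
    sys.exit("Tests failed - commit rejected.")

# Fail the commit if total coverage drops below the threshold.
if subprocess.run(["coverage", "report", "--fail-under=80"]).returncode != 0:
    sys.exit("Coverage below 80% - commit rejected.")
```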
jhannes - Aug 15, 2009
Thanks for insightful comments and links. The Enterprise Continuous Integration chart was interesting. Do you have more information on the specifics of the Reporting row?
I agree that testing is the big barrier. It is what requires improvements in practice, not just tools, and that’s always harder.
I’d also like to hear your thoughts on the auto-checkin and checkout ideas I outlined.
[JeffreyFredrick] - Aug 15, 2009
If you mourn the loss of JUnit Max you should check out Infinitest: http://infinitest.org
I don’t think your vision of the future sounds far-fetched at all. A few years ago I was on a project where we used David Saff’s experimental continuous test runner and I thought it was fantastic. We also had very small checkins from the policy of competitive commits: each pair tried to check in more frequently to make merging the other pair’s problem!
However, I do think the idea of abundant automated tests will be the last element to go mainstream in practice, even later than distributed source control. Most of the teams I see actually do far less automated testing than you’d think, and often even less than they think (they assume someone else is writing more tests than they are).
One element of this kind of environment you didn’t mention is the reporting aspect. When you have all these actions happening automatically it is much easier to know which builds are where, what the differences are between each build, etc.
Other than that I think your description fits very well with what Eric and I were describing as “Enterprise Continuous Integration”. Just today in IM I’d said that at the limit our Elements of ECI (http://www.anthillpro.com/html/resources/elemen…) would become Continuous Building, Continuous Deploying, Continuous Testing and Continuous Reporting, which is an idea I plan to develop further.
jhannes - Aug 12, 2009
Thanks for your comments and insights, Bjørn. I should examine TeamCity more. Probably not to use it, but to see what it feels like.
[thommyb] - Aug 11, 2009
It’s kind of ironic that the people who demand quality also often refuse to pay the bill for the extra hardware needed to support that wish.
jhannes - Aug 12, 2009
No sweat! We can fix the above description with very little hardware. At least if you seize control of the app server.
[bjerkeli] - Aug 11, 2009
Some interesting views here as always, Johannes.
Funny that you are pointing out that we are really heading back to where we came from. In 1996, the deployment step to production was to do a cvs up in the source directory where the Perl scripts accessed by mod_perl or mod_cgi were placed. Immediate upgrade and deployment in seconds.
There were a lot of things that could go wrong; continuous staging and testing was a different story back then. What I think we could learn from these practices is that you can build and deploy systems while avoiding all those tedious package-deploy-restart cycles that both complicate the process and increase lead time to production. Add the staging and testing that you describe in your article to a system where immediate deployment is possible with minimal orchestration, and I hope that will be the direction of the future.
The stepwise integration that you are talking about has been provided in products like TeamCity for quite a few years, although I am not familiar with widespread deployment of this product. I fully support your viewpoint that distributed source control is the key here. The functionality needs to be intrinsic to the SCM, not in a product put on top.
Often I find IT departments standing in the way when we ask for staging, test and acceptance infrastructure that is in the hands of the developers. If you get that problem out of the way, and in addition open-source code that you would normally store in closed in-house repos, you don’t even need to have a repository internally. Some of the answers to our problems with getting the equipment we need to facilitate these steps might be found in the cloud, both as a service and as a platform; configuring and provisioning a new staging environment might be done in minutes.
[Whatever] - Jan 5, 2011
JUnit Max is back: http://www.junitmax.com/PressRelease9.2010.pdf
Johannes Brodwall - Jan 5, 2011
So it is. I’m having a hard time using it with a project that has a mix of unit and integration tests, though.
Johannes Brodwall - Jan 5, 2011
Thanks for the heads up. :-)
~Johannes