We have a build server and we practice continuous integration on all of our projects. In fact, it’s pretty much the first thing we set up after version control. We’re feedback junkies. It became especially apparent while working on a client project last year where we used their development infrastructure. They had a build server running, but the problem was that it took too long to get feedback. I was lost and jonesing. They had a monolithic build that took about 45 minutes to run its course and give me the affirmation I was seeking.
While setting up my current project, I decided to take a different approach when configuring our build server and continuous integration, with fast feedback as the primary goal.
I split our build into three plans: a continuous integration build, a functional test build, and an acceptance deployment.
The continuous integration build resets the unit test database (using c5-db-migrations), compiles the project, runs all of the unit tests, and, if there are no errors or failures, produces a war. The war is installed locally with Maven so that it's accessible to other processes at a known location. This build is very fast and is triggered on every Subversion commit. The command used for this build plan is:
mvn db-migration:reset clean install
The functional test build resets the functional test database, deploys the war built during the continuous integration build to Tomcat using Cargo, runs all of the functional tests, and shuts down Tomcat. This build is triggered on every successful continuous integration build (i.e., as a dependent build). A very short script performs the work of this build:
mvn -Pdev db-migration:reset
cd functional-tests; mvn -Pdev clean test-compile cargo:start surefire:test cargo:stop
And here’s the cargo-maven-plugin configuration:
<plugins>
  ...
  <plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven2-plugin</artifactId>
    <version>1.0-beta-2</version>
    <configuration>
      <wait>false</wait>
      <configuration>
        <deployables>
          <deployable>
            <groupId>com.acme</groupId>
            <artifactId>acme-web</artifactId>
            <type>war</type>
            <properties>
              <context>acme-web</context>
            </properties>
          </deployable>
        </deployables>
      </configuration>
      <container>
        <containerId>tomcat6x</containerId>
        <zipUrlInstaller>
          <url>http://www.apache.org/dist/tomcat/tomcat-6/v6.0.18/bin/apache-tomcat-6.0.18.zip</url>
        </zipUrlInstaller>
      </container>
    </configuration>
  </plugin>
</plugins>
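As an aside, the script-driven cargo:start and cargo:stop invocation above isn't the only style Cargo supports. A sketch of the standard alternative (this is an assumption about general Cargo/Maven usage, not our project's actual configuration): bind the plugin's start and stop goals to Maven's pre- and post-integration-test lifecycle phases, so a single Maven invocation manages the container around the tests.

```xml
<!-- Hypothetical alternative: let the build lifecycle manage the container
     instead of naming cargo:start/cargo:stop on the command line. -->
<executions>
  <execution>
    <id>start-container</id>
    <phase>pre-integration-test</phase>
    <goals><goal>start</goal></goals>
  </execution>
  <execution>
    <id>stop-container</id>
    <phase>post-integration-test</phase>
    <goals><goal>stop</goal></goals>
  </execution>
</executions>
```

One reason you might prefer this: in the command-line form, a test failure aborts the Maven invocation before cargo:stop runs, which can leave a Tomcat instance running on the build agent.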
The acceptance deployment step doesn't build anything or run any tests, but it is a little more complicated than the others because it interacts with another machine: our dedicated project acceptance server. We scp the war (built during the continuous integration phase) to our acceptance server. Next, we shut down Tomcat and clean a few things up (logs, the old webapp, and the work directory). Then we migrate the database (no reset, because we care about the data). Last, we bring Tomcat back up with the new war. This build is triggered on every successful functional test build. Here's the script that's run by the build server:
ARTIFACT_NAME=./ROOT.war

# SCP Application and Scripts
WAR=`find ~/.m2/repository/com/acme/acme-web -name "*-*.war" | sort | tail -1`
echo "Copying $WAR to acceptance server."
scp $WAR acme@acme-acceptance:$ARTIFACT_NAME
scp ./bin/*_as.sh acme@acme-acceptance:.

# Shutdown and Clean Tomcat
ssh acme@acme-acceptance sh ./shutdown_as.sh

# DB Migration
mvn -Pdev db-migration:migrate -Djdbc.host=acme-acceptance

# Install Application
ssh acme@acme-acceptance mv $ARTIFACT_NAME ./apache-tomcat/webapps/

# Startup
ssh acme@acme-acceptance sh ./startup_as.sh
This has worked out really well for the project and we get feedback very quickly.
You may wonder why we broke functional tests into their own plan. I find that functional tests can be a little less stable than unit tests (especially if Selenium is involved), and they run much more slowly. I've seen cases where flaky functional tests caused a team to start ignoring build results, because the failures were usually a problem with the infrastructure, not with the code. So the decision was somewhat defensive and, in retrospect, probably unnecessary.
We've also spent less time maintaining our build plans than in the past. At some point, our build server's MySQL instance crapped out, and even though all of the databases were deleted, our builds all ran successfully when the build server came back up: each one starts with a database reset, which creates the target database and migrates it to the latest schema.
A single war is promoted as it passes a greater level of testing, and is eventually deployed to the acceptance server if all of the tests pass. While we’re saving a little time by not rebuilding the archive for each plan, that’s not the only thing I like about it. It just feels a little more right and it completely eliminates the chance that something about the artifact changes as it makes its way through the pipelines. The same war can make its way from the first CI build all the way to production. This is possible because we include default configuration for the application which matches our development environments, and then provide a mechanism for externalizing application configuration for the one-off environments.
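To make the "default plus externalized configuration" idea concrete, here's a minimal sketch (the class, keys, and file locations are hypothetical illustrations, not our project's actual code): properties bundled in the war supply the development defaults, and an optional external file, if present, overrides them for one-off environments.

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.io.InputStream;
import java.util.Properties;

public class AppConfig {

    // Load bundled defaults, then overlay an optional external file.
    // Keys present in the external file win; everything else keeps its default,
    // so the same war runs unchanged in every environment.
    public static Properties load(InputStream defaults, File external) throws IOException {
        Properties props = new Properties();
        props.load(defaults);
        if (external != null && external.isFile()) {
            InputStream in = new FileInputStream(external);
            try {
                props.load(in); // later load overwrites matching keys
            } finally {
                in.close();
            }
        }
        return props;
    }
}
```

The key design point is that the artifact never changes: only the external file (deployed alongside Tomcat, not inside the war) varies per environment.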
I think most modern build server software provides everything you need to do something like this, as it's rather straightforward. However, for those who are curious, we have been using Bamboo for the last year or two and recently installed TeamCity so that we can give it a proper try. Both are great products, and if you're on a smallish team, TeamCity is completely free (and is superior to the open source alternatives, IMHO).
How are you using your build server?
Christian is a software developer, technical lead and agile coach. He's passionate about helping teams find creative ways to make work fun and productive. He's a partner at Carbon Five and serves as the Director of Engineering in the San Francisco office. When not slinging code or playing agile games, you can find him trekking in the Sierras and playing with his daughters.