Getting Started with Subversion or Git

September 17, 2010

A guest post by Mark Bathie, CTO of Codesion – a provider of version control hosting in the cloud.

For most software developers, the question is not whether to use version control but which version control tool to use. The most widely adopted is Subversion, but Git is the fastest growing.

Both are good, popular choices, but there are some core differences. Subversion, a centralized version control system, forces all developers to commit to a central server, whereas Git, a decentralized system, encourages many-to-many merging. For example, with Git you can merge with a fellow developer's repository before merging with a central node (if there is one). There are other key differences, which I explore in this blog post and summarize below.

Subversion

Pros

  • Centralized control, meaning it's easier to keep one official history. Good for backups, accountability and security.
  • Security: Native subdirectory-level user access restrictions
  • Branches are easy and stored centrally
  • Gentlest learning curve for newbies, and operations like branching become quite intuitive as you progress
  • Ability to lock files, ensuring that others don't work on the same binary assets (such as documents and media files) simultaneously
  • Widest adoption amongst developers
  • History of changes can’t be modified
Cons

  • Working offline means you can't commit changes to the centralized repository until you have an Internet connection, although this does not prevent developers from working on their local copy.
  • Generally slower than Git, since most operations need to synchronize with the central server, especially when merging large branches
  • Developers tend to commit only finished work to the shared central repository, so the recorded history preserves only their successes, not their failures and intermediate steps

Git

Pros

  • Decentralized model means many operations are faster, and makes offline work more practical
  • More merge options, and merges are generally faster than in Subversion
  • Branching is easier and faster
  • Comes with many tools like ‘bisect’ that can partly automate the process of finding where a bug was introduced
Cons

  • The local copy model means you need to be more careful about backing up your own workstation
  • The large set of commands can be esoteric and present a steeper learning curve
  • There is no file locking, which can cause issues with binary files
  • History can be changed; being able to edit past commits might not leave the strong paper trail you'd prefer

Now that you've chosen your version control system (or your team has chosen for you), you'll want to get up and running quickly. Here are some of the basics you'll need to check out and commit to a Subversion or Git repository.

If you're creating a new project from scratch, you'll need to add some files to a fresh repository. The first step is to create the blank repository. You may need to talk to your sysadmin to set this up for you, or use a cloud provider like Codesion. Once you have your repository access URL, check out your files.

Checkout and Commit Using Subversion

Check out your repository from the server to your local computer (your sandbox).
$> svn checkout --username mbathie https://myorg.svn.codesion.com/mywidget

Now change into the directory and add a file.

$> cd mywidget

$> echo 'echo "hello world"' > hello.bash

Tell Subversion it should include this file (and any others) in the next commit, and in the repository from now on. Do this by using the “svn add” command.

$> svn add *

A    hello.bash

Then commit your new files to the server.

$> svn commit -m "adding my first file"

Adding         hello.bash

Transmitting file data ..

Committed revision 1.

Checkout and Commit Using Git

Check out (clone) your repository from the server to your local computer (your sandbox). Git will prompt for your password and then display something like the following.

$> git clone https://mbathie@myorg.git.codesion.com/mywidget.git

Initialized empty Git repository in /mywidget/.git/

Password:

Change into the directory and add a file.

$> cd mywidget

$> echo 'echo "hello world"' > hello.bash

Tell Git it should include this file (and any others) in the next commit, and in the repository from now on. Do this by using the “git add” command.

$> git add hello.bash

Commit the file to your local Git repository.

$> git commit -m "my first commit" hello.bash

Share your changes with other team members. In this case, we’ll push the changes to the designated central repository.

$> git push origin master

The remote Git server will respond with something like the following.

$> git push origin master

Password:

Counting objects: 3, done.

Writing objects: 100% (3/3), 228 bytes, done.

Total 3 (delta 0), reused 0 (delta 0)

To https://mbathie@myorg.git.codesion.com/mywidget.git

* [new branch]      master -> master

Joining an existing project is essentially the same process as creating a new one, except that when you perform the initial checkout or clone operation, you'll be pulling down the files relevant to the project you are joining rather than checking out a blank repository. You can then use the same commands above to add, commit and push your changes and share them with other team members.

MikeCI and Codesion Integration

September 13, 2010

Here at MikeCI we're in the process of integrating with Codesion's SCM, allowing users to access Codesion with a couple of clicks of the mouse and giving them a Codesion repository for either Subversion or Git.

Codesion Integration

We are delighted to have started the integration and we’re looking forward to future developments with Codesion and the SaaS community.

With SaaS specialists offering professional services at affordable prices, we are starting on a journey to offer a consortium of services across your entire stack, allowing small businesses and SMEs to benefit from enterprise-level services at a fraction of the cost.

We feel that joining forces with other SaaS providers allows you to get the biggest bang for your buck.

As we look to complete the integration with Codesion we are excited about the potential conglomerate we are creating.

Both Codesion and ourselves are excited by the potential new offering: you'll be able to access award-winning services from different providers in what is essentially a partner program.

Our tech lead Rob Knowles managed to shed some light on the integration and let us know what he is hoping to achieve over the long term.

Rob says: “Complementary SaaS providers are a great way to help get referrals as well as offer a great service to customers. This sign-up process gives SMEs a simple setup in just a couple of clicks”.

With the cloud becoming the development environment of the future, it is no surprise that more developers are placing more of their stack in the cloud. Rob adds: “As stacks become increasingly complex, with developers creating more sophisticated builds, it makes sense to utilise the cloud accordingly: you can write your code in the cloud, save it, compile it, test it, bug track, and deploy, simply by using hosted services.”

We’ll keep you up to date with the progress and look forward to full integration in the upcoming weeks.

Try MikeCI Today For FREE with a 30-day trial.

Underdoing The Competition

September 6, 2010

Hosted, low-cost, internet-based services for software developers have been in existence since the formative years of the web itself. In their earliest incarnation, such services would usually involve obtaining some real estate on an Apache web server, accessible via FTP, with the ability to upload a web application and invoke some server-side technology (Perl/CGI hacking anyone?) to create the latest and greatest dot-com experimental idea for a business. Or maybe just a simple website.

Zoom forward a few years and these services start to encompass functionality that provides a platform for the activity of development itself – for example, hosted version control, issue tracking and task management applications. Now smaller web-based development shops, who are already used to renting infrastructure for their production environments from a hosting company, start to use these complementary services – particularly when team members are not co-located. In fact, developers soon discover that these tools promote a model that supports geographically dispersed teams, and these ideas really begin to take root, like much technology these days, amongst the open source software community. The likes of SourceForge, for example, begin to acquire many thousands of projects, both large and small, due in part to providing accessible, web-based collaboration services for development. For private projects, pay-for services such as those provided by CvsDude (now Codesion) and Collabnet start to combine the power of the ideas taking root in the open source world with the industrial-strength security and data protection required by larger IT businesses.

And here we are today, with a proliferation of Software-as-a-Service (SaaS) companies that offer a full range of services along the application lifecycle management continuum. Certain types of services lend themselves well to the SaaS model – the barrier to entry is relatively low, especially when the tool or service is already web-enabled. An example is hosted bug tracking solutions. These are implicitly suited to the multi-tenant application model used by most SaaS businesses. They have a very predictable resource consumption profile and a security model that easily supports multiple users. Data protection and recovery can usually be handled using industry-standard practices for RDBMS management. There are a lot of businesses that offer these types of services.


When we look to other services along the ALM continuum, hosted SaaS offerings are few and far between, an example being services that support Continuous Integration. Hopefully, by now, all thinking software engineers believe that CI is integral to the practice of modern software development. In fact, I'd go further: anyone who thinks otherwise is not fit to call themselves a (thinking) software engineer. You've no excuse, really, and if you don't embrace its benefits then CI is likely to be imposed on you anyway.

So, it's 2009, you have your hosted low-cost SCM system, you've got your cheap web-based bug tracker and your rented deployment environment (still Apache, or maybe Google App Engine?). You search for “hosted continuous integration” and hmmm…no dice. “Why does nobody do this stuff?” you ask, “Didn't I read somewhere that it's ‘integral to the practice of modern software development’?”. I know the answer – a simple reason really – a hosted CI service is a damn hard thing to deliver.

For a hosted solution to be low-cost and truly multi-tenant it needs to be very efficient with resources. This usually involves sharing such resources amongst many users, a model which does not easily lend itself to CI. Secondly, a hosted solution must be secure, especially with respect to user data. User data in the world of CI includes source code, build metrics and the built artefacts themselves. Reconciling resource optimization with data security is the biggest challenge here. To illustrate this point, if such a solution provided every user with a dedicated, isolated, physical environment for their builds it would need to incorporate the associated costs into the offering. This would quickly place the service into the realms of ‘expensive luxury’ for our small agile team example. As a comparison, consider Codesion's ‘Team Edition’ for hosted Subversion, which starts at $6.99/month. To be competitive, the CI provider has to get very clever with shared resource optimization while still ensuring data protection and security for its users.

In addition to data protection and resource management, there are further security concerns relating to what a build can be permitted to do in such an environment. This is rarely a concern for an on-premise CI server. Want to scan for available ports, open particular sockets or start and stop certain daemon processes? No problemo. However, I'm afraid that in the shared real-estate scenario there are some necessary limitations on what build tools might be able to do. This is something of a compromise that a user has to take on the chin if they wish to use a low-cost hosted solution. If their build process requires a bunch of ‘non-orthodox’ things to happen, they need to understand that these types of advanced builds will never play well in a shared environment. But…if your project conforms to the standard life-cycle of ‘prepare-compile-test-package’ then this model fits much better and probably covers the majority of day-to-day software builds. This is just simple Pareto principle logic really – a low-cost, shared hosted solution can only realistically cover the 80% well and assume that it's not the right choice anyway for the remaining 20%.

So we’ve articulated some of the challenges of establishing a hosted CI offering, but what are the potential benefits that such a solution might provide to end-users? These may seem obvious, but are worth stating:

  • Zero-cost initial implementation for CI infrastructure.
  • Lowers the barrier to entry for CI – simply log in, configure your project(s) and use it.
  • Significantly reduced software and hardware maintenance costs. Less software hosted internally means less hardware required, which also translates into a reduced burden of patches and patch support complexity.
  • Reduced staffing (sorry guys/gals).
  • Better use of skills/resources.
  • Pay for what you use and no more. Subscription models for SaaS typically allow for cancellation given a month's notice. Try-before-buy is very common too.

One final consideration is the range of features that hosted solutions offer when compared with the wide variety of open source and proprietary CI servers available today. There are some very slick options out there that you can download, install, configure and use. At the moment, the available low-cost multi-tenant hosted CI solutions will not win in a feature beauty contest against these tools. However, in my humble opinion this new breed of services offers something quite different, something that sets them apart. To bend a phrase from Bill Clinton: “It's because they're hosted, stupid!”. Personally, I'm a firm believer in the idea that you should (to borrow a concept from 37signals) ‘under-do the competition’. Hosted CI should solve the simple problems and leave the hairy, difficult, nasty problems to everyone else. Instead of one-upping, it should one-down. Instead of outdoing, it should under-do. Hosted CI should stick to what's truly essential.

What’s it Like Using Hosted Continuous Integration?

August 19, 2010

We’re always keen on finding out what people think about using MikeCI.

With hosted continuous integration being a fairly new concept, we are venturing into new territory and want to ensure that we are heading in the right direction. As a result we've been keen to hear our clients' feedback to see how they've found their MikeCI experience.

Lead developer Alex Bertram kindly took some time out of his busy schedule in The Hague, Netherlands, to talk about how MikeCI is being used in his Sigmah (previously ActivityInfo) project.

What are you working on?

Project Sigmah is an information management platform designed for humanitarian organisations and was originally developed for UNICEF but has since expanded. The platform allows users to create their own databases online and track, map and analyse performance indicators over time.

Why did you choose MikeCI?

“There aren't too many choices out there and MikeCI offered the strongest support for our technology stack. Our stack consisted of Subversion SCM, Java and Maven, which MikeCI supported. The great Maven integration provided a simple, straightforward environment in which to build our project.”

How has your experience been with MikeCI?

“It's become pretty key over the last month. Another organization won a grant to expand ActivityInfo (now Sigmah) and hired a team in Paris to add the components they wanted. As we continue development here in the Netherlands for the original client, MikeCI has become the referee that keeps the two teams from stepping on each other's toes.

With geographically dispersed teams, we needed a CI server that could be accessed from any location and allowed multiple teams to submit code.

I’ve set up my email account to forward all reports from MikeCI to our development mailing list, so everyone is getting a pretty constant stream of activity with updates and code reviews from SVN, build reports, etc. It’s nice.

We are really embracing cloud computing solutions by using and developing SaaS software. Our team uses Google Apps, ProjectLocker for SVN, Amazon EC2, Google App Engine and of course MikeCI.”

How could MikeCI be improved?

“For me the next great thing would be making the results of the build available. It would be awesome to be able to link to the most recent javadocs or code coverage reports, or just a big widget for the forge site.

Also having some sort of API would be terrific as this would help us automate releases and deployment to the server, which we’re trying to do every 2-3 weeks.”

MikeCI – Developing an API has always been on our agenda and is certainly something we would look to include on our roadmap. Similarly, making builds available publicly is something we would like to include somewhere down the line.

A big thanks to Alex for his input; we wish him every success and many happy builds to come.

If you would like to share your experiences with MikeCI or try MikeCI with a free 30-day trial, then email mike@mikeci.com or register here today.

Try MikeCI Today

Continuous Integration for Agile Project Managers (Part 3)

June 15, 2010

In part 1 of this series, I hopefully provided you with an introduction to Continuous Integration (CI) and an overview of the building blocks of the CI process.

In part two, we introduced the concept of a CI life-cycle and how to bind software quality checks to discrete life-cycle ‘phases’.

In this final part, I’ll attempt to explain how you can use CI to run functional tests early in your development process and show how this helps us to promote a software release on its journey from development to production. I’ll then try to condense the three parts of this series into a handy diagram.

Automation, Automation, Automation

It should hopefully be clear by now that a successful CI implementation is largely dependent upon other processes that can be automated. For example:

  • checkout or update of source code by a version control tool
  • compilation of project source code by a build tool
  • execution of source code checking by a static analysis tool
  • unit test execution by a unit test harness or runner

Therefore, it seems reasonable that if we can automate a software development task in a way that is deterministic and repeatable then it can be executed during our CI process. The intention is to remove human error, have these tasks run in the background and, in the event of a ‘failure’, be notified by the CI server. Some more advanced candidates for automation might include:

  • provisioning of a clean, known harness or environment into which we deploy a release for the purposes of functional or acceptance testing
  • starting and stopping services that are required for our functional tests and population of any test data dependencies
  • execution of the functional tests

Functional Testing and CI

OK, so how do you go about adding these additional capabilities? Well, creating an automated suite of functional tests that can be triggered during the CI process is a non-trivial task.

Let’s take a look at the main areas for consideration.

Test Fixtures

In addition to writing the tests, you also need to carefully consider your test dependencies (also known as ‘fixtures’), such as any test data you might need in a database for example. Some code frameworks, like Ruby on Rails, encourage development teams to consider these test fixtures early on in a software development project. Outside of such frameworks, it is more common to find ad-hoc processes. The key thing is to ensure that your dependencies, such as a database, are in a known state prior to test execution and that the configuration or population of dependencies is entirely automated. Here, we can again apply the concept of a CI life-cycle to this particular scenario. We will need to bind the activity of test data population to a life-cycle phase prior to the execution of functional tests. We may also need to create a subsequent binding after test execution to ensure that we ‘tear down’ the test data and keep everything in a known, consistent state.
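
As a simple illustration (no particular framework implied – the JDBC URL, credentials and table are purely hypothetical), a fixture helper that populates and tears down a known set of test data might look like the sketch below; its setUp and tearDown methods would then be invoked from the life-cycle phases immediately before and after your functional tests.

 import java.sql.Connection;
 import java.sql.DriverManager;
 import java.sql.SQLException;
 import java.sql.Statement;

 public class TestDataFixture {

 	// Hypothetical JDBC details for a dedicated, throwaway test database.
 	// Assumes a JDBC 4 driver (e.g. HSQLDB) is on the test classpath.
 	private static final String JDBC_URL = "jdbc:hsqldb:mem:testdb";

 	// Populate the database with a known set of data before the functional tests run.
 	public static void setUp() throws SQLException {
 		Connection conn = DriverManager.getConnection(JDBC_URL, "sa", "");
 		try {
 			Statement stmt = conn.createStatement();
 			stmt.execute("create table users (username varchar(50), password varchar(50))");
 			stmt.execute("insert into users values ('j2ee', 'j2ee')");
 		} finally {
 			conn.close();
 		}
 	}

 	// Tear the fixture down so every build starts from the same clean state.
 	public static void tearDown() throws SQLException {
 		Connection conn = DriverManager.getConnection(JDBC_URL, "sa", "");
 		try {
 			Statement stmt = conn.createStatement();
 			stmt.execute("drop table users");
 		} finally {
 			conn.close();
 		}
 	}
 }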

Test Runner

Obviously, we require a tool that can execute functional tests in an automated fashion. Furthermore, there needs to be a way to trigger its execution at an appropriate stage in the CI process. Depending upon the nature of the application you are developing, there are a number of choices. You can find a useful list of GUI testing tools here. The majority of projects that I have worked on in recent times have been web-based, and we have used Selenium extensively.

Selenium is open source, and allows you to test your app in multiple browsers. It also has bindings for lots of different programming languages, including a basic ‘dialect’ of HTML markup that is accessible to non-programmers (this type of thing is more formally referred to as a Domain Specific Language or DSL). It also comes with a browser plug-in that allows you to record and replay your tests. For the more technically-minded amongst you, I've already blogged in some detail about how we use Selenium on Mike and included an example project that can be checked out from Google Code.

Test Environment

The third piece in our jigsaw is the environment within or against which the functional tests will run. Environments are often overlooked where automation is concerned, but it is very common to experience issues during CI that are a result of an inconsistent or poorly-configured environment. In my opinion, you should consider environments a ‘first-class concept’. They should be treated like any other artefact of the development process – configuration-managed, reproducible and, ideally, verified or unit tested.

In addition, keeping the development environments in a form which mirrors a ‘cut-down’ copy of production really helps when promoting your application through the various test environments it may encounter on its journey into the live estate. Hopefully this should help you avoid the “Well, it works on my machine….” discussion that frequently happens in dev shops where environments are poorly controlled and managed.

If we have automated the ability to provision or script the creation and tear-down of an environment, then it should not surprise you that we can harness this within our CI life-cycle. For example, with enterprise Java (JEE) projects you can use tools such as Cargo that allow you to provision, start, deploy-to and stop your application server of choice. Cargo supports a wide variety of server environments, from open source, such as Apache Tomcat, through to Oracle WebLogic.

Wiring it Together

Ok. If you've been paying attention at the back throughout this series then hopefully the following diagram should make sense. Let's recap the steps 1 through 12 below that represent the essential elements of a CI process that can help your agile project to deliver a high-quality solution. Note that at any point past stage 3, the build may be set to fail, resulting in an email (or other, e.g. IM) notification being sent to your team.

Continuous Integration - A Summary of Steps

  1. Developers work to transform the requirements or stories into source code using the programming language of choice.
  2. They periodically check in (commit) their work to a version control system (VCS).
  3. The CI server is polling the VCS for changes. It initiates the build process when it encounters a change. The build is executed using a dedicated tool for the job such as Maven, Ant or Rake etc. Depending upon the language used, the source code may need to be compiled.
  4. Static analysis is performed on the source code, to ensure compliance with coding standards and to avoid common causes of bugs.
  5. Automated unit tests are executed.
  6. The percentage of the production code exercised by the unit tests is measured using a coverage analysis tool.
  7. A binary artefact package is created. At this point we might want to assist derivation and provenance by including some additional metadata with the artefact, e.g. a build timestamp, or the source code repository revision that was used to produce it.
  8. Prepare for functional testing by setting up the test fixtures. For example, create the development database schema and populate it with some data.
  9. Prepare for functional testing by provisioning a test environment and deploying the built artefact.
  10. Functional tests are executed. Post-execution, tear down any fixtures or environment established in steps 8 and 9.
  11. Generate reports to display the relevant metrics for the build. E.g. How many tests passed? What is the number and severity of coding standard violations?
  12. The process is continuous of course! So rinse…and repeat….

Kiln Repository Support

May 25, 2010

Today we are pleased to announce support for projects hosted within Kiln On-Demand repositories.

For those of you who are unfamiliar with Kiln, it is based upon the popular open source Distributed Version Control System (DVCS) Mercurial.

We have created a short screencast that demonstrates this new feature using the Java Petstore (built using Maven) as an example project.

You can also find additional information about our support for Kiln in our new, comprehensive online user guide.

Continuous Testing with Selenium and JBehave using Page Objects

May 6, 2010

Since Mike's inception we have always sought to automate as much of our testing as possible. For some time now we have been using Selenium for our functional/acceptance tests, and thus far have been very happy with this approach. Initially, we decided to use Selenese-style tests, as we felt this would enable non-programmers – anyone with a basic grasp of markup and enough ability to make small refactorings to the HTML generated by Selenium IDE – to help maintain and extend the coverage of our tests. However, as the range of features provided by our platform has started to grow, we have found that we are doing a lot of ctrl+c and ctrl+v of the Selenese scripts, and generally violating our DRY principles. After some internal debate, we finally settled on a new approach that adopts Behaviour Driven Development (BDD) techniques. This works nicely with our Agile, User Story based approach to development and (as you might expect) our advanced internal practices when using Continuous Integration for our own purposes.

BDD Framework Selection

The Mike codebase is predominately Java, so it seemed sensible to choose a Java-based BDD framework. We could have opted for something like the Ruby rockstars' fave Cucumber, which is now well-established, but in the end we decided upon JBehave. It got our vote for a number of reasons:

  • It’s an active project, with regular production releases.
  • It’s ‘maven-ized’ and you can bind a handy JBehave plugin to appropriate life-cycle phases
  • It provides a web testing sub-component that gives you a nice, simple abstraction for Selenium
  • Scenarios can be defined in plain-text, just like Cucumber
  • It integrates into an IDE just like any other xUnit-based testing framework (right-click > Run As > JUnit Test)

The plain-text scenarios were of particular interest, as they allow non-programmers to continue to author test flows for each story. On the downside, it does mean that only developers can provide the implementation of these scenarios. But overall, it provides a good fit for our team profile.

An Example

I’ll walk through an example of a simple JBehave BDD-style scenario, that seeks to test that perennial fave – the Java Petstore web application:


 Scenario: Successful Login

 Given the user opens the home page
 When the user clicks the enter store link
 Then the store front page should be displayed
 When the user clicks the sign in link
 Then the store login page should be displayed
 When the user enters username j2ee
 And the user enters password j2ee
 And the user clicks the login button
 Then the store front page should be displayed for user ABC

This combination of ‘Given’, ‘When’ and ‘Then’ maps very nicely to a test context, event and expected outcome for each of the various pathways through a User Story.

So now that we have our scenario, stored in a text file named ‘login_scenarios’, using JBehave we need to create two additional classes. These are:

  1. a trivial subclass of org.jbehave.scenario.Scenario whose name maps to the associated text file (LoginScenarios.java)
  2. a subclass of org.jbehave.web.selenium.SeleniumSteps (LoginSteps.java) that provides an implementation for each of the ‘Given’, ‘When’ and ‘Then’ statements.

For example:


 @Given("the user opens the home page")
 public void theUserOpensTheHomePage(){
 	homePage = new HomePage(selenium, runner);
 	homePage.open("/jpetstore");
 }

Notice how JBehave uses simple annotations to map the scenario elements to Java methods. You’ll probably also notice the use of a ‘page’ object in the method body, which performs the actual heavy-lifting of the tests. In addition, the methods in the JBehave-provided base class SeleniumSteps can be overridden as required. For example, override createSelenium() if you need to provide a custom instance of Selenium with an alternative configuration.
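
To complete the picture, here is a minimal sketch (not lifted from the real project) of how a couple of the ‘When’ and ‘Then’ statements from the scenario above might be implemented in LoginSteps, assuming page object fields and methods along the lines described in the next section:

 @When("the user clicks the sign in link")
 public void theUserClicksTheSignInLink(){
 	// Hypothetical page object method: clicks the link and returns
 	// an object representing the login page.
 	storeLoginPage = storeFrontPage.clickSignInLink();
 }

 @Then("the store login page should be displayed")
 public void theStoreLoginPageShouldBeDisplayed(){
 	// Each page object knows how to verify that it is displayed.
 	storeLoginPage.verifyPage();
 }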

Page Objects

Within a web application UI there are areas that our tests interact with. Using the Page Object pattern allows us to intuitively model these pages as objects within the test code. This massively reduces the amount of duplicated code and means that if the UI changes, the fixes need only be applied in one place. This means we get our DRY testing mojo back after our experiences with copy ‘n’ pasted Selenese markup. To make things even simpler, it's a good idea to create an abstract base class (Page.java) that exposes a series of useful methods to its concrete children. Here's an example of a class that represents the login page of our demo Java Petstore app.


 public class StoreLoginPage extends Page {

	public StoreLoginPage(Selenium selenium, ConditionRunner runner) {
		super(selenium, runner);
	}
	
	@Override
	public void verifyPage() {
		textIsVisible("Please enter your username and password.");
	}
	
	public void verifyPresenceOfInvalidCredentialsErrorMessage(){
		textIsVisible("Invalid username or password. Signon failed.");
	}
	
	public void typeUsername(String username){
		type("//input[@name='username']", username);
	}
	
	public void typePassword(String password){
		type("//input[@name='password']", password);
	}
	
	public StoreFrontPage clickLoginButton(){
		clickButton("//input[@value='Login']");
		waitForPageToLoad();
		return new StoreFrontPage(selenium, runner);
	}
 }
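
The abstract Page base class itself isn't shown above, but a minimal sketch of one possible implementation follows. The helper method names (textIsVisible, clickButton and so on) are the ones used by StoreLoginPage; how they delegate to the Selenium RC API, the assertion style and the timeout value are simply assumptions for illustration.

 public abstract class Page {

 	protected final Selenium selenium;
 	protected final ConditionRunner runner;

 	public Page(Selenium selenium, ConditionRunner runner) {
 		this.selenium = selenium;
 		this.runner = runner;
 	}

 	// Each concrete page verifies that it really is the page being displayed.
 	public abstract void verifyPage();

 	public void open(String url) {
 		selenium.open(url);
 		waitForPageToLoad();
 	}

 	protected void textIsVisible(String text) {
 		// Assert that the given text is present somewhere on the page.
 		if (!selenium.isTextPresent(text)) {
 			throw new AssertionError("Expected text not visible: " + text);
 		}
 	}

 	protected void type(String locator, String value) {
 		selenium.type(locator, value);
 	}

 	protected void clickButton(String locator) {
 		selenium.click(locator);
 	}

 	protected void waitForPageToLoad() {
 		selenium.waitForPageToLoad("30000");
 	}
 }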

Once these page classes are wired into our SeleniumSteps subclasses, we can use the action (open, click, type etc) or verification style methods to drive the tests.

Maven Magic

As mentioned earlier, we selected JBehave in part because of its Maven integration. There is a plug-in you can configure to execute the tests during the required phase:


<project>
[...]
  <plugins>
  [...]
    <plugin>
	<groupId>org.jbehave</groupId>
	<artifactId>jbehave-maven-plugin</artifactId>
	<version>2.5.1</version>
	<executions>
		<execution>
			<id>run-scenarios-found</id>
			<phase>integration-test</phase>
			<configuration>
				<scenarioClassNames>
					<scenarioClassName>
						com.mikeci.jpetstore.test.scenario.LoginScenarios
        				</scenarioClassName>
				</scenarioClassNames>
				<classLoaderInjected>false</classLoaderInjected>
				<skip>false</skip>
				<ignoreFailure>false</ignoreFailure>
				<batch>true</batch>
				<scope>test</scope>
			</configuration>
			<goals>
				<goal>run-scenarios</goal>
			</goals>
		</execution>
	</executions>
	<dependencies>
		<dependency>
			<groupId>org.jbehave.web</groupId>
			<artifactId>jbehave-web-selenium</artifactId>
			<version>2.1.4</version>
		</dependency>
	</dependencies>
    </plugin>
  [...]
  </plugins>
[...]
</project>

Obviously, there are some additional services that we need when these scenarios are executed – the app must be deployed and available along with a running Selenium server. In the petstore example, the Maven Cargo and Selenium plugins are used for these purposes. Have a look at the full unexpurgated pom.xml to see how they are configured.

Running The Example

Prerequisites are a Subversion client, Maven 2.2.x, Java 1.6.x and Firefox installed. Localhost ports 8080 and 4444 need to be unoccupied too.

Check out the project:


~$ svn co https://mikesamples.googlecode.com/svn/trunk/maven/jpetstore-maven-multi-module/ jpetstore
~$ cd jpetstore

Run the Maven build:


~$ mvn clean install -Pmikeci,run-functional-tests,run-on-desktop

Et voila, this should build the JPetstore web app, deploy it to Jetty, start Selenium and run two JBehave scenarios against it. You could use this example to bootstrap your own JBehave/Selenium implementation and get started with BDD in no time at all.

Continuous Integration for Agile Project Managers (Part 2)

April 27, 2010

In part 1 of this series, I described the essential elements of Continuous Integration within the context of agile development and briefly discussed the software options for a CI server. I explained that the building blocks of CI are a version control system and a build management tool. The former creates the foundation, by giving the CI process access to the latest copy of project source code. The latter then takes the source code and, in most scenarios, transforms it into the deployable binary artefact that represents the software application being developed.

It is useful to consider this transformational process as a series of steps or life-cycle phases. By analyzing our build in this conceptual way, we can better understand how to ‘bind’ or attach actions to each distinct life-cycle phase. Why is this important? Well, using this simple idea I can demonstrate how you can hardwire quality checks into your CI process. Performing such checks, and publishing their results, means that we can really start to take advantage of the significant benefits CI can bring to your agile projects.

I actually prefer the term ‘quality gate’ as opposed to ‘quality check’ as it better fits the idea that you must pass through one gate before entering another. In each case, if we are unable to pass through the gate, we can choose to fail the build and our CI system should dispatch a notification to us. In addition, a good CI process will publish the metrics and ‘violations’ concerning each gate, so that they can be reviewed in detail after each build has occurred.

Build Life-Cycle Phases and Quality Gates

Phase 1 – Resolve component dependencies

Often our projects will have dependencies on other internal or third-party components. This is very common, for example, in the Java or Ruby programming language worlds where the re-use of packaged binary artefacts (JARs for the former and Gems for the latter) is considered best practice. One key attribute of a released component is its version (e.g. 2.1.1) and it is here we can apply our first simple gate – ensure that we are using the correct versions of our dependent components.

Phase 2 – Static Analysis of the source code

Source code inspection is (or should be) an integral part of your software development process. Often this requires human beings(!) who possess an understanding of the software requirements and application design to perform formal code reviews. However, we can automate much of this using tooling integrated into the build process. Some examples of inspection that we can perform are:

  • Adherence to organisational coding standards (e.g. all source files must start with a copyright statement, all functions should be commented)
  • Finding duplicated or copy-and-pasted code statements.
  • Finding garden-variety programming mistakes (a simple illustration follows this list).
  • Adherence to architectural design rules.
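
For instance (a deliberately contrived example of our own), a tool such as FindBugs will typically flag the classic garden-variety mistake of comparing strings with == rather than equals():

 public class StatusChecker {

 	// Bug: == compares object references, not string contents,
 	// so this check can fail even when the status text matches.
 	public boolean isActive(String status) {
 		return status == "ACTIVE";
 	}

 	// Corrected version, comparing contents with equals().
 	public boolean isActiveCorrectly(String status) {
 		return "ACTIVE".equals(status);
 	}
 }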

Phase 3 – Compilation of the source code

If you are writing software using a programming language that needs to be compiled, such as Java or C++, this is obviously a very basic and essential gate to pass through.

Phase 4 – Execution of unit tests and test coverage analysis

We should always seek to execute the unit tests. Indeed, for some build management tools, this is the default setting and we can impose a gate that fails the build if unit tests do not pass.

The ability to write robust unit tests that can be repeatedly executed, in an automated fashion and in different environments, is one of the hallmarks of a good software developer IMHO. Wiring the execution of these tests into the CI process via the build tool helps us to ensure that the tests do not contain dependencies on a developer's local environment – a mistake more junior team members often make. However, how can we determine that the tests are actually doing something useful, and exercising appropriate functions or routines within the application code? This is accomplished using a code coverage analysis tool (such as Rcov), which can highlight those areas of the code-base untroubled by testing. Gates can be set as coarse-grained thresholds, e.g. trigger a build failure if less than 80% of all application code is being tested, or more granular configurations can be applied.
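
As a simple illustration of what we mean by environment-dependent tests (the file paths and class names here are invented for the example), compare a test that reads its input from a developer-specific absolute path with one that loads the same data from the test classpath:

 import java.io.InputStream;

 import junit.framework.TestCase;

 public class ParserConfigTest extends TestCase {

 	// Fragile: only passes on a machine where this absolute path exists,
 	// so it will break on the CI server and on other developers' workstations.
 	public void testReadSampleFromLocalPath() throws Exception {
 		InputStream in = new java.io.FileInputStream("/home/dev/data/sample-input.xml");
 		assertNotNull(in);
 		in.close();
 	}

 	// Robust: the sample file lives under src/test/resources and is loaded
 	// from the classpath, so the test behaves the same in any environment.
 	public void testReadSampleFromClasspath() throws Exception {
 		InputStream in = getClass().getResourceAsStream("/sample-input.xml");
 		assertNotNull("sample-input.xml should be on the test classpath", in);
 		in.close();
 	}
 }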

Phase 5 – Packaging the application

Although we normally don’t attribute a quality gate to this phase, it is often useful during the CI process to embed information into the packaged application for the purposes of traceability and provenance. For example, this can help identify the source code origin or ‘tag’ of an application deployed to a testing environment when defects are found.
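
For example (a sketch only – the file and property names are our own invention), if the build writes a small properties file into the package at build time, the application can report its own provenance at runtime:

 import java.io.InputStream;
 import java.util.Properties;

 public class BuildInfo {

 	// Reads build metadata (e.g. timestamp and VCS revision) that the build
 	// tool embedded into the packaged artefact as build-info.properties.
 	public static Properties load() throws Exception {
 		Properties props = new Properties();
 		InputStream in = BuildInfo.class.getResourceAsStream("/build-info.properties");
 		if (in != null) {
 			props.load(in);
 			in.close();
 		}
 		return props;
 	}

 	public static void main(String[] args) throws Exception {
 		Properties info = load();
 		System.out.println("Built at: " + info.getProperty("build.timestamp", "unknown"));
 		System.out.println("Revision: " + info.getProperty("scm.revision", "unknown"));
 	}
 }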

Quality Gates and Lifecycle Phases

Summary

Hopefully, I’ve demonstrated how CI, in addition to continuously integrating the source code developed by your team, can also provide a valuable feedback loop with respect to the quality of the software being created. In the next part, we’ll look into some options for applying ‘functional’ quality gates within the CI process using advanced testing automation.

User Review of Mike

April 9, 2010

One of our users, Ben Wilcock, has written on his blog about his experiences getting started with Mike.

His project is a bleeding edge demonstration of how to implement RESTful services using HyperJaxb. It is built using Maven2 and the build showcases a full end-to-end functional test and deployment of a JEE6 web application.

His Maven build takes advantage of some of the new features we have recently added to Mike, in particular the freedom to spawn or fork additional processes (such as application containers or bash scripts) during a build, all within a secure, sand-boxed environment.

As Ben himself says,

“Considering my project’s build includes starting and stopping an embedded Glassfish container and running SoapUI integration tests – I was amazed at just how easy it actually was!”.

Be sure to check out his informative series of entries about his experiences with both Mike and other aspects of his innovative project.

A Sneak Preview of our Ruby/Rake support

April 6, 2010

Here at Mike HQ we are getting close to releasing support for Rake, the extremely popular Ruby-based build tool.

A key element of our mission at Mike is to extend the capabilities of our platform beyond those required by the Java development world, and this will be the first major step in that direction.

At the moment we are testing the feature using a range of open source Ruby-based projects. Here are a couple of screenshots of the configuration and console output from our build of Haml, a popular templating engine for Rails apps:

Haml Rake configuration

Haml build console output

One nifty feature we are adding is the ability to specify the required Gems for your Rake build, which is intended to provide our users with full control of dependencies, and to avoid what we believe to be one of the most common reasons for build failures using Rake.

Stay tuned for further details…