Posts Tagged ‘agile’

Continuous Integration for Agile Project Managers (Part 3)

June 15, 2010

In part 1 of this series, I hopefully provided you with an introduction to Continuous Integration (CI) and an overview of the building blocks of the CI process.

In part 2, we introduced the concept of a CI life-cycle and showed how to bind software quality checks to discrete life-cycle ‘phases’.

In this final part, I’ll attempt to explain how you can use CI to run functional tests early in your development process and show how this helps us to promote a software release on its journey from development to production. I’ll then try to condense the three parts of this series into a handy diagram.

Automation, Automation, Automation

It should hopefully be clear by now that a successful CI implementation is largely dependent upon other processes that can be automated. For example:

  • checkout or update of source code by a version control tool
  • compilation of project source code by a build tool
  • execution of source code checking by a static analysis tool
  • unit test execution by a unit test harness or runner

Therefore, it seems reasonable that if we can automate a software development task in a way that is deterministic and repeatable, then it can be executed during our CI process. The intention is to remove human error and have these tasks run in the background, with the CI server notifying us in the event of a ‘failure’. Some more advanced candidates for automation might include:

  • provisioning of a clean, known harness or environment into which we deploy a release for the purposes of functional or acceptance testing
  • starting and stopping services that are required for our functional tests, and populating any test data dependencies
  • execution of the functional tests

Functional Testing and CI

OK, so how do you go about adding these additional capabilities? Well, creating an automated suite of functional tests that can be triggered during the CI process is a non-trivial task.

Let’s take a look at the main areas for consideration.

Test Fixtures

In addition to writing the tests, you also need to carefully consider your test dependencies (also known as ‘fixtures’), such as any test data you might need in a database. Some code frameworks, like Ruby on Rails, encourage development teams to consider these test fixtures early on in a software development project. Outside of such frameworks, it is more common to find ad-hoc processes. The key thing is to ensure that your dependencies, such as a database, are in a known state prior to test execution and that the configuration or population of dependencies is entirely automated.

Here, we can again apply the concept of a CI life-cycle to this particular scenario. We will need to bind the activity of test data population to a life-cycle phase prior to the execution of functional tests. We may also need to create a subsequent binding after test execution to ensure that we ‘tear-down’ the test data and keep everything in a known, consistent state.
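To make this concrete, here is a minimal sketch in Java (JUnit 4 plus plain JDBC against an in-memory HSQLDB database) of a functional test that sets up and tears down its own test data. The schema, data and connection details are invented for illustration, and on a real project the population and tear-down would more likely be bound to CI life-cycle phases than coded into every test class.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    public class CustomerSearchFunctionalTest {

        // Hypothetical in-memory test database (assumes the HSQLDB driver is on
        // the classpath). In practice this would point at the environment
        // provisioned by the CI process, not a developer's machine.
        private static final String JDBC_URL = "jdbc:hsqldb:mem:testdb";

        private Connection connection;

        @Before
        public void populateFixtures() throws Exception {
            // Put the database into a known state before every test.
            connection = DriverManager.getConnection(JDBC_URL, "sa", "");
            Statement statement = connection.createStatement();
            statement.executeUpdate(
                "CREATE TABLE customer (id INT PRIMARY KEY, name VARCHAR(50))");
            statement.executeUpdate(
                "INSERT INTO customer VALUES (1, 'Test Customer')");
            statement.close();
        }

        @Test
        public void findsTheCustomerWeJustInserted() throws Exception {
            Statement statement = connection.createStatement();
            assertTrue(statement
                .executeQuery("SELECT name FROM customer WHERE id = 1")
                .next());
            statement.close();
        }

        @After
        public void tearDownFixtures() throws Exception {
            // Leave everything in a consistent state for the next test.
            Statement statement = connection.createStatement();
            statement.executeUpdate("DROP TABLE customer");
            statement.close();
            connection.close();
        }
    }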

Test Runner

Obviously, we require a tool that can execute functional tests in an automated fashion. Furthermore, there needs to be a way to trigger its execution at an appropriate stage in the CI process. Depending upon the nature of the application you are developing, there are a number of choices. You can find a useful list of GUI testing tools here. The majority of projects that I have worked on in recent times have been web-based, and we have used Selenium extensively.

Selenium is open source, and allows you to test your app in multiple browsers. It also has bindings for lots of different programming languages, including a basic ‘dialect’ of HTML markup that is accessible to non-programmers (this type of thing is more formally referred to as a Domain Specific Language or DSL). It also comes with a browser plug-in that allows you to record and replay your tests. For the more technically-minded amongst you, I’ve already blogged in some detail about how we use Selenium on Mike and included an example project that can be checked out from Google Code.
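To give a flavour of what an automated Selenium test looks like, here is a minimal sketch using Selenium’s Java bindings (the WebDriver API). The application URL, element IDs and expected text are invented for illustration and are not taken from the example project mentioned above.

    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.firefox.FirefoxDriver;

    public class LoginPageTest {

        private WebDriver driver;

        @Before
        public void openBrowser() {
            driver = new FirefoxDriver();
        }

        @Test
        public void userCanLogIn() {
            // Hypothetical application URL and element IDs - adjust to suit.
            driver.get("http://localhost:8080/myapp/login");
            driver.findElement(By.id("username")).sendKeys("demo");
            driver.findElement(By.id("password")).sendKeys("secret");
            driver.findElement(By.id("submit")).click();
            assertTrue(driver.getPageSource().contains("Welcome, demo"));
        }

        @After
        public void closeBrowser() {
            driver.quit();
        }
    }

Tests like this are slower and more brittle than unit tests, so they are typically bound to a later life-cycle phase and run against the provisioned test environment described below.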

Test Environment

The third piece in our jigsaw is the environment within or against which the functional tests will run. Environments are often overlooked where automation is concerned, but it is very common to experience issues during CI that are a result of an inconsistent or poorly-configured environment. In my opinion, you should consider environments as a ‘first class concept’. They should be treated like any other artefact of the development process – configuration managed, reproducible and ideally, verified or unit tested.

In addition, keeping the development environments in a form which mirrors a ‘cut-down’ copy of production really helps when promoting your application through the various test environments it may encounter on its journey into the live estate. Hopefully this should help you avoid the “Well, it works on my machine….” discussion that frequently happens in dev shops where environments are poorly controlled and managed.

If we have automated the ability to provision or script the creation and tear-down of an environment, then it should not surprise you that we can harness this within our CI life-cycle. For example, with enterprise Java (JEE) projects you can use tools such as Cargo that allow you to provision, start, deploy-to and stop your application server of choice. Cargo supports a wide variety of server environments, from open source, such as Apache Tomcat, through to Oracle WebLogic.
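To show roughly what this looks like in practice, below is a sketch loosely based on Cargo’s documented Java API for starting a local Tomcat, deploying a WAR and shutting the container down again around a functional test run. Treat the container identifier, paths and exact class and method names as assumptions; they vary with the Cargo version and the server you target, and in a Maven build you would more commonly drive Cargo through its plug-in rather than code like this.

    import org.codehaus.cargo.container.ContainerType;
    import org.codehaus.cargo.container.InstalledLocalContainer;
    import org.codehaus.cargo.container.configuration.ConfigurationType;
    import org.codehaus.cargo.container.configuration.LocalConfiguration;
    import org.codehaus.cargo.container.deployable.WAR;
    import org.codehaus.cargo.generic.DefaultContainerFactory;
    import org.codehaus.cargo.generic.configuration.DefaultConfigurationFactory;

    public class ProvisionTestEnvironment {

        public static void main(String[] args) {
            // Assumed locations: the WAR built earlier in the life-cycle and a
            // local Tomcat installation.
            WAR application = new WAR("target/myapp.war");

            LocalConfiguration configuration =
                (LocalConfiguration) new DefaultConfigurationFactory()
                    .createConfiguration("tomcat6x", ContainerType.INSTALLED,
                        ConfigurationType.STANDALONE);
            configuration.addDeployable(application);

            InstalledLocalContainer container =
                (InstalledLocalContainer) new DefaultContainerFactory()
                    .createContainer("tomcat6x", ContainerType.INSTALLED,
                        configuration);
            container.setHome("/opt/apache-tomcat-6.0.29");

            container.start();
            // ... run the functional tests against the deployed application ...
            container.stop();
        }
    }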

Wiring it Together

OK. If you’ve been paying attention at the back throughout this series, then hopefully the following diagram should make sense. Let’s recap steps 1 through 12 below, which represent the essential elements of a CI process that can help your agile project to deliver a high-quality solution. Note that at any point past step 3, the build may be set to fail, resulting in an email (or other, e.g. IM) notification being sent to your team.

Continuous Integration - A Summary of Steps

  1. Developers work to transform the requirements or stories into source code using the programming language of choice.
  2. They periodically check in (commit) their work to a version control system (VCS).
  3. The CI server polls the VCS for changes and initiates the build process when it detects one. The build is executed using a dedicated tool for the job, such as Maven, Ant or Rake. Depending upon the language used, the source code may need to be compiled.
  4. Static analysis is performed on the source code, to ensure compliance with coding standards and to avoid common causes of bugs.
  5. Automated unit tests are executed.
  6. The percentage of the production code exercised by the unit tests is measured using a coverage analysis tool.
  7. A binary artefact package is created. At this point we might want to assist derivation and provenance by including some additional metadata with the artefact, e.g. a build timestamp, or the source code repository revision that was used to produce it.
  8. Prepare for functional testing by setting up the test fixtures. For example, create the development database schema and populate it with some data.
  9. Prepare for functional testing by provisioning a test environment and deploying the built artefact.
  10. Functional tests are executed. Post-execution, tear down any fixtures or environment established in 8 and 9.
  11. Generate reports to display the relevant metrics for the build. E.g. How many tests passed? What is the number and severity of coding standard violations?
  12. The process is continuous of course! So rinse…and repeat….

Continuous Integration for Agile Project Managers (Part 2)

April 27, 2010

In part 1 of this series, I described the essential elements of Continuous Integration within the context of agile development and briefly discussed the software options for a CI server. I explained that the building blocks of CI are a version control system and a build management tool. The former creates the foundation, by giving the CI process access to the latest copy of project source code. The latter then takes the source code and, in most scenarios, transforms it into the deployable binary artefact that represents the software application being developed.

It is useful to consider this transformational process as a series of steps or life-cycle phases. By analysing our build in this conceptual way, we can better understand how to ‘bind’ or attach actions to each distinct life-cycle phase. Why is this important? Well, using this simple idea I can demonstrate how you can hardwire quality checks into your CI process. Performing such checks, and publishing their results, means that we can really start to take advantage of the significant benefits CI can bring to your agile projects.

I actually prefer the term ‘quality gate’ to ‘quality check’, as it better fits the idea that you must pass through one gate before moving on to the next. In each case, if we are unable to pass through the gate, we can choose to fail the build, and our CI system should dispatch a notification to us. In addition, a good CI process will publish the metrics and ‘violations’ concerning each gate, so that they can be reviewed in detail after each build has occurred.

Build Life-Cycle Phases and Quality Gates

Phase 1 – Resolve component dependencies

Often our projects will have dependencies on other internal or third-party components. This is very common, for example, in the Java or Ruby programming language worlds, where the re-use of packaged binary artefacts (JARs for the former and Gems for the latter) is considered best practice. One key attribute of a released component is its version (e.g. 2.1.1), and it is here we can apply our first simple gate – ensure that we are using the correct versions of our dependent components.

Phase 2 – Static Analysis of the source code

Source code inspection is (or should be) an integral part of your software development process. Often this requires human beings(!) who possess an understanding of the software requirements and application design to perform formal code reviews. However, we can automate much of this using tooling integrated into the build process. Some examples of the inspections we can automate are listed below, followed by a short illustrative snippet:

  • Adherence to organisational coding standards (e.g. all source files must start with a copyright statement, all functions should be commented)
  • Finding duplicated, or copied and pasted code statements.
  • Finding garden variety programming mistakes.
  • Adherence to architectural design rules.
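To make the ‘garden variety programming mistakes’ bullet concrete, here is a small, entirely artificial Java example of the kind of code that static analysis tools (FindBugs, PMD, Checkstyle and the like) will typically flag without any human involvement. The class and its ‘bugs’ are invented purely for illustration.

    public class DiscountService {

        // Flagged by most static analysers: comparing strings with '=='
        // checks object identity rather than the characters they contain.
        public boolean isPremium(String accountType) {
            return accountType == "PREMIUM"; // should be "PREMIUM".equals(accountType)
        }

        // Another classic: a swallowed exception gives no clue that anything
        // went wrong when this code misbehaves in production.
        public int parseQuantity(String raw) {
            try {
                return Integer.parseInt(raw);
            } catch (NumberFormatException e) {
                // empty catch block - silently ignores bad input
            }
            return 0;
        }
    }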

Phase 3 – Compilation of the source code

If you are writing software using a programming language that needs to be compiled, such as Java or C++, this is obviously a very basic and essential gate to pass through.

Phase 4 – Execution of unit tests and test coverage analysis

We should always seek to execute the unit tests. Indeed, for some build management tools, this is the default setting and we can impose a gate that fails the build if unit tests do not pass.

The ability to write robust unit tests that can be repeatedly executed in an automated fashion, in different environments, is one of the hallmarks of a good software developer IMHO. Wiring the execution of these tests into the CI process via the build tool helps us to ensure that the tests do not contain dependencies on a developer’s local environment – a mistake more junior team members often make. However, how can we determine that the tests are actually doing something useful, and exercising appropriate functions or routines within the application code? This is accomplished using a code coverage analysis tool (such as Rcov), which can highlight those areas of the code-base untroubled by testing. Gates can be set as coarse-grained thresholds, e.g. trigger a build failure if less than 80% of all application code is being tested, or more granular configurations can be applied.
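To illustrate what a well-behaved, environment-independent unit test looks like, here is a minimal JUnit 4 example in Java. The PriceCalculator class is invented for the purpose; the point is that the test needs no local database, file paths or network access, so it runs identically on a developer’s machine and on the CI server.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class PriceCalculatorTest {

        // Hypothetical class under test: applies a percentage discount.
        static class PriceCalculator {
            double applyDiscount(double price, int percent) {
                return price - (price * percent / 100.0);
            }
        }

        @Test
        public void tenPercentOffOneHundred() {
            PriceCalculator calculator = new PriceCalculator();
            // No environment-specific set-up required, so this runs anywhere.
            assertEquals(90.0, calculator.applyDiscount(100.0, 10), 0.001);
        }
    }

A coverage tool (Cobertura or EMMA are common choices in the Java world, Rcov for Ruby as mentioned above) then reports which lines of PriceCalculator the tests actually exercised, and the build can be failed if that figure falls below the agreed threshold.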

Phase 5 – Packaging the application

Although we normally don’t attribute a quality gate to this phase, it is often useful during the CI process to embed information into the packaged application for the purposes of traceability and provenance. For example, this can help identify the source code origin or ‘tag’ of an application deployed to a testing environment when defects are found.
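The mechanics of this are simple. One common approach, sketched below purely as an illustration (it is not necessarily how Mike does it, and the file name and property keys are assumptions), is for the build to write a small properties file into the package, which the running application can later read back and expose on a status page or in its logs.

    import java.io.IOException;
    import java.io.InputStream;
    import java.util.Properties;

    public class BuildInfo {

        // Assumes the build wrote a file such as WEB-INF/classes/build.properties
        // into the package, containing keys like build.revision and build.timestamp.
        public static String describe() {
            Properties props = new Properties();
            InputStream in = BuildInfo.class.getResourceAsStream("/build.properties");
            if (in == null) {
                return "build metadata not available";
            }
            try {
                props.load(in);
            } catch (IOException e) {
                return "build metadata unreadable";
            } finally {
                try { in.close(); } catch (IOException ignored) { }
            }
            return "revision " + props.getProperty("build.revision", "unknown")
                    + ", built at " + props.getProperty("build.timestamp", "unknown");
        }
    }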

Quality Gates

Quality Gates and Lifecycle Phases

Summary

Hopefully, I’ve demonstrated how CI, in addition to continuously integrating the source code developed by your team, can also provide a valuable feedback loop with respect to the quality of the software being created. In the next part, we’ll look into some options for applying ‘functional’ quality gates within the CI process using advanced testing automation.

Continuous Integration for Agile Project Managers (Part 1)

March 30, 2010

Any Agile Project Manager worth his salt should be aware of the term ‘Continuous Integration’ (often shortened to ‘CI’). But what is it, and how is it done?

This series of short blog articles aims to answer these two questions, so you can start your next project, or re-configure an existing project, armed with the necessary understanding about this key practice within agile software delivery.

Background

The basic premise of CI is pretty straightforward. An agile team needs a repeatable and reliable method to create a build of the software under development. Why so? Well, if it’s not already obvious, you may want to revisit the principles behind the Agile Manifesto. Within them you will notice a number of references to ‘working software’, and the foundation of any working software is a stable, tested build.

Recipe for CI

So how does CI help to create this build? Let’s list the essential ingredients that we need:

  1. Source Code Control – in a typical agile project, developers turn User Stories into source code, in whatever programming language(s) the project is using. Once their work is at an appropriate level of completeness, they check-in or commit their work to the source code control (a.k.a. version control) system; for example, Subversion.
  2. Build Tool – if the source code needs to be compiled (e.g. Java or C++) then we will need tooling to support that. Modern Integrated Development Environments (IDEs), such as Eclipse or Visual Studio, are able to perform this task as developers save source code files. But if we want to build the software independently of an IDE in an automated fashion, say on a server environment, we need an additional tool to do this. Examples of this type of tool are Ant, Maven, Rake and Make. These tools can also package a binary output from the build. For example, with Java projects this might be a JAR or WAR file – the deployable unit that represents the application being developed.
  3. Test Tools – as part of the build process, in addition to compilation and the creation of binary outputs, we should also verify that (at minimum) the unit tests pass. For example, in Java these are often written using the JUnit automated unit testing framework. The tools in (2) often natively support the running of such tests, so they should always be executed during a build. In addition to unit testing, there are numerous other quality checks we can perform and status reports CI can produce. I’ll cover these in detail in a subsequent part to this series.
  4. Schedule or Trigger – we might want to create our build according to a schedule (e.g. ‘every afternoon’) or when there is a change in the state of the project source code. In the latter case we can set up a simple rule that triggers a build whenever a developer changes the state of the source code by committing his/her changes, as outlined in (1). This has the effect of ensuring that your team’s work is continuously integrated to produce a stable build, and, as you may have guessed, is where this practice gets its name from.
  5. Notifications – the team needs to know when a build fails, so it can respond and fix the issue. There are lots of ways to notify a team these days – instant messaging, Twitter etc. – but the most common by far is still email.
Continuous Integration Recipe

The tool that wires these five elements together is a Continuous Integration Server. It interacts with the source control system to obtain the latest revision of the code, launches the build tool (which also runs the unit tests) and notifies us of any failures. And it does this according to a schedule or state change based trigger. A CI server often also provides a web-based interface that allows a team to review the status, metrics and data associated with each build.
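Conceptually (and this is a deliberately simplified sketch, not how any particular CI server is implemented), the trigger logic amounts to something like the following, where VersionControl, BuildTool and Notifier are hypothetical stand-ins for the real Subversion, Maven/Ant/Rake and email integrations.

    public class ContinuousIntegrationLoop {

        // Hypothetical collaborators standing in for the version control system,
        // the build tool and an email/IM gateway respectively.
        interface VersionControl { long latestRevision(); void checkout(long revision); }
        interface BuildTool      { boolean build(); /* compile, test, package */ }
        interface Notifier       { void buildFailed(long revision); }

        private final VersionControl vcs;
        private final BuildTool buildTool;
        private final Notifier notifier;
        private long lastBuiltRevision = -1;

        ContinuousIntegrationLoop(VersionControl vcs, BuildTool buildTool, Notifier notifier) {
            this.vcs = vcs;
            this.buildTool = buildTool;
            this.notifier = notifier;
        }

        // Called on a schedule (e.g. every minute) by the CI server.
        public void poll() {
            long head = vcs.latestRevision();
            if (head == lastBuiltRevision) {
                return;                          // no change, nothing to do
            }
            vcs.checkout(head);                  // get the latest source
            boolean success = buildTool.build(); // compile, run tests, package
            if (!success) {
                notifier.buildFailed(head);      // tell the team immediately
            }
            lastBuiltRevision = head;
        }
    }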

CI Server options

There is a pretty overwhelming choice of available tools in this space. Some are open source, some proprietary. I don’t have time to go into all the available options here unfortunately. However, there is a handy feature comparison matrix available here. Of course, it would be remiss of me not to mention our own hosted service, which allows you to get started with CI in no time at all, without having to be an ‘expert’ user.

Mike Test Reports

Test Reports generated by Mike

In the next part of this series, I’ll delve deeper into how you can use CI to enforce software quality within your team during the various stages of the development process.

Bootstrap your Agile Java Startup Infrastructure in < 30 minutes

March 25, 2010

So, you have a great idea for the next web-based killer application. You have assembled a small but geographically dispersed team with the requisite amount of Agile Java development-fu to get the job done. What you need now is some hosted infrastructure so you can all get to work effectively.

In less than 30 minutes.

Step 1 – Get some IDE

Get Eclipse Galileo (JEE version). Hopefully your pipe is fat enough (~190 MB download). Fire it up and use update sites to obtain m2eclipse and subclipse (we’re assuming SVN for a repository, but you could use Git). Install Tomcat and configure as a server.

Step 2 – Get some Scaffolding

Within Eclipse go to File>New>Other, to open the Select a wizard dialog. From the scrollbox, select Maven Project and then click Next. Move through the wizard to select an archetype of an appropriate type (e.g. Appfuse). Click Finish. Validate you can build and deploy your app.

Step 3 – Get some Version Control

While you are waiting for your build to complete, pick a hosted version control provider. There are a number who provide low-cost or even free hosted Subversion for private projects, typically with a trial period. Here is a list. Once signed-up with a clean repository, use subclipse to share the scaffolding app created in (2).

Step 4 – Get some Project Tracking and Wiki

The majority of providers listed in (3) also offer something in the area of hosted task/issue tracking apps that often have sufficient wiki capabilities. Alternatively you can try those who specialise in this area, such as FogBugz or Agilo. We use Agilo on Mike. And you might want to get Basecamp. We use that too.

Step 5 – Get some CI

You could roll your own on Amazon EC2, but that isn’t happening in 30 minutes or 30 hours, probably! Hmmm…Oh I almost forgot – you could use our excellent hosted service ;).

Step 6 – Get some BRM

What’s a BRM? It’s a Binary Repository Manager. If you’re using Maven, I’d recommend you use one. The only hosted one I’m aware of is provided by Artifactory Online.

Right. We are good to go. What are you waiting for?

Could your next development environment be in the cloud?

November 17, 2009

Your new project has been given the green light. You need to get your team up and running quickly. Could a cloud-based development environment be the answer? This post discusses some of the options and issues involved in moving to a hosted development environment.

Cloud City

One of the first tasks for any Agile project team is to establish a robust, reliable set of development tools and associated infrastructure. Generally on any new development endeavour the following needs to be put in place:

  • Locate or create a source code repository and import your skeleton project.
  • Locate or create a continuous integration server to build your source code, run your tests and notify the team via email of any failures.
  • Locate or create a task/user story tracking server and/or issue management tool and add your project to it.
  • Locate or create your test environments for developer smoke, integration, and functional testing.

Now you might be lucky and have much of this infrastructure in place, in which case adding some extra users, a new project and an associated set of build jobs might be relatively straightforward. However, if it isn’t, you will need to allocate a decent chunk of time to setting things up yourself. This may involve procurement of hardware, installation of an appropriate OS, installation of the relevant applications, providing secure access to project team members and so on. This gets more complex if the team is distributed and the infrastructure must be accessible beyond a LAN.

Often these development tools are open source, which means that while the cost of acquisition for the software may be zero, the ongoing maintenance and support will probably require specialist knowledge.  Any time your team spends doing this is time that could be spent progressing the project.

With either option, server space and administration time are required, and there are obviously costs associated with this – costs that may be disproportionately large for small and medium-sized organisations.

In the same way that SaaS offerings for email and collaboration suites (such as Google Apps) have, over the past few years, sought to turn these services into low-cost, click-and-go commodities, there are now equivalent hosted options available for Agile development infrastructure.

Hosted version control solutions have been available for a while and the market has expanded rapidly over the past few years. Collabnet (the people behind Subversion) and CVSdude are probably the best known. Both companies have expanded their offerings, and not only provide version control but also bundle or integrate with a host of other tools. These now compete with newer entrants like Beanstalk, Assembla and Unfuddle (which also does Git hosting). GitHub itself has seen huge growth over the past year, with over 400 new users and 1,000 projects being added each day.

Coming from the other direction are suppliers of development collaboration and productivity tools. Atlassian are probably the biggest player here, and their hosted JIRA Studio suite, based around the very popular JIRA issue tracking tool, was launched in May 2008. FogBugz is another alternative, based around an integrated set of project management, wiki, issue management and helpdesk tools.

Each vendor has a different focus, with JIRA Studio and Collabnet’s Team-Forge perhaps the most fully featured for development teams. Both offer a very comprehensive stack and are moving towards the idea of an Application Lifecycle Management (ALM) suite. It will be interesting to see how these platforms develop, and how the traditional Enterprise tool vendors (IBM Rational, Microsoft and Borland) respond.

If a platform sounds too restrictive and you want to “pick and mix” your own set of tools, you can. There are also some great tools which focus on a specific area – Basecamp for Project Management and Lighthouse for Issue Management are a couple of well-known examples worth looking at. Most of these tools have open APIs that enable you to integrate easily with others, so getting together an integrated set of tooling is easily achievable.

My advice here is to be clear about what you want from these tools. Some are very feature-rich and developer-centric, while others do a great job of providing a clean and simple process and interface. So which tool suits you will depend on your project, your team, and your organisation. What is clear, though, is that there is growth in this sector, increased competition and greater integration between the providers. All of this can only be good for those who are happy to outsource their development environments – increased choice and competition against a backdrop of decreasing hosting costs.

Moving on from managing your project to testing it: the use of externalised environments that allow teams to deploy a release of a web-based application and run functional tests against it is trickier. Depending on the nature of the application and its associated runtime dependencies, this may require the creation of a bespoke environment. However, recent developments in cloud computing should soon make this much easier. Google’s App Engine, for example, allows you to run Java (and Python) applications on Google’s infrastructure, so if App Engine is your production target, creating a test environment that is a clone of production should be a relatively straightforward activity. More recently, in August this year, SpringSource launched their Cloud Foundry, which allows you to rapidly deploy a test (and also production) environment for your Java web application with a few mouse clicks, and Microsoft has also weighed in with Azure. Both Google and Microsoft are promising tighter integration with the IDE, and I’ll be watching these platforms closely.

One area which is less mature is hosted continuous integration. There are currently only a small number of pioneering providers in this space, which may surprise some, as the practice of continuous integration is at the heart of the Agile development process. The SaaS multi-tenant application model does not fit easily with the requirements of continuous and often complex software builds: building is a compute-intensive activity, especially for programming languages such as Java, and this will inevitably impact the cost of such a service to the end-user. Mike CI is one of these pioneers and there is a good analysis of the others here.

Now I’m not saying trusting your code to a 3rd party is a simple decision to make. There are often legal, security and organisational hurdles to consider. It won’t be for everyone, and for many large corporates it might be a step too far. But for many people, once you weigh up the cost, convenience and management overhead, maintaining it all yourself simply does not stack up. Your team is in place to write great software. For your next project, I’d recommend that you seriously consider using these low-cost, on-demand hosted services.

Behind the scenes: evolving the UI

November 6, 2009

First of an occasional series of posts describing how we do development here at Mike CI. I’m sure what we do is by no means unique, but hopefully our experiences might resonate with your own project. Or at the very least give you an opportunity to point out how we can do things better!

As you’d expect for a new product start-up, we are an Agile shop. All prospective features are put into our Product Backlog in Agilo, which is our Scrum tool of choice. I’ll probably do another blog post at some point on how we do Scrum. At the start of each sprint, during the Sprint planning meeting, we take user stories from the backlog and figure out how we will implement them.

The main goal of our last sprint was to add a new component – the Account Manager. Our first release of the Account Manager includes functionality to enable users to register an account, invite other users to join them, manage their profile and change their password. Simple stuff, but a core component to the platform.

For the Account Manager we wanted a cleaner look and feel than the Control Panel. Managing your account should be easy and simple to do, and the design should reflect that. We also knew we would be adding more features here, so the design had to be able to cope with that; eventually users will be able to upgrade/downgrade subscriptions, view usage and change their payment methods.

Our initial step is to storyboard the flow on a whiteboard and then capture the flows in Balsamiq. We’re really pleased with Balsamiq as a prototyping tool. It is incredibly quick to pick up and produces great mock-ups that convey the flow and spirit of the story without restricting the design. We then review and debate these flows in the sprint planning session. As you’d expect with a team of IT geeks (sorry, professionals), these debates can get quite animated! We then re-factor the mock-ups and paste the images into the relevant user story in Agilo. The flows and mock-ups are crucial: not only do they give us the spec for development, but we also work our test plans from them.

Invite a user to join Mike

Here is the first cut of the Manage Users Page from Balsamiq.

As this is a new application we decided to follow this up with some Photoshop mock-ups. We don’t always do this, but on this occasion as we weren’t constrained by the Control Panel look and feel we decided to add this step.

Manage Users Design 1

Initial Design for Account Manager

We created about six different designs, variations on a theme, but they really helped us visualise what we wanted and then review and discuss the options. This was a bit of a design smack-down; there could only be one winner!

While this was going on, the developers had been implementing the functionality without the design. The application is a fairly typical Java Spring application – web pages are JSPs, we use SiteMesh and a bit of Ajax here and there. The developers coded from the back-end first, giving all the screens a blank design to start with. All the key elements in the screens were given IDs, which helped the skinning process later on. The most crucial stage is resolving the design on the web pages. This is another iterative cycle, and often what looked good in Photoshop doesn’t necessarily work when implemented in CSS and HTML. In fact, I’d recommend not spending too long on the Photoshop design stage – the sooner you start working up the designs for real, the better.

Once we’d settled on a final skin design it didn’t take long to skin the app, about a day or so, with a few impromptu reviews along the way.

Manage Users Implemented Version

Final version of Manage Users screen

This is the final version; I hope you like it. It ties in more to our website and blogs than the Control Panel does, and that raises some questions for us about whether or not we want to align the designs more. I’m really happy with the designs and I hope our users are too. We hope to release the initial version of the Account Manager soon – so watch this space!

After a few iterations we think we now have a pretty good process for rapidly developing Mike applications. Balsamiq has been great in enabling us to define an initial design. We can then, in parallel, work up the final designs (in Photoshop or HTML) while we progress the development. The final step is to skin the pages with the final designs. Constantly review along the way and be prepared to compromise: what looked great in Photoshop might not work for real.

I hope this has been useful, comments appreciated!