Archive for the ‘Mike development’ Category

Continuous Testing with Selenium and JBehave using Page Objects

May 6, 2010

Since Mike’s inception we have always sought to automate as much of our testing as possible. For some time now we have been using Selenium for our functional/acceptance tests, and thus far we have been very happy with this approach. Initially, we decided to use Selenese-style tests, as we felt this would enable non-programmers to help maintain and extend the coverage of our tests: anyone with a basic grasp of markup and the ability to make small refactorings to the HTML generated by Selenium IDE could contribute. However, as the range of features provided by our platform has grown, we have found ourselves doing a lot of ctrl+c and ctrl+v of the Selenese scripts, and generally violating our DRY principles. After some internal debate, we settled on a new approach that adopts Behaviour Driven Development (BDD) techniques. This works nicely with our Agile, User Story based approach to development and (as you might expect) with our own internal Continuous Integration practices.

BDD Framework Selection

The Mike codebase is predominantly Java, so it seemed sensible to choose a Java-based BDD framework. We could have opted for something like Cucumber, the Ruby rockstars’ fave, which is now well established, but in the end we decided upon JBehave. It got our vote for a number of reasons:

  • It’s an active project, with regular production releases.
  • It’s ‘maven-ized’, and you can bind a handy JBehave plugin to the appropriate life-cycle phases.
  • It provides a web testing sub-component that gives you a nice, simple abstraction over Selenium.
  • Scenarios can be defined in plain text, just like Cucumber’s.
  • It integrates into an IDE just like any other xUnit-based testing framework (right-click > Run As > JUnit Test).

The plain-text scenarios were of particular interest, as they allow non-programmers to continue to author test flows for each story. On the downside, it does mean that only developers can provide the implementation of these scenarios. But overall, it provides a good fit for our team profile.

An Example

I’ll walk through an example of a simple JBehave BDD-style scenario that seeks to test that perennial fave – the Java Petstore web application:


 Scenario: Successful Login

 Given the user opens the home page
 When the user clicks the enter store link
 Then the store front page should be displayed
 When the user clicks the sign in link
 Then the store login page should be displayed
 When the user enters username j2ee
 And the user enters password j2ee
 And the user clicks the login button
 Then the store front page should be displayed for user ABC

This combination of ‘Given’, ‘When’ and ‘Then’ maps very nicely to a test context, event and expected outcome for each of the various pathways through a User Story.

So now that we have our scenario, stored in a text file named ‘login_scenarios’, we need to create two additional classes for JBehave. These are:

  1. a trivial subclass of org.jbehave.scenario.Scenario whose name maps to the associated text file (LoginScenarios.java)
  2. a subclass of org.jbehave.web.selenium.SeleniumSteps (LoginSteps.java) that provides an implementation for each of the ‘Given’, ‘When’ and ‘Then’ statements.

For example:


 @Given("the user opens the home page")
 public void theUserOpensTheHomePage(){
 	homePage = new HomePage(selenium, runner);
 	homePage.open("/jpetstore");
 }

Notice how JBehave uses simple annotations to map the scenario elements to Java methods. You’ll probably also notice the use of a ‘page’ object in the method body, which performs the actual heavy lifting of the tests. In addition, the methods in the JBehave-provided base class SeleniumSteps can be overridden as required; for example, override createSelenium() if you need to provide a custom instance of Selenium with an alternative configuration.
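
To make the shape of this clearer, here’s a rough sketch of how the two classes hang together. It’s a minimal sketch assuming the JBehave 2.x APIs, and the page methods it calls (such as clickEnterStoreLink()) are illustrative rather than lifted from our real suite:


 import org.jbehave.scenario.Scenario;
 import org.jbehave.scenario.annotations.Given;
 import org.jbehave.scenario.annotations.Then;
 import org.jbehave.scenario.annotations.When;
 import org.jbehave.web.selenium.SeleniumSteps;

 // LoginScenarios.java - the trivial Scenario subclass. By convention,
 // JBehave maps this class name to the login_scenarios text file.
 public class LoginScenarios extends Scenario {
     public LoginScenarios() {
         super(new LoginSteps());
     }
 }

 // LoginSteps.java - maps each plain-text step to a Java method.
 public class LoginSteps extends SeleniumSteps {

     private HomePage homePage;
     private StoreFrontPage storeFrontPage;

     @Given("the user opens the home page")
     public void theUserOpensTheHomePage() {
         homePage = new HomePage(selenium, runner);
         homePage.open("/jpetstore");
     }

     @When("the user clicks the enter store link")
     public void theUserClicksTheEnterStoreLink() {
         storeFrontPage = homePage.clickEnterStoreLink();
     }

     @Then("the store front page should be displayed")
     public void theStoreFrontPageShouldBeDisplayed() {
         storeFrontPage.verifyPage();
     }
 }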

Page Objects

Within a web application UI there are areas that our tests interact with. Using the Page Object pattern allows us to model these pages intuitively as objects within the test code. This massively reduces the amount of duplicated code and means that if the UI changes, the fixes need only be applied in one place. In other words, we get our DRY testing mojo back after our experiences with copy ‘n’ pasted Selenese markup. To make things even simpler, it’s a good idea to create an abstract base class (Page.java) that exposes a series of useful methods to its concrete children. Here’s an example of a class that represents the login page of our demo Java Petstore app.


 public class StoreLoginPage extends Page {

	public StoreLoginPage(Selenium selenium, ConditionRunner runner) {
		super(selenium, runner);
	}
	
	@Override
	public void verifyPage() {
		textIsVisible("Please enter your username and password.");
	}
	
	public void verifyPresenceOfInvalidCredentialsErrorMessage(){
		textIsVisible("Invalid username or password. Signon failed.");
	}
	
	public void typeUsername(String username){
		type("//input[@name='username']", username);
	}
	
	public void typePassword(String password){
		type("//input[@name='password']", password);
	}
	
	public StoreFrontPage clickLoginButton(){
		clickButton("//input[@value='Login']");
		waitForPageToLoad();
		return new StoreFrontPage(selenium, runner);
	}
 }

Once these page classes are wired into our SeleniumSteps subclasses, we can use their action (open, click, type, etc.) and verification methods to drive the tests.
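
For reference, here’s a minimal sketch of what the abstract Page base class might look like. The method names all come from the snippets above; the bodies are my guess at a minimal implementation on top of the Selenium RC API and its ConditionRunner (our real class has a few more helpers):


 import com.thoughtworks.selenium.Selenium;
 import com.thoughtworks.selenium.condition.ConditionRunner;
 import com.thoughtworks.selenium.condition.Text;

 public abstract class Page {

     protected final Selenium selenium;
     protected final ConditionRunner runner;

     public Page(Selenium selenium, ConditionRunner runner) {
         this.selenium = selenium;
         this.runner = runner;
     }

     // Each concrete page checks for a landmark that proves the
     // browser is where the test thinks it is.
     public abstract void verifyPage();

     public void open(String url) {
         selenium.open(url);
         waitForPageToLoad();
     }

     protected void type(String locator, String value) {
         selenium.type(locator, value);
     }

     protected void clickButton(String locator) {
         selenium.click(locator);
     }

     protected void waitForPageToLoad() {
         selenium.waitForPageToLoad("30000");
     }

     // Waits until the given text is visible on the page.
     protected void textIsVisible(String text) {
         runner.waitFor(new Text(text));
     }
 }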

Maven Magic

As mentioned earlier, we selected JBehave in part because of its Maven integration. There is a plug-in you can configure to execute the tests during the required life-cycle phase:


<project>
[...]
  <plugins>
  [...]
    <plugin>
      <groupId>org.jbehave</groupId>
      <artifactId>jbehave-maven-plugin</artifactId>
      <version>2.5.1</version>
      <executions>
        <execution>
          <id>run-scenarios-found</id>
          <phase>integration-test</phase>
          <configuration>
            <scenarioClassNames>
              <scenarioClassName>
                com.mikeci.jpetstore.test.scenario.LoginScenarios
              </scenarioClassName>
            </scenarioClassNames>
            <classLoaderInjected>false</classLoaderInjected>
            <skip>false</skip>
            <ignoreFailure>false</ignoreFailure>
            <batch>true</batch>
            <scope>test</scope>
          </configuration>
          <goals>
            <goal>run-scenarios</goal>
          </goals>
        </execution>
      </executions>
      <dependencies>
        <dependency>
          <groupId>org.jbehave.web</groupId>
          <artifactId>jbehave-web-selenium</artifactId>
          <version>2.1.4</version>
        </dependency>
      </dependencies>
    </plugin>
  [...]
  </plugins>
[...]
</project>

Obviously, there are some additional services we need when these scenarios are executed – the app must be deployed and available, along with a running Selenium server. In the Petstore example, the Maven Cargo and Selenium plugins are used for these purposes. Have a look at the full, unexpurgated pom.xml to see how they are configured.
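
If you don’t want to wade through the whole file, the shape of that configuration is roughly as follows. This is a sketch from memory rather than an excerpt – versions, container settings and the exact configuration live in the real pom.xml:


  <!-- Start a Selenium server before the integration tests run -->
  <plugin>
    <groupId>org.codehaus.mojo</groupId>
    <artifactId>selenium-maven-plugin</artifactId>
    <executions>
      <execution>
        <phase>pre-integration-test</phase>
        <goals>
          <goal>start-server</goal>
        </goals>
        <configuration>
          <background>true</background>
        </configuration>
      </execution>
    </executions>
  </plugin>

  <!-- Deploy the war to a container for the duration of the tests -->
  <plugin>
    <groupId>org.codehaus.cargo</groupId>
    <artifactId>cargo-maven2-plugin</artifactId>
    <executions>
      <execution>
        <id>start-container</id>
        <phase>pre-integration-test</phase>
        <goals>
          <goal>start</goal>
        </goals>
      </execution>
      <execution>
        <id>stop-container</id>
        <phase>post-integration-test</phase>
        <goals>
          <goal>stop</goal>
        </goals>
      </execution>
    </executions>
  </plugin>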

Running The Example

Prerequisites are a Subversion client, Maven 2.2.x, Java 1.6.x and Firefox. Localhost ports 8080 and 4444 also need to be free.

Check out the project:


~$ svn co https://mikesamples.googlecode.com/svn/trunk/maven/jpetstore-maven-multi-module/ jpetstore
~$ cd jpetstore

Run the Maven build:


~$ mvn clean install -Pmikeci,run-functional-tests,run-on-desktop

Et voilà – this should build the JPetstore web app, deploy it to Jetty, start Selenium and run two JBehave scenarios against it. You could use this example to bootstrap your own JBehave/Selenium implementation and get started with BDD in no time at all.

All The Young (Ex) Dudes

February 9, 2010

As we near the end of beta here at Mike HQ, I’d just like to publicly say a big thank you to all our participants who have helped to shape and improve our platform over the past few months – your feedback has been invaluable.

We are currently in the final phases of testing the version of our platform that will form the basis of our commercial offering. Our acceptance testing workflow involves consuming services provided by other organisations, to best simulate a real-world usage scenario for our platform. The primary third-party service we use is hosted version control. In fact, it is only reasonable to state that Mike has a strong dependency on the existence of such services. It is the first link in the chain of what we here at Mike HQ refer to as the ‘hosted ALM continuum’ – the suite of co-operating and complementary hosted services that provide agile teams with a fully outsourced, web-enabled development ‘stack’. Disclaimer: it is most definitely in our interest to promote hosted version control solutions, as they are an enabler for the use of our own platform.

So, what do we use for testing?

Well, at present we support Subversion repositories that are accessible over HTTP(S). To simulate repositories that do not require authentication for read access we use Google Code. For those who are unfamiliar with Google Code (there can’t be that many of you, surely?), it provides a free collaborative development environment for open source projects, and gives each project its own Subversion repository. Thanks, Google.

However, our main scenario is retrieving (or updating) source code from repositories that do require authentication and also provide a secure transport over HTTPS. After surveying the landscape, we decided to trial a service offered by Codesion. At the point we signed up (last year) they were known as CVSDude; they have recently re-branded under the new name. We did like the old name – it allowed us to indulge in some office banter during our acceptance testing phases, which, let’s face it, are often not the most exciting aspect of software engineering. I won’t bore you with the banter though, as it probably falls into the camp of ‘you had to be there’ to seem even remotely funny.

[Image: the Codesion web site]

Setting up a free 30-day trial on Codesion was a cinch:

  1. We swiftly signed up via their website, http://codesion.com/.
  2. We created a new project and added the Subversion service.
  3. We created our users, groups and roles (they have a bunch of defaults), and assigned them to our project.
  4. We cut and pasted the SVN URL from the project page into our SVN import command – something like the sketch below – and we were done.
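
For the curious, that import step amounts to no more than this – the repository URL and project name here are made up for illustration:


~$ svn import fixture-project \
    https://mikeci.svn.cvsdude.com/fixtures/trunk \
    -m "Initial import of a test fixture project"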

At this point we had the data we needed to test our platform – our test fixtures are a range of Java projects of different flavours. A side-effect was that it gave us a clear view of what a slick SaaS sign-up process and after-sales care look like – something for us to aim for with our own offering. Since we started using them we’ve had zero problems. In some of our test cases we hit the repository repeatedly, and it gives us the same reliable service every time.

We’d have no hesitation in recommending Codesion if you are looking for a low-cost, industrial-grade hosted solution for Subversion. But, if you are reading, guys, we did slightly prefer the old name… sorry ;-).

Working with Custom Maven Archetypes (Part 2)

January 26, 2010

In part 1 of this series of blog entries I demonstrated how you can quickly create a custom Maven archetype. This nice feature of Maven allows you to produce template or skeleton projects, which are great for bootstrapping your development efforts. In this second part of the series, I’ll show you how to ‘production-ize’ your archetype, which involves the following steps:

  • Add your archetype to version control
  • Update the appropriate metadata elements in your archetype’s POM
  • Release your archetype using the maven-release-plugin

Step 1 – Add your archetype to version control

I decided to use Git as the VCS (version control system) for my archetype. This is partly because we will soon be releasing a new version of Mike that supports projects hosted on the popular ‘social coding’ site GitHub. GitHub offers free project hosting for ‘public’ (AKA open source) projects.

So, let’s get down to business. First off, you must of course have the Git client installed. If you are on a flavour of *nix this is a cinch, but there is tooling support for other OSes too, including TortoiseGit from the creators of the popular TortoiseSVN client. As there are already plenty of tutorials about using Git, I’m not going to replicate all the steps here. For a good intro, try Git for the lazy if you are time-poor.

First up, I navigate to a directory, initialise my Git repository and add my fledgling archetype:


~/foo$ cd mikeci-archetype-springmvc-webapp
~/foo/mikeci-archetype-springmvc-webapp$ git init
Initialized empty Git repository in /home/leggetta/foo/mikeci-archetype-springmvc-webapp/.git/
~/foo/mikeci-archetype-springmvc-webapp$ git add .
~/foo/mikeci-archetype-springmvc-webapp$ git commit -m "Initial commit"
Created initial commit fad815f: Initial commit
 13 files changed, 513 insertions(+), 0 deletions(-)
 create mode 100644 pom.xml
 create mode 100644 src/main/resources/META-INF/maven/archetype-metadata.xml
 create mode 100644 src/main/resources/META-INF/maven/archetype.xml
[...]

Now that I have my archetype added to a local Git repo, I want to share it via GitHub. This obviously requires a GitHub account, and you also need to ensure you have added a public key to gain the requisite privileges to ‘push’ your changes. Once you’ve set up a bare repository on GitHub, you can execute the following commands:


~/foo/mikeci-archetype-springmvc-webapp$ git remote add origin git@github.com:amleggett/mikeci-archetype-springmvc-webapp.git
~/foo/mikeci-archetype-springmvc-webapp$ git push origin master
Counting objects: 34, done.
Compressing objects: 100% (22/22), done.
Writing objects: 100% (34/34), 21.23 KiB, done.
Total 34 (delta 1), reused 0 (delta 0)
To git@github.com:amleggett/mikeci-archetype-springmvc-webapp.git
 * [new branch]      master -> master

What did I just do? The command remote add origin [url] adds the location of the remote GitHub repository to my local repository configuration and names it ‘origin’. When I type push origin master, this sends or ‘pushes’ my local changes on the ‘master’ branch to the ‘origin’ server. The ‘master’ branch was created by default when I initialized the repository above.

Step 2 – Updating the POM

For Maven to function effectively, you should always ensure that you include project VCS information in your POM file. Now that we’ve added the archetype to a Git repository we can include the appropriate <scm> configuration:


  <scm>
   <connection>
   scm:git:ssh://github.com/amleggett/${artifactId}.git
   </connection>
   <developerConnection>
   scm:git:ssh://git@github.com/amleggett/${artifactId}.git
   </developerConnection>
   <url>
   http://github.com/amleggett/${artifactId}
   </url>
  </scm>

It’s important to understand the meaning of each of the child elements of <scm>. The <connection> element defines a read-only URL and the <developerConnection> element a read-write URL. For both of these elements the URL must adhere to the following convention:


 scm:<scm implementation>:<scm implementation-specific path>

Finally, the <url> element content should point to a browsable location, which for me is the GitHub repository home page. Note that in all cases I’m using an interpolated value: my project’s artifactId.

One handy tip: you can verify this configuration using the maven-scm-plugin. This plugin provides vendor-independent access to common VCS commands via a set of command mappings for the configured VCS. The validate goal should confirm all is well:


~/foo/mikeci-archetype-springmvc-webapp$ mvn scm:validate
[INFO] Scanning for projects...
[INFO] Searching repository for plugin with prefix: 'scm'.
[INFO] --------------------------------------------------------------
[INFO] Building MikeCI Spring-Mvc web application archetype
[INFO]    task-segment: [scm:validate] (aggregator-style)
[INFO] --------------------------------------------------------------
[INFO] Preparing scm:validate
[INFO] No goals needed for project - skipping
[INFO] [scm:validate {execution: default-cli}]
[INFO] connectionUrl scm connection string is valid.
[INFO] project.scm.connection scm connection string is valid.
[INFO] project.scm.developerConnection scm connection string is valid.
[INFO] --------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] --------------------------------------------------------------

We also have to update the POM to tell Maven (or rather, the maven-deploy-plugin) where to deploy snapshot and released versions of our archetype. For the time being, I’m just going to specify my local filesystem as the destination, but in a real-world example this would most likely point to a Maven repository manager, such as Nexus or Artifactory:


 <distributionManagement>
  <repository>
   <id>release-repo</id>
   <url>file:///home/leggetta/foo/release-repository</url>
  </repository>
  <snapshotRepository>
   <id>snapshot-repo</id>
   <url>file:///home/leggetta/foo/snapshot-repository</url>
  </snapshotRepository>
 </distributionManagement>

Once satisfied with the POM modifications, I commit to my local Git repo and then push the changes to GitHub.

Step 3 – Releasing the archetype

So, I’m now almost ready to create my first early release of the archetype. I can accomplish this using the maven-release-plugin, which exposes two major goals: prepare and perform.
The prepare goal does some pre-flight checking by running a build and verifying all is well, before promoting the version in the POM and creating a tag of the release. The perform goal then checks out this tag, builds the project and deploys the resulting artefact to the Maven <repository> specified in your <distributionManagement> section.

It’s always a good idea to use the most recent version of the maven-release-plugin, which at the time of writing is 2.0-beta-9. This requires a further POM modification and a Git commit and push:


 <plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-release-plugin</artifactId>
  <version>2.0-beta-9</version>
 </plugin> 

Next, run the release:prepare goal. By default this runs interactively, prompting you in your shell for the release version and the subsequent development version:


:~/foo/mikeci-archetype-springmvc-webapp$ mvn release:prepare
[INFO] Scanning for projects...
[INFO] --------------------------------------------------------------
[INFO] Building MikeCI Spring-Mvc web application archetype
[INFO]    task-segment: [release:prepare] (aggregator-style)
[INFO] --------------------------------------------------------------
[INFO] [release:prepare {execution: default-cli}]
[INFO] Verifying that there are no local modifications...
[...]
[INFO] Checking dependencies and plugins for snapshots ...
What is the release version for "MikeCI Spring-Mvc web application archetype"? (com.mikeci:mikeci-archetype-springmvc-webapp) 0.1.2: : 
[...]
[INFO] Release preparation complete.
[INFO] --------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] --------------------------------------------------------------

Then run the release:perform goal:
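

~/foo/mikeci-archetype-springmvc-webapp$ mvn release:perform

If it all goes smoothly, you should have something akin to the following in your remote repository: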


~/foo$ ls -1 release-repository/com/mikeci/mikeci-archetype-springmvc-webapp/0.1.2/
mikeci-archetype-springmvc-webapp-0.1.2.jar
mikeci-archetype-springmvc-webapp-0.1.2.jar.md5
mikeci-archetype-springmvc-webapp-0.1.2.jar.sha1
mikeci-archetype-springmvc-webapp-0.1.2.pom
mikeci-archetype-springmvc-webapp-0.1.2.pom.md5
mikeci-archetype-springmvc-webapp-0.1.2.pom.sha1
mikeci-archetype-springmvc-webapp-0.1.2-sources.jar
mikeci-archetype-springmvc-webapp-0.1.2-sources.jar.md5
mikeci-archetype-springmvc-webapp-0.1.2-sources.jar.sha1

So, to summarise, I now have the appropriate configuration management in place to make changes to my archetype and release it to a Maven repository.
In the next part of this series, I’ll look into the different ways you can integrate your archetype into the development process.

Why DVCS won’t kill Subversion in 2010

January 21, 2010

At Mike HQ we are currently implementing support for GitHub as this has been requested by a number of our private beta participants. We like Git and are currently undergoing an internal debate/argument as to whether we should switch (no pun intended) to a DVCS from Subversion for our own source code management.

I was going to write something that discussed why, although we might decide to use Git ourselves, it isn’t suitable for everyone – I firmly believe that Subversion usage will continue to thrive in 2010.

However, after a little searching, I discovered a comment on this blog entry which more or less summed up my own thoughts. I’m simply going to reproduce it here (courtesy of clr_lite, whoever you are):

  • distributed version control is not for everyone
  • too many people are enamored of a tool or something because it’s new
  • every organization has its own situation and needs
  • getting a full repository and allowing people to work in silos without collaboration is not necessarily a good thing
  • git addresses the needs of linux kernel development, with many contributors funnelling to a gatekeeper
  • some development shops benefit from a locking checkout model which forces developers to communicate and plan
  • subversion has a lot of users and a large knowledge pool; this can be important for some situations
  • the distributed model has its pluses; getting all the history, changes, diffs, etc. while offline can be really helpful, but it all depends on the nature of the development and the developers
  • no one tool or process is ‘right’ for everyone

Well put, I thought.

Working with Custom Maven Archetypes (Part 1)

January 14, 2010

Late last year we announced support for Maven on our hosted CI platform. This was something we introduced earlier than anticipated, as we discovered that the majority of our private beta participants wanted to use this increasingly popular tool for their builds. We also provide support for Apache Ant and for Eclipse IDE based projects that don’t include a build.xml or POM file; the rationale behind the latter is to allow developers to get up and running with CI in no time at all. We also released an (open source) Eclipse plugin – mikeclipse – that in its current incarnation allows you to generate a ‘Mike-friendly’ web application project so you can quickstart your development effort. You can create a web app in Struts2 or Spring-Mvc flavours. No ‘Mike-specific’ hooks are written into the generated apps, so you can just use it as a simple wizard to generate a skeleton project. We intend to expand the features of this plugin over the coming months.

Those of you who are familiar with Maven will probably be aware that this generation of skeleton projects is already available via the maven-archetype-plugin. This plugin generates a project based upon a template – called an archetype, of course – that you package as a discrete Maven (jar) component. Once you have installed or deployed your archetype into a Maven repository it can be processed by the maven-archetype-plugin. At execution time, if used interactively, the plugin will prompt you with a series of questions about which archetype you wish to use and what, in Maven-ese, the ‘GAV’ (GroupId/ArtifactId/Version) coordinates are for your skeleton project.
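
You can also drive the plugin non-interactively by supplying those coordinates up front. A sketch – the archetype coordinates here match the example built later in this post, and in batch mode the version and package fall back to sensible defaults:


~$ mvn archetype:generate -DinteractiveMode=false \
    -DarchetypeGroupId=com.mikeci \
    -DarchetypeArtifactId=example-archetype \
    -DarchetypeVersion=1.0-SNAPSHOT \
    -DgroupId=com.acme -DartifactId=mywebapp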

I’ve been a user of the plugin for some years now, but it’s been a little while since I’ve created a custom archetype, so I thought it might be worth sharing the process I went through to create one. Along the way I discovered some nice features that, at the time of writing, did not appear to be fully documented on the plugin site, so I’ve captured them here. I’ll look into creating an issue and/or submitting a patch to the Maven project for these omissions if required.

Step 1 – Generate Your Archetype

This was a new one for me. Previously I had always hand-crafted my custom archetypes, but the plugin supports a goal that allows you to create the basis of your archetype from an existing project – very handy. So I used mikeclipse to create a Spring-Mvc skeleton project, did some refactoring to mavenize it – basically, created a POM – and executed the following command in my shell:


~/foo/example$ mvn archetype:create-from-project
[INFO] Scanning for projects...
[INFO] Searching repository for plugin with prefix: 'archetype'.
[INFO] ---------------------------------------------------------
[INFO] Building example
[INFO]    task-segment: [archetype:create-from-project] (aggregator-style)
[INFO] ---------------------------------------------------------
[INFO] Preparing archetype:create-from-project
[INFO] ---------------------------------------------------------
[...]
[INFO] Archetype created in target/generated-sources/archetype
[INFO] ---------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] ---------------------------------------------------------

This created a first-cut archetype for me in target/generated-sources/archetype that I could now tweak.

Step 2 – Removing redundant files

The first thing I needed to do was make some small changes to remove some superfluous Eclipse metadata files. This is probably an oversight on my part, as I could have generated the archetype after removing them, but since I wanted to view the results of the archetype generation in Eclipse, the plugin had hoovered them up. So I removed them from the archetype-resources sub-directory (under target/generated-sources/archetype) and also edited the two archetype descriptor files under archetype-resources/META-INF/maven. I believe two files are created to ensure backward compatibility with earlier versions of the archetype plugin – I was using the 2.0-alpha-4 release, which supports the new archetype-metadata.xml descriptor format. For example, I removed the following entry from archetype-metadata.xml after deleting the files:


  <fileSet filtered="true" encoding="UTF-8">
     <directory></directory>
     <includes>
        <include>.project</include>
        <include>.classpath</include>
     </includes>
  </fileSet>

Step 3 – Fixing Velocity warnings

Under the hood, the archetype plugin uses Apache Velocity to do its file filtering – for example, the substitution of the GAV values I discussed above. Those who have used Velocity will be aware that it can be quite verbose in its output, especially when it encounters syntax that resembles a filterable value. This manifests itself in warnings like the following when you generate a project using your archetype (I’ll cover that process shortly):


   [WARNING] ... : ${mvnVersion} is not a valid reference.

While this warning does not materially affect what gets generated, it doesn’t build confidence in your archetype, as it will be one of the first things your users see. So let’s see how we can fix it. In my case, it’s due to the fact that my template POM contains a number of properties that I don’t want to be filtered and which should be left intact as part of the generation. This is not an uncommon problem with POMs – they often contain properties that allow a runtime-interpolated value to be changed in one place. We can use some simple Velocity syntactic sugar as follows. Declare the following directive at the top of your template pom.xml file under archetype-resources:


  #set( $symbol_dollar = '$' )

Then, wherever you declare a property that you want to be ‘ignored’ by Velocity, do something akin to the following:


  <requireMavenVersion>
    <version>${symbol_dollar}{mvnVersion}</version>
  </requireMavenVersion>

Step 4 – Filtering a filename

One of the nice features in the latest version of the archetype plug-in is the ability to rename files during the generation process. In my scenario, I want to use this feature to ensure that my Spring servlet descriptor matches up with the associated declaration in my web.xml file. I did have to do a little digging in the plugin JIRA to find out how to do this, as it didn’t jump out from the plugin web site examples or usage pages. Anyway, it’s pretty simple to do: just create the template file using the filterable property as part of its name, using double underscores as token separators. In my case:


  $ mv src/main/webapp/WEB-INF/example-servlet.xml \
  src/main/webapp/WEB-INF/__artifactId__-servlet.xml
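
The reason the name matters: Spring’s DispatcherServlet looks for /WEB-INF/<servlet-name>-servlet.xml, so the template web.xml declares the servlet with the same filterable property. Roughly like this – the url-pattern is just an illustration, use whatever your template actually maps:


  <servlet>
    <servlet-name>${artifactId}</servlet-name>
    <servlet-class>
      org.springframework.web.servlet.DispatcherServlet
    </servlet-class>
    <load-on-startup>1</load-on-startup>
  </servlet>
  <servlet-mapping>
    <servlet-name>${artifactId}</servlet-name>
    <url-pattern>*.htm</url-pattern>
  </servlet-mapping>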

Step 5 – Install and invoke the archetype

A quick local installation is simply accomplished by running mvn install inside the target/generated-sources/archetype directory. The install also creates (or updates) a metadata file under ~/.m2/ called archetype-catalog.xml:


~/.m2$ cat archetype-catalog.xml
  <?xml version="1.0" encoding="UTF-8"?>
  <archetype-catalog>
   <archetypes>
    <archetype>
     <groupId>com.mikeci</groupId>
     <artifactId>example-archetype</artifactId>
     <version>1.0-SNAPSHOT</version>
     <description>example-archetype</description>
    </archetype>
   </archetypes>
  </archetype-catalog>


Once installed, I can invoke it using the archetype:generate goal. By default this will search the public archetype catalog from the Maven central repository and display all the available archetypes, plus my local one. To tell the plugin to search only my local Maven repository, I can use a simple mojo parameter as follows:


~/foo$ mvn archetype:generate -DarchetypeCatalog=local
[INFO] Scanning for projects...
[INFO] Searching repository for plugin with prefix: 'archetype'.
[INFO] --------------------------------------------------------------
[INFO] Building Maven Default Project
[INFO]    task-segment: [archetype:generate] (aggregator-style)
[INFO] --------------------------------------------------------------
[INFO] Preparing archetype:generate
[...]
Choose archetype:
1: local -> example-archetype (example-archetype)
Choose a number:  (1): 1
Define value for groupId: : com.acme
Define value for artifactId: : mywebapp
Define value for version:  1.0-SNAPSHOT: :
Define value for package:  com.acme: :
Confirm properties configuration:
groupId: com.acme
artifactId: mywebapp
version: 1.0-SNAPSHOT
package: com.acme
Y: :
[INFO] --------------------------------------------------------------
[INFO] BUILD SUCCESSFUL
[INFO] --------------------------------------------------------------

Then finally:


~/foo/mywebapp$ mvn package jetty:run -P mikeci,itest
[INFO] Scanning for projects...
[INFO] --------------------------------------------------------------
[INFO] Building example
[INFO]    task-segment: [package, jetty:run]
[INFO] --------------------------------------------------------------
[INFO] [enforcer:enforce {execution: enforce-versions}]
[...]
[INFO] Started Jetty Server

Et voilà – my browser displays the simple landing page for the generated web app.

Obviously, I’ll subsequently need to take the generated archetype code and turn it into a proper project in its own right, add it to version control and ultimately deploy/release it to a Maven repository so it can be shared. I’m planning on using GitHub for the VCS (or SCM, in Maven-ese), as we are currently working on adding Git support to Mike. I also want to see how the maven-release-plugin handles Git. I’ll cover these topics in the next part of this blog.

Adam

Cloud Considerations: 4 tips for getting started on Amazon EC2

December 8, 2009

You need an environment and have decided that Amazon is the way to go. There are some things you should consider before building your OS on, or uploading it to, the cloud.

1. Getting started

There are a couple of ways to get started with Amazon cloud hosting: you can build your own AMI (Amazon Machine Image) or you can modify an existing public AMI. I am not going to give a tutorial on either of these options – that is well documented by Amazon.

Firstly you need to select your operating system. Amazon has a decent selection of supported OSes; you have the main Linux flavours that you’d want to see:

  • Red Hat
  • Fedora
  • Ubuntu
  • Debian
  • Gentoo
  • openSUSE

along with:

  • OpenSolaris

and one token Windows version:

  • Windows Server 2003

After selecting your OS, see if there is a public image available of the version you would like, to help you decide whether to build your own.

If you want to go with a public AMI you need to decide whether you can trust the provider. You can boot up an instance and check out the machine: see what is installed, and ensure there is no proprietary software – or at least that you know exactly what software is installed. Think about this alongside the considerations that apply to building your own AMI.

The easiest way to boot your first AMI is through the AWS Management Console. It is a fairly intuitive front end to the commands you can run once you install the EC2 tools, and learning your way around the console will give you a better insight into the tasks you need to get used to when using the command line tools.

If you want to build your own AMI then you will be in control of exactly what is installed, but you should consider the next point.

2. Is it Repeatable?

Now you have decided how to get started, are you able to rebuild your environment to the same specification in a short period of time? If the worst happens and for whatever reason your server gets shut down, lost or deleted, can you re-provision it?

Whether you are building your own or modifying an existing AMI, there are ways to make the process repeatable. The simplest is to keep a record of the process: a list of what is installed, the location of the binaries, where they are installed, and any configuration steps.

If you have read the Amazon docs – and I suggest you do – it should be fairly clear that all of the processes involved in either building your own AMI or starting with an existing one are scriptable (at least on the Linux side of things). Using Amazon’s EC2 tools you can control AMI instances through scripts quite nicely: boot instances, attach EBS volumes, upload files and execute commands – everything necessary for automated environment creation. In theory this could be the beginning of your own Amazon DSL, but that is another blog post…
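
To give a flavour of that scripting, here is a minimal sketch using the standard EC2 command line tools – the AMI and volume IDs, key pair, zone and device name are all placeholders:


#!/bin/sh
# Boot an instance from our AMI and capture the instance id
INSTANCE=$(ec2-run-instances ami-12345678 -k my-keypair \
    -t m1.small -z us-east-1a | awk '/^INSTANCE/ {print $2}')

# Wait until the instance reports itself as running
while ! ec2-describe-instances $INSTANCE | grep -q running; do
    sleep 10
done

# Attach a pre-created EBS volume as a device on the new instance
ec2-attach-volume vol-87654321 -i $INSTANCE -d /dev/sdh

# ...then ssh in, mount /dev/sdh, start services, and so on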

When creating your AMI through a loopback file, once again you have almost infinite control, which should make the scripting easier. Once you have finished with your masterpiece, be sure to bundle it and upload it to Amazon.

3. Persistence

Now that you know how you want to create your AMI and have made the process repeatable, you need to think about the persistence of your data. By default, none of your information will be saved when your AMI instance is shut down; unless you re-bundle that instance to a new bucket or with a new manifest name, everything you did since you booted will be lost.

Obviously this isn’t workable, and Amazon have a solution: Elastic Block Storage (EBS). Again, there are Amazon docs covering how to create and use EBS volumes. EBS volumes are attached to your instances as devices which you can mount as you would any other device; they are easy to use, and it is simple to take backups using the snapshot feature.

The real consideration is how to structure the installation of your software so that you get the benefits of data persistence. This will obviously vary massively with the type of system you are running and the software you are installing, but components like web applications and databases should be configured to use EBS for information storage. If you plan carefully you can have EBS volumes with discrete services installed that can be cloned and attached to new instances at will, allowing for a truly scalable architecture.

4. Versioning

How can you keep track of differences in your environments from one upload to the next? There are a couple of really manageable ways. If you are scripting your setup, you can use version control and a release or tagging process for the scripts to keep track of environmental changes.

If you are only at the stage of tracking changes through component lists, it is possible to bundle your AMIs to a bucket appended with a version, or indeed to specify a manifest file versioned in the same way, meaning you keep each historic change as an entire environment. Even if you are version-controlling your scripts, I would suggest uploading to unique bucket names appended with versions to begin with anyway (see the sketch below); it means you can immediately revert to the old environment if anything goes wrong.
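
In practice, a versioned bundle-and-upload looks something like this – the bucket name, credentials, account id and image prefix are placeholders, and note that ec2-bundle-vol and ec2-upload-bundle are run on the instance itself:


# Bundle the instance's root volume with a versioned prefix
ec2-bundle-vol -d /mnt -k pk.pem -c cert.pem \
    -u 123456789012 -p myapp-image-1.0

# Upload the bundle to a bucket name that encodes the version
ec2-upload-bundle -b mike-amis/myapp-image-1.0 \
    -m /mnt/myapp-image-1.0.manifest.xml \
    -a $AWS_ACCESS_KEY -s $AWS_SECRET_KEY

# Register the manifest so the new AMI can be launched
ec2-register mike-amis/myapp-image-1.0/myapp-image-1.0.manifest.xml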

After using Amazon EC2 for a while it becomes clear that environments can be treated as just another architectural component: they can be provisioned on demand, have services attached and started, and within minutes you have a custom “box” available for use in the cloud – one that can be torn down and brought back to its initial state very quickly.

One more thing: whatever you do, don’t block your SSH port 🙂

Behind the scenes: evolving the UI

November 6, 2009

First of an occasional series of posts describing how we do development here at Mike CI. I’m sure what we do is by no means unique, but hopefully our experiences might resonate with your own project – or at the very least give you an opportunity to point out how we could do things better!

As you’d expect for a new product start-up, we are an Agile shop. All prospective features are put into our Product Backlog in Agilo, our Scrum tool of choice (I’ll probably do another post at some point on how we do Scrum). At the start of each sprint we take user stories from the backlog and figure out how we will implement them during the Sprint planning meeting.

The main goal of our last sprint was to add a new component – the Account Manager. Our first release of the Account Manager includes functionality to enable users to register an account, invite other users to join them, manage their profile and change their password. Simple stuff, but a core component of the platform.

For the Account Manager we wanted a cleaner look and feel than the Control Panel’s. Managing your account should be easy and simple to do, and the design should reflect that. We also knew we would be adding more features here, so the design had to be able to cope with that – eventually users will be able to upgrade/downgrade subscriptions, view usage and change their payment methods.

Our initial step is to storyboard the flow on a whiteboard and then capture the flows in Balsamiq. We’re really pleased with Balsamiq as a prototyping tool: it is incredibly quick to pick up and produces great mock-ups that convey the flow and spirit of the story without constraining the design. We then review and debate these flows in the sprint planning session. As you’d expect with a team of IT geeks (sorry, professionals), these debates can get quite animated! We then re-factor the mock-ups and paste the images into the relevant user story in Agilo. The flows and mock-ups are crucial: not only do they give us the spec for development, but we work our test plans from them too.

[Balsamiq mock-up: Invite a user to join Mike]

Here is the first cut of the Manage Users Page from Balsamiq.

As this is a new application, we decided to follow up with some Photoshop mock-ups. We don’t always do this, but on this occasion, as we weren’t constrained by the Control Panel look and feel, we added the extra step.

[Image: Manage Users design 1 – initial design for the Account Manager]

We created about six different designs, variations on a theme, but they really helped us visualise what we wanted and gave us something concrete to review and discuss. This was a bit of a design smack-down – there could only be one winner!

While this was going on, the developers had been implementing the functionality without the design. The application is a fairly typical Java Spring application: web pages are JSPs, we use SiteMesh and a bit of Ajax here and there. The developers coded from the back-end first, giving all the screens a blank design to start with. All the key elements in the screens were given IDs, which helped the skinning process later on. The most crucial stage is realising the design in the web pages. This is another iterative cycle, and often what looked good in Photoshop doesn’t necessarily work when implemented in CSS and HTML. In fact, I’d recommend not spending too long on the Photoshop design stage – the sooner you start working up the designs for real, the better.

Once we’d settled on a final skin design, it didn’t take long to skin the app – about a day or so, with a few impromptu reviews along the way.

[Image: final version of the Manage Users screen]

This is the final version – I hope you like it. It ties in more closely with our website and blogs than with the Control Panel, which raises some questions about whether or not we want to align the designs more. I’m really happy with the designs and I hope our users are too. We hope to release the initial version of the Account Manager soon – so watch this space!

After a few iterations we think we now have a pretty good process for rapidly developing Mike applications. Balsamiq has been great for pinning down an initial design. We can then, in parallel, work up the final designs (in Photoshop or HTML) while we progress the development. The final step is to skin the pages with the final designs. Constantly review along the way and be prepared to compromise – what looked great in Photoshop might not work for real.

I hope this has been useful, comments appreciated!