Naked Objects
By Richard Pawson and Robert Matthews

A development process

The delivery phase

In the delivery phase, the system is developed, integrated, tested and released.

No code is carried forward from the exploration phase. The style of coding adopted during exploration will have been fast and loose. There was absolutely no emphasis on rigorous design or testing; nor even on input validation, rule enforcement, or error prevention. Exploration assumes either that the prototype will only be used by an expert user, or that errors don't matter. To let any of this code be carried forward is to invite future problems.

The object model, and the high-level definitions of responsibilities that evolved during exploration, are carried forward. You may even want to retain the method signatures, especially where they are a direct reflection of certain responsibilities, but all the code inside those methods must be wiped clean.

We have found that allowing developers to preserve these class definitions and method signatures in the development environment helps to reduce the temptation to carry forward some of the code. They provide a useful model of the overall design, and psychologically this helps developers to overcome the sense that they are starting all over again with a blank sheet of paper. Many actually welcome the chance to write the code again from scratch in a more disciplined fashion.

If you use an environment that can maintain a UML representation in bi-directional synchronization with the code itself, then you can think of the UML as what is passed from exploration to delivery, provided that you also adopt some convention for recording the higher-level statements of object responsibilities. Unless you have such a tool, we think you are better off not using UML but rather recording the responsibilities in the form of textual comments at the top of each Java class file.
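By way of illustration, such a class file might carry its responsibilities as a header comment, with the method signatures retained from the prototype but the bodies rewritten during delivery. The Customer class and its responsibilities here are invented for the example, not taken from any particular project:

```java
/*
 * Responsibilities (carried forward from exploration):
 *  - Knows its current and past bookings.
 *  - Can locate a booking by reference number.
 */
public class Customer {

    // Signature retained from the exploratory prototype;
    // the body below is a placeholder delivery-phase rewrite.
    public String locateBooking(String reference) {
        return "Booking " + reference;
    }

    public static void main(String[] args) {
        System.out.println(new Customer().locateBooking("42"));
    }
}
```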

During delivery, writing business functionality might involve creating some specific new sub-classes not explicitly modelled but at least foreseen during exploration. It may also involve writing some new aggregated classes that sit entirely within one of the business objects. But in the main, coding activity in the delivery phase will consist of writing methods on business objects.

Test-first coding

When writing those methods we strongly advocate adopting the discipline of test-first coding: before you start writing the method, you write one or more executable unit tests that will check whether the method is correctly implemented. These tests should continue to be run throughout the development cycle to ensure that new errors haven't crept into the system. Using the JUnit framework, it is possible to invoke these tests at the touch of a button. Converts to this approach often run their unit tests every few minutes during development. We have made a number of extensions to JUnit to make it easier to apply within the context of the Naked Objects framework.
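The discipline looks like this in miniature. In practice the test would be an assertEquals() call in a JUnit TestCase subclass; here a plain check keeps the sketch self-contained, and the Booking object and its confirm() method are invented for illustration:

```java
public class BookingTest {

    static class Booking {
        private String status = "Pending";

        // Written *after* the test below, and reworked until the test passes.
        public void confirm() {
            status = "Confirmed";
        }

        public String getStatus() {
            return status;
        }
    }

    // The unit test, written first: it defines what 'correctly
    // implemented' means for the confirm() method.
    static void testConfirmChangesStatus() {
        Booking b = new Booking();
        b.confirm();
        if (!"Confirmed".equals(b.getStatus())) {
            throw new AssertionError("expected Confirmed, got " + b.getStatus());
        }
    }

    public static void main(String[] args) {
        testConfirmChangesStatus();  // rerun frequently during development
        System.out.println("All unit tests passed");
    }
}
```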

Extreme Programming (XP) seeks to apply this principle of writing up-front executable tests not only to unit testing but also to acceptance testing. In XP, when a particular story (or requirement) is to be implemented, the short description is fleshed out through direct discussion between developer and user. They also jointly write one or more executable acceptance tests for that story. By writing them in executable form, the developers can run these tests frequently during the development of the story, to get an indication of progress, and can run them as regression tests after subsequent refactoring [Fowler2000]. The primary role of acceptance tests in XP, however, is as a measure of value delivered: when all the acceptance tests run, that story is deemed to be implemented and the players move on to the next one.

Naked Objects makes it easier to adopt this particular XP practice. Writing executable acceptance tests for systems with graphical user interfaces (GUIs) is generally recognized as being very difficult [Kaner1997]. There are many tools that can capture and replay the keyboard and mouse events of an actual user operation, but this approach to testing has many problems [Groder1999]. Any change to the layout or style of the user interface will require these tests to be re-recorded, as, in many cases, will porting the application onto a machine other than the one where the test was recorded. Worse, from the XP viewpoint, is that these record-and-playback tests can only be captured after the system has been developed. Some of the tools provide a high-level GUI scripting language that, in theory, would allow the test scripts to be written in advance. However, with conventional systems design this still leaves the problem that it is very difficult for the user to imagine a yet-to-be-implemented user interface in sufficient detail to be able to write a detailed test script.

Writing executable acceptance tests

The Naked Objects framework utilises tests written in terms of higher-level user actions [Finsterwalder2001]. When the users come to flesh out a story during delivery, they can use the exploratory prototype. Because the user interactions take a standard form, users can specify the implementation of any story in terms of direct operations upon business objects (instances or classes) that they have become used to manipulating on the prototype. The prototype is far from complete, so some stories will entail attributes, methods and associations that do not exist on the prototype, but we have found that the users have little difficulty in imagining extensions to the concrete objects that are in front of them. This is much less true if the prototype takes the standard 'scripted' form.

So, when a new story is to be started, a user and a programmer sit down together and write out the task in a formal language consisting of noun-verb style operations on the business objects and classes. (Note that this is merely a definition of a set of actions that a user may choose to follow. It is not a definition for an executable procedure that will eventually form a part of the system.)

Our original idea was to define a specialized constrained-English language for writing these acceptance test scripts, using XML. This language could then be simply converted into a Java executable test. However, it soon became clear that this constrained English was so close to Java that we might as well work the other way around. In theory, the user could write the constrained-English version alone. In practice, we found that, whatever the language, a user and programmer working together were more effective and ultimately faster. So switching to a simple Java framework did not impede the process.

The programmer captures the detailed storyline, live, as a sequence of methods on specialized test classes provided as part of the Naked Objects test framework. These test classes simulate the interaction between the framework's viewing mechanism and the business objects.
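The flavour of such a captured storyline can be sketched as follows. The TestView class and its methods here are invented for the example; the real Naked Objects test framework supplies its own (different) classes for simulating the viewing mechanism:

```java
import java.util.HashMap;
import java.util.Map;

public class Story2AcceptanceTest {

    // Stands in for the framework's simulation of how the viewing
    // mechanism presents one business object to the user.
    static class TestView {
        private final Map<String, String> fields = new HashMap<>();

        void fieldEntry(String field, String value) {
            fields.put(field, value);
        }

        String fieldValue(String field) {
            return fields.get(field);
        }
    }

    public static void main(String[] args) {
        // Noun-verb storyline: "Create a Booking; enter the city
        // pair; check the entries were accepted."
        TestView booking = new TestView();
        booking.fieldEntry("From", "London");
        booking.fieldEntry("To", "Paris");

        if (!"London".equals(booking.fieldValue("From"))) {
            throw new AssertionError("acceptance step failed");
        }
        System.out.println("Acceptance test passed");
    }
}
```

Note that, as the text above says, this defines one set of actions a user may choose to follow; it is not an executable procedure that becomes part of the system itself.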

When the acceptance tests for a story are completed, the programmer(s) can then start designing and coding the necessary functionality, and writing unit tests for each of the methods to be created or altered. The acceptance tests are run in a manner very similar to unit tests under JUnit. Just as with the JUnit approach to unit testing, we have found that some programmers use the executable acceptance tests to guide the work: in other words they address the errors thrown up by the tests in sequential order. This is a matter of personal choice.

Auto-generating the user training manual

Once you have a generic framework for writing executable acceptance tests, something else becomes possible: those test classes can be given the ability to translate themselves into a set of plain-English, step-by-step instructions telling the user how to undertake the same acceptance test manually, if they wish to. An example of this automated output is shown on this page.

This HTML user documentation was automatically generated by the Naked Objects test framework, from an executable user acceptance test. (This is one of the tests associated with Story 2 from the ECS system).

More significantly, perhaps, these auto-generated English-language user instructions constitute a substantial proportion of the user training manual for the system under development. After all, unlike unit tests (which are concerned primarily with technical correctness), the user acceptance tests were all defined in terms of delivering value to the user. These acceptance tests represent scenarios that the users can expect to encounter, some of them routine and some of them exceptional; and a training manual would have explicit instructions on how to cope with such scenarios on the system.
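The mechanism can be sketched as a test class that both performs and records each simulated step, so the same script doubles as a page of the manual. All the names below are illustrative, not the real framework's API, and the story text is invented:

```java
import java.util.ArrayList;
import java.util.List;

public class DocumentingTest {

    private final List<String> steps = new ArrayList<>();

    // Each step is recorded in plain English; in the real framework the
    // corresponding simulated user action would also be executed here.
    void step(String instruction) {
        steps.add((steps.size() + 1) + ". " + instruction);
    }

    // Renders the recorded steps as a page of the training manual.
    String asManualPage(String title) {
        StringBuilder page = new StringBuilder(title + "\n");
        for (String s : steps) {
            page.append(s).append("\n");
        }
        return page.toString();
    }

    public static void main(String[] args) {
        DocumentingTest t = new DocumentingTest();
        t.step("Open the Customers class and select 'New Customer'.");
        t.step("Drag the new Customer onto the Booking to associate them.");
        System.out.print(t.asManualPage("Story 2: Make a booking"));
    }
}
```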

Some people have queried whether this use of scripts is not at odds with treating the user as a problem solver rather than a process follower. It is important to understand that these scripts are simulations of what a user does. They are not executed within the application itself, but within the testing framework that sits outside the application. And even when used to generate pages of a training manual, these scripts don't say 'you have to fulfil this story this way', but 'here is a way that you can fulfil this story'. There may be several alternative scripts for the same story, and there may be many ways to fulfil the same story that are not written as formal acceptance tests. Some may object that this implies that the testing is not comprehensive. That is necessarily true of any event-driven system, which effectively means anything with a GUI. But the ease with which you can now write executable acceptance tests is in practice likely to lead to more thorough testing than is typically the case for most commercial systems development.

The training manual would need other things as well, including a conceptual introduction to the application, and an explanation of the various business objects and their methods. In addition there must be some generic explanations of the user environment, equivalent to the generic instructions for any Windows-based, or for a web-browser-based, application. (We'll be providing an updated version of this generic introduction on our website).

Apart from saving work, auto-generating the user training documentation from the executable acceptance tests guarantees that it is consistent with the operation of the system. It is as though, when users are fleshing out a particular story, they are writing the page of the training manual for that story, and we are using an executable version of that page as our acceptance test.