X-Designer Replay is a simple-to-use, portable, and powerful widget-based testing tool. It is intended to provide a testing solution across the whole range of platforms supported by X-Designer.
Most Motif/Xt programming involves reusing the Motif widgets and the X Toolkit. X-Designer Replay testing focuses on the Xt widget hierarchy, both for controlling a test sequence and for checking whether a test has succeeded.
It is important to note that you are not checking whether the widgets themselves are correct - only that user interaction with those widgets produces the desired results within your application.
Not all testing can be automated in this way. There will always be a need to visually inspect an application to check whether it looks right or whether any graphics programming (e.g. in drawing areas) has worked. While there will always be a requirement for looking and thinking, the widget-based testing strategy ensures that you can focus your attention on those few parts of the application that need it.
Experience has taught us that there are three graduated approaches to the production of a testing script: simple record and replay, script fragmentation, and data-driven testing.
Simple record and replay is the most direct way of checking that user actions can be replayed exactly as they were recorded. However, the scripts can become very large and troublesome to maintain, and it can be difficult to work out which part of a test is failing. More importantly, any change to the application means that the whole script has to be re-recorded.
With script fragmentation, a large script is split into small, self-contained scripts, each of which exercises an identifiable part of the application. Since this is such an effective testing technique, we have provided a detailed example (see Using Testing Macros). Each fragment is expanded using a preprocessor (e.g. m4 or cpp), or any programming language you feel comfortable with. This allows you to build scripts such as:
StartApplication()
OpenFile(foo.c)
CloseApplication()
Your preprocessor, interpreter or compiler would then translate these fragments into a full X-Designer Replay command sequence.
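For example, CloseApplication() might be no more than a cpp macro which pushes the application's Exit button. The widget names main_shell and exit_button in this sketch are hypothetical; your own definition would name the widgets in your design:

#define CloseApplication() \
        in main_shell push exit_button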
This simple strategy takes you away from "step-by-step" programming, and your test scripts will be far more manageable.
The language you use for expressing your tests should be carefully selected. The main criteria are that it should let you express your tests clearly and concisely, and that it should be available on every platform on which you build and test your application.
Class-based languages such as Java or Python are ideal for this purpose. Modelling languages tailored for symbolic processing, such as Lisp or Prolog, are other obvious candidates. Preprocessors such as m4 or even the C preprocessor will get you going very quickly.
Alternatively you may prefer to build your model in the language used by your application. In this way you guarantee that it is always available when you port your software. The only rule of thumb is that if you feel you're writing a program rather than designing a set of tests, there is almost certainly an easier way.
This testing method is appropriate for most small to medium-sized applications. However, for very large applications (and X-Designer is a good example), fragmentation also has its limitations. The data-driven approach, described next, overcomes these problems.
Our experience in devising tests for X-Designer has shown that the most cost-effective way of writing tests is the data-driven approach: provide a description of each dialog and then use that description in the tests. Consider the following example, where the Color Dialog is described:
ColorDialog.shell = my_color_shell
ColorDialog.helpbutton = color_help
ColorDialog.applybutton = color_apply
ColorDialog.quit = color_quit
These descriptions can be used each time the dialog needs to be involved in a test, whatever the reason for the test, e.g.
Open(ColorDialog)
CheckHelpFor(ColorDialog)
Close(ColorDialog)
Each procedure listed above is general purpose and can be applied to any dialog we have described.
The definition of Close is shown below:
#define Close(dialog) \
        in dialog.shell push dialog.quit
This technique allows you to separate out the description of the interface from the actions which exercise it. It also means that any change to the interface requires only a change to the associated data description - test scripts remain unchanged. If a new dialog is introduced to the application, you simply have to write its description and any non-standard operations which may be performed on or in it.
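For example, if a hypothetical Print dialog were added to the application, a description along the following lines (with invented widget names) would be all that is needed before the existing general-purpose procedures could be used on it:

PrintDialog.shell = my_print_shell
PrintDialog.helpbutton = print_help
PrintDialog.quit = print_cancel

Close(PrintDialog) and CheckHelpFor(PrintDialog) would then work without any change to the procedures themselves.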
The biggest advantage of such a strategy is that the description is simple, clear and so close to the design itself that keeping tests in sync with product development becomes a well defined and straightforward exercise.
A good test is one which has been designed to break that part of the application it is checking. The test is successful if the application does not fall over; otherwise it is a failure.
Automated replay, by itself, is a minimal form of testing. If the sequence replays without error, then you have some measure that what was expected did actually happen. It is minimal because it only tests one potential result of a user action.
Consider the action of opening a file. In a minimal test, the expected result would be that the file is opened and everything progresses smoothly. However, this test is by no means complete. You need to consider other (potential) results - for example, the file may not exist, it may not be readable, or it may already be open.
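A fragment to catch one of these outcomes might look like the sketch below. It assumes, purely for illustration, that the application reports a missing file in a popup named no_file_popup with an OK button named no_file.OK:

OpenFile(foo.c)
if IsVisible(no_file_popup)
  message FAIL: foo.c could not be opened
  in no_file_popup push no_file.OK
endif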
The simplest test is one which records a series of actions within your application and then replays the script to duplicate those actions. While successful execution of such a script can give some confidence in your application, you can gain even greater confidence by taking advantage of the extra commands for control flow and expressions provided by X-Designer Replay to enrich a basic script. These allow you to cater for different display types, check widget resource settings, print messages, and much more.
Consider the situation where your application displays a message when it is running on a monochrome display but displays no message when it is running on a full color display.
Clearly, you don't want to have a separate test for each display. Instead, you can insert commands at the point where you expect the message to appear and wrap these commands in an if statement, e.g.
if !IsPseudoColor
  message Non PseudoColor display
  in warning_popup push warning.OK
endif
This same check will work whatever display hardware or window manager you are using.
The size of application dialogs is also important. Two dialogs shown simultaneously may both be fully visible on one display, overlap on another, or be placed one on top of the other on a third. This can result in application-modal warning messages disappearing behind the main dialog, and your application apparently locking up.
The following test script fragment demonstrates how to handle such a problem:
if !IsVisible(open_file_dialog)
  error The Open File dialog is off screen
endif
See Display Expressions for more information on handling different display types.
If your application exhibits different behavior on different displays, your tests need to be written to accommodate this. For example, the application may put up a warning dialog to tell the user to expect some degradation of display quality.
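One way to accommodate this is to dismiss the warning only when it actually appears. The popup and button names (quality_popup, quality.OK) in this sketch are hypothetical:

if IsVisible(quality_popup)
  in quality_popup push quality.OK
endif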
Now consider the selection of an option from an option menu. While a standard script will certainly make the selection, a good testing script will check that the selection has been made.
The example below shows how we test that the Language option has been set to an expected value in the X-Designer Generate dialog:
if !languageOption->menuHistory:'cppButton'
  message FAIL: Language option error.
  printres languageOption->menuHistory
  message expected cppButton
endif
There are three ways to deal with a test failure, each relating to a particular xdreplay command line flag.
The best way to handle failure is to prepare for it in your script. Use conditional sequences and take appropriate actions (e.g. output a message) when a failure occurs.
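For example, rather than simply pushing a button and assuming the result, you can check for the expected dialog and record a message if it is missing. The widget name generate_dialog below is hypothetical:

if !IsVisible(generate_dialog)
  message FAIL: the Generate dialog did not appear
endif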
Another useful aid to locating a test failure is the -v command line flag. This displays commands from the script on standard output as they are executed. Once you have located the problem, you can create a smaller script to reproduce it. This can then be used (perhaps in conjunction with your favorite debugger) to identify the problem. It can also be added to your regression test suite to demonstrate that the bug has been fixed.
See also: