This document describes the Batik test infrastructure whose goals
are to:
- Make it easy to detect regressions
- Make it easy to run test suites
- Make it easy to write new tests and add them
to an existing test suite
The intent for the test infrastructure is that it grows
along with Batik and keeps monitoring the health of
the code base.
While the test suites in the infrastructure will be
run every day by build/test machines, they are also
intended to help committers and developers gain confidence
that their code modifications did not introduce regressions.
The following are the high-level
interfaces in the infrastructure:
- A Test performs whatever check
is needed in its run method; each
run produces a TestReport
- A TestReport describes whether
a Test run passed or failed and, on failure, provides a
description of the failure in terms of an error
code (unique in the context of a given Test)
and a set of key/value pairs
- A TestSuite is an aggregation
of Test instances which can run them as a group
- A TestReportProcessor is used to
analyze a TestReport. A specific implementation
can choose to create graphs, send an email or write
an HTML file
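To illustrate how these pieces fit together, here is a minimal, self-contained sketch of the interfaces and a trivial Test. The signatures below are simplified illustrations for this document, not Batik's actual declarations, and NotNullTest is a made-up example class.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified sketches of the interfaces; Batik's real
// declarations carry more methods and behavior.
interface Test {
    String getId();
    TestReport run();  // performs the check, produces a report
}

interface TestReport {
    boolean hasPassed();
    String getErrorCode();                 // unique per Test
    Map<String, String> getDescription();  // key/value failure details
}

interface TestSuite extends Test {
    void addTest(Test t);  // aggregate Test instances
}

interface TestReportProcessor {
    void processReport(TestReport report);  // e.g. mail it, write HTML
}

// A made-up, trivial Test whose run method performs one check.
class NotNullTest implements Test {
    private final Object target;
    NotNullTest(Object target) { this.target = target; }
    public String getId() { return "notNull"; }
    public TestReport run() {
        final boolean passed = (target != null);
        return new TestReport() {
            public boolean hasPassed() { return passed; }
            public String getErrorCode() {
                return passed ? null : "target.was.null";
            }
            public Map<String, String> getDescription() {
                Map<String, String> d = new HashMap<>();
                if (!passed) d.put("target", "null");
                return d;
            }
        };
    }
}
```

Each run of a Test thus yields a fresh TestReport that a TestReportProcessor can consume without knowing anything about the check itself.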
The test infrastructure comes with a number of default
implementations of the interfaces described above.
Specifically:
- AbstractTest. This implementation of
the Test interface is intended to make it
easier to write a 'safe' Test implementation.
See "Writing New Tests"
for a description of how to use that class.
- DefaultTestReport provides a simple
implementation of the TestReport interface
that most Test implementations will be able to
use. See "Writing New Tests" for more details.
- DefaultTestSuite provides an implementation
of the TestSuite interface and makes it
easy to aggregate Test instances.
- SimpleTestReportProcessor is a sample
TestReportProcessor implementation that
simply traces the content of a TestReport to
an output stream.
- TestReportMailer is another implementation
of the TestReportProcessor interface that
emails a test report to a list of destination addresses.
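The aggregation-and-tracing pattern behind DefaultTestSuite and SimpleTestReportProcessor can be sketched in a standalone way as follows. All names here are illustrative; this is not Batik's actual API, just the shape of the idea: a suite holds named checks, runs them all, and traces each outcome to an output stream.

```java
import java.io.PrintStream;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Supplier;

// Hypothetical mini-suite: aggregates named checks and runs them,
// tracing each result to a stream (the SimpleTestReportProcessor idea).
class MiniSuite {
    private static class Entry {
        final String id;
        final Supplier<Boolean> check;
        Entry(String id, Supplier<Boolean> check) {
            this.id = id;
            this.check = check;
        }
    }

    private final List<Entry> tests = new ArrayList<>();

    void addTest(String id, Supplier<Boolean> check) {
        tests.add(new Entry(id, check));
    }

    // Runs every aggregated check, prints PASSED/FAILED per test,
    // and returns the number of failures.
    int run(PrintStream out) {
        int failures = 0;
        for (Entry e : tests) {
            boolean passed = e.check.get();
            out.println(e.id + ": " + (passed ? "PASSED" : "FAILED"));
            if (!passed) failures++;
        }
        return failures;
    }
}
```

In Batik itself you would instead subclass AbstractTest, add the instances to a DefaultTestSuite, and hand the resulting report to a SimpleTestReportProcessor or TestReportMailer.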
The test infrastructure uses XML as its preferred way to
generate test reports (and also as input, to describe test
suites; see "Running a test suite"). The XMLTestReportProcessor
implementation of the TestReportProcessor interface
outputs reports in XML in a configurable directory.
The XMLTestReportProcessor can notify an
XMLReportConsumer when it has created a new
report. One implementation of that interface is provided
by default, XSLXMLReportConsumer, which can run an XSL
stylesheet on the XML report (e.g., to generate an HTML
report). This is used
by the 'regard' rule in the Batik build to produce an HTML report for
the default regression test suite.
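The core of what XSLXMLReportConsumer does, running an XSL stylesheet over an XML report, can be illustrated with the standard javax.xml.transform API. The report element and the stylesheet below are made-up fragments for the example; Batik's actual XML report format differs.

```java
import java.io.StringReader;
import java.io.StringWriter;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Standalone illustration of the XML-report + XSL-stylesheet idea.
class XslSketch {
    // Applies a tiny made-up stylesheet to a made-up report element,
    // producing a one-line text summary.
    static String summarize(String reportXml) throws Exception {
        String xsl =
            "<xsl:stylesheet version='1.0' "
          + "xmlns:xsl='http://www.w3.org/1999/XSL/Transform'>"
          + "<xsl:output method='text'/>"
          + "<xsl:template match='testReport'>"
          + "<xsl:value-of select='@id'/>: <xsl:value-of select='@status'/>"
          + "</xsl:template>"
          + "</xsl:stylesheet>";
        Transformer t = TransformerFactory.newInstance()
            .newTransformer(new StreamSource(new StringReader(xsl)));
        StringWriter out = new StringWriter();
        t.transform(new StreamSource(new StringReader(reportXml)),
                    new StreamResult(out));
        return out.toString();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(summarize("<testReport id='demo' status='passed'/>"));
    }
}
```

XSLXMLReportConsumer does the same kind of transformation, reading the report from the configured output directory and typically producing HTML rather than plain text.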
The infrastructure tries to make it easy to create, update and
modify test suites. This section describes how to describe a set
of tests to be run and how to actually run that test suite.
regard: the Batik regression test suite
The regard test suite contains all the regression tests for the Batik project.
The regard tool is driven by a specific test suite description, regard.xml
(which you can find in the test-resources/org/apache/batik/test directory). That
file lists a set of test suite files which should be run.
The following describes how to use the regard tool and some of the most important tests
in the regard test suite.
The regard tool lets you run either all the tests or any specific test you want in the
test suite. To run all the tests in the regard test suite, type the following at the command
line:
build.sh regard
To run a specific test in the test suite, type the qualified test id or any sub-portion of that
id:
build.sh regard <id list>
For example:
build.sh regard unitTesting.ts batikFX.svg
will run all the tests with an id containing unitTesting.ts (i.e., all the selection
unit tests; see test-resources/org/apache/batik/gvt/unitTesting.xml) and the rendering
accuracy test on batikFX.svg (because it is the only test with batikFX.svg in its id).
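The id matching described above behaves like a substring filter over fully qualified test ids. A minimal sketch of that selection logic (hypothetical class and method names, not Batik's actual code):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of regard's id-based test selection:
// a test is selected if its fully qualified id contains any of
// the id fragments given on the command line.
class IdFilter {
    static List<String> select(List<String> testIds, String... fragments) {
        List<String> selected = new ArrayList<>();
        for (String id : testIds) {
            for (String fragment : fragments) {
                if (id.contains(fragment)) {
                    selected.add(id);
                    break;  // avoid adding the same id twice
                }
            }
        }
        return selected;
    }
}
```

With fragments "unitTesting.ts" and "batikFX.svg", every id containing either fragment is kept, which is why a partial id is enough to pick out a single test or a whole sub-suite.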
There is a Test implementation, SVGRenderingAccuracyTest, which
checks that Batik's rendering of SVG documents stays accurate. It compares reference images
with the rendering Batik produces and reports any discrepancy.
The SVGRenderingAccuracyTest configuration
An SVGRenderingAccuracyTest's constructor configuration is made of:
- The URL of the SVG document it should render
- The URL of a reference PNG file
The default behavior for the test is to render the SVG into a PNG image and compare it with
the reference image. If there is no difference, the test passes. Otherwise, it fails.
In addition to this default behavior, the SVGRenderingAccuracyTest can
take an optional configuration parameter: the URL of an image defined as an 'accepted' variation
around the reference image. If such a variation image is specified, then the test will pass if:
- The rasterized SVG is equal to the reference image
- Or, the difference between the rasterized SVG and the reference image is
exactly the same as the accepted variation image
Finally, to ease the process of creating 'accepted' variation images,
SVGRenderingAccuracyTest can take an optional file name (called 'saveVariation')
describing where the variation between
the rasterized SVG and the reference image should be stored when the rasterized SVG
differs from the reference image and the difference is not equal to the accepted variation
image, if any was defined. That way, it becomes possible to run a test and, if it fails,
review the saveVariation image and decide whether it is an acceptable
variation. If so, it can be used in subsequent test runs as the 'accepted' variation image, which
will allow the test to pass as long as that exact same variation remains constant.
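The pass/fail decision described above can be sketched as follows. This is a simplified illustration of the logic, not Batik's actual implementation: images are treated as byte arrays and the diff is a byte-wise XOR stand-in for a real pixel comparison.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;

// Simplified sketch of SVGRenderingAccuracyTest's decision logic.
class AccuracyCheck {
    // Stand-in for a real pixel diff between two images.
    static byte[] diff(byte[] a, byte[] b) {
        int n = Math.max(a.length, b.length);
        byte[] d = new byte[n];
        for (int i = 0; i < n; i++) {
            byte x = i < a.length ? a[i] : 0;
            byte y = i < b.length ? b[i] : 0;
            d[i] = (byte) (x ^ y);
        }
        return d;
    }

    // rendered: the rasterized SVG; reference: the reference PNG;
    // acceptedVariation: optional accepted-diff image (may be null);
    // saveVariation: optional path for storing a candidate variation.
    static boolean check(byte[] rendered, byte[] reference,
                         byte[] acceptedVariation,
                         Path saveVariation) throws IOException {
        if (Arrays.equals(rendered, reference)) {
            return true;   // exact match: pass
        }
        byte[] variation = diff(rendered, reference);
        if (acceptedVariation != null
                && Arrays.equals(variation, acceptedVariation)) {
            return true;   // difference matches the accepted variation: pass
        }
        if (saveVariation != null) {
            // store the candidate variation for later review
            Files.write(saveVariation, variation);
        }
        return false;      // unexplained difference: fail
    }
}
```

The saveVariation file written on failure is exactly what a developer reviews before promoting it to an 'accepted' variation image.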
Initial set-up
To set up the test environment the first time, you need to:
- Check out the latest version of the code, including the test-xx directories
(sources, resources and references) and the build.xml file
- Run the regard test suite once:
build regard
This will generate an HTML test report (report.html) in the
test-reports/yyyy.mm.dd-HHhMMmSSs/html/html directory.
Depending on how different text rendering is between your work environment and the
environment used to create the reference images, more or fewer tests will fail,
because of differences in the way text is rendered on various platforms and because of
fonts not being available on some platforms. For example, running the tests on a Windows 2000
laptop against images generated on the Solaris platform caused 16 tests out of 71 to fail.
Review the HTML report to make sure that the differences are really due to text variations.
This will usually be the case, and you can make sure by clicking on the diff images contained
in the report to see them at full scale. You can then turn the 'candidate' variations generated by
the test into 'accepted' variations by moving files from one directory to the other:
mv test-references/samples/candidate-variations/*.png test-references/samples/accepted-variations/
mv test-references/samples/tests/candidate-variations/*.png test-references/samples/tests/accepted-variations/
You can now run the test again:
build regard
Check the newly generated HTML report in the test-reports/html directory: there should
no longer be any test failures.
Daily usage
Once the initial set-up has been done, you can use regard by simply updating your
CVS copy, including the test-references. If no change occurs, your tests will keep passing
with your reference images. If a test fails (e.g., if someone checks in a new reference
image from a platform different from the one you are using), you will have to check whether
it is because of system-specific reasons or whether there is a bigger problem.
Regard contains over 100 tests checking for regressions in the SVG Generator. If you use 'svggen' as an argument to
regard, all the SVG Generator tests will be run (because regard.xml points to test-resources/org/apache/batik/svggen/regsvggen.xml,
which is a test suite description for the SVG Generator, and its root <testSuite> element has the 'svggen' id).