Automake provides some simple support for regression tests. There is a (terse) description of these in the automake manual, in the section `Support for test suites', but it lacks any example. You run the tests with `make check', after the build but before the component is installed.
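That is, a typical cycle is something like the following (the sequence of commands is the standard one):

    ./configure
    make
    make check        # build the test programs and run the tests
    make install      # install only once the tests pass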
You can set up tests as follows.
    TESTS = test1 test2
    check_PROGRAMS = test1 test2

    test1_SOURCES = test1.f
    test1_LDADD = libemsf.la libems.la `cnf_link`

    test2_SOURCES = test2.c
The TESTS variable lists a set of programs which are run in turn. Each should be a program which returns zero on success; if all the programs return zero, the test is reported as a success overall. If a non-portable test makes no sense on a particular platform, the program should instead return the magic value 77; such a program is reported as skipped rather than passed, and is not counted as a failure.
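As a concrete illustration, here is a minimal sketch of a C test program which follows this convention (the HAVE_WIDGETS macro is hypothetical, standing in for whatever configure-time feature test is relevant):

    /* Exit-code convention for automake tests: 0 means success,
       77 means `skip this test on this platform', and any other
       value means failure. */
    #include <stdio.h>

    int main(void)
    {
    #ifndef HAVE_WIDGETS        /* hypothetical feature macro */
        fprintf(stderr, "widget support not built: skipping\n");
        return 77;              /* skipped, not failed */
    #endif
        /* ... exercise the feature here, returning 1 on failure ... */
        return 0;               /* success */
    }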
A PROGRAMS `primary' (see Section 2.1.2 for this term) indicates that these are programs to be built, but the `prefix' check indicates that they need be built only at `make check' time, and are not to be installed.
The SOURCES variable is used as usual, but while the test2 program is standalone (it's not quite clear how this will test anything, but let that pass), the test1 program needs to be linked against two libraries, presumably part of the build. We specify these with an LDADD variable, but note that we give the two libraries actually under test as libtool libraries, with the extension .la, rather than using the -lemsf -lems `cnf_link` which the ems_link script takes as its starting point (this example comes from the ems component). That tells libtool to use the libraries in this directory, rather than any which have been installed.
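Spelled out, the choice is between the following two LDADD lines; the first, commented out, shows what linking against the installed libraries would look like, and is included only for contrast:

    # Against the installed libraries (not what we want for a test):
    # test1_LDADD = -lemsf -lems `cnf_link`
    # Against the libtool libraries just built in this directory:
    test1_LDADD = libemsf.la libems.la `cnf_link`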
The fact that test programs must return non-zero on error is problematic, since Fortran has no standardised way of controlling the exit code. Many Fortran compilers will let you use the exit intrinsic to return a status:

          integer rval
          rval = 1
          call exit(rval)

Since this is test code, it doesn't really matter that this might fail on some platforms, but if this worries you, then write the test code as a function which returns a non-zero integer value on error, and wrap it in a dummy C program:

    test1_SOURCES = test1.f test1_wrap.c
    test1_wrap.c:
            echo "int test1_(void); int main(void) { return test1_(); }" >test1_wrap.c

(note that the echo line is a make rule command, so it must be indented with a tab).
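The Fortran side of this is then an INTEGER FUNCTION rather than a PROGRAM. A minimal sketch, with an illustrative body:

    *     The test as an integer function: zero indicates success,
    *     and any non-zero value indicates an error.
          INTEGER FUNCTION TEST1()
          TEST1 = 0
    *     ...perform the checks here, setting TEST1 = 1 on failure...
          END

Note that the trailing underscore in test1_ is the common Fortran name-mangling convention, but it is compiler-dependent.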
If the tests you add use components other than those
declared or implied as component dependencies (see Appendix A.16), then you should
declare the full set of test dependencies using STAR_DECLARE_DEPENDENCIES([test], [...]).
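For example, if the test programs use the cnf component (as the `cnf_link` in the example above suggests), the declaration in the configure.ac might look like the following; the component list here is purely illustrative:

    dnl  Dependencies needed only for building and running the tests.
    STAR_DECLARE_DEPENDENCIES([test], [cnf])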