
Testing Java Code in Maths Libraries

31 May 2012, by Stuart

What was said last

My last post was about how to access native libraries with minimal performance overhead. We saw that JNA provided the easiest method in terms of writing code, but that JNI calls were needed to get the best performance. All this was in order to have a baseline against which we can test our maths libraries, and a discussion of how we test those libraries forms the content of this post.

To reiterate, the problem we encountered was that, whilst using the singular value decompositions (SVD) provided by Apache Commons Math 2.2 and Colt, we found that in some cases we were getting different results for the same input matrix ($A$). The differences could not be accounted for by floating point error causing the vectors in the null space of $A^T$ to form a correct but different basis. Nor were the matrices tested singular to machine precision, a situation in which floating point error counts for a lot; these matrices were just poorly conditioned, not pathological. So, in an attempt to work out what was going on, we developed our own SVD implementation, and that gave results closer to those of Colt. The latest snapshot versions of Apache Commons Math also give a result similar to our SVD and the Colt SVD, which means they must have changed algorithms. All of this raised the rather massive questions of "What is the right answer?" and "How can we check that our code is correct and always gives the right answer?"
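For anyone who wants to see the sort of comparison involved, here is a minimal sketch: it runs a single matrix through both the Apache Commons Math 2.2 and Colt SVD implementations and prints the singular values side by side. The matrix is purely illustrative (and well conditioned); it is not one of the cases that exposed the discrepancy.

```java
import java.util.Arrays;

import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.SingularValueDecomposition;
import org.apache.commons.math.linear.SingularValueDecompositionImpl;

import cern.colt.matrix.impl.DenseDoubleMatrix2D;

public class CompareSVDs {

  public static void main(String[] args) {
    // Illustrative data only; the problem cases were poorly conditioned matrices.
    double[][] data = { { 1, 2, 3 }, { 2, 5, 3 }, { 1, 0, 8 } };

    // Apache Commons Math 2.2 SVD
    RealMatrix a = new Array2DRowRealMatrix(data);
    SingularValueDecomposition acmSvd = new SingularValueDecompositionImpl(a);
    System.out.println("Commons Math: " + Arrays.toString(acmSvd.getSingularValues()));

    // Colt SVD (fully qualified to avoid a name clash with the Commons Math interface)
    cern.colt.matrix.linalg.SingularValueDecomposition coltSvd =
        new cern.colt.matrix.linalg.SingularValueDecomposition(new DenseDoubleMatrix2D(data));
    System.out.println("Colt:         " + Arrays.toString(coltSvd.getSingularValues()));
  }
}
```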

Ways of testing

In the world of non-mathematical algorithms, unit tests and coverage of code are often sufficient to demonstrate that a piece of code works. But in the world of maths, because the input data can often, by definition, fill the entire range of the floating point types, and because floating point behaviour is rather complicated (especially in iterative methods), a different approach is needed. A few testing methods are:

  • Use prior knowledge of what's likely to trip algorithms up and invent pathological data sets for unit tests. This method has its place, but it is time consuming and can only ever test a tiny subset of cases.
  • Validation by reconstruction. For a large number of algorithms the original inputs, or some result based on them, can be reconstructed from the results of running the algorithm. This gives a simple check that the results are in the "right sort of area", by computing p-norm errors (or similar) between the reconstructed data and the original; a sketch of such a check for the SVD is given after this list.
  • Comparison. Another approach is to compare against work-hardened reference implementations of the algorithms, which usually come from native code libraries - hence the last post investigating ways of accessing these.
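As an illustration of the reconstruction idea, here is a minimal sketch using Apache Commons Math 2.2: it rebuilds $U S V^T$ from the decomposition and reports the relative Frobenius-norm error against the original matrix. The test matrix and the threshold used to flag a suspicious result are illustrative choices, not values we actually use.

```java
import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.SingularValueDecomposition;
import org.apache.commons.math.linear.SingularValueDecompositionImpl;

public class SVDReconstructionCheck {

  /** Relative Frobenius-norm error of the reconstruction U * S * V^T against A. */
  static double reconstructionError(RealMatrix a) {
    SingularValueDecomposition svd = new SingularValueDecompositionImpl(a);
    RealMatrix reconstructed = svd.getU().multiply(svd.getS()).multiply(svd.getVT());
    return reconstructed.subtract(a).getFrobeniusNorm() / a.getFrobeniusNorm();
  }

  public static void main(String[] args) {
    RealMatrix a = new Array2DRowRealMatrix(new double[][] { { 1, 2, 3 }, { 2, 5, 3 }, { 1, 0, 8 } });
    double err = reconstructionError(a);
    // "Right sort of area" check: a loose threshold, a few orders of magnitude above machine precision.
    System.out.println("relative reconstruction error = " + err + (err < 1e-12 ? " (plausible)" : " (suspicious)"));
  }
}
```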

Testing whilst developing

At OpenGamma, we like GNU Octave; in fact, we think it is great. It is also conveniently built with the backing of ATLAS/BLAS/LAPACK/SuiteSparse/qrupdate/FFTW, amongst others, and as a result can, with little effort, be scripted to provide IO for results and test comparisons for our code. Most conveniently, this can be done using the java package from Octave Forge, which allows the instantiation of Java objects in Octave code and therefore makes testing Java code rather easy. An example of this is given below; to demonstrate that this isn't OpenGamma trickery, we're accessing the Apache Commons Math library and using their linear algebra example.
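Something along the following lines gives the flavour. This is a sketch rather than a recipe: it assumes the Octave Forge java package is installed and loaded, that commons-math-2.2.jar is on the Java classpath (the jar location below is an assumption), and that Octave's automatic conversion between its numeric arrays and Java double[][]/double[] arrays applies when the constructors are called.

```octave
% Sketch only: assumes the Octave Forge "java" package and commons-math-2.2.jar on the
% Java classpath, and relies on Octave's automatic Octave-array <-> Java-array conversion.
% The data is the linear algebra example from the Commons Math user guide.
pkg load java;
javaaddpath ("commons-math-2.2.jar");   % jar location is an assumption

A = [2, 3, -2; -1, 7, 6; 4, -3, -5];    % coefficient matrix
b = [1; -2; 1];                         % constants

% Instantiate Apache Commons Math objects directly from Octave and solve A*x = b
jA      = javaObject ("org.apache.commons.math.linear.Array2DRowRealMatrix", A);
jb      = javaObject ("org.apache.commons.math.linear.ArrayRealVector", b);
jsolver = javaMethod ("getSolver", javaObject ("org.apache.commons.math.linear.LUDecompositionImpl", jA));
jx      = javaMethod ("solve", jsolver, jb);

% Compare the Java answer against Octave's own (LAPACK-backed) answer
disp (javaMethod ("toArray", jx));
disp ((A \ b)');
```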

So, whilst developing, we have an Octave instance running with a harness to whatever method it is we are working on. As algorithm development is undertaken, continuous tests against fuzzed data sets are performed, and the results from Java are compared to the results from Octave (which are largely backed by work-hardened native libraries). If the results match for a very large number of fuzzed cases then we become more convinced that we've got the algorithm correct!

This is all well and good; however, a problem occurs when it comes to ongoing testing. We use TestNG, so integration with it would be ideal, which means Java has to make the calls (we know we could generate TestNG-format output and so on from Octave, but that doesn't bode well for continuous integration testing).
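To give a flavour of where we would like to end up, here is a sketch of a TestNG test that fuzzes random matrices and checks the SVD reconstruction invariant. The matrix size, seed, trial count and tolerance are illustrative choices rather than the values we actually use, and a real test would also compare against a wrapped native reference.

```java
import java.util.Random;

import org.apache.commons.math.linear.Array2DRowRealMatrix;
import org.apache.commons.math.linear.RealMatrix;
import org.apache.commons.math.linear.SingularValueDecomposition;
import org.apache.commons.math.linear.SingularValueDecompositionImpl;
import org.testng.Assert;
import org.testng.annotations.Test;

public class SVDFuzzTest {

  @Test
  public void reconstructionHoldsForFuzzedMatrices() {
    Random rng = new Random(42); // fixed seed so any failure is reproducible
    for (int trial = 0; trial < 1000; trial++) {
      // Fuzzed input: entries uniform in [-1, 1)
      double[][] data = new double[6][6];
      for (int i = 0; i < 6; i++) {
        for (int j = 0; j < 6; j++) {
          data[i][j] = 2 * rng.nextDouble() - 1;
        }
      }
      RealMatrix a = new Array2DRowRealMatrix(data);
      SingularValueDecomposition svd = new SingularValueDecompositionImpl(a);
      RealMatrix reconstructed = svd.getU().multiply(svd.getS()).multiply(svd.getVT());
      double relativeError = reconstructed.subtract(a).getFrobeniusNorm() / a.getFrobeniusNorm();
      Assert.assertTrue(relativeError < 1e-12, "reconstruction error too large: " + relativeError);
    }
  }
}
```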

On this basis, we could programmatically hook into Octave's driving library, libOctave, with something like JNA, as seen in my last post, or we could even instantiate the Octave interpreter and make calls from Java. However, this would cause license taint, as Octave is GPL-licensed and we are predominantly Apache 2.0. Therefore, despite the ease of development Octave gives us, we can't actually release such code as part of our distro! Regardless, for internal development purposes it is very useful to have an Octave instance fired up against which we can test our in-development Java code.

Continuous testing

As mentioned earlier, we'd like to do continuous testing of our maths library by fuzzing data and then comparing the output to results from native libraries. This means the native library calls have to be wrapped, and we have to have compatible APIs at some level in the code. The method for wrapping these libraries and dealing with their quirks and thread safety is the subject of my next post.
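To sketch what "compatible APIs" means here (the names below are hypothetical, not our actual API), the idea is a thin, implementation-agnostic interface that a pure-Java SVD and a wrapped native, LAPACK-backed SVD can both implement, so a single comparison test can push the same fuzzed inputs through either one.

```java
// Hypothetical names, for illustration only: the minimal surface needed so that a
// comparison test can be written once and run against both a Java implementation
// and a wrapped native reference.
public interface SVDProvider {

  /** Returns the singular values of {@code a} in descending order. */
  double[] singularValues(double[][] a);
}
```

Extra methods (the full U, S and V factors, condition numbers and so on) can be layered on in the same style as needed.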

This is the developer blog of OpenGamma. For more posts by the OpenGamma team, check out our main blog.

About the Author

Stuart is a Member of Technical Staff at OpenGamma.
