by Jan Tretmans
Many hardware systems – even some Dutch dykes! – are actually “just” software. That dyke consists of about 40k physical elements, whereas its software consists of 100M LOC. Because such systems need to interact a lot, things get complex: many potential problems are lurking. Think of systems-of-systems, multi-disciplinarity, change, uncertainty (technically: non-determinism), etc. Model-based testing is specification-based, active, black-box testing of functionality. One tests the functionality by interacting with the system, without knowing the internals of the system but knowing what it should do – according to the given specification.
A short history of testing: manual testing is too much work, so one wants to automate. Simple scripts become too difficult to maintain, so high-level test-case languages were designed. But you still need to write your tests. Model-based testing generates these tests from a model. Writing tests thus becomes writing models: in his experience, more and more testers at companies have a PhD! And while writing the model, you often already find many errors! But why create a model if you cannot generate code from it? Modeling something that takes a square root is simple, but the model is declarative, so code generation is very hard (see the sketch below). This is due to the nature of models: they abstract from all kinds of details. For testing this is very handy, but code generation requires exactly those details! Unfortunately, very few people use model-based testing. In the embedded world, model-based testing tools need to address many different challenges, such as multiple paradigms, parallelism, statistics, underspecification, model composition, etc. There are many different tools for MBT, each with a different focus (e.g. from the functional-language community, or from the concurrency (LTS) community).
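To make the square-root point concrete (my illustration, not from the talk): a declarative model is essentially a postcondition relating input and output, e.g. for integer square root:

isqrt(x) = y ⇔ y ≥ 0 ∧ y² ≤ x < (y+1)²

Checking a candidate y against this is trivial, which is exactly what a test oracle needs; but the model says nothing about how to compute y, which is what code generation would need.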
TorXakis is based on LTSs with the “ioco” relation: the implementation is only allowed to do things the specification allows. The idea is that, after any trace of the specification, the possible outputs of the implementation should be a subset of the outputs allowed by the specification (see the definition below). This relation is sound and exhaustive. Also, there are many different kinds of LTSs: think e.g. of LTSs carrying data such as natural numbers. Symbolic Transition Systems help to write better specifications; under the hood, these are unfolded to LTSs. Notice that TorXakis is said to treat the system as a black box, yet ioco is defined on LTSs. This is the testability assumption: the black-box system is assumed to behave as some (unknown) LTS, so you cannot tell the difference between the system and such a model in terms of test pass/fail behavior. Another aspect is complex behavior: for this they use a process algebra, so that you can write large, even infinite LTSs with just a few lines (see the sketch below).
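For reference, the ioco relation mentioned above is usually written as:

i ioco s ⇔ ∀σ ∈ Straces(s): out(i after σ) ⊆ out(s after σ)

Here Straces(s) are the suspension traces of the specification (traces that may include quiescence δ, the observable absence of output), “i after σ” is the set of states the implementation can reach after σ, and out(·) gives the outputs (including δ) possible from those states.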
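As an illustration of the last point, a minimal sketch in TorXakis’s modeling language (along the lines of the echo example from the TorXakis documentation; syntax quoted from memory, details may differ): a process that forever echoes any integer. Because In carries arbitrary Ints, the underlying LTS is infinitely branching and infinite-state, yet the model is a few lines:

```
CHANDEF  Channels ::=  In  :: Int
                     ; Out :: Int
ENDDEF

PROCDEF  echo [ In :: Int; Out :: Int ] ( ) ::=
    In ? x  >->  Out ! x  >->  echo [ In, Out ] ( )
ENDDEF

MODELDEF  Echo ::=
    CHAN IN    In
    CHAN OUT   Out
    BEHAVIOUR  echo [ In, Out ] ( )
ENDDEF
```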
In theory this all sounds fine, but practice comes with additional challenges: e.g. how do you generate complex test data, such as 10k large XML-based messages exchanged between systems? And how do you obtain specifications and create models? Also, everybody wants their own specific language: often a DSL is defined and then translated into a process-algebraic specification the tool understands. Current research, e.g. by Frits Vaandrager and his team (e.g. Joshua Moerman), is to learn models from test executions. A nice connection with architecture mining 🙂