Test-driven Software Design Analysis


KaleidoScope is a software tool that supports developers in investigating a software system's design and architecture based on automated runtime tests. In particular, it provides different design perspectives (such as UML diagrams and dependency structure matrices) as decision support; these perspectives are automatically derived from executing scenario-based runtime tests. KaleidoScope allows developers to navigate through the resulting perspectives as well as to tailor their scope and abstraction level.
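To illustrate the kind of input KaleidoScope works from, consider a scenario-based runtime test. The sketch below is in Python with invented class and method names (Catalog, ShoppingCart, etc.); KaleidoScope itself targets NX/XOTcl systems with STORM or TclSpec tests. The point is that executing such a scenario exercises a chain of method-call interactions that a tracer can record and turn into design perspectives.

```python
# Hypothetical sketch of a scenario-based runtime test. All names are
# invented for illustration; this is not KaleidoScope's actual input format.

class Catalog:
    def price_of(self, item):
        return {"book": 10, "pen": 2}[item]

class ShoppingCart:
    def __init__(self, catalog):
        self.catalog = catalog
        self.items = []

    def add(self, item):
        self.items.append(item)

    def total(self):
        # each call into Catalog is a method-call interaction
        # that a runtime tracer could record
        return sum(self.catalog.price_of(i) for i in self.items)

def test_checkout_scenario():
    # one usage scenario: add two items, then compute the total
    cart = ShoppingCart(Catalog())
    cart.add("book")
    cart.add("pen")
    assert cart.total() == 12
```

Executing this single scenario already yields an interaction trace (test → ShoppingCart → Catalog) from which, e.g., a sequence diagram could be derived.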

As a research project, KaleidoScope is under ongoing extension and improvement. At the moment, KaleidoScope supports the analysis of systems developed in NX or XOTcl, with runtime tests specified using STORM or TclSpec. Support for analyzing Java-based software systems is planned as future work.

In particular, KaleidoScope supports a scenario-based and test-driven analysis of a system's software design (as-is design). Currently, we are investigating its potential for assessing smells in code and design, and for planning corresponding refactorings.

For more technical and conceptual details, as well as application examples, please take a look at the Related Publications below or send us an email.

Bad Smells in Code and Design

Refactoring is a technique for improving the design quality of a software system (see, e.g., Refactoring by Martin Fowler). Bad smells represent candidates for refactoring, since they negatively impact a system's maintainability and evolvability. In recent years, multiple automated tools for detecting smell candidates have been proposed (smell detectors and refactoring-recommendation systems such as JDeodorant or DECOR).
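As a concrete illustration, consider "Feature Envy", a classic smell from Fowler's catalog: a method that is more interested in another class's data than its own. The Python sketch below uses invented names and shows the smell together with the corresponding "Move Method" refactoring.

```python
# "Feature Envy" sketch (a classic smell from Fowler's catalog).
# All class names are invented for illustration.

class Order:
    def __init__(self, quantity, unit_price):
        self.quantity = quantity
        self.unit_price = unit_price

class Invoice:
    def __init__(self, order):
        self.order = order

    # Smell: total() reaches repeatedly into self.order's fields
    # instead of using Invoice's own data.
    def total(self):
        return self.order.quantity * self.order.unit_price

# After the "Move Method" refactoring, the computation
# lives in the class that owns the data:
class RefactoredOrder:
    def __init__(self, quantity, unit_price):
        self.quantity = quantity
        self.unit_price = unit_price

    def total(self):
        return self.quantity * self.unit_price
```

Both variants compute the same result; the refactoring changes only where the responsibility is located, which is exactly the kind of structural property a smell detector inspects.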

However, smell-detection tools also produce false positives such as intentional smells (smells that are actually intended by the software developer), e.g., as a result of applying a particular design pattern (see, e.g., Design Patterns by the GoF). For this reason, the identified candidates need to be assessed manually by the responsible engineer to decide whether, how, and when a candidate should be refactored, as well as to estimate the impact of the potential refactoring.

The cost of additional work caused by bad smells can be measured as part of a software system's Technical Debt (TD).

Decision Support in Smell Assessment

KaleidoScope provides different perspectives for assisting developers in assessing smell candidates. In particular, sequence and class diagrams of the Unified Modeling Language (UML) at different scopes, as well as different kinds of dependency structure matrices (DSMs; also known as design structure matrices), are available. These perspectives reflect the behavior of the system under test in terms of method-call interactions at runtime.

The perspectives can be tailored by software developers in different ways to provide the information relevant for answering questions on the state of smell candidates (such as specific dependencies or other behavioral details, see Fig. 1). In particular, KaleidoScope supports a scenario-driven identification and assessment of code and design smells.
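One of the perspectives mentioned above, the dependency structure matrix, can be sketched in a few lines. The following Python code is a simplified illustration, not KaleidoScope's actual data model: it assumes the runtime trace is a list of (caller class, callee class) pairs and builds a matrix counting how often each class calls into each other class.

```python
# Simplified sketch: deriving a dependency structure matrix (DSM)
# from a recorded trace of runtime method calls. The trace format
# (caller class, callee class) is an assumption for illustration.

def build_dsm(call_trace):
    """Return (classes, matrix) where matrix[i][j] counts calls
    from classes[i] into classes[j]."""
    classes = sorted({c for pair in call_trace for c in pair})
    index = {c: i for i, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for caller, callee in call_trace:
        matrix[index[caller]][index[callee]] += 1
    return classes, matrix

# Example trace from one hypothetical test scenario:
trace = [("Cart", "Catalog"), ("Cart", "Catalog"), ("Checkout", "Cart")]
classes, dsm = build_dsm(trace)
# classes == ["Cart", "Catalog", "Checkout"]
# dsm[0][1] == 2  (Cart calls Catalog twice)
```

Tailoring the scope of such a perspective then amounts to filtering the trace, e.g., to the calls exercised by a single scenario or to a subset of classes.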

[Figure: derivation process]
Fig. 1: Selected design perspectives produced by KaleidoScope for supporting a scenario-driven design investigation.

Related Publications

Copyright Terms: This material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.

  • Thorsten Haendler, Stefan Sobernig and Mark Strembeck: Towards Triaging Code-Smell Candidates via Runtime Scenarios and Method-Call Dependencies, In: Proc. of the 9th International Workshop on Managing Technical Debt (MTD 2017) at the 18th International Conference on Agile Software Development (XP 2017), Cologne, ACM, May 2017. (pdf, slides, doi)
  • Thorsten Haendler, Stefan Sobernig and Mark Strembeck: Deriving UML-based Specifications of Inter-Component Interactions from Runtime Tests, In: Proc. of the 31st ACM Symposium on Applied Computing (SAC 2016), Software Engineering Track, ACM, pp. 1583-1585, April 2016. (pdf, poster, doi)
  • Thorsten Haendler, Stefan Sobernig and Mark Strembeck: Deriving Tailored UML Interaction Models from Scenario-Based Runtime Tests, In: Software Technologies, Communications in Computer and Information Science (CCIS), Volume 586, Springer International Publishing, pp. 326-348, February 2016. (pdf, doi)
  • Thorsten Haendler, Stefan Sobernig and Mark Strembeck: An Approach for the Semi-automated Derivation of UML Interaction Models from Scenario-based Runtime Tests, In: Proc. of the 10th International Conference on Software Engineering and Applications (ICSOFT-EA 2015), Colmar, France, SciTePress, pp. 229-240, July 2015. Paper Award (pdf, slides, doi)





