Test Maturity

Roger Stinissen
Published in ING Blog
Dec 8, 2020 · 6 min read


My name is Roger and I am one of the engineers who designed the ING Test Maturity Model (iTMM). A couple of years ago I saw teams struggling with their testing and noticed that most of them had very similar problems. This inspired me to set up a test maturity model that would give insight into the test activities teams are doing and where they could improve. After I created the first version, I involved my colleague Mischa Klink, and together we started improving the model and assessing teams on their test maturity.

The model has provided us with lots of data that serves as input for improvements within individual teams, but has also helped us set up more general improvements in testing. Curious about the model?

What is iTMM?

Seven areas that are measured in iTMM

iTMM is a model that can help in improving the test maturity of teams that develop software. The model is divided into seven areas in which measurement takes place. These seven areas are:

  • Ready Work
  • Alignment
  • Testware
  • Test environment
  • Mastery
  • Metrics
  • Reporting

Within these areas we also defined different levels. At ING we are familiar with the Dreyfus model, which we use in our job career framework. By analogy, we started using the levels novice up to expert, but you can use whatever you like. So a team can grow its test maturity from novice to expert in each area, except for Testware and Metrics: we believe these specific areas only become important at a later stage of setting up good testing. So within the iTMM we came up with the following gradation:

Seven areas of iTMM combined with different levels of the Dreyfus model

Every level within an area contains a couple of checkpoints, which can be answered with yes, no, or not applicable. Every checkpoint has its own weight factor. Because not every level has the same number of checkpoints, we think it's necessary to add a weight factor to give more importance to specific checkpoints. For now the weights range from 1 to 3.
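To make this concrete, here is a minimal sketch of how such a weighted score could be computed. The checkpoint texts are taken from the examples below, but the weights, the type names, and the percentage formula are our own assumptions, not the exact iTMM implementation:

```java
import java.util.List;

public class ItmmScore {

    // A checkpoint is answered with yes, no, or not applicable.
    enum Answer { YES, NO, NOT_APPLICABLE }

    // Each checkpoint carries its own weight factor (1 to 3).
    record Checkpoint(String description, int weight, Answer answer) {}

    // Score an area: weighted yes-answers as a percentage of all
    // applicable checkpoints; "not applicable" is left out entirely.
    static double score(List<Checkpoint> checkpoints) {
        int achieved = 0;
        int applicable = 0;
        for (Checkpoint c : checkpoints) {
            if (c.answer() == Answer.NOT_APPLICABLE) continue;
            applicable += c.weight();
            if (c.answer() == Answer.YES) achieved += c.weight();
        }
        return applicable == 0 ? 0.0 : 100.0 * achieved / applicable;
    }

    public static void main(String[] args) {
        List<Checkpoint> readyWork = List.of(
            new Checkpoint("New test cases are created based on production incidents", 2, Answer.YES),
            new Checkpoint("Acceptance criteria are formulated before the start of each sprint", 3, Answer.NO),
            new Checkpoint("Test activities are performed prior to test execution", 1, Answer.NOT_APPLICABLE));
        // 2 of 5 applicable weight points achieved: prints "Ready Work score: 40%"
        System.out.printf("Ready Work score: %.0f%%%n", score(readyWork));
    }
}
```

The same calculation can be run per level, per area, or over all checkpoints to obtain the total score.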

There is a total of 98 checkpoints divided over the seven areas, and all checkpoints have to be filled in by the assessor. Here are some example checkpoints from different areas:

  • New test cases are created based on production incidents.
  • Acceptance criteria are formulated before the start of each sprint (e.g. ready work) and are clear to every member of the squad.
  • Test activities like product risk analysis, preparation, and test case design are performed prior to test execution, with the goal of keeping the test activities off the sprint's critical path.
  • The use of test techniques and checklists is evaluated and adjusted.

Depending on your specific situation, you can define other checkpoints that are more relevant to your own team.

Total score

When all checkpoints are filled in, a score is calculated. There is a total score, but also a score per area. Besides this, we can also see the score on different topics, such as test automation, reporting, and test strategy. Here is an overview of the outcome of an iTMM assessment:

Example of iTMM score

At this point we have a score; however, the score itself is not the goal of the iTMM assessment. The outcome of the assessment is a mirror for the team: at what level are we testing, and where can we improve to, for example, speed up, get higher coverage, or achieve better alignment with the parties we rely on?

With the outcome of the assessment, teams can decide which part they want to improve first: pick the low-hanging fruit, or start immediately with large improvements. That's all up to the team. Teams can also share the outcomes of their assessments with each other. If two teams are both involved in API testing, they can probably learn from each other, and the score overview immediately shows whether teams can pick up things from one another.

In essence, this model doesn't only bring awareness to the teams; it also enables them to start improving within their own team, and to pick up success stories from other teams and implement them in their own process. And teams that are already brilliant at testing can easily share their knowledge, test automation code, or how-tos with other teams.

How is the assessment executed?

Phases of setting up an iTMM assessment

First, we do a kick-off presentation with the teams that are involved in the assessment. We tell them what the assessment is all about and what we expect from all team members. And of course, everybody has the opportunity to ask questions. We want to make clear that the assessment and its outcome are for the team, not for management. It's their journey, and we want to help them improve their testing.

After this session we ask the team to send us information about their test approach. This can be the current test strategy, test cases, or reporting: any test documentation that gives insight into the team's test approach and/or testing.

Together with the team we plan a date for the assessment and for the feedback session. Normally we allow a minimum of one week between receiving the team's information and the assessment itself.

The assessment takes place with the entire team, including the product owner and customer journey experts, and the session takes two hours. We ask the team to be as open as possible; only in this way can we give the team proper feedback and help them improve.

A week after the assessment we present the results and define improvements. Together with the team we can define user stories the team can start with, and if needed we can also give support in fulfilling them. When those user stories are finished, the team can pick up new improvement stories. After 6 or 12 months we can do another iTMM assessment to measure the test maturity again. In this way the model supports continuous learning and improving.

What have we learned from this iTMM model?

We have already done quite a lot of assessments, and although teams use different technologies, we see that they have many issues in common. We also saw that some teams already have good test automation solutions that could easily be adopted by other teams. And even where a solution was good, there was usually still some room for improvement.

A lot of teams develop APIs and use Cucumber/Java to execute their tests. We help teams improve the way they set up their test cases in Gherkin by applying test techniques. Applying test techniques structures the tests, so teams test what they really need to test. This structure helps in defining the right Gherkin statements, which can easily be reused. By doing this the tests were standardized, and we could distribute the solution to many other teams, as the sketch below illustrates.
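As a minimal sketch of what such a reusable setup could look like, here is a boundary-value-style scenario outline with matching Cucumber/Java step definitions. The payment example, parameter names, and step wording are hypothetical illustrations, not ING's actual test code:

```java
// The reusable Gherkin scenario outline, shown here as a comment.
// Boundary-value analysis drives the Examples table, so every row
// reuses the same three step definitions:
//
//   Scenario Outline: Validate the amount field of the payment API
//     Given a payment request with amount <amount>
//     When the payment request is submitted
//     Then the response status is <status>
//
//     Examples:
//       | amount   | status |
//       | 0.00     | 400    |
//       | 0.01     | 201    |
//       | 9999.99  | 201    |
//       | 10000.00 | 400    |

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

public class PaymentSteps {

    private double amount;
    private int actualStatus;

    @Given("a payment request with amount {double}")
    public void aPaymentRequestWithAmount(double amount) {
        this.amount = amount;
    }

    @When("the payment request is submitted")
    public void thePaymentRequestIsSubmitted() {
        // In a real test this would call the API under test; this stub
        // mimics the rule "amounts between 0.01 and 9999.99 are accepted".
        actualStatus = (amount >= 0.01 && amount <= 9999.99) ? 201 : 400;
    }

    @Then("the response status is {int}")
    public void theResponseStatusIs(int expectedStatus) {
        assertEquals(expectedStatus, actualStatus);
    }
}
```

Because every Examples row reuses the same three steps, adding a new boundary case is a one-line change to the table, which is exactly what makes this kind of setup easy to share between teams.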

Apart from the technology, most teams are doing the same things when it comes to testing. We think teams can learn a lot from each other and can also work together on solutions that help them speed up, reach higher coverage, and focus their testing on the parts of the software that really matter.

We have already introduced this model to other companies that want to improve their testing. Do you also want to start working on your test maturity? We have made a more generic version that you can use. Of course, you will have to make some checkpoints more specific to your organization or situation. We would really like to help you with this and explain more about the model. Contact us and we will make the model available for you. You can clone the model from here:

https://gitlab.com/testimprovers/itmm


Roger Stinissen

Currently works for ING as a DEV engineer, mostly involved in improving testing in the organization and its teams.