Sunday, February 21, 2016

Agile Program Management: Independent Verification Teams

How do Independent Verification Teams (IVTs) relate to Agile Program Management? Should program managers treat an IVT as just another team and limit themselves to traditional program management activities, or should they go further and understand the purpose of IVTs, the key issues, and the related metrics? Let us find answers to these questions in this post.
Independent Verification Teams? What? Agile teams are cross-functional. Agile teams deliver tested software. Why do we need IVTs? Are we crazy? Are we going back to erstwhile non-Agile methodologies? Are these your questions?
Good questions! When we run large projects consisting of multiple cross-functional teams, we do need at least one or two IVTs. Typically, an IVT has 7 to 10 members, including a leader, all of whom are experts in testing and test automation. In an ecosystem like this, each cross-functional team delivers working software. When we create an integrated build from the software delivered by multiple cross-functional teams, don't we need a team to test that integrated build? Yes. Of course, yes. We need Independent Verification Teams for this. When we program-manage multiple related projects, we will need Independent Verification Teams.
The number of IVTs depends on the size of the project or program. From an Agile Program Management perspective, it is important that program managers know:
  • The purpose or goal of cross-functional teams and IVTs
  • The team size, composition, and distribution of team members in IVT
  • The tools they use
  • The right approach to measuring their progress – IVTs do not produce working software, but they still need tangible deliverables with a ‘Definition of Done’
  • What can go wrong with IVTs
  • Ways to maximize the benefits of IVTs
Yes. With IVTs in place, cross-functional teams cannot abdicate their responsibility for unit testing and let IVTs find and report unit-level defects. Nor can they burden IVTs in an attempt to deliver as many user stories as possible in every iteration. In other words, a cross-functional team cannot deliver extra user stories by cutting short unit test automation or performing only selective unit testing.
When IVTs spend time finding and reporting unit-level defects, they may not get sufficient time to perform their own role effectively, and they end up accumulating technical debt. This can happen because of a lack of, or delay in, test automation. It can also happen because of ineffective tool usage – for example, a lack of traceability from test cases or scripts to the corresponding user stories. Sounds familiar?
Also, delays in tool identification, or a decision to change tools late in the game, will not help IVTs accomplish their goals. Tool identification, adequate training, and effective tool usage are very important if the team is to stay focused on the product or applications under development.
IVTs require significant expertise in terms of thought leadership as well as execution. Their test strategy needs to be clear about tools, test data generation, team composition, and all essential parameters, including the technical approach they decide to adopt. In other words, the goals of an IVT are not limited to adopting tools and implementing test scripts; there is more to it. A shared vision among all team members about the ‘what’, ‘why’, and ‘how’ of their goals is essential.
Program managers need to know all these factors and select the right kind of measures to know whether IVTs are delivering. Here are some guidelines to do this.
  1. Automation Goals: Measure automation %. This is typically a function of the number of test cases automated versus the total number of test cases.
  2. Traceability: A function of the total number of test cases versus the number of test cases that are traceable to user stories.
  3. Defect Trends: Trends in the defects reported by the IVT.
  4. Defect Quality: The number of unit-level defects reported by the IVT versus the total number of defects reported by the IVT.
  5. Execution Trends: The number of automated scripts executed versus the number of manual test cases executed.
  6. Defect Removal Efficiency: Post-release or post-production defects versus defects found before release.
  7. Technical Debt: More specifically, the testing debt accumulated in terms of inadequate automation, design flaws in the test automation suite, and so on.
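To make the first few measures concrete, here is a minimal sketch in Python of how a program manager's dashboard might compute them. The function names and sample figures are hypothetical illustrations of the ratios described above, not part of any standard tool.

```python
# Hypothetical IVT health metrics, computed as simple percentages.
# Each function guards against division by zero for empty data sets.

def automation_percentage(automated_cases: int, total_cases: int) -> float:
    """Automation %: share of test cases that are automated."""
    return 100.0 * automated_cases / total_cases if total_cases else 0.0

def traceability_percentage(traceable_cases: int, total_cases: int) -> float:
    """Traceability: share of test cases traceable to a user story."""
    return 100.0 * traceable_cases / total_cases if total_cases else 0.0

def unit_defect_ratio(unit_level_defects: int, total_defects: int) -> float:
    """Defect quality: unit-level defects leaking through to the IVT,
    as a share of all defects the IVT reports (lower is better)."""
    return 100.0 * unit_level_defects / total_defects if total_defects else 0.0

def defect_removal_efficiency(pre_release_defects: int,
                              post_release_defects: int) -> float:
    """DRE: defects caught before release versus all defects ever found."""
    found = pre_release_defects + post_release_defects
    return 100.0 * pre_release_defects / found if found else 0.0

if __name__ == "__main__":
    # Illustrative figures only.
    print(automation_percentage(320, 400))       # 80.0
    print(traceability_percentage(360, 400))     # 90.0
    print(unit_defect_ratio(12, 120))            # 10.0
    print(defect_removal_efficiency(190, 10))    # 95.0
```

Tracked iteration over iteration, these percentages give a trend line rather than a one-off snapshot, which is what a program manager actually needs to see whether an IVT is on track.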
Consider Independent Verification Teams! IVTs are valuable! We need them in large projects as well as in programs consisting of multiple interconnected projects. Does this match your experience and thoughts? Let us discuss.