
Building and Executing a Risk-Based Test Plan

Podcast episode 59: Building and Executing a Risk-Based Test Plan. Alex and Sam explore key concepts from the Pearson BTEC Higher Nationals in Digital Technologies. Full transcript included.

Series: HTQ Digital Technologies: The Study Podcast  |  Module: Unit 4 (L5): Risk Analysis and Systems Testing  |  Episode 59 of 80  |  Hosts: Alex with Sam, Digital Technologies Specialist
Key Takeaways
  • A test plan translates the test strategy into a specific schedule of test activities, with individual test cases, assigned testers, expected results, timelines and resource requirements.
  • Writing effective test cases requires a clear understanding of both the expected behaviour of the system and the ways in which it might fail: the most valuable test cases are often those that probe boundary conditions and error-handling logic.
  • Defect management, the process of recording, categorising, prioritising and tracking bugs from discovery through resolution and verification, is an essential discipline that ensures no identified problem is lost or forgotten.
  • Test execution should be accompanied by meticulous documentation of results, including both the actual outputs observed and any deviations from the expected behaviour: this record provides the evidence base for release decisions and post-project learning.
  • Regression testing, running previously passed tests again after changes have been made, is essential for ensuring that fixes and new features have not inadvertently broken existing functionality.
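The regression-testing idea in the last takeaway can be sketched in a few lines: keep the previously passed checks as data, and re-run them all after every change. This is a minimal illustration, not any particular framework; the function under test and the suite contents are invented for the example.

```python
def add_vat(net: float, rate: float = 0.2) -> float:
    """Function under test (illustrative): add VAT to a net price."""
    return round(net * (1 + rate), 2)

# Regression suite: (description, args, expected result) for every test
# that passed before the latest change was made.
REGRESSION_SUITE = [
    ("standard rate on whole pounds", (100.0,), 120.0),
    ("standard rate with pennies", (19.99,), 23.99),
    ("zero net price", (0.0,), 0.0),
]

def run_regression(suite) -> list[str]:
    """Re-run every previously passed test; return descriptions of any failures."""
    failures = []
    for description, args, expected in suite:
        actual = add_vat(*args)
        if actual != expected:
            failures.append(f"{description}: expected {expected}, got {actual}")
    return failures

failures = run_regression(REGRESSION_SUITE)
print(failures)  # an empty list means the change broke nothing the suite covers
```

If a fix or new feature changes `add_vat`'s behaviour, the affected entries show up in the failure list immediately, which is exactly the safety net regression testing provides.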
Full Transcript

Alex: Welcome back. Today we're looking at building and executing a risk-based test plan, which translates the strategy into specific actions. Sam, what's the distinction between the strategy and the plan?

Sam: The strategy is the 'what and why': what we're going to test and why we're approaching it in the way we are. The test plan is the 'how and when': the specific test cases, who will execute them, when they'll be run and what resources are needed. The plan is more detailed and more operational than the strategy.

Alex: What makes a good test case?

Sam: A good test case has a clear, specific description of what is being tested. It specifies the preconditions: what state must the system be in before this test can be run? It specifies the test data: exactly what inputs will be used? It specifies the steps to execute the test in sufficient detail that anyone with appropriate knowledge could run it. And it specifies the expected result: exactly what should happen if the system is working correctly? Without a clear expected result, you can't determine whether the test passed or failed.
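The elements Sam lists (description, preconditions, test data, steps, expected result) can be captured as a simple record. This is an illustrative sketch only; the field names and the example test case are invented, not taken from any standard template.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """One test case with the elements described above; names are illustrative."""
    case_id: str
    description: str          # what is being tested
    preconditions: list[str]  # state the system must be in before the test runs
    test_data: dict           # exactly what inputs will be used
    steps: list[str]          # detailed enough that anyone qualified could run it
    expected_result: str      # what should happen if the system works correctly
    risk_level: str = "medium"  # carried along for prioritisation later

login_lockout = TestCase(
    case_id="TC-017",
    description="Account locks after three failed login attempts",
    preconditions=["User 'demo' exists", "Account is not already locked"],
    test_data={"username": "demo", "password": "wrong-password"},
    steps=[
        "Submit the login form with the wrong password three times",
        "Attempt a fourth login with the correct password",
    ],
    expected_result="Fourth attempt is rejected with an 'account locked' message",
    risk_level="high",
)
print(login_lockout.case_id, login_lockout.risk_level)
```

The explicit `expected_result` field is the key design point: without it, there is no objective way to mark the test passed or failed.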

Alex: How do you prioritise which test cases to execute first?

Sam: Directly from the risk assessment. High-risk test cases should be executed early in the testing cycle, for two reasons. First, if there's a serious problem in a high-risk area, you want to find it as early as possible when there's still time to fix it. Second, if testing time is cut short, as it often is, the most important tests will have been run. Testing in risk priority order means that whatever tests you don't get to are the less important ones.
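Ordering execution by risk is just a sort on risk level. A minimal sketch, with invented case IDs and a three-level risk scale assumed for illustration:

```python
# Map risk levels to sort keys so high-risk cases run first.
RISK_ORDER = {"high": 0, "medium": 1, "low": 2}

test_cases = [
    {"id": "TC-031", "risk": "low"},
    {"id": "TC-017", "risk": "high"},
    {"id": "TC-022", "risk": "medium"},
    {"id": "TC-005", "risk": "high"},
]

# sorted() is stable, so cases at the same risk level keep their original order.
execution_order = sorted(test_cases, key=lambda tc: RISK_ORDER[tc["risk"]])
print([tc["id"] for tc in execution_order])
# -> ['TC-017', 'TC-005', 'TC-022', 'TC-031']
```

If testing is cut short after the first two cases here, both high-risk cases have already been run, which is the point Sam makes: whatever you don't get to is the less important work.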

Alex: How do you manage defects through the testing process?

Sam: Every defect found during testing should be recorded in a defect management system with sufficient information for it to be reproduced, investigated and fixed. That includes: the test case that found it, the actual versus expected behaviour, the severity of the defect and its priority for fixing, the environment it was found in and any relevant screenshots or logs. Defects should be reviewed regularly by the project team to ensure high-priority ones are being addressed promptly and that the overall quality picture is understood.
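The information Sam says a defect record needs, plus a simple workflow from discovery to closure, could look like this. The field names, status values, and transition rule are assumptions for the sketch, not a real defect-tracker schema.

```python
from dataclasses import dataclass, field

# Illustrative lifecycle: tracked from discovery through resolution and verification.
WORKFLOW = ["open", "in progress", "fixed", "verified", "closed"]

@dataclass
class Defect:
    """A defect record with the details needed to reproduce and fix it."""
    defect_id: str
    found_by_test: str   # the test case that found it
    expected: str        # expected behaviour
    actual: str          # actual behaviour observed
    severity: str        # technical impact on the system
    priority: str        # urgency of fixing it
    environment: str     # where it was found
    attachments: list[str] = field(default_factory=list)  # screenshots, logs
    status: str = "open"

def advance(defect: Defect) -> None:
    """Move the defect to the next workflow stage so no problem is lost mid-process."""
    idx = WORKFLOW.index(defect.status)
    if idx < len(WORKFLOW) - 1:
        defect.status = WORKFLOW[idx + 1]

crash = Defect(
    defect_id="D-042",
    found_by_test="TC-017",
    expected="Account locks after three failed logins",
    actual="Application crashes on the third failed login",
    severity="critical",
    priority="P1",
    environment="staging, build 1.4.2",
    attachments=["crash.log"],
)
advance(crash)
print(crash.defect_id, crash.status)  # the defect is now 'in progress'
```

Because every defect carries its originating test case and environment, anyone reviewing the backlog can reproduce and investigate it without going back to the tester who found it.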

Alex: What's the difference between severity and priority?

Sam: Severity is a measure of the technical impact of the defect on the system: a crash is higher severity than a minor cosmetic issue. Priority is a measure of how urgently the defect needs to be fixed, taking into account both the severity and the business context. A low-severity defect that appears on the home page of a consumer website might be high priority because of the visibility. A high-severity defect in a rarely-used admin function might be lower priority if it can be worked around. Both dimensions need to be assessed independently.
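Sam's two examples can be turned into a toy triage rule that keeps the two dimensions separate: severity describes technical impact, while priority also weighs business context. The rule and values below are invented purely to illustrate the independence of the two assessments.

```python
defects = [
    # (id, severity, business context) — Sam's two examples, with invented IDs
    ("D-101", "low",  "home page of consumer site"),
    ("D-102", "high", "rarely-used admin function"),
]

def triage_priority(severity: str, context: str) -> str:
    """Toy triage rule: visibility can raise priority; a workaround can lower it."""
    if "home page" in context:
        return "high"    # low severity but highly visible -> high priority
    if "rarely-used" in context and severity == "high":
        return "medium"  # severe but workable around -> lower priority
    return severity      # otherwise let severity drive priority

for defect_id, severity, context in defects:
    print(defect_id, "severity:", severity, "priority:", triage_priority(severity, context))
```

The cosmetic home-page defect comes out high priority and the severe admin-function defect comes out medium, showing why the two dimensions must be assessed independently rather than derived from one another.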

Alex: Really practical and directly applicable to assessments. Thanks, Sam.