
Database Testing: Verifying Against Requirements

Podcast episode 23: Database Testing: Verifying Against Requirements. Alex and Sam explore key concepts from the Pearson BTEC Higher Nationals in Computing. Full transcript included.

Series: HTQ Computing: The Study Podcast  |  Module: Unit 4: Database Design and Development  |  Episode 23 of 80  |  Hosts: Alex with Sam, Computing Specialist
Key Takeaways
  • Database testing verifies that the system behaves correctly under a range of conditions, not just in ideal scenarios.
  • Functional testing checks that queries, stored procedures, and triggers produce the expected results for valid inputs.
  • Data integrity testing ensures that constraints, relationships, and validation rules are enforced correctly by the database engine.
  • Performance testing identifies slow queries and bottlenecks before they become problems in a live production environment.
  • A well-documented test plan with defined test cases, expected outcomes, and actual results provides clear evidence of a rigorous testing process.
Listen to This Episode

Listen to the full episode inside the course. Enrol to access all 80 episodes, plus assignments, tutor support and Student Finance funding.

Full Transcript

Alex: Today we're covering database testing. Sam, what does testing a database actually involve?

Sam: Database testing verifies that the database system works correctly and reliably under a range of conditions. It's not just about checking that you can insert a record and retrieve it again; it encompasses checking that constraints are enforced, that relationships are maintained correctly, that queries return accurate results, that performance is acceptable, and that the system handles errors and edge cases gracefully.

Alex: What's the connection between testing and requirements?

Sam: Every test should trace back to a requirement. If a requirement says the system must prevent duplicate email addresses in the customer table, you write a test that tries to insert two records with the same email address and verifies that the database rejects the second one. If a requirement says the system must return customer orders within two seconds, you write a performance test that measures how long that query takes under realistic data volumes. The requirements document is your test specification.
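That duplicate-email requirement can be turned directly into an executable test. The sketch below uses Python's built-in `sqlite3` with an in-memory database as a stand-in for the real system; the `customer` table and email values are illustrative, not from an actual project:

```python
import sqlite3

# In-memory database standing in for the system under test.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL UNIQUE
    )
""")

# A valid insert should succeed.
conn.execute("INSERT INTO customer (email) VALUES ('a@example.com')")

# Test: the requirement says a duplicate email must be rejected,
# so the second insert should violate the UNIQUE constraint.
try:
    conn.execute("INSERT INTO customer (email) VALUES ('a@example.com')")
    result = "FAIL: duplicate accepted"
except sqlite3.IntegrityError:
    result = "PASS: duplicate rejected"

print(result)  # PASS: duplicate rejected
```

Note that the test passes when the database *rejects* the operation: for constraint tests, the expected outcome is an error.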

Alex: What are the main types of database testing?

Sam: Functional testing checks that the database does what it's supposed to do: that CRUD operations work correctly, that stored procedures and triggers behave as intended, and that queries return accurate results. Data integrity testing specifically checks that the constraints you've defined are being enforced by the database engine. Schema testing verifies that the tables, columns, data types, and relationships match the specification. Performance testing checks response times under realistic or peak loads.
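Schema testing in particular is easy to automate, because the database can describe itself. A minimal sketch, again using `sqlite3` (where `PRAGMA table_info` lists each column's name and declared type) and an illustrative two-column specification:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE customer (
        id    INTEGER PRIMARY KEY,
        email TEXT NOT NULL
    )
""")

# Specification: the column names and declared types the design document requires.
expected = {"id": "INTEGER", "email": "TEXT"}

# PRAGMA table_info returns one row per column:
# (cid, name, type, notnull, default, pk).
actual = {row[1]: row[2] for row in conn.execute("PRAGMA table_info(customer)")}

assert actual == expected, f"Schema mismatch: {actual}"
print("schema matches specification")
```

Other engines expose the same information through the standard `information_schema` views, so the same pattern carries over.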

Alex: How do you actually run these tests?

Sam: For functional and schema tests, you write test SQL scripts that set up test data, perform the operation being tested, and then check the result. Many development teams use database testing frameworks like DbUnit for Java or pytest with SQLAlchemy for Python to structure and automate these tests. For performance testing, tools like Apache JMeter can simulate load and measure response times.
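The set-up / operate / check structure Sam describes can be seen even without a framework. This sketch tests a hypothetical audit trigger using `sqlite3`; the table and trigger names are invented for the example:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Set up: schema, plus the trigger under test, which should
# write an audit row for every insert into orders.
conn.executescript("""
    CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL);
    CREATE TABLE audit  (order_id INTEGER, action TEXT);

    CREATE TRIGGER log_order AFTER INSERT ON orders
    BEGIN
        INSERT INTO audit (order_id, action) VALUES (NEW.id, 'INSERT');
    END;
""")

# Perform the operation being tested.
conn.execute("INSERT INTO orders (id, total) VALUES (1, 9.99)")

# Check the result: exactly one matching audit row.
rows = conn.execute("SELECT order_id, action FROM audit").fetchall()
assert rows == [(1, "INSERT")], rows
print("trigger test passed")
```

A framework such as pytest adds the plumbing around this pattern: fixtures for the set-up, test discovery, and reporting.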

Alex: What are some of the edge cases you should always test?

Sam: Boundary values: what happens when you insert a value at the exact maximum length of a VARCHAR column? What happens with negative numbers where you'd expect only positive ones? Null values: what happens when you don't provide a value for a column that has a NOT NULL constraint? Referential integrity violations: what happens when you try to delete a record that other records depend on? Concurrent access: what happens when two users try to update the same record simultaneously?
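Two of those edge cases, a NULL in a NOT NULL column and a delete that would orphan dependent records, can be sketched as follows with `sqlite3` (note that SQLite only enforces foreign keys when the pragma is switched on; the schema here is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this enabled per connection
conn.executescript("""
    CREATE TABLE customer (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE orders (
        id INTEGER PRIMARY KEY,
        customer_id INTEGER NOT NULL REFERENCES customer(id)
    );
""")
conn.execute("INSERT INTO customer (id, name) VALUES (1, 'Ada')")
conn.execute("INSERT INTO orders (id, customer_id) VALUES (10, 1)")

# Edge case 1: NULL into a NOT NULL column must be rejected.
try:
    conn.execute("INSERT INTO customer (id, name) VALUES (2, NULL)")
    null_rejected = False
except sqlite3.IntegrityError:
    null_rejected = True

# Edge case 2: deleting a customer that an order depends on must be rejected.
try:
    conn.execute("DELETE FROM customer WHERE id = 1")
    delete_rejected = False
except sqlite3.IntegrityError:
    delete_rejected = True

assert null_rejected and delete_rejected
print("edge-case tests passed")
```

Boundary-length and concurrency cases follow the same shape: drive the database to the edge of a rule and assert that it responds as the specification demands.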

Alex: How do you document the testing process?

Sam: A good test plan documents each test case with a unique identifier, a description of what's being tested, the preconditions, the test steps, the expected result, and the actual result. This documentation serves several purposes: it provides evidence of a rigorous testing process, it creates a record of what was tested and found, and it enables the same tests to be re-run after future changes to check that nothing has been broken.
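As a rough sketch of that record structure, each test case can be captured as a small data object; the field contents below are illustrative, not from a real test plan:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    # The fields a test plan documents for each case.
    case_id: str
    description: str
    preconditions: str
    steps: str
    expected: str
    actual: str = ""  # filled in when the test is run

    @property
    def status(self) -> str:
        if not self.actual:
            return "NOT RUN"
        return "PASS" if self.actual == self.expected else "FAIL"

tc = TestCase(
    case_id="TC-007",
    description="Duplicate email address is rejected",
    preconditions="customer table exists with UNIQUE(email); one row present",
    steps="Insert a second customer row reusing the existing email",
    expected="IntegrityError raised",
)
tc.actual = "IntegrityError raised"   # recorded after the run
print(tc.case_id, tc.status)  # TC-007 PASS
```

Keeping expected and actual results side by side in a structured record is what makes the tests repeatable after future changes.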

Alex: And what happens when a test fails?

Sam: The failure is recorded and the defect is investigated and fixed. Then the test is re-run to confirm the fix. Any related tests are also re-run to check for regression, where fixing one thing breaks another. This cycle of test, fix, re-test is central to quality assurance in any software or database project.

Alex: Thanks Sam. Our final Unit 4 lesson covers documentation next.