- API security testing should cover the OWASP API Security Top 10, a widely referenced list of the most critical API security risks including broken object-level authorisation, broken authentication and excessive data exposure.
- Performance testing of APIs involves simulating realistic production traffic loads and measuring response time, throughput and error rates under different load conditions using tools such as Apache JMeter or k6.
- API documentation evaluation should assess not just completeness and accuracy but also usability: good documentation should enable a competent developer who has never used the API before to make a successful first call within minutes.
- Versioning review is an important part of API evaluation: the strategy for managing API evolution must be assessed for its ability to support backward compatibility whilst allowing the API to change and improve over time.
- Evaluating an API solution requires taking the perspective of the developer who will consume it, not just the developer who built it: walking through the developer experience from authentication setup through to a complete use case is the most reliable way to identify friction and usability problems.
Alex: Hello and welcome back to The Study Podcast. Today we're looking at how to evaluate API solutions once they've been built. Sam, evaluation at this level is really about checking against the promise of the original design.
Sam: Exactly. The design made claims about what the API would do, how it would be secured and how it would perform. Evaluation is the systematic process of verifying those claims against the reality of the implementation. And it often reveals gaps that the implementation process introduced between the intention of the design and the actual behaviour of the system.
Alex: Let's start with security evaluation, because it has the most serious potential consequences if it's inadequate.
Sam: The OWASP API Security Top 10 is the standard reference for API security testing. The most critical issues it identifies include broken object-level authorisation, where an API allows a user to access or modify another user's resources simply by changing an ID in the request; broken authentication, where the mechanism that verifies who is making a request is flawed; excessive data exposure, where the API returns more data than the consumer needs or is entitled to; and lack of rate limiting, which leaves the API vulnerable to brute force and denial of service attacks. Testing systematically against each of these categories identifies the most significant security vulnerabilities.
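The broken object-level authorisation check Sam describes can be sketched as a small test. This is a minimal illustration, not a real penetration test: the in-memory invoice store, the deliberately vulnerable handler and all names are invented stand-ins for a real API endpoint.

```python
# Hypothetical in-memory resource store standing in for a real API backend.
INVOICES = {
    "inv-1": {"owner": "alice", "total": 120},
    "inv-2": {"owner": "bob", "total": 250},
}

def get_invoice_vulnerable(requesting_user: str, invoice_id: str):
    """Deliberately flawed: returns any invoice, ignoring who is asking."""
    return INVOICES.get(invoice_id)

def get_invoice_secure(requesting_user: str, invoice_id: str):
    """Enforces object-level authorisation: only the owner sees the invoice."""
    invoice = INVOICES.get(invoice_id)
    if invoice is None or invoice["owner"] != requesting_user:
        return None
    return invoice

def bola_test(handler) -> bool:
    """True if the handler leaks another user's resource (a BOLA finding)."""
    leaked = handler("alice", "inv-2")  # alice requests bob's invoice by ID
    return leaked is not None and leaked["owner"] != "alice"

print(bola_test(get_invoice_vulnerable))  # vulnerable handler leaks the resource
print(bola_test(get_invoice_secure))      # secure handler does not
```

The same pattern scales to a real API: authenticate as one user, request another user's resource IDs and assert that the API refuses.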
Alex: How do you test API performance?
Sam: Performance testing involves sending realistic volumes of requests to the API and measuring how it responds. Tools like Apache JMeter, k6 and Gatling allow you to define load test scenarios: a gradually increasing load to find the point at which performance degrades, a sustained load to identify stability issues over time and a spike test to see how the system recovers from sudden traffic increases. The metrics you measure include response time at different percentiles (the average response time hides a lot: the 95th or 99th percentile shows the worst experience that a significant minority of users have), throughput and error rate under load.
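The percentile point Sam makes can be shown with a short calculation. The latency samples and counters below are invented; in practice a tool such as k6 or JMeter produces these numbers, but the arithmetic is the same.

```python
def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile: the smallest value >= p% of the samples."""
    ranked = sorted(samples)
    k = max(0, min(len(ranked) - 1, round(p / 100 * len(ranked)) - 1))
    return ranked[k]

# Invented response times in milliseconds from a hypothetical load test.
latencies_ms = [80, 85, 90, 92, 95, 100, 110, 130, 400, 950]
errors, total_requests = 3, 200  # invented error counters

print("median (p50):", percentile(latencies_ms, 50))
print("p95:", percentile(latencies_ms, 95))
print("error rate:", errors / total_requests)
```

Note how the median looks healthy while the 95th percentile exposes the slow tail: this is exactly why load-testing tools report multiple percentiles rather than a single average.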
Alex: And documentation evaluation. How do you assess whether documentation is good enough?
Sam: The most useful test is to give the documentation to a developer who hasn't worked on the API and ask them to use it to make their first successful API call. Observe where they get stuck, what questions they have to ask and how long it takes. Good documentation enables a competent developer to make a successful call within a few minutes of first encountering the API. Beyond that first call, good documentation covers authentication setup clearly, describes all endpoints with their parameters and responses, explains error codes and how to handle them, and provides practical examples for the most common use cases.
Alex: And how does the evaluation feed back into improvement?
Sam: Every finding from the evaluation should be categorised, prioritised and translated into specific improvements. Critical security vulnerabilities need immediate remediation. Performance issues might require architectural changes or infrastructure scaling. Documentation gaps need to be filled before the API is made available to external consumers. The evaluation report is both an assessment of the current state and a roadmap for improvement.
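The categorise-and-prioritise step can be sketched as a simple triage. The severity scale and the example findings are illustrative assumptions, not a prescribed taxonomy.

```python
# Illustrative severity ordering: lower number = fix sooner.
SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Invented example findings from a hypothetical evaluation.
findings = [
    {"area": "docs", "severity": "medium", "issue": "error codes undocumented"},
    {"area": "security", "severity": "critical", "issue": "BOLA on an invoice endpoint"},
    {"area": "performance", "severity": "high", "issue": "p99 latency degrades under sustained load"},
]

# Sort findings into a remediation roadmap, most urgent first.
roadmap = sorted(findings, key=lambda f: SEVERITY_RANK[f["severity"]])
for item in roadmap:
    print(f"[{item['severity']}] {item['area']}: {item['issue']}")
```

Even a minimal structure like this makes the evaluation report actionable: critical security findings surface first, matching the remediation order Sam describes.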
Alex: Really thorough evaluation framework. Thanks, Sam.