Artifact content
This artifact highlights practical non-functional testing experience as part of broader QA work. In addition to functional validation, I used Apache JMeter to design and execute load-testing scenarios and to evaluate product behavior under increased usage and stress.
- Tool: Apache JMeter
- Focus: load testing, response time, stability, throughput, concurrency, and performance-related risk visibility
- Practical context: web applications and API-backed systems where functional correctness alone was not enough
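The core idea behind a JMeter thread group — many concurrent virtual users hitting a flow while per-request timings are recorded — can be sketched in a few lines of Python. This is an illustration of the concept, not JMeter itself; `call_endpoint` is a hypothetical stand-in for a real HTTP request.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def call_endpoint(request_id: int) -> float:
    """Hypothetical stand-in for an HTTP call to the system under test.

    A real load script would issue an HTTP request here; this version
    just simulates a small amount of server-side work.
    """
    start = time.perf_counter()
    time.sleep(0.01)  # simulated processing time
    return time.perf_counter() - start

def run_load(total_requests: int, concurrency: int) -> list:
    """Fire total_requests calls with up to `concurrency` in flight at
    once (roughly what a JMeter thread group does) and return the
    per-request latencies in seconds."""
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(call_endpoint, range(total_requests)))

latencies = run_load(total_requests=50, concurrency=10)
print(f"requests: {len(latencies)}, max latency: {max(latencies):.3f}s")
```

In a real JMeter scenario the same parameters appear as the thread count, ramp-up period, and loop count of a thread group.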
The goal of this work was not to run a tool for its own sake, but to answer useful QA questions such as:
- How does the system behave under higher request volume or concurrent usage?
- Do critical flows remain stable when load increases?
- Where do bottlenecks start to appear: client-side wait time, API response latency, backend processing, or supporting infrastructure?
- What non-functional risks should be communicated before release?
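Answering these questions usually comes down to a few numbers: percentile response times and an error rate at a given load level. A minimal sketch of how such a summary might be computed from collected per-request timings follows; the function names and the nearest-rank percentile choice are assumptions for illustration, not JMeter internals.

```python
import statistics

def summarize_latencies(latencies_ms: list, error_count: int) -> dict:
    """Summarize a load run the way a JMeter aggregate report would:
    median, p95, and p99 response times plus an error rate."""
    ordered = sorted(latencies_ms)

    def percentile(p: float) -> float:
        # Nearest-rank percentile over the sorted successful samples.
        index = min(len(ordered) - 1, int(round(p * (len(ordered) - 1))))
        return ordered[index]

    total = len(latencies_ms) + error_count
    return {
        "samples": total,
        "median_ms": statistics.median(ordered),
        "p95_ms": percentile(0.95),
        "p99_ms": percentile(0.99),
        "error_rate": error_count / total,
    }

# Example: 100 successful samples from 100 ms upward, plus 2 failures.
samples = [100 + i * 5 for i in range(100)]
print(summarize_latencies(samples, error_count=2))
```

A p95 that climbs sharply while the median stays flat is a typical early sign of the bottlenecks mentioned above.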
In practice, this involved designing load-oriented scenarios, choosing the right target flows, preparing meaningful input data, executing runs, reviewing results, and summarizing findings in a way that supported release and engineering discussion.
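The "preparing meaningful input data" step often means generating a parameter file for JMeter's CSV Data Set Config element, so each virtual user logs in with distinct credentials. A small sketch of that preparation step, with purely synthetic column names and values:

```python
import csv
from pathlib import Path

def write_test_users(path: Path, count: int) -> None:
    """Generate a CSV of synthetic test accounts in the shape JMeter's
    CSV Data Set Config reads: a header row, then one record per line.
    The column names and user values here are illustrative only."""
    with path.open("w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["username", "password"])
        for i in range(count):
            writer.writerow([f"load_user_{i:04d}", f"secret_{i:04d}"])

write_test_users(Path("test_users.csv"), count=500)
```

Distinct per-thread data matters because replaying one account 500 times can hit caches and session reuse that hide the real load profile.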
This experience complements functional QA work because it shows that quality was evaluated not only from a correctness perspective, but also from a performance and stability perspective. That is especially important for web products where slowdowns, timeouts, or unstable behavior can affect user trust even when functional steps appear correct.
For recruiter review, this artifact supports the claim that my QA background includes both functional and non-functional testing, with hands-on use of JMeter for load-testing scenarios.