QA Project
Real manual QA case study for a browser-based SPA with explicit responsive and cross-browser coverage across Chrome, Firefox, Edge, and Safari at the 320, 375, 768, 1024, and 1440 px breakpoints.
Scope: Application shell, interface layout, controls, settings and fullscreen behavior, viewport handling, cross-browser checks in Chrome, Firefox, Edge, and Safari at the 320/375/768/1024/1440 px breakpoints, public technical endpoints such as robots.txt and sitemap.xml, documentation of 76 manual test cases, and a high-severity visual-integrity bug report.
Goal: Assess a real single-page web application for responsive behavior, browser consistency, practical release-quality risks, and bug-reporting quality using structured manual testing and explicit test-case design.
Key findings: Prepared a 76-case test set across UI, Auth, Game, E2E, Security, and Wallet coverage; executed responsive checks at 320/375/768/1024/1440 px; compared Chrome, Firefox, Edge, and Safari behavior; captured high-severity BUG-001 for stale reel-canvas symbols, with impact and root-cause analysis; and confirmed a separate configuration defect in which /robots.txt and /sitemap.xml returned the main HTML application shell.
Result / impact: This case study now shows end-to-end manual QA work on a real web application: coverage planning, explicit browser/device scope, structured execution, serious bug reporting, technical investigation, and recruiter-friendly evidence.
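The robots.txt / sitemap.xml defect above is easy to detect programmatically. Below is a minimal sketch of the heuristic such a check could use; the function name is hypothetical and the sample responses are illustrative, not the project's actual payloads.

```python
def looks_like_spa_shell(content_type: str, body: str) -> bool:
    """Heuristic check for the defect class found in this project:
    /robots.txt or /sitemap.xml answering with the SPA's HTML shell
    instead of the expected plain-text or XML payload."""
    body = body.lstrip().lower()
    return (
        "text/html" in content_type.lower()
        or body.startswith("<!doctype html")
        or body.startswith("<html")
    )

# A healthy robots.txt response passes:
assert not looks_like_spa_shell("text/plain", "User-agent: *\nDisallow:")
# The misconfiguration observed here is flagged:
assert looks_like_spa_shell("text/html; charset=utf-8", "<!DOCTYPE html><html></html>")
```

In practice the two arguments would come from an HTTP client's response headers and body; the same heuristic works for both endpoints.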
QA Project
Static testing case study backed by a public SRS-style artifact: 91 requirements with priorities, browser support, API contracts, database assumptions, and traceability to test coverage.
Scope: Requirement review, acceptance criteria, browser support, API contracts, database assumptions, priority levels, traceability matrix, and testability analysis.
Goal: Reduce avoidable rework by clarifying expected behavior, measurable acceptance criteria, interfaces, and traceability before implementation and test execution expand.
Key findings: The public requirements artifact demonstrates structured coverage across Auth, Game, UI, API, SQL, NFR, and Wallet areas, with explicit priorities, browser support, API and database assumptions, and requirement-to-test traceability that helps surface ambiguity and missing measurable outcomes early.
Result / impact: This case study now shows practical skill in analyzing requirements for completeness, measurability, and testability, not only executing tests after implementation is finished.
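The requirement-to-test traceability described above can be reduced to a simple inversion of the test-case mapping. A minimal sketch follows; the requirement and test IDs are illustrative, not the artifact's real identifiers.

```python
from collections import defaultdict

def build_traceability(test_cases: dict[str, list[str]]) -> dict[str, list[str]]:
    """Invert a test-case -> requirements mapping into a
    requirement -> test-cases traceability matrix."""
    matrix = defaultdict(list)
    for test_id, req_ids in test_cases.items():
        for req_id in req_ids:
            matrix[req_id].append(test_id)
    return dict(matrix)

def uncovered(requirements: list[str], matrix: dict[str, list[str]]) -> list[str]:
    """Requirements with no linked test case -- the gaps a
    traceability review is meant to surface before execution."""
    return [r for r in requirements if r not in matrix]

reqs = ["AUTH-001", "GAME-002", "WALLET-003"]  # illustrative IDs
tests = {"TC-01": ["AUTH-001"], "TC-02": ["AUTH-001", "GAME-002"]}
matrix = build_traceability(tests)
print(uncovered(reqs, matrix))  # → ['WALLET-003']
```

The same inversion scales to the 91-requirement artifact: any requirement missing from the matrix has no planned coverage.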
QA Project
API testing case study with public Postman, Swagger, and SQL evidence: functional test cases, automated chained scenarios, collection variables, CRUD-style coverage, negative checks, and response validation.
Scope: Functional API test cases for auth, game, wallet, history, and provably fair flows; chained requests using collection variables; CRUD-style state transitions; negative and security checks; review of Swagger / OpenAPI documentation; and SQL reference review for persistence and constraint validation.
Goal: Verify stable behavior for authenticated API workflows, chained business actions, and risk-heavy edge cases using a structured Postman suite aligned with documented API contracts and database-aware validation.
Key findings: The public suite demonstrates functional API test cases, chained scenario execution through shared variables such as access_token and session_id, CRUD-style state validation, negative paths, and security checks covering IDOR, SQL injection, XSS payloads, and rate limiting.
Result / impact: This case study now shows concrete API QA capability with public Postman evidence, automated assertions, multi-step interrelated requests, and database-aware validation instead of generic tool-name claims.
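The chained-variable pattern used in the Postman suite (each step stores values such as access_token and session_id for later requests) can be mirrored as a short Python sketch. The FakeApi below stands in for the real service; endpoint names, payload shapes, and the login stub are assumptions for illustration, not the project's actual contract.

```python
import uuid

class FakeApi:
    """In-memory stand-in for the real API, just enough to show chaining."""
    def __init__(self):
        self.sessions = {}

    def login(self, user, password):
        # Stub: the real endpoint would validate credentials.
        return {"access_token": f"tok-{user}"}

    def start_session(self, access_token):
        session_id = str(uuid.uuid4())
        self.sessions[session_id] = {"owner": access_token, "balance": 100}
        return {"session_id": session_id}

    def place_bet(self, access_token, session_id, amount):
        session = self.sessions[session_id]
        if session["owner"] != access_token:   # IDOR-style negative check
            return {"status": 403}
        session["balance"] -= amount
        return {"status": 200, "balance": session["balance"]}

api = FakeApi()
variables = {}  # plays the role of Postman collection variables
variables["access_token"] = api.login("qa", "secret")["access_token"]
variables["session_id"] = api.start_session(variables["access_token"])["session_id"]

ok = api.place_bet(variables["access_token"], variables["session_id"], 25)
assert ok == {"status": 200, "balance": 75}    # happy path: state carried across steps
denied = api.place_bet("tok-attacker", variables["session_id"], 25)
assert denied == {"status": 403}               # negative path: foreign token rejected
```

In the actual suite these steps are Postman requests whose test scripts write to collection variables; the chaining and the IDOR negative check follow the same shape.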
QA Project
Web testing case study that also demonstrates full QA documentation flow in TestRail / Confluence style: planning, detailed test cases, checklist coverage, bug reporting, and summary reporting.
Scope: Catalog, product page, cart, checkout, validation messages, order confirmation, UI-data consistency, and the full documentation lifecycle from planning to summary reporting.
Goal: Ensure the catalog, cart, and checkout flow works correctly for desktop and mobile users and that high-risk issues are found and documented clearly before release.
Key findings: Detected layout breaks at tablet widths, stale cart totals on mobile, inconsistent coupon validation, and an order-summary mismatch between UI and backend data. Coverage and defects were recorded in a structured TestRail / Confluence-style documentation flow.
Result / impact: Critical checkout and mobile issues were fixed before release, while the documentation package made coverage, open risks, and release status clear for the team.
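The UI-versus-backend order-summary mismatch above amounts to a field-by-field comparison of the displayed totals against the backend order record. A minimal sketch, with illustrative field names and amounts in cents:

```python
def order_summary_mismatches(ui: dict, backend: dict) -> list[str]:
    """Return every field where the UI summary disagrees with the
    backend order data -- the mismatch class found during testing."""
    return [field for field in backend if ui.get(field) != backend[field]]

ui_summary = {"subtotal": 4000, "discount": 0, "total": 4000}       # as shown on screen
backend_order = {"subtotal": 4000, "discount": 400, "total": 3600}  # as persisted
print(order_summary_mismatches(ui_summary, backend_order))  # → ['discount', 'total']
```

An empty result means the confirmation page and the backend agree; any listed field is a candidate defect like the one reported in this project.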