Definition of Done vs Acceptance Criteria: Agile Testing Standards

Published on December 14, 2025 | 10-12 min read | Manual Testing & QA

Definition of Done vs Acceptance Criteria: Your Guide to Agile Testing Standards

In the fast-paced world of Agile development, delivering high-quality software consistently is the ultimate goal. Yet, without clear standards, teams can fall into the trap of "it's done when I say it's done," leading to buggy releases and frustrated stakeholders. Two of the most critical tools for preventing this are the Definition of Done (DoD) and Acceptance Criteria (AC). While they sound similar, confusing them can derail your project's quality.

This guide will demystify these essential Agile testing standards. You'll learn what they are, how they differ, and how to use them together to build a robust quality gate. Whether you're a new QA engineer, a developer, or a product owner, mastering these concepts is fundamental to succeeding in modern software projects and aligns perfectly with industry-standard frameworks like the ISTQB Foundation Level syllabus.

Key Takeaway

Acceptance Criteria define what needs to be built for a single user story to be correct. The Definition of Done defines how every user story must be completed to be considered shippable. Think of AC as the "feature spec" and DoD as the "project-wide quality checklist."

What Are Acceptance Criteria? The "What" of a User Story

Acceptance Criteria (AC) are a set of conditions that a software product must satisfy to be accepted by a user, customer, or other stakeholder. They are the functional boundaries of a single User Story or feature. In Agile QA, AC are your primary source of truth for creating test cases.

From a testing perspective, well-written AC answer the question: "How will I test this?" They transform vague requirements into concrete, verifiable statements.

How to Write Effective Acceptance Criteria

Good AC are clear, testable, and concise. A popular format is the "Given-When-Then" (GWT) structure, which is excellent for behavior-driven scenarios.

  • Given a certain precondition or context...
  • When a specific action is taken...
  • Then a verifiable outcome is observed.

Example: Manual Testing Context

User Story: As a registered user, I want to reset my password so I can regain access to my account.

Acceptance Criteria (GWT Format):

Scenario 1: Requesting a reset

  • Given I am on the login page and click "Forgot Password"
  • When I enter my registered email address and submit the form
  • Then I receive a success message stating "Password reset instructions have been sent to your email."

Scenario 2: Using the reset link

  • Given I have received the password reset email
  • When I click the unique reset link within 24 hours
  • Then I am taken to a page where I can enter a new password twice for confirmation.

Scenario 3: Setting the new password

  • Given I am on the password reset page
  • When I submit a new password that meets the security policy (e.g., min 8 chars, one number)
  • Then I am logged in automatically and see a confirmation message.

A manual tester would use these exact steps to create and execute test cases, checking each "Then" statement.
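To see how directly GWT criteria translate into verifiable checks, here is a minimal, self-contained Python sketch. The `PasswordResetFlow` class, its method names, and the exact policy rules are illustrative assumptions, not a real application's API; the point is that each "Then" in the AC becomes one assertion.

```python
import re

# Exact success message taken from the acceptance criteria above.
SUCCESS_MSG = "Password reset instructions have been sent to your email."

class PasswordResetFlow:
    """Toy stand-in for the password-reset feature under test."""

    def __init__(self, registered_emails):
        self.registered = set(registered_emails)

    def request_reset(self, email):
        # Scenario 1: a registered email yields the success message.
        if email in self.registered:
            return SUCCESS_MSG
        return "Email not found."

    @staticmethod
    def password_meets_policy(pw):
        # Security policy from the AC: min 8 chars, at least one number.
        return len(pw) >= 8 and bool(re.search(r"\d", pw))

# Each "Then" statement maps onto one assertion.
flow = PasswordResetFlow({"user@example.com"})
assert flow.request_reset("user@example.com") == SUCCESS_MSG
assert flow.password_meets_policy("secret123")   # meets policy
assert not flow.password_meets_policy("short1")  # too short
```

The same one-assertion-per-"Then" mapping applies whether the checks are executed by hand or by a test runner.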

What is the Definition of Done? The "How" for Every Story

The Definition of Done (DoD) is a shared, formal checklist of all the activities required to get a product increment to a potentially shippable state. It is a team agreement and applies to every user story in a sprint. While AC defines *what* is built, the DoD defines *how well* it is built.

The DoD acts as the project's ultimate quality gate. A story is not "Done" until every item on the DoD checklist is satisfied, regardless of how well it meets its AC.

Typical Components of a Definition of Done

A robust DoD covers multiple dimensions of quality. Here’s what it often includes:

  • Code Complete: Code is written, reviewed, and merged to the main branch.
  • Unit Tests: All unit tests are written, passed, and code coverage meets the agreed threshold.
  • Integration Tests: Relevant integration tests are passed.
  • QA Verification: Feature is tested against all AC; no critical/open bugs remain.
  • UI/UX Review: Design match is verified (for front-end stories).
  • Performance Check: No significant performance regression is introduced.
  • Documentation Updated: User guides, API docs, or release notes are updated.
  • Product Owner Acceptance: The PO has reviewed and accepted the story.
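Because the DoD is a fixed checklist applied to every story, it can be expressed as simple data plus an all-or-nothing check. The sketch below is illustrative (item names are shortened from the list above); real teams typically keep their DoD on a wiki or in their tracker rather than in code.

```python
# Team-wide Definition of Done, shortened from the checklist above.
DEFINITION_OF_DONE = [
    "code_reviewed",
    "unit_tests_pass",
    "integration_tests_pass",
    "qa_verified_against_ac",
    "docs_updated",
    "po_accepted",
]

def is_done(story_status: dict) -> bool:
    # The DoD is binary: every item must be satisfied, no exceptions.
    return all(story_status.get(item, False) for item in DEFINITION_OF_DONE)

story = {item: True for item in DEFINITION_OF_DONE}
assert is_done(story)
story["docs_updated"] = False  # one missing item...
assert not is_done(story)      # ...means the story is not done
```

Note that `is_done` treats a missing item the same as a failed one, which mirrors the rule that the DoD is not optional.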

DoD vs AC: The Critical Differences Side-by-Side

Understanding the distinction is crucial for effective Agile testing.

| Aspect | Acceptance Criteria (AC) | Definition of Done (DoD) |
|---|---|---|
| Scope | Specific to a single User Story or feature. | Universal; applies to every User Story in the project/sprint. |
| Purpose | Define what the feature should do (functional correctness). | Define how the work should be completed (overall quality and completeness). |
| Ownership | Primarily owned by the Product Owner/Business Analyst, with QA input. | Owned and agreed upon by the entire Agile team (Dev, QA, PO, Scrum Master). |
| Nature | Variable; changes with each story. | Static; remains consistent across stories (evolves slowly as a team standard). |
| Testing Focus | Basis for functional and system test cases (validating behavior). | Basis for non-functional checks, process adherence, and quality gates. |

Practical Insight: A story can pass all its AC (it works as specified) but fail the DoD (e.g., no code review was done, or documentation is missing). In a robust Agile QA process, that story is not done.

The ISTQB Foundation Level Perspective

The ISTQB Foundation Level syllabus provides a standardized framework for understanding these concepts within the broader context of software testing.

How this topic is covered in ISTQB Foundation Level

ISTQB formally defines acceptance criteria as "criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity." It links them directly to acceptance testing. The Definition of Done is discussed within the Agile testing framework as a key practice for establishing a shared understanding of quality and completion, often acting as a test oracle and an exit criterion for testing at the story level.

The syllabus emphasizes that testers must be involved in creating both AC and DoD to ensure testability and quality are baked in from the start—a principle called "early testing."

How this is applied in real projects (beyond ISTQB theory)

While ISTQB provides the theory, real-world application requires nuance. In practice, a DoD is often a living document on the team's wiki. Teams might have a "hard" DoD (mandatory for every story) and a "soft" DoD for the sprint (e.g., "run performance benchmark"). Furthermore, in manual testing-heavy environments, the DoD might explicitly include steps like "tested on browsers X, Y, Z" or "verified on mobile device resolutions." The key is that the team lives by it; it's not just a theoretical list.

For those looking to bridge this theory-practice gap, an ISTQB-aligned Manual Testing Course that focuses on real project workflows can be invaluable.

Building Your Quality Gates: Integrating DoD and AC

Together, AC and DoD create a powerful, multi-layered defense against poor quality. Here’s how they work in sequence during a sprint:

  1. Story Refinement: The team, including QA, reviews the User Story and its AC for clarity and testability.
  2. Development & Unit Testing: Developer completes code, ensuring it meets AC and passes unit tests (part of DoD).
  3. Code Review & Merge: Another developer reviews the code (DoD item).
  4. QA Verification: The tester executes tests derived from the AC. This is the primary functional validation.
  5. DoD Checklist Run: Beyond functional testing, QA or the team verifies all other DoD items: cross-browser check, documentation update, PO demo, etc.
  6. Done: Only when both the AC are satisfied AND the DoD checklist is complete is the story moved to "Done."
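The two-layer gate in steps 1-6 can be condensed into a single rule: a story ships only when its story-specific AC all pass AND the team-wide DoD is complete. This is a conceptual sketch with illustrative names, not a real tooling integration.

```python
def story_is_shippable(ac_results: list, dod_checklist: dict) -> bool:
    """Two independent gates: functional correctness AND process quality."""
    ac_satisfied = all(ac_results)               # every AC check passed
    dod_complete = all(dod_checklist.values())   # every DoD item satisfied
    return ac_satisfied and dod_complete

# A story that works as specified but skipped code review is NOT done.
assert not story_is_shippable(
    [True, True, True],
    {"code_reviewed": False, "qa_verified": True},
)
# Only when both gates pass does the story move to "Done".
assert story_is_shippable(
    [True, True],
    {"code_reviewed": True, "qa_verified": True},
)
```

Keeping the two gates separate in the logic mirrors the practical insight above: passing AC alone never implies "Done".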

Common Pitfalls and How to Avoid Them

  • Pitfall 1: Vague Acceptance Criteria. AC like "user can log in" are untestable.
    Solution: Use the GWT format to enforce specificity.
  • Pitfall 2: Treating DoD as Optional. Teams skip "non-essential" DoD items under pressure.
    Solution: The DoD is binary. If it's not met, the story is not done. Period.
  • Pitfall 3: QA Not Involved Early. Testers receive stories with pre-written, poor AC.
    Solution: Insist on tester participation in all refinement sessions to ask clarifying questions.
  • Pitfall 4: Static DoD. The DoD never evolves as the team and product mature.
    Solution: Review and update the DoD during sprint retrospectives.

Mastering the creation and enforcement of AC and DoD is a core skill for any Agile professional. It moves quality from an afterthought to a built-in feature of your development process. For a hands-on deep dive into applying these and other quality standards in real testing scenarios, consider exploring practical training that goes beyond theory.

FAQs: Definition of Done & Acceptance Criteria

"I'm new to QA. Do I write the Acceptance Criteria or does the Product Owner?"
Typically, the Product Owner (PO) owns the "what" and drafts the initial AC. However, as a QA, your critical role is to challenge and refine them during backlog grooming. Ask questions like "How would we test this?" to make AC concrete and unambiguous. It's a collaborative effort.
"Can a story be 'Done' if it has a minor bug?"
It depends on your DoD. A common DoD item is "no open critical/high-priority bugs." A minor (low-priority) bug might be documented and scheduled for a future fix, allowing the story to be considered done. However, this must be an explicit team agreement, not an ad-hoc decision.
"What's the difference between DoD and Acceptance Criteria? They seem the same."
AC are the feature-specific rules (e.g., "The login button must be blue"). The DoD is the project-wide checklist for completeness (e.g., "Code is reviewed, tests are written, PO accepted"). A story must satisfy both its unique AC and the universal DoD.
"Our team doesn't have a Definition of Done. How do I start?"
Propose a retrospective topic. Start simple! Ask the team: "What must always be true for us to call a story truly finished?" List items (code review, testing, documentation). Write them down and agree to try them for one sprint. A good foundation in manual testing fundamentals will help you contribute effective quality-focused items.
"As a manual tester, how do I use the DoD?"
The DoD is your authority to say "this isn't ready." Before you start detailed functional testing (based on AC), check if the DoD's pre-conditions are met (e.g., "Is the code merged to the test branch?"). After testing, you also verify DoD items like "tested on supported browsers."
"Who is responsible for checking the Definition of Done?"
The entire team is collectively responsible. Developers check code review and unit test items. QA checks verification and bug status. The PO checks acceptance. The Scrum Master often facilitates adherence. It's a shared quality commitment.
"Can Acceptance Criteria change during a sprint?"
In Agile, change is expected, but late changes can be disruptive. Ideally, AC are locked down before the sprint starts (during refinement). If new information emerges, the PO can propose a change, but the team must assess its impact on the current sprint's scope and re-negotiate if necessary.
"How detailed should Acceptance Criteria be?"
Detailed enough to be unambiguous and testable, but not a step-by-step technical spec. They should describe the desired outcome and behavior, not the implementation. A good rule of thumb: another team member should be able to write a basic test case from them without asking further questions.
"Is the Definition of Done the same for every team?"
No. A DoD is a team agreement. A startup's DoD might be leaner than a bank's. A front-end team's DoD might include "design approval," while a back-end team's includes "API documentation." The key is that it's consistent within the team and reflects their quality standards.
"How do I prepare for ISTQB questions on this topic?"
Focus on the standard definitions and the relationship between the concepts. Understand that AC are for acceptance testing and define "what," while DoD is a broader quality checklist for the increment. Practicing with scenarios that ask you to identify which is which is very helpful. Combining ISTQB theory with a practical, hands-on course will prepare you for both the exam and real-world projects.