API reference writing

How to write an API reference that passes code review

API references fail for the same reasons code fails review: missing context, inconsistent conventions, and examples that have never been run. The fix is to apply the same standards engineers already use for code to the documentation that describes it.


When an engineer submits a pull request, the reviewer checks for correctness, consistency, missing cases, and whether the code actually does what it says it does. These are not special criteria for code. They are the baseline for any technical artefact meant to be used reliably by someone else.

API references are technical artefacts. They describe contracts that other engineers build against. But most of them would not survive the first pass of a code review. Parameters are listed without explaining what values are valid. Examples reference tokens that do not exist. Response shapes differ between endpoints with no documented reason. The reference says one thing; the API does another.

The question is not whether your docs are thorough. It is whether they would hold up if you applied the same review criteria your team uses for code.

What "passes review" actually means for a reference

In a code review, a comment like "what does this return when the input is empty?" is not pedantry. It is a gap that will eventually cause a bug. The same question applies to every parameter, every endpoint, and every response field in your API reference.

A reference that passes review satisfies four criteria:

  • Correct — every statement matches what the API actually does. Not what it was designed to do, not what the ticket says — what it does when you call it today.
  • Complete at the boundaries — edge cases, null values, empty arrays, and error states are documented. Not exhaustively, but for every case a reasonable integration would encounter.
  • Consistent — naming conventions, parameter formats, and response shapes follow the same patterns across every endpoint. Inconsistencies that exist in the API are called out explicitly, not silently passed through.
  • Testable — every example in the reference can be run against the API and return the documented response.

None of these criteria require a technical writer. They require the same discipline a senior engineer applies when reviewing a pull request.

Parameter descriptions that carry information

The most common failure in API references is parameter descriptions that describe the name, not the value. "The ID of the user" tells a developer nothing they could not infer from the field name user_id. A description that passes review tells a developer something the name does not.

For each parameter, the description should answer at least one of the following:

  • What values are valid? If the field accepts an enum, list the values. If it accepts a string, state the format, maximum length, and whether it is case-sensitive.
  • What is the default? If omitting the parameter has a defined behaviour, document it. "Defaults to created_at descending" is more useful than "optional".
  • What happens at the boundary? If there is a minimum, maximum, or required format, state it. A developer who sends a string where an integer is expected should not have to discover that from a 400 response.
  • Is it tied to another parameter? Conditional requirements — "required if type is scheduled" — must be documented explicitly. They will not be inferred.

A parameter description that answers none of these questions is a placeholder, not documentation. Treat it the same way you would treat a code comment that says // does the thing.
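Documented constraints of this kind map directly onto validation logic. As a minimal sketch, assume a hypothetical endpoint with a `type` enum that defaults to `immediate` and a conditional requirement that `send_at` is present when `type` is `scheduled` (all names here are invented for illustration):

```python
# Hypothetical validator mirroring the documented parameter constraints:
# - "type" accepts an enum and defaults to "immediate"
# - "send_at" is required if type is "scheduled"
VALID_TYPES = {"immediate", "scheduled"}

def validate_params(params: dict) -> list[str]:
    """Return a list of validation errors; empty means the request is valid."""
    errors = []
    t = params.get("type", "immediate")  # documented default
    if t not in VALID_TYPES:
        errors.append(f"type must be one of {sorted(VALID_TYPES)}")
    # Conditional requirement, stated explicitly rather than inferred:
    if t == "scheduled" and "send_at" not in params:
        errors.append("send_at is required when type is 'scheduled'")
    return errors
```

If a description answers the four questions above, a developer can write this check before ever sending a request; if it does not, they discover each rule from a 400 response instead.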

Examples that have actually been run

An example that has never been executed is a guess formatted as fact. It may be directionally correct. It will eventually be wrong — and when it is, a developer will make an integration decision based on the incorrect example and spend hours debugging something that was never a bug in their code.

The standard for examples in a reference that passes review:

  • Use real values, not placeholders. YOUR_API_KEY is acceptable for credentials. string, integer, and value are not acceptable for data fields. Use values that show the shape and format of real data.
  • Show the full request. Base URL, headers, authentication, and body. A developer should be able to copy the example, replace the credentials, and run it without reading anything else.
  • Show the response that example produces. Not a schema. A real JSON object with the same structure and realistic field values a production call would return.
  • Test every example before publishing. Run each one against a staging environment. Record the response. If the example drifts from the API on a future release, the test will catch it before the developer does.

If your publishing pipeline does not include a step that runs documentation examples against the API, you are publishing guesses. They may be accurate today. You have no mechanism to know when they stop being accurate.
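The core of such a pipeline step can be a structural comparison: does the live response still have the keys and value types shown in the documented example? A minimal sketch of that check (the harness that fetches the live response against staging is assumed, not shown):

```python
def response_matches_doc(documented, actual) -> bool:
    """Check that `actual` has at least the structure of the documented
    example: same keys and same value types, recursively."""
    if isinstance(documented, dict):
        return (
            isinstance(actual, dict)
            and set(documented) <= set(actual)  # every documented key is present
            and all(response_matches_doc(v, actual[k]) for k, v in documented.items())
        )
    if isinstance(documented, list):
        # Treat the first documented element as the shape of every item.
        return isinstance(actual, list) and (
            not documented
            or all(response_matches_doc(documented[0], item) for item in actual)
        )
    return type(documented) is type(actual)
```

Run this over every published example on each release; a failing comparison is the drift signal the paragraph above describes, caught before a developer hits it.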

Error responses are part of the contract

A code reviewer who sees a function with no documented failure modes will ask: "what happens when this throws?" That same question belongs in every API reference. Error responses are not edge cases. They are half of the contract. An integration that cannot handle errors correctly is not a finished integration.

For each endpoint, the reference should document:

  • Every error code the endpoint can return, not just the generic 400, 401, and 500 set. An endpoint that can return a 409 for a duplicate resource should say so, with the exact condition that triggers it.
  • The error response structure. If your API returns {"error": {"code": "...", "message": "..."}}, document that shape. A developer writing error handling code needs to know what they are parsing.
  • What the developer should do. "Invalid request body" tells a developer what went wrong. "Check that all required fields are present and that date fields are in ISO 8601 format" tells them how to fix it.
  • Common causes for ambiguous codes. A 422 on a particular endpoint might mean one of three different validation failures. Document each one, not just the status code.
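When the error envelope is documented, a developer's handling code writes itself. A sketch against the `{"error": {"code": "...", "message": "..."}}` shape mentioned above (the status-specific advice strings are hypothetical examples of the "what should the developer do" guidance, not anything a real API prescribes):

```python
import json

def handle_error(status: int, body: str) -> str:
    """Turn a documented error envelope into an actionable message.
    Assumes the shape {"error": {"code": "...", "message": "..."}}."""
    err = json.loads(body).get("error", {})
    code = err.get("code", "unknown")
    message = err.get("message", "")
    if status == 409:
        # Documented condition: duplicate resource.
        return f"duplicate resource ({code}): the resource already exists"
    if status == 422:
        # Ambiguous code: the docs should list each validation cause.
        return f"validation failed ({code}): {message}"
    return f"unexpected error {status} ({code}): {message}"
```

Note that every branch here depends on something only the reference can supply: the envelope shape, the conditions behind 409 and 422, and the fix-it guidance in the message.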

Consistency is a reviewable standard

In code, inconsistency is a smell. A function that returns null in some cases and an empty array in others, with no documented reason for the difference, will produce bugs. The same is true for an API. If your reference documents inconsistencies without explaining them, developers will treat them as mistakes — or worse, they will code around them in ways that break when the inconsistency is later removed.

  • Pagination. If most endpoints paginate using cursor but two use page and per_page, document the reason for the difference. If there is no reason, flag it as a known inconsistency and note whether it will be addressed.
  • Date formats. Pick one — ISO 8601 is the standard — and document it explicitly. If different endpoints return dates in different formats, document each one at the endpoint level and note the inconsistency at the API level.
  • ID formats. If some IDs are integers and others are UUIDs, document the type for every resource. A developer who assumes all IDs are the same type will write code that breaks on the exception.
  • Naming conventions. snake_case vs camelCase across request and response fields should be consistent. If it is not, document it.

Document what the API actually does. Flag known inconsistencies rather than hiding them. A developer who encounters an undocumented inconsistency will assume it is a bug.

Assign ownership the same way you assign code

Code without an owner drifts. Nobody updates it when the requirements change. Nobody notices when it breaks. Documentation without an owner does the same thing, on a longer timeline and with less visibility. The reference that was accurate at launch will not remain accurate without someone responsible for keeping it that way.

  • Documentation changes ship in the same PR as code changes. A parameter rename that goes out without a docs update is an incomplete release. Make this a requirement in your review checklist, not a follow-up task.
  • Assign doc ownership to teams, not individuals. An individual leaves. A team is accountable. The team that owns an endpoint owns the documentation for that endpoint.
  • Review documentation the same way you review code. If a PR changes an endpoint's behaviour, the reviewer checks whether the reference reflects that change before approving. This is not extra work. It is part of the definition of done.
  • Run a reference audit on a regular cadence. Quarterly is a reasonable starting point for a growing API. The audit checks every documented parameter, example, and error code against the current API behaviour.
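The core of such an audit is a diff between what the reference documents and what the endpoint actually accepts. A minimal sketch, assuming you can obtain both sets (for example from the published reference and an up-to-date OpenAPI export):

```python
def audit_parameters(documented: set[str], actual: set[str]) -> dict[str, set[str]]:
    """Diff documented parameters against the parameters the endpoint
    actually accepts today."""
    return {
        "undocumented": actual - documented,  # exists in the API, missing from docs
        "stale": documented - actual,         # documented, no longer in the API
    }
```

Either non-empty set is an audit finding: `undocumented` means the reference is incomplete, `stale` means it is wrong. Both fail the correctness criterion from the start of this article.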

The standard already exists — apply it

You do not need a separate framework for API reference quality. The framework is code review. Apply the same criteria: correctness, completeness at the boundaries, consistency, and the ability to verify behaviour against a real system.

The difference between a reference that passes that standard and one that does not is not how much time was spent writing it. It is whether the team treats documentation as a reviewable deliverable or as a publishing task that happens after the real work is done.

DocNova gives engineering teams a structured workspace to author, review, and publish API references alongside code — with OpenAPI import, version-controlled content, and publishing workflows that keep the reference in sync with every release.
