# Prompt

Source file: src/prompt/builder.ts

The prompt builder bridges Markdown specifications and LLMs. It takes a test file’s content and wraps it in structured instructions that tell the LLM exactly what to do: read the codebase in the current directory, evaluate each test scenario described in the Markdown, and return structured JSON results.

This is the most critical module for result quality — the LLM only knows what the prompt tells it. The prompt specifies the exact JSON schema for responses, including which fields to include for each status type (passing tests just need an ID and status; failing tests need the expectation, what was actually observed, the file location, and a suggested resolution). If the prompt is unclear or ambiguous, the LLM produces unreliable output that the parser and validator then have to work harder to handle.

The builder constructs the full prompt sent to the LLM CLI. The prompt is a single string with four parts:

  1. Role assignment — “You are a semantic test evaluator”
  2. Test file content — the full Markdown content of the test file, embedded under a heading
  3. Instructions — step-by-step directions for the LLM
  4. Response format — exact JSON schema the LLM must return
The assembled prompt looks like this:

```
You are a semantic test evaluator. Your task is to evaluate whether the
codebase in the current directory meets ALL test scenarios described in
the file below.

## Semantic Test File: {testName}

{testContent}

## Instructions

1. Examine the codebase in the current working directory.
2. Identify ALL distinct test scenarios or expectations in the file.
3. For each test scenario, extract an ID or slug that identifies it.
4. Evaluate each test scenario against the codebase.
5. Respond with ONLY a JSON array (no markdown fencing, no extra text).
```
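A minimal sketch of how such a builder might assemble these sections. The function name `buildPrompt` and the exact wording are illustrative assumptions, not necessarily what `src/prompt/builder.ts` contains:

```typescript
// Hypothetical sketch of the prompt builder; the real implementation
// in src/prompt/builder.ts may differ in names and exact wording.
export function buildPrompt(testName: string, testContent: string): string {
  return [
    "You are a semantic test evaluator. Your task is to evaluate whether the",
    "codebase in the current directory meets ALL test scenarios described in",
    "the file below.",
    "",
    `## Semantic Test File: ${testName}`,
    "",
    testContent,
    "",
    "## Instructions",
    "1. Examine the codebase in the current working directory.",
    "2. Identify ALL distinct test scenarios or expectations in the file.",
    "3. For each test scenario, extract an ID or slug that identifies it.",
    "4. Evaluate each test scenario against the codebase.",
    "5. Respond with ONLY a JSON array (no markdown fencing, no extra text).",
  ].join("\n");
}
```

Embedding the test file under a distinct heading keeps the LLM from confusing the spec content with the surrounding instructions.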

The LLM must return a JSON array. Each element describes one test scenario:

{ "id": "my-test-id", "status": "pass" }
{
"id": "my-test-id",
"status": "fail",
"expectation": "what the spec requires",
"observed": "what the code actually does",
"location": "src/path/to/file.ts",
"resolution": "how to fix it"
}
{ "id": "", "status": "invalid" }
{ "id": "my-test-id", "status": "skip" }
- The prompt asks for raw JSON (no markdown fencing) to simplify parsing. However, the parser has fallback strategies if the LLM wraps the response in code fences anyway.
- Each test file may contain multiple test scenarios; the LLM identifies and evaluates all of them in a single pass.
- The prompt is passed differently depending on the adapter: Claude receives it as a positional argument, while other CLIs receive it via stdin.
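The adapter difference can be sketched with Node's `child_process`; the dispatch below is a simplified assumption (real adapters would pass their own flags and handle errors):

```typescript
import { spawn, type ChildProcess } from "node:child_process";

// Hypothetical dispatch: pass the prompt as a positional argument for
// Claude, pipe it over stdin for every other CLI.
function runWithPrompt(cli: string, prompt: string): ChildProcess {
  if (cli === "claude") {
    // Positional argument: the prompt travels on the command line.
    return spawn(cli, [prompt], { stdio: ["ignore", "pipe", "pipe"] });
  }
  // Other CLIs read the prompt from stdin.
  const child = spawn(cli, [], { stdio: ["pipe", "pipe", "pipe"] });
  child.stdin!.write(prompt);
  child.stdin!.end();
  return child;
}
```

Piping via stdin avoids shell argument-length limits for large test files, which is one plausible reason an adapter might prefer it.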