# The Intent Assertion Language (IAL)
The Intent Assertion Language (IAL) is a term rewriting engine that translates natural language assertions into executable tests. It's the core technology that powers Intent-Driven Development in NTNT.
This post explains how IAL works today, where it's headed, and why term rewriting is a powerful approach to bridging human intent and machine verification.
## More Than HTTP Testing
IAL isn't just for testing web endpoints. It spans the full spectrum from unit tests to integration tests to acceptance tests, all in one unified language.
**Unit Testing:** Test pure functions with keyword syntax in glossary terms:
```
Feature: Text Utilities
id: feature.text_utilities

Scenario: Blog titles become clean URLs
  When generating a slug from a title
  → slug is URL-safe
  → result matches expected

Scenario: Slugs are predictable
  When generating a slug from a title
  → slug is predictable

---
## Glossary Extensions (Unit Testing)

| Term | Means |
|------|-------|
| generating a slug from a title | call: slugify({title}), source: lib/text.tnt, input: test_data.slugify_examples |
| slug is URL-safe | check: invariant.url_slug |
| slug is predictable | property: deterministic |
| result matches expected | result is {expected} |

---
## Test Data

Test Cases: Slugify Examples
id: test_data.slugify_examples

| title | expected |
| "Hello World" | "hello-world" |
| "My First Post!" | "my-first-post" |
```
The keywords (`call:`, `source:`, `input:`, `property:`, `check:`) connect natural language scenarios to function calls, test data, and property checks.
**Integration Testing:** Test how components work together:
```
Feature: User Service

Scenario: Create user writes to database
  When calling create_user("[email protected]")
  → result is Ok
  → row exists where email = "[email protected]"
```
**Acceptance Testing:** Test full HTTP flows:
```
Feature: User Registration

Scenario: New user signs up
  When POST /api/users with {"email": "[email protected]"}
  → status 201
  → they see "Welcome"
```
One language, all levels of testing. The same glossary-based approach works everywhere.
## The Problem IAL Solves
When you write a requirement like "the homepage should load successfully," what does that actually mean in code? It might mean:
- HTTP status code is 200
- Response has a Content-Type header containing "text/html"
- Response time is under 500ms
- Body is not empty
Traditional testing frameworks force you to write these checks explicitly in code. The natural language requirement lives in a document somewhere, disconnected from the tests that verify it.
IAL takes a different approach: you write the natural language, and the system expands it into executable checks automatically.
## Term Rewriting: The Core Mechanism
IAL is a term rewriting system. You define terms, and IAL recursively expands them until it reaches primitives that can be executed directly.
Here's a simple example. You define these terms in your glossary:
| Term | Means |
|------|-------|
| page loads successfully | status 200, returns HTML |
| returns HTML | header "Content-Type" contains "text/html" |
When IAL encounters "page loads successfully" in your intent file, it rewrites:
```
page loads successfully
→ status 200, returns HTML
   → status 200
      → Check(Equals, response.status, 200)
   → returns HTML
      → header "Content-Type" contains "text/html"
         → Check(Contains, response.headers.content-type, "text/html")
```
The rewriting continues until every term resolves to a primitive. Primitives are the leaf nodes that actually execute.
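To make the mechanism concrete, here is a minimal sketch of glossary-driven rewriting in Python. Everything here is illustrative, not NTNT's actual implementation: the `GLOSSARY` and `PRIMITIVES` dictionaries and the `rewrite` function are assumptions, and each glossary "Means" entry is assumed to be pre-split into a list of subterms.

```python
# Illustrative sketch of glossary-driven term rewriting (not NTNT's
# actual implementation). The "Means" column of each glossary entry is
# assumed to be pre-split into a list of subterms.
GLOSSARY = {
    "page loads successfully": ["status 200", "returns HTML"],
    "returns HTML": ['header "Content-Type" contains "text/html"'],
}

# Primitives are the leaf nodes that rewriting stops at.
PRIMITIVES = {
    "status 200": ("Check", "Equals", "response.status", 200),
    'header "Content-Type" contains "text/html"':
        ("Check", "Contains", "response.headers.content-type", "text/html"),
}

def rewrite(term):
    """Recursively expand a term until only executable primitives remain."""
    if term in PRIMITIVES:
        return [PRIMITIVES[term]]
    if term in GLOSSARY:
        checks = []
        for subterm in GLOSSARY[term]:
            checks.extend(rewrite(subterm))
        return checks
    raise ValueError(f"unknown term: {term!r}")

for check in rewrite("page loads successfully"):
    print(check)
```

The key property is the recursion: any term that is not a primitive must be defined in terms of other terms, so expansion always bottoms out at executable checks.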
## What's Available Today
The current IAL implementation supports HTTP-based testing with the following capabilities:
| Capability | Status |
|---|---|
| Glossary definitions with parameterized terms | Available |
| HTTP status assertions (`status 200`, `status 2xx`) | Available |
| Body content assertions (`body contains`, `body not contains`) | Available |
| Header assertions (`header "X" contains "Y"`) | Available |
| `@implements` annotations linking code to intent | Available |
| Intent Studio visualization | Available |
| Term composition (terms referencing other terms) | Available |
## The Three Layers
IAL has three layers of vocabulary, checked in order:
### 1. Your Glossary
Project-specific terms you define in your .intent file. These capture your domain language.
| Term | Means |
|------|-------|
| user is logged in | header "Authorization" exists |
| cart is empty | body contains "Your cart is empty" |
| checkout succeeds | status 200, body contains "Order confirmed" |
### 2. Standard Terms
Built-in terms that work in any project. These handle common patterns:
```
status 200            → Check(Equals, response.status, 200)
status 2xx            → Check(InRange, response.status, 200-299)
body contains "x"     → Check(Contains, response.body, "x")
body not contains "x" → Check(NotContains, response.body, "x")
content-type is json  → Check(Contains, response.headers.content-type, "application/json")
response time < 500ms → Check(LessThan, response.time_ms, 500)
code is valid         → Check(Equals, code.quality.passed, true)
exits successfully    → Check(Equals, cli.exit_code, 0)
```
### 3. Primitives
The base operations that actually execute. You never write these directly; IAL resolves to them automatically.
## IAL Primitives
Every assertion eventually resolves to a primitive operation. These are the base cases that actually execute:
**`Http(method, path, body?, headers?)`**
Executes an HTTP request and captures the response into context. After execution, you can check `response.status`, `response.body`, `response.headers.*`, and `response.time_ms`.

**`Cli(command, args?)`**
Executes a command-line operation. Captures `cli.exit_code`, `cli.stdout`, and `cli.stderr`.

**`CodeQuality(path)`**
Runs lint and validation checks on source files. Sets `code.quality.passed`, `code.quality.error_count`, and `code.quality.warning_count`.

**`FunctionCall(name, args)`**
Calls an NTNT function directly for unit testing. The return value is stored in `result`.

**`PropertyCheck(function, property_type, inputs)`**
Verifies function properties like determinism (same input always gives same output) or idempotence (f(f(x)) equals f(x)).

**`Check(operation, path, expected)`**
The universal assertion primitive. Compares a context value against an expected value using one of the check operations.
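As an illustration of the `PropertyCheck` primitive, the two property types described above could be verified along these lines. This is a hedged sketch: `is_deterministic`, `is_idempotent`, and the toy `slugify` are invented for this example and are not NTNT's implementation.

```python
# Hedged sketch of how PropertyCheck might verify determinism and
# idempotence; is_deterministic/is_idempotent are invented names.

def is_deterministic(fn, inputs, runs=3):
    """Same input must always produce the same output."""
    return all(len({repr(fn(x)) for _ in range(runs)}) == 1 for x in inputs)

def is_idempotent(fn, inputs):
    """Applying the function twice must equal applying it once: f(f(x)) == f(x)."""
    return all(fn(fn(x)) == fn(x) for x in inputs)

def slugify(title):
    # Toy function under test, standing in for slugify in lib/text.tnt.
    kept = "".join(c for c in title.lower() if c.isalnum() or c in " -")
    return "-".join(kept.split())

print(is_deterministic(slugify, ["Hello World", "My First Post!"]))  # True
print(is_idempotent(slugify, ["Hello World", "My First Post!"]))     # True
```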
## Check Operations
The Check primitive supports these comparison operations:
| Operation | Description |
|---|---|
| Equals | Exact equality |
| NotEquals | Not equal |
| Contains | String contains substring, or array contains element |
| NotContains | Does not contain |
| Matches | Regex pattern match |
| StartsWith | String starts with prefix |
| EndsWith | String ends with suffix |
| Exists | Value exists (not null) |
| NotExists | Value does not exist |
| LessThan | Numeric comparison |
| GreaterThan | Numeric comparison |
| InRange | Value within numeric range (e.g., 200-299) |
| IsType | Check value type |
| HasLength | Check length equals value |
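To show how dispatch on these operations might work, here is a sketch in Python. The `OPS` table and `check` function are hypothetical, `InRange` is modeled here as a `(low, high)` tuple, and a couple of operations are omitted for brevity.

```python
# Hypothetical dispatch table for the Check primitive; operation names
# follow the table above. InRange is modeled as a (low, high) tuple.
import re

OPS = {
    "Equals":      lambda actual, expected: actual == expected,
    "NotEquals":   lambda actual, expected: actual != expected,
    "Contains":    lambda actual, expected: expected in actual,
    "NotContains": lambda actual, expected: expected not in actual,
    "Matches":     lambda actual, expected: re.search(expected, actual) is not None,
    "StartsWith":  lambda actual, expected: actual.startswith(expected),
    "EndsWith":    lambda actual, expected: actual.endswith(expected),
    "Exists":      lambda actual, expected: actual is not None,
    "NotExists":   lambda actual, expected: actual is None,
    "LessThan":    lambda actual, expected: actual < expected,
    "GreaterThan": lambda actual, expected: actual > expected,
    "InRange":     lambda actual, expected: expected[0] <= actual <= expected[1],
    "HasLength":   lambda actual, expected: len(actual) == expected,
}

def check(operation, actual, expected=None):
    """Run one comparison; a real engine would also report pass/fail details."""
    return OPS[operation](actual, expected)

print(check("InRange", 201, (200, 299)))                           # True
print(check("Contains", "text/html; charset=utf-8", "text/html"))  # True
```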
## The Context System
IAL uses a context object to store values during test execution. Primitives write to context, and Check operations read from it.
Context paths use dot notation:
```
# HTTP response context
response.status                  # HTTP status code (number)
response.body                    # Response body (string)
response.headers.content-type    # Specific header
response.time_ms                 # Response time in milliseconds

# CLI context
cli.exit_code                    # Process exit code
cli.stdout                       # Standard output
cli.stderr                       # Standard error

# Code quality context
code.quality.passed              # Whether lint passed (bool)
code.quality.error_count         # Number of errors

# Function call context
result                           # Return value from function calls
```
When you write `status 200`, IAL resolves it to `Check(Equals, response.status, 200)`. The `Check` primitive reads `response.status` from context and compares it to `200`.
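If the context is modeled as nested dictionaries, an assumption made purely for illustration (NTNT's internal representation may differ), dot-path lookup is a short loop:

```python
# Context modeled as nested dicts (an assumption for illustration;
# NTNT's internal representation may differ).
context = {
    "response": {
        "status": 200,
        "body": "<h1>Welcome</h1>",
        "headers": {"content-type": "text/html; charset=utf-8"},
        "time_ms": 42,
    }
}

def lookup(ctx, path):
    """Resolve a dotted path like 'response.headers.content-type'."""
    value = ctx
    for key in path.split("."):
        value = value[key]
    return value

# status 200 resolves to Check(Equals, response.status, 200):
print(lookup(context, "response.status") == 200)  # True
```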
## Parameterized Terms
Glossary terms can include parameters using {param} syntax:
| Term | Means |
|------|-------|
| they see "{text}" | body contains "{text}" |
| user {name} exists | body contains "User: {name}" |
| response time under {ms}ms | response time < {ms}ms |
When you write `they see "Welcome"`, IAL substitutes the parameter and rewrites to `body contains "Welcome"`, which then resolves to `Check(Contains, response.body, "Welcome")`.
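One way parameter capture and substitution could work is to turn the `{param}` template into a regex with named groups (illustrative only; `match_term` and `expand` are invented names, and NTNT's real parser may differ):

```python
# Illustrative parameter matching: a {param} template becomes a regex
# with named groups; captured values are substituted into the expansion.
# match_term/expand are invented names, not NTNT's parser.
import re

def match_term(pattern, phrase):
    """Return captured {param} values if the phrase matches, else None."""
    regex = re.escape(pattern)
    # re.escape turns {text} into \{text\}; rewrite that as a named group.
    regex = re.sub(r"\\{(\w+)\\}", r"(?P<\1>.+?)", regex)
    m = re.fullmatch(regex, phrase)
    return m.groupdict() if m else None

def expand(pattern, expansion, phrase):
    """Substitute captured parameters into the term's expansion."""
    params = match_term(pattern, phrase)
    result = expansion
    for name, value in params.items():
        result = result.replace("{" + name + "}", value)
    return result

print(expand('they see "{text}"', 'body contains "{text}"', 'they see "Welcome"'))
# body contains "Welcome"
```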
## Composing Terms
Terms can reference other terms, allowing you to build complex assertions from simple pieces:
| Term | Means |
|------|-------|
| valid API response | status 2xx, content-type is json |
| user created successfully | valid API response, body has field "id" |
| error response | status 4xx, body contains "error" |
The term "user created successfully" expands through multiple levels:
```
user created successfully
→ valid API response, body has field "id"
→ status 2xx, content-type is json, body has field "id"
→ Check(InRange, response.status, 200-299)
  Check(Contains, response.headers.content-type, "application/json")
  Check(Exists, response.body.id)
```
## Why Term Rewriting?
Term rewriting has several advantages over traditional test frameworks:
**Natural language stays connected to execution.** You write "page loads successfully" and that exact phrase appears in test output. There's no translation layer where meaning gets lost.

**Domain terms are reusable.** Define "user is authenticated" once in your glossary and use it in dozens of scenarios. Change the definition, and all scenarios update automatically.

**The system is extensible without code changes.** New assertions are vocabulary entries, not new primitive implementations. The engine is fixed; all customization happens in the glossary.

**Resolution chains are inspectable.** Intent Studio shows exactly how each assertion expands, making it easy to debug when tests fail.

**AI agents can work at the right level.** Agents write natural language terms that match human requirements, not low-level test code. The system handles the translation.
## A Complete Example
Here's how a scenario flows through IAL:
```
# Intent file
Feature: User Registration

Scenario: New user signs up
  When POST /api/users with {"email": "[email protected]"}
  → user created successfully
  → they see "[email protected]"

# Glossary
| Term | Means |
|------|-------|
| user created successfully | status 201, content-type is json, body has field "id" |
| they see "{text}" | body contains "{text}" |
```
IAL execution:

1. Execute `Http(POST, /api/users, {"email": "[email protected]"})`
2. Response stored in context: `response.status = 201`, `response.body = {"id": 42, "email": "[email protected]"}`
3. Resolve `user created successfully`:
   - `Check(Equals, response.status, 201)` → PASS
   - `Check(Contains, response.headers.content-type, "application/json")` → PASS
   - `Check(Exists, response.body.id)` → PASS
4. Resolve `they see "[email protected]"`:
   - `Check(Contains, response.body, "[email protected]")` → PASS
## Using IAL in Your Projects
IAL is built into NTNT. To use it:
1. Create a `.intent` file with a Glossary section
2. Define your domain terms
3. Write scenarios using your terms and standard terms
4. Run `ntnt intent check` to execute
5. Use `ntnt intent studio` to visualize resolution chains
## What's Next for IAL
IAL's vocabulary will continue to expand. Here's what we're exploring:
- Database assertions — `record is created`, `row exists where email = "..."`, `row count increases by 1`
- Schema verification — `table has column`, `column is type`, `index exists` (essential for agents generating migrations)
- Event verification — `email is sent to`, `event is emitted`, `message is logged`
- Behavioral properties — `when repeated N times, still succeeds`, `makes at most N queries`, `data is unchanged`
- Components — reusable intent blocks with inherent behaviors that cascade verification
- Domain extensions — specialized vocabularies for temporal logic, streaming, spatial, probabilistic (ML), mobile, and hardware testing
The goal: if you can describe what correct behavior looks like in natural language, IAL should be able to verify it.
Check the NTNT repository for the latest implementation status.
