datoon
LLM Data Optimization

Convert JSON to TOON only when it really saves tokens

datoon evaluates payload shape, depth, and expected savings before converting. It keeps raw JSON when TOON is not a net win, and returns a transparent conversion report for every run.

01

Why teams use datoon

Built for practical AI/data pipelines where prompt size, readability, and deterministic behavior all matter.


Decision-first conversion

Auto mode checks uniform tabular structure, payload depth, and minimum savings before converting to TOON.
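The auto-mode decision can be sketched as a heuristic like the one below. This is a minimal illustration, not datoon's actual implementation: the function names, the ~4-characters-per-token estimate, and the 5% savings threshold are all assumptions.

```python
import json

def is_uniform_table(items):
    """True when every element is a flat dict with identical keys."""
    if not items or not all(isinstance(x, dict) for x in items):
        return False
    keys = set(items[0])
    return all(
        set(row) == keys
        and not any(isinstance(v, (dict, list)) for v in row.values())
        for row in items
    )

def should_convert(payload, min_savings=0.05):
    """Heuristic: convert only when TOON is likely a net token win."""
    arrays = [v for v in payload.values() if isinstance(v, list)]
    # Only uniform arrays of flat records compress well in TOON.
    if not arrays or not all(is_uniform_table(a) for a in arrays):
        return False
    # Crude token estimates (~4 chars/token): JSON repeats field names
    # on every row, while TOON states them once per array.
    json_tokens = len(json.dumps(payload)) // 4
    header_chars = sum(len(",".join(a[0])) for a in arrays)
    row_chars = sum(len(",".join(map(str, r.values()))) for a in arrays for r in a)
    toon_tokens = (header_chars + row_chars) // 4
    savings = 1 - toon_tokens / max(json_tokens, 1)
    return savings >= min_savings
```

Deeply nested or non-uniform payloads fail the structure check and stay as JSON, which is the "decision-first" behavior described above.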


Transparent reporting

Every run emits a report with token estimates, conversion decision, reason, and savings ratio.
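A report of that shape might look like the following. The field names here are illustrative assumptions based on the description above; check an actual report for the exact schema.

```json
{
  "decision": "converted",
  "reason": "uniform tabular array, savings above threshold",
  "json_tokens_est": 120,
  "toon_tokens_est": 86,
  "savings_ratio": 0.28
}
```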


CLI and plugin ready

Use it as a Python CLI in CI/pipelines, or install it as a Claude Code plugin via the marketplace.

02

Benchmark snapshot

The local benchmark suite compares a JSON baseline, forced TOON conversion, and datoon auto mode across five reference payloads.

Metric                       Value
Forced TOON avg reduction    26.8%
Auto mode avg reduction      28.1%
Auto decisions               3 / 5 payloads converted
Failed forced conversions    0 / 5

Reproduce locally with PYTHONPATH=src python benchmarks/run.py.

03

Install and run

uv sync

# stdin usage
echo '{"users":[{"id":1,"name":"Ada"}]}' | datoon --report-stdout

# file usage
datoon ./input.json -o ./output.toon --report ./report.json
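For the stdin example above, a successful conversion of a uniform array produces TOON output roughly like this (tabular form per the TOON format; verify against the @toon-format/cli output):

```
users[1]{id,name}:
  1,Ada
```

The field names appear once in the header row instead of being repeated in every record, which is where the token savings come from.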

Requires Python 3.12+ and Node.js; TOON conversion runs through npx --yes @toon-format/cli@2.