Give your agent eyes — and an eraser
Your AI agent can write code, search the web, and manage files. Now it can edit images too. Peelaway's API is designed for autonomous agent workflows — discoverable, predictable, and simple enough for any agent to orchestrate.
Built for autonomous workflows
Three layers of agent discoverability — plus an API simple enough that any agent can use it without human guidance.
SKILL.md — agent skill definition
A structured skill file that tells coding agents exactly how to use Peelaway: when to invoke it, the 3-step workflow, request/response formats, and error handling. Drop it into Claude Code, Cline, or any agent that supports skill files.
llms.txt — LLM-readable service description
A plain-text service overview optimized for LLM consumption. Describes what Peelaway does, lists all endpoints, and links to the OpenAPI spec. Discoverable at /llms.txt.
OpenAPI spec — machine-readable API contract
Full OpenAPI 3.0 schema at /openapi.json. Any agent framework with HTTP tool support can auto-generate client code or use it for function calling.
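As a sketch of what "function calling from the spec" can look like: the snippet below turns OpenAPI paths into tool definitions. The inline `spec` dict is an illustrative stand-in for the real `/openapi.json` (fetch the actual spec from the service); the `tools_from_openapi` helper and the summaries are assumptions, not part of Peelaway's API.

```python
import json

# Illustrative excerpt of what /openapi.json might contain —
# fetch the real spec from the service; these summaries are assumed.
spec = {
    "paths": {
        "/process": {"post": {"summary": "Submit an image for editing"}},
        "/status": {"get": {"summary": "Check job status"}},
        "/result": {"get": {"summary": "Download the edited image"}},
    }
}

def tools_from_openapi(spec: dict) -> list[dict]:
    """Turn each path+method pair into a function-calling tool definition."""
    tools = []
    for path, methods in spec["paths"].items():
        for method, op in methods.items():
            tools.append({
                "name": f"{method}_{path.strip('/')}",
                "description": op.get("summary", ""),
            })
    return tools

for tool in tools_from_openapi(spec):
    print(tool["name"])  # post_process, get_status, get_result
```

Any framework that accepts tool definitions in this shape can then route model calls straight to the three endpoints.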
Three-endpoint async API
Submit (POST /process), poll (GET /status), download (GET /result). Simple enough for any agent to orchestrate without complex state management.
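The whole submit → poll → download loop fits in a few lines. This is a minimal sketch: the `submit`, `status`, and `result` helpers below are local stubs standing in for real HTTP calls to the three endpoints, and the canned status sequence is invented for illustration.

```python
import time

# Local stubs standing in for HTTP calls to the three endpoints.
# A real client would POST/GET the service and parse its JSON.
_statuses = iter(["pending", "processing", "done"])

def submit(image_b64: str) -> str:
    return "abc123"              # POST /process → {"job_id": "abc123"}

def status(job_id: str) -> str:
    return next(_statuses)       # GET /status?job_id=...

def result(job_id: str) -> str:
    return "<base64>"            # GET /result?job_id=...

def edit_image(image_b64: str, poll_interval: float = 0.0) -> str:
    """Submit a job, poll until it finishes, return the edited image."""
    job_id = submit(image_b64)
    while True:
        s = status(job_id)
        if s == "done":
            return result(job_id)
        if s == "error":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(poll_interval)  # back off between polls

edited = edit_image("<base64>")
print(edited)  # → <base64>
```

No job table, no webhook handler: the only state an agent carries between calls is the `job_id` string.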
Base64 in, base64 out
Images are sent and received as base64-encoded strings in JSON. No multipart uploads, no file handling — just JSON payloads that agents handle natively.
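Encoding and decoding is standard-library work. A round-trip sketch (the inline `image_bytes` are fake placeholder data; in practice you would read them from a file):

```python
import base64
import json

# Fake placeholder bytes — in practice, read these from an image file.
image_bytes = b"\x89PNG...fake image data"

# Build the request payload: base64-encode the bytes into a JSON string.
payload = json.dumps({"image": base64.b64encode(image_bytes).decode("ascii")})

# Decode a response the same way (here the payload stands in for the
# service's JSON reply, which has the same {"image": "<base64>"} shape).
response = json.loads(payload)
decoded = base64.b64decode(response["image"])
assert decoded == image_bytes
print("round-trip ok")
```

Because the payload is plain JSON, any agent that can emit a tool call can construct it without a file system or an HTTP multipart library.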
Scales to zero, pay only for what you use
Serverless backend — no cost when idle, and a fraction of professional editing costs per image. Agents can call it on-demand without provisioning or managing infrastructure.
APIs that agents actually understand
Most image editing APIs are designed for human developers reading documentation pages. They have complex auth flows, multipart uploads, webhook callbacks, and SDK-specific patterns that agents struggle with.
Peelaway is agent-first. Base64 JSON in, base64 JSON out. Three endpoints, deterministic status codes, no SDK required. Plus /SKILL.md and /llms.txt that tell agents exactly what the service does and how to use it — before they make a single API call.
Currently open for agent use. Authentication will be added later, but the API contract will stay the same.
Submit → poll → download. Processing takes 30s–2min. Agents can do other work while waiting, or poll in a loop.
JSON responses with consistent schemas. Status is always pending, processing, done, or error. No ambiguous outputs for agents to parse.
400 for bad input, 404 for unknown job, 202 for not-ready-yet. Agents can branch on status codes without parsing error messages.
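That branching can be expressed as a small dispatch table. The sketch below is a hypothetical handler, not Peelaway's own client code; the return labels (`retry`, `fix_input`, `abort`) are invented names for the actions an agent might take.

```python
# Hypothetical dispatch on the documented status codes.
# The action labels returned here are illustrative, not part of the API.
def handle_status_response(code: int, body: dict) -> str:
    if code == 202:
        return "retry"         # accepted but not ready yet — poll again
    if code == 400:
        return "fix_input"     # bad input — re-encode or resize the image
    if code == 404:
        return "abort"         # unknown job_id — resubmit from scratch
    if code == 200:
        return body["status"]  # pending | processing | done | error
    raise RuntimeError(f"unexpected status code {code}")

print(handle_status_response(202, {}))                   # → retry
print(handle_status_response(200, {"status": "done"}))   # → done
```

Every branch keys on the integer code alone, so the agent never has to parse a human-readable error message.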
Works with every agent ecosystem
Any AI tool that can make HTTP requests can use Peelaway. Here's what's already compatible.
Coding Agents & AI IDEs
Agent Frameworks
MCP-Compatible Clients
LLM Providers & Platforms
Automation & Workflow
Agent integration in 30 seconds
Discover, submit, poll, download. That's the whole workflow.
# LLM-readable overview
curl https://jakeloo--peelaway-web.modal.run/llms.txt
# Structured skill definition
curl https://jakeloo--peelaway-web.modal.run/SKILL.md
# Machine-readable spec
curl https://jakeloo--peelaway-web.modal.run/openapi.json

# Submit
POST /process
{"image": "<base64>"}
# → {"job_id": "abc123"}
# Poll
GET /status?job_id=abc123
# → {"status": "done"}
# Download
GET /result?job_id=abc123
# → {"image": "<base64>"}