Introduction
What is Echostash and why use it
What is Echostash?
Echostash is a prompt engineering platform for teams building AI applications. Store prompts in a versioned library, structure them as role-based messages with tools and schemas, render them dynamically with Echo DSL, test with automated evals, and deploy with confidence through quality gates.
Key Features
- Prompt Library - Store, version, and organize prompts across projects
- Messages & Tools - First-class support for multi-turn conversations and function calling. Prompts are structured as role-based messages (system/user/assistant) with optional tool definitions
- Dynamic Templating - Powered by Echo DSL: variables, conditionals, sections, AI gates. Only send relevant context to the LLM
- Structured Output - Define JSON schemas directly in prompts for type-safe LLM responses
- 5 Provider Converters - One prompt, any LLM. Convert to OpenAI, Anthropic, Google, Vercel AI, or LangChain format
- Server-side Rendering - Render prompts on the server with variables, get back resolved messages + tools + model config
- Playground - Interactive editor with visual mode, code mode, render preview, and LLM run with tool mocking
- Evals - 16 assertion types, test suites, A/B testing, quality gates
- Version Control - Commit, diff, publish, deploy to named targets
- API + SDKs - JavaScript and Python SDKs with fluent API
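To make the Structured Output feature above concrete, here is a minimal sketch of defining a JSON Schema for a type-safe LLM response and parsing a conforming reply. The schema format is standard JSON Schema; the `sentiment_schema` name and the exact way Echostash embeds schemas in prompts are illustrative assumptions, not taken from this document.

```python
import json

# A standard JSON Schema describing the shape we want the LLM response to take
# (illustrative example, not an Echostash-specific format)
sentiment_schema = {
    "type": "object",
    "properties": {
        "sentiment": {
            "type": "string",
            "enum": ["positive", "neutral", "negative"],
        },
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment", "confidence"],
}

# A response that conforms to the schema can be parsed directly into typed fields
raw = '{"sentiment": "positive", "confidence": 0.92}'
result = json.loads(raw)
print(result["sentiment"], result["confidence"])
```

Because the schema constrains the model's output, downstream code can rely on the presence and types of `sentiment` and `confidence` instead of parsing free-form text.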
Workflow
A typical workflow looks like this:
- Write your prompt in the Playground with visual or code mode, using Echo DSL for dynamic content
- Test with evals - define assertions, run test suites, compare versions
- Deploy through quality gates to named targets (dev, staging, production)
- Fetch with the SDK using the fluent API:
`es.prompt("id").vars({...}).openai()` - convert to your provider's format and send the result to the LLM
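The fluent call above can be sketched as a chainable builder. This is a mock in Python (one of the two SDK languages) to illustrate the pattern only: the `Echostash` and `PromptHandle` classes, the constructor, and the template format here are assumptions, not the real SDK's API.

```python
class PromptHandle:
    """Illustrative stand-in for an SDK prompt handle (not the real API)."""

    def __init__(self, template_messages):
        self._messages = template_messages
        self._vars = {}

    def vars(self, **values):
        self._vars.update(values)
        return self  # returning self is what makes the calls chainable

    def openai(self):
        # Substitute variables and emit OpenAI-style role/content messages
        return [
            {"role": role, "content": content.format(**self._vars)}
            for role, content in self._messages
        ]


class Echostash:
    """Illustrative stand-in for the SDK client (not the real API)."""

    def __init__(self, library):
        self._library = library

    def prompt(self, prompt_id):
        return PromptHandle(self._library[prompt_id])


# A tiny in-memory "prompt library" with one role-based prompt
es = Echostash({
    "greet": [
        ("system", "You are a helpful assistant."),
        ("user", "Say hello to {name}."),
    ]
})

# The fluent chain: fetch, bind variables, convert to provider format
messages = es.prompt("greet").vars(name="Ada").openai()
print(messages)
```

Each method returns an object the next call operates on, so fetching, variable binding, and provider conversion read as a single expression.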
Getting Started
Ready to ship? Check out the Quick Start to go from signup to your first eval run in under five minutes.