Echostash SDK
Universal prompt SDK with messages, tools, and 5 provider converters
The Echostash SDK (@goreal-ai/echostash) provides a fluent API for fetching prompts and converting them to any LLM provider format. Available for JavaScript and Python.
Installation
```bash
npm install @goreal-ai/echostash
```
Quick Start
```ts
import { Echostash } from "@goreal-ai/echostash"

const es = new Echostash("https://api.echostash.app", {
  apiKey: process.env.ECHOSTASH_API_KEY
})

// One-liner: fetch → set variables → convert to OpenAI format
const result = await es
  .prompt("welcome-email")
  .vars({ name: "Alice", tier: "pro" })
  .openai()

// result.messages = [{ role: "system", content: "..." }, { role: "user", content: "..." }]
// result.tools = [{ type: "function", function: { name: "...", ... } }]
// result.model = "gpt-4o"
// result.temperature = 0.7
```
Fluent API Chain
The SDK uses a chainable pattern: `prompt()` → `version()` → `vars()` → `provider()`.

- `.prompt(id)` — fetch a prompt by slug or numeric ID.
- `.version('published')` — pin to published, staging, or a specific version number.
- `.vars({...})` — set template variables for rendering.
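The chain works because each non-terminal method returns the builder itself. The sketch below is an illustrative reimplementation of that pattern, not the SDK's actual source; the in-memory `templates` map and `{{name}}` interpolation syntax stand in for the Echostash API.

```typescript
// Minimal sketch of the chainable builder pattern (illustrative, not SDK source).
// A local template map stands in for the remote Echostash API.
type Vars = Record<string, string>

const templates: Record<string, { system: string; user: string }> = {
  "welcome-email": {
    system: "You write onboarding emails.",
    user: "Greet {{name}} on the {{tier}} tier.",
  },
}

// Replace {{key}} placeholders with variable values.
const interpolate = (tpl: string, vars: Vars): string =>
  tpl.replace(/\{\{(\w+)\}\}/g, (_, key) => vars[key] ?? "")

class PromptBuilder {
  private variables: Vars = {}
  constructor(private readonly id: string) {}

  // Non-terminal setter: returns `this`, which is what makes chaining work.
  vars(v: Vars): this {
    this.variables = { ...this.variables, ...v }
    return this
  }

  // Terminal method: resolve the template into an OpenAI-style payload.
  openai() {
    const tpl = templates[this.id]
    return {
      messages: [
        { role: "system", content: interpolate(tpl.system, this.variables) },
        { role: "user", content: interpolate(tpl.user, this.variables) },
      ],
    }
  }
}

const prompt = (id: string) => new PromptBuilder(id)

const result = prompt("welcome-email").vars({ name: "Alice", tier: "pro" }).openai()
console.log(result.messages[1].content) // → "Greet Alice on the pro tier."
```

Returning `this` from every setter keeps the chain type-safe even under subclassing, which is why fluent APIs in TypeScript commonly declare the return type as `this` rather than the class name.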
Provider Converters (Terminal Methods)
Each converter has two overloads. Called with no arguments, it returns the full prompt result (messages, tools, model config), ready to spread into the provider's API call. Called with options, it returns a single message in that provider's format.
| Method | Provider | No-arg Result |
|---|---|---|
| `.openai()` | OpenAI | `{ messages, tools?, model?, temperature?, max_tokens? }` |
| `.anthropic()` | Anthropic | `{ system?, messages, tools?, model?, max_tokens? }` |
| `.google()` | Google Gemini | `{ contents, tools?, generationConfig? }` |
| `.vercel()` | Vercel AI SDK | `{ messages, tools? }` |
| `.langchain()` | LangChain | `{ messages, tools? }` |
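The table reflects a real difference between provider wire formats: OpenAI keeps the system prompt inline in the messages array, while Anthropic hoists it into a top-level `system` field. The sketch below illustrates that mapping with plain functions; it is not the SDK's internal converter code, just an assumed shape for demonstration.

```typescript
// Illustrative mapping from a neutral message list to two provider shapes.
// Not the SDK's internals — a sketch of why converter outputs differ.
type Msg = { role: "system" | "user" | "assistant"; content: string }

const neutral: Msg[] = [
  { role: "system", content: "You are a support agent." },
  { role: "user", content: "Where is my order?" },
]

// OpenAI-style: the system prompt stays inside the messages array.
function toOpenAI(msgs: Msg[]) {
  return { messages: msgs }
}

// Anthropic-style: the system prompt moves to a top-level `system` field,
// and only non-system turns remain in `messages`.
function toAnthropic(msgs: Msg[]) {
  const system = msgs.find((m) => m.role === "system")?.content
  const messages = msgs.filter((m) => m.role !== "system")
  return { system, messages }
}

console.log(toOpenAI(neutral).messages.length)    // 2
console.log(toAnthropic(neutral).messages.length) // 1
```

Because the converters absorb these differences, the same stored prompt can be spread into whichever provider client the application uses.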
Server-Side Render
```ts
// Render on the server — returns resolved content
const rendered = await es.prompt("greeting").render({ name: "Alice" })
// rendered.content = "Hello Alice! Welcome to our platform."
```
Batch Render (up to 50 prompts)
```ts
const batch = await es.batchRender([
  { promptId: 1, variables: { name: "Alice" } },
  { promptId: 2, variables: { name: "Bob" } },
])

// batch.results = { "1": { content: "...", versionNo: 3 }, "2": { content: "..." } }
// batch.successCount = 2
```
Messages & Tools
```ts
const loaded = await es.prompt("support-agent").vars({ tier: "pro" }).get()

loaded.messages   // [{ role: "system", content: [...] }, { role: "user", content: [...] }]
loaded.tools      // [{ type: "function", function: { name: "search", ... } }]
loaded.renderMeta // { provider: "openai", model: "gpt-4o", temperature: 0.7 }
```
Rate Limiting
The SDK automatically retries 429 responses (up to 3 times) with exponential backoff. `EchostashError` exposes `isRateLimited` and `retryAfter` for custom handling.
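Retry-with-exponential-backoff can be sketched in a few lines. This is an illustrative implementation of the behavior described above, not the SDK's actual code; the function name, delay values, and fake endpoint are assumptions for the demo.

```typescript
// Sketch of retry-on-429 with exponential backoff (illustrative, not SDK source).
async function fetchWithRetry(
  doFetch: () => Promise<{ status: number }>,
  maxRetries = 3,
  baseDelayMs = 100,
): Promise<{ status: number }> {
  for (let attempt = 0; ; attempt++) {
    const res = await doFetch()
    // Return on success, on any non-rate-limit status, or once retries run out.
    if (res.status !== 429 || attempt >= maxRetries) return res
    // Exponential backoff: baseDelay, 2x, 4x, ...
    const delay = baseDelayMs * 2 ** attempt
    await new Promise((resolve) => setTimeout(resolve, delay))
  }
}

// Demo: a fake endpoint that rate-limits twice, then succeeds.
let calls = 0
const fake = async () => ({ status: ++calls <= 2 ? 429 : 200 })

const res = await fetchWithRetry(fake, 3, 1)
console.log(res.status, calls) // 200 3
```

A production version would also honor a `Retry-After` header when present (which is what `retryAfter` on `EchostashError` surfaces for custom handling) rather than relying on the computed backoff alone.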