The 5KB AI SDK. Zero dependencies. Just works.
"The missing middle between raw API calls and over-engineered frameworks."
```sh
npm install tinyai
```

```ts
// Vercel AI SDK - 186KB, lots of setup
import { openai } from '@ai-sdk/openai';
import { generateText } from 'ai';

const { text } = await generateText({
  model: openai('gpt-4o-mini'),
  prompt: 'Summarize this text...',
});
```

```ts
// TinyAI - 5KB, one line
import { summarize } from 'tinyai';

const summary = await summarize(text);
```

```ts
import { tinyai } from 'tinyai';

// Configure once
const ai = tinyai({
  provider: 'openai',
  apiKey: process.env.OPENAI_API_KEY // or auto-detects from env
});

// Use anywhere
const summary = await ai.summarize(longArticle);
const sentiment = await ai.classify(review, ['positive', 'negative', 'neutral']);
const data = await ai.extract(email, { name: 'string', date: 'date', amount: 'number' });
const spanish = await ai.translate(text, 'spanish');
const answer = await ai.ask("What's the capital of France?");
const embedding = await ai.embed(text);
```

TinyAI supports completely free AI providers:
Fast, free cloud inference with a generous free-tier rate limit (14K tokens/min).
```ts
import { tinyai } from 'tinyai';

// Get a free API key: https://console.groq.com/keys
const ai = tinyai({
  provider: 'groq',
  apiKey: process.env.GROQ_API_KEY
});

const summary = await ai.summarize(text); // Free!
```

Models: Llama 3.3 70B, Llama 3.1 8B, Mixtral, Gemma 2
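Free-tier rate limits mean you may occasionally hit 429s under load. A minimal retry-with-backoff wrapper (hypothetical helper, not part of TinyAI) can smooth these over:

```typescript
// Retry an async call with exponential backoff. Hypothetical helper,
// shown only as a sketch for coping with free-tier 429s.
async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 500,
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      if (attempt >= retries) throw err;
      // Exponential backoff: 500ms, 1s, 2s, ...
      await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
    }
  }
}

// Usage (ai.summarize as in the examples above):
// const summary = await withRetry(() => ai.summarize(text));
```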
Run AI 100% locally on your machine. No API key needed.
```sh
# 1. Install Ollama: https://ollama.ai
# 2. Pull a model
ollama pull llama3.2
```

```ts
import { tinyai } from 'tinyai';

// No API key needed!
const ai = tinyai({
  provider: 'ollama',
  model: 'llama3.2' // or mistral, codellama, phi, gemma2
});

const summary = await ai.summarize(text); // 100% free, runs locally
```

| Provider | Cost | Speed | Privacy | Setup |
|---|---|---|---|---|
| Ollama | Free | Depends on hardware | 100% local | Install app |
| Groq | Free tier | Very fast | Cloud | Get API key |
| OpenAI | Paid | Fast | Cloud | Get API key |
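All three providers speak an OpenAI-compatible chat API (Groq natively, Ollama via its local `/v1` endpoint), so switching is largely a matter of base URL and default model. Here is a rough sketch of how such a lookup could work — the defaults and shape below are illustrative, not TinyAI's actual internals:

```typescript
// Illustrative provider lookup — not TinyAI's actual internals.
type Provider = 'openai' | 'groq' | 'ollama';

interface ProviderConfig {
  baseURL: string;
  defaultModel: string;
  needsApiKey: boolean;
}

const PROVIDERS: Record<Provider, ProviderConfig> = {
  openai: {
    baseURL: 'https://api.openai.com/v1',
    defaultModel: 'gpt-4o-mini',
    needsApiKey: true,
  },
  groq: {
    baseURL: 'https://api.groq.com/openai/v1', // OpenAI-compatible
    defaultModel: 'llama-3.1-8b-instant',
    needsApiKey: true,
  },
  ollama: {
    baseURL: 'http://localhost:11434/v1', // OpenAI-compatible local endpoint
    defaultModel: 'llama3.2',
    needsApiKey: false,
  },
};

function resolveProvider(name: Provider): ProviderConfig {
  return PROVIDERS[name];
}
```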
```ts
const invoice = await ai.extract(pdfText, {
  vendor: 'string',
  total: 'number',
  items: [{ name: 'string', price: 'number' }],
  dueDate: 'date',
});

// TypeScript knows the exact shape:
// { vendor: string, total: number, items: { name: string, price: number }[], dueDate: Date }
```

```ts
// Just set the OPENAI_API_KEY env var
import { summarize, classify, extract } from 'tinyai';

const summary = await summarize(text); // Works instantly
```

Responses can also be streamed:

```ts
for await (const chunk of ai.stream.summarize(text)) {
  process.stdout.write(chunk);
}
```

Works everywhere: Node.js, Deno, Bun, Cloudflare Workers, Vercel Edge.
```ts
// Cloudflare Worker
export default {
  async fetch(req: Request) {
    const summary = await summarize(await req.text());
    return new Response(summary);
  }
};
```

Creates a TinyAI instance.

```ts
const ai = tinyai({
  provider: 'openai', // 'openai' | 'anthropic' | 'groq' | 'ollama'
  apiKey: '...',      // Optional, uses env vars by default
  model: 'gpt-4o',    // Optional, defaults to gpt-4o-mini
});
```

Summarizes text.
```ts
const summary = await ai.summarize(longText);
const brief = await ai.summarize(text, { maxLength: 50 });
```

Classifies text into one of the provided categories.

```ts
const sentiment = await ai.classify(review, ['positive', 'negative', 'neutral']);
const category = await ai.classify(email, ['urgent', 'normal', 'spam']);
```

Extracts structured data with full TypeScript inference.
```ts
const person = await ai.extract(bio, {
  name: 'string',
  age: 'number',
  skills: ['string'],
  contact: {
    email: 'string',
    phone: 'string',
  },
});
```

Supported types: `'string' | 'number' | 'boolean' | 'date'`

Translates text to another language.

```ts
const spanish = await ai.translate('Hello, world!', 'spanish');
```

Answers questions, optionally with context.
```ts
const answer = await ai.ask("What's 2+2?");
const specific = await ai.ask("What's the total?", { context: invoiceText });
```

Generates embedding vectors.

```ts
const embedding = await ai.embed("Hello, world!");
// Returns number[] with 1536 dimensions (OpenAI)
```

Low-level text generation.

```ts
const poem = await ai.generate("Write a haiku about coding");
const story = await ai.generate("Once upon a time...", {
  system: "You are a creative storyteller"
});
```

| Feature | TinyAI | Vercel AI SDK | LangChain |
|---|---|---|---|
| Bundle size | 5KB | 186KB | 500KB+ |
| Dependencies | 0 | 12+ | 50+ |
| TypeScript inference | Native | Partial | Plugin |
| Setup time | 1 min | 10 min | 30 min |
| Learning curve | None | Medium | Steep |
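The "native TypeScript inference" row refers to mapping schema literals like `'string'` and `'date'` to real TS types at compile time. A self-contained sketch of how such an inference can be typed (simplified to flat schemas — TinyAI's real types presumably also handle arrays and nesting):

```typescript
// Map schema literals to TS types (simplified sketch, flat schemas only).
type FieldType = 'string' | 'number' | 'boolean' | 'date';

type Infer<T extends FieldType> =
  T extends 'string' ? string :
  T extends 'number' ? number :
  T extends 'boolean' ? boolean :
  Date;

// Apply the mapping to every key of a schema object.
type InferSchema<S extends Record<string, FieldType>> = {
  [K in keyof S]: Infer<S[K]>;
};

// The compiler now knows `Person` is { name: string; age: number }.
type Person = InferSchema<{ name: 'string'; age: 'number' }>;

const p: Person = { name: 'Ada', age: 36 };
```

This is why `extract()` needs no separate type annotations: the schema argument doubles as the type definition.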
All primitives work standalone without creating an instance:
```ts
import { summarize, classify, extract } from 'tinyai';

// Just set OPENAI_API_KEY in your environment
const summary = await summarize(text);
const sentiment = await classify(text, ['positive', 'negative']);
const data = await extract(text, { name: 'string' });
```

Stream utilities:

```ts
import { toReadableStream, collectStream } from 'tinyai';

// Convert to a ReadableStream for HTTP responses (inside a request handler)
const stream = toReadableStream(ai.stream.summarize(text));
return new Response(stream);

// Collect a stream into a string
const full = await collectStream(ai.stream.summarize(text));
```

- Core client with OpenAI provider
- `summarize()` - text summarization
- `classify()` - text classification
- `extract()` - structured data extraction
- `translate()` - translation
- `ask()` - Q&A
- `embed()` - embeddings
- TypeScript inference
- Streaming support
- Provider: Groq (free cloud)
- Provider: Ollama (free local)
- Provider: Anthropic
- `pipe()` - composable pipelines
- CLI tool
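The planned `pipe()` would compose primitives into a single call. A hypothetical sketch of such a combinator — the eventual TinyAI API may look quite different:

```typescript
// Hypothetical async pipeline combinator — the planned pipe() API may differ.
type Step<In, Out> = (input: In) => Promise<Out> | Out;

function pipe<A, B>(f: Step<A, B>): Step<A, B>;
function pipe<A, B, C>(f: Step<A, B>, g: Step<B, C>): Step<A, C>;
function pipe(...steps: Step<any, any>[]): Step<any, any> {
  // Feed each step's (awaited) output into the next.
  return async (input) => {
    let value = input;
    for (const step of steps) value = await step(value);
    return value;
  };
}

// e.g. pipe((t) => ai.summarize(t), (s) => ai.translate(s, 'spanish'))
const shout = pipe(
  async (s: string) => s.trim(),
  (s) => s.toUpperCase(),
);
```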
MIT