The only fully local production-grade Super SDK that provides a simple, unified, and powerful interface for calling more than 200 LLMs.
- Production-ready and used by enterprises.
- Fully local and NOT a proxy. You can deploy it anywhere.
- Comes with batching, retries, caching, callbacks, and OpenTelemetry support.
- Supports custom plugins for caching, logging, HTTP client, and more. You can use it like LEGOs and make it work with your infrastructure.
- Supports plug-and-play providers. You can run fully custom providers and still leverage all the benefits of Adaline Gateway.
- 🔧 Strongly typed in TypeScript
- 📦 Isomorphic - works everywhere
- 🔒 100% local and private and NOT a proxy
- 🛠️ Tool calling support across all compatible LLMs
- 📊 Batching for all requests with custom queue support
- 🔄 Automatic retries with exponential backoff
- ⏳ Caching with custom cache plug-in support (see the sketch after this list)
- 📝 Callbacks for full custom instrumentation and hooks
- 🔍 OpenTelemetry to plug tracing into your existing infrastructure
- 🔌 Plug-and-play custom providers for local and custom models
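The exact plugin interfaces ship with the Gateway packages; as a rough illustration of the plug-in idea, a custom cache could be as small as this (the `CacheLike` interface here is a hypothetical stand-in, not the published API):

```typescript
// Hypothetical sketch of a cache plug-in -- the interface name and
// method signatures are assumptions, not the published Adaline API.
interface CacheLike {
  get(key: string): Promise<string | undefined>;
  set(key: string, value: string): Promise<void>;
}

// A minimal in-memory implementation that a cache plug-in slot could
// accept; swap in Redis, disk, etc. to match your infrastructure.
class InMemoryCache implements CacheLike {
  private store = new Map<string, string>();

  async get(key: string): Promise<string | undefined> {
    return this.store.get(key);
  }

  async set(key: string, value: string): Promise<void> {
    this.store.set(key, value);
  }
}
```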
```bash
npm install @adaline/gateway @adaline/types @adaline/openai @adaline/anthropic
```
`Gateway` object maintains the queue, cache, and callbacks, and implements OpenTelemetry. You should use the same `Gateway` object everywhere to get the benefits of all the features.
```typescript
import { Gateway } from "@adaline/gateway";

const gateway = new Gateway();
```
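One simple way to guarantee that is to construct it once in a shared module and import that instance everywhere:

```typescript
// gateway.ts -- construct the Gateway once so batching, caching,
// callbacks, and telemetry all flow through the same instance.
import { Gateway } from "@adaline/gateway";

export const gateway = new Gateway();
```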
`Provider` object stores the types and information about all the models within that provider. It exposes the list of all chat models via `openai.chatModelLiterals()` and all embedding models via `openai.embeddingModelLiterals()`.
```typescript
import { Anthropic } from "@adaline/anthropic";
import { OpenAI } from "@adaline/openai";

const openai = new OpenAI();
const anthropic = new Anthropic();
```
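For example, you can list the model names a provider ships types for:

```typescript
// Print the chat and embedding model names known to the provider.
console.log(openai.chatModelLiterals());
console.log(openai.embeddingModelLiterals());
```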
`Model` object enforces the types for roles, config, and the different modalities supported by that model. You can also provide other keys like `baseUrl`, `organization`, etc.
`Model` object also exposes functions:
- `transformModelRequest` - takes a request formatted for the provider and converts it into the Adaline super-types.
- `getStreamChatData` - used to compose other provider calls; for example, calling an Anthropic model from Bedrock.
- and many more that enable deep composability and provide runtime validations.
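A rough sketch of the first of these follows; the argument and return shapes of `transformModelRequest` are assumptions here, not the published signature:

```typescript
import { OpenAI } from "@adaline/openai";

// Hedged sketch: convert a request shaped for OpenAI's own API into
// Adaline's provider-agnostic super-types. The argument and return
// shapes are assumptions, not the published signature.
const model = new OpenAI().chatModel({
  modelName: "gpt-4o",
  apiKey: "your-api-key",
});

const adalineRequest = model.transformModelRequest({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello!" }],
});
```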
Create model objects for each provider:

```typescript
const gpt4o = openai.chatModel({
  modelName: "gpt-4o",
  apiKey: "your-api-key",
});

const haiku = anthropic.chatModel({
  modelName: "claude-3-haiku-20240307",
  apiKey: "your-api-key",
});
```
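Since the factory also accepts keys like `baseUrl`, pointing a model at an OpenAI-compatible self-hosted endpoint looks like this (the URL is a placeholder):

```typescript
// Route requests to an OpenAI-compatible endpoint via `baseUrl`.
// The URL below is a placeholder, not a real service.
const localGpt = openai.chatModel({
  modelName: "gpt-4o",
  apiKey: "your-api-key",
  baseUrl: "http://localhost:8080/v1",
});
```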
`Config` object provides type checks and also accepts generics that can be used to add max, min, and other validation checks per model.
```typescript
import { Config } from "@adaline/types";

const config = Config().parse({
  maxTokens: 200,
  temperature: 0.9,
});
```
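Because `Config().parse(...)` validates at runtime, out-of-range values are rejected instead of being forwarded to the provider; for instance (the exact bounds and error shape are assumptions, not confirmed here):

```typescript
// Hedged: assumes the schema rejects clearly out-of-range values;
// the exact bounds and error type are not confirmed here.
try {
  Config().parse({ temperature: 42 });
} catch (err) {
  console.error("invalid config:", err);
}
```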
`Message` object is the Adaline super-type that supports all the roles and modalities across 200+ LLMs.
```typescript
import { MessageType } from "@adaline/types";

const messages: MessageType[] = [
  {
    role: "system",
    content: [{
      modality: "text",
      value: "You are a helpful assistant. You are extremely concise.",
    }],
  },
  {
    role: "user",
    content: [{
      modality: "text",
      value: `What is ${Math.floor(Math.random() * 100) + 1} + ${Math.floor(Math.random() * 100) + 1}?`,
    }],
  },
];
```
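Other roles follow the same shape; for example, a prior assistant turn can be represented like any other message:

```typescript
// A prior assistant turn uses the same role/content/modality structure.
const assistantTurn: MessageType = {
  role: "assistant",
  content: [{ modality: "text", value: "The answer is 42." }],
};
```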
Stream a chat response through the gateway:

```typescript
const stream = await gateway.streamChat({
  model: gpt4o,
  config: config,
  messages: messages,
});
```
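The resolved stream can then be consumed as it arrives; a hedged sketch, assuming it is an async iterable of chunks:

```typescript
// Hedged: assumes `stream` is an async iterable of response chunks.
for await (const chunk of stream) {
  console.log(chunk);
}
```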
Or get the complete response in one call:

```typescript
const response = await gateway.completeChat({
  model: haiku,
  config: config,
  messages: messages,
});
```
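Because both calls share the same interface, swapping providers is just a matter of passing a different model object; a small sketch reusing the objects defined above:

```typescript
// Same config and messages against both providers -- only the
// `model` argument changes, thanks to the unified interface.
for (const model of [gpt4o, haiku]) {
  const result = await gateway.completeChat({ model, config, messages });
  console.log(result);
}
```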