Quick Start
Get started with Lambda Labs in under 2 minutes:
```python
from portkey_ai import Portkey

# 1. Install: pip install portkey-ai
# 2. Add @lambda provider in the Model Catalog
# 3. Use it:
portkey = Portkey(api_key="PORTKEY_API_KEY")

response = portkey.chat.completions.create(
    model="@lambda/llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
```
Add Provider in Model Catalog
Before making requests, add Lambda Labs to your Model Catalog:
- Go to Model Catalog → Add Provider
- Select Lambda Labs
- Enter your Lambda API key
- Name your provider (e.g., `lambda`); this slug is how you reference the provider in code, as shown below
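Once the provider is saved, you can reference it in either of two equivalent ways. The sketch below assumes you named the provider `lambda`, as in the step above.

```python
from portkey_ai import Portkey

# Option 1: attach the provider to the client once
portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lambda")
response = portkey.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)

# Option 2: prefix the model name with the provider slug per request
portkey = Portkey(api_key="PORTKEY_API_KEY")
response = portkey.chat.completions.create(
    model="@lambda/llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Hello!"}]
)
```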
Complete Setup Guide
See all setup options and detailed configuration instructions
Lambda Capabilities
Streaming
Stream responses for real-time output:
```python
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lambda")

stream = portkey.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)

for chunk in stream:
    print(chunk.choices[0].delta.content or "", end="", flush=True)
```
Function Calling
Use Lambda’s function calling capabilities:
```python
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lambda")

tools = [{
    "type": "function",
    "function": {
        "name": "getWeather",
        "description": "Get the current weather",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City and state"},
                "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]}
            },
            "required": ["location"]
        }
    }
}]

response = portkey.chat.completions.create(
    model="llama3.1-8b-instruct",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Delhi?"}
    ],
    tools=tools,
    tool_choice="auto"
)
print(response.choices[0].message)
```
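If the model decides to call the tool, the returned message carries `tool_calls` in the OpenAI-compatible format. The sketch below, continuing from the example above, shows one way to execute the call and send the result back for a final answer; `get_weather` is a hypothetical local function you would implement yourself.

```python
import json

message = response.choices[0].message

if message.tool_calls:
    call = message.tool_calls[0]
    args = json.loads(call.function.arguments)

    # get_weather is a placeholder for your own implementation
    tool_result = get_weather(**args)

    # Return the tool output so the model can produce a final answer
    followup = portkey.chat.completions.create(
        model="llama3.1-8b-instruct",
        messages=[
            {"role": "user", "content": "What's the weather in Delhi?"},
            {
                "role": "assistant",
                "tool_calls": [{
                    "id": call.id,
                    "type": "function",
                    "function": {"name": call.function.name, "arguments": call.function.arguments}
                }]
            },
            {"role": "tool", "tool_call_id": call.id, "content": json.dumps(tool_result)}
        ],
        tools=tools
    )
    print(followup.choices[0].message.content)
```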
Supported Endpoints and Parameters
| Endpoint | Supported Parameters |
|---|---|
| `/chat/completions` | messages, max_tokens, temperature, top_p, stream, presence_penalty, frequency_penalty, tools, tool_choice |
| `/completions` | model, prompt, max_tokens, temperature, top_p, n, stream, logprobs, echo, stop, presence_penalty, frequency_penalty, best_of, logit_bias, user, seed, suffix |
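As a quick illustration of the `/completions` route, here is a minimal sketch using the SDK's text-completion method. The model name is reused from the chat examples above as an assumption; check Lambda's model list for models that actually serve text completions.

```python
from portkey_ai import Portkey

portkey = Portkey(api_key="PORTKEY_API_KEY", provider="@lambda")

# Parameters follow the /completions row in the table above
completion = portkey.completions.create(
    model="llama3.1-8b-instruct",
    prompt="Write a haiku about GPUs:",
    max_tokens=64,
    temperature=0.7
)
print(completion.choices[0].text)
```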
Check Lambda’s documentation for more details.
Next Steps
- Gateway Configs: Add fallbacks, load balancing, and more
- Observability: Monitor and trace your Lambda requests
- Prompt Library: Manage and version your prompts
- Metadata: Add custom metadata to requests
For complete SDK documentation, see the SDK Reference.