Zenbase also supports online optimization, where the model or function is re-optimized for each incoming request rather than once ahead of time. The optimizer adapts in real time based on the current input and recent interactions, so it can handle a wide variety of queries effectively. This makes it ideal for applications where the input context changes frequently or unpredictably.
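One common online technique, and the one used later in this guide (`dynamic_fewshot`), selects the stored examples most similar to each incoming request. A minimal sketch of that selection step, using made-up toy embeddings and plain cosine similarity (real embeddings would come from an embedding model):

```python
import math

# Toy embeddings: in practice these come from an embedding model.
# The vectors here are invented purely for illustration.
examples = {
    "All roses are flowers. / Some flowers are roses.": [0.9, 0.1, 0.0],
    "Water freezes at 0°C. / Water boils at 0°C.": [0.1, 0.9, 0.0],
    "Gold is a precious metal. / Silver is used in jewelry.": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

def pick_shots(query_vec, k=2):
    """Return the k stored examples most similar to the query embedding."""
    ranked = sorted(examples, key=lambda e: cosine(query_vec, examples[e]), reverse=True)
    return ranked[:k]

# A query whose embedding is close to the first two examples.
print(pick_shots([0.8, 0.3, 0.0]))
```

Each request triggers this retrieval, so the few-shot examples in the prompt change with every input.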
Step 1: Install Required Libraries
```shell
pip install pydantic requests
```
Step 2: Set Up API Configuration
```python
import os
import json

import requests

os.environ["OPENAI_API_KEY"] = "YOUR OPENAI API KEY"

BASE_URL = "https://orch.zenbase.ai/api"
API_KEY = "YOUR ZENBASE API KEY"

def api_call(method, endpoint, data=None):
    url = f"{BASE_URL}/{endpoint}"
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Api-Key {API_KEY}",
    }
    response = requests.request(method, url, headers=headers, data=json.dumps(data) if data else None)
    return response
```
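No request is sent in the sketch below; it just shows, with placeholder values, the URL, headers, and JSON body that `api_call` assembles for a given endpoint:

```python
import json

BASE_URL = "https://orch.zenbase.ai/api"
API_KEY = "placeholder-key"  # stand-in for your Zenbase API key

endpoint = "functions/"
data = {"name": "Example"}

# The same pieces api_call builds before calling requests.request.
url = f"{BASE_URL}/{endpoint}"
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Api-Key {API_KEY}",
}
body = json.dumps(data)

print(url)
print(headers["Authorization"])
print(body)
```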
Step 3: Define Your Models
Define the input and output models for your task. For textual entailment, the input is a premise and a hypothesis, and the output is their logical relationship.
```python
from typing import Literal

from pydantic import BaseModel, Field

class TextualEntailmentInput(BaseModel):
    premise: str = Field(..., description="The sentence serving as the context or background information.")
    hypothesis: str = Field(..., description="The sentence to be evaluated in relation to the premise.")

class TextualEntailmentOutput(BaseModel):
    relationship: Literal["entailment", "contradiction", "neutral"] = Field(..., description="The logical relationship between the premise and hypothesis.")
```
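The `Literal` type pins `relationship` to exactly three labels, and pydantic rejects anything else at parse time. The same contract expressed in plain Python, for illustration only:

```python
ALLOWED_RELATIONSHIPS = {"entailment", "contradiction", "neutral"}

def validate_relationship(payload: dict) -> str:
    """Mirror the check the Literal field in TextualEntailmentOutput performs."""
    rel = payload.get("relationship")
    if rel not in ALLOWED_RELATIONSHIPS:
        raise ValueError(f"relationship must be one of {sorted(ALLOWED_RELATIONSHIPS)}, got {rel!r}")
    return rel

print(validate_relationship({"relationship": "entailment"}))
```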
Step 4: Create a Function
Create a function that encapsulates the task’s objective, how the prompt is structured, and the expected inputs and outputs. In this case, we define a function for the textual entailment task.
```python
from textwrap import dedent

function_data = {
    "name": "Textual Entailment Analysis",
    "description": "Analyze the logical relationship between a premise and a hypothesis.",
    "prompt": dedent("""\
        Analyze the logical relationship between the given premise and hypothesis.
        Determine if the relationship is entailment, contradiction, or neutral.

        Definitions:
        - Entailment: The hypothesis logically follows from the premise.
        - Contradiction: The hypothesis contradicts the premise.
        - Neutral: The hypothesis is neither entailed by nor contradicts the premise.
    """),
    "input_schema": TextualEntailmentInput.model_json_schema(),
    "output_schema": TextualEntailmentOutput.model_json_schema(),
    "api_key": os.environ.get("OPENAI_API_KEY"),
    "model": "gpt-4o-mini",
}

function = api_call("POST", "functions/", function_data)
function_id = function.json()["id"]
```
Step 5: Create a Dataset
Create a dataset that includes the input and output examples for your function.
```python
dataset_data = {
    "name": "Textual Entailment Dataset",
    "description": "Dataset for Textual Entailment",
    "embedding_api_key": os.environ.get("OPENAI_API_KEY"),
    "embedding_type": "OPENAI",
    "model_name": "text-embedding-3-small",
}

dataset = api_call("POST", "datasets/", dataset_data)
dataset_id = dataset.json()["id"]
```
Step 6: Add Dataset Items to the Magic Dataset
Add input and output examples to the magic dataset.
```python
dataset_items = [
    {
        "inputs": {"premise": "All roses are flowers.", "hypothesis": "Some flowers are roses."},
        "outputs": {"relationship": "entailment"},
    },
    {
        "inputs": {"premise": "The Earth orbits the Sun.", "hypothesis": "The Moon is made of cheese."},
        "outputs": {"relationship": "neutral"},
    },
    {
        "inputs": {"premise": "Water freezes at 0°C.", "hypothesis": "Water boils at 0°C."},
        "outputs": {"relationship": "contradiction"},
    },
    {
        "inputs": {"premise": "All mammals give birth to live young.", "hypothesis": "Whales give birth to live young."},
        "outputs": {"relationship": "entailment"},
    },
    {
        "inputs": {"premise": "The Eiffel Tower is in Paris.", "hypothesis": "Paris is the capital of Italy."},
        "outputs": {"relationship": "contradiction"},
    },
    {
        "inputs": {"premise": "Gold is a precious metal.", "hypothesis": "Silver is used in jewelry."},
        "outputs": {"relationship": "neutral"},
    },
    {
        "inputs": {"premise": "All squares are rectangles.", "hypothesis": "All rectangles are squares."},
        "outputs": {"relationship": "contradiction"},
    },
    {
        "inputs": {"premise": "The Pacific Ocean is the largest ocean.", "hypothesis": "The Atlantic Ocean exists."},
        "outputs": {"relationship": "neutral"},
    },
    {
        "inputs": {"premise": "Oxygen is necessary for human survival.", "hypothesis": "Humans need to breathe to live."},
        "outputs": {"relationship": "entailment"},
    },
]

for item in dataset_items:
    item["dataset"] = dataset_id
    response = api_call("POST", "dataset-items/", item)
```
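It is worth sanity-checking that the examples cover the labels evenly, since a skewed few-shot pool biases the retrieved shots. The nine items above are perfectly balanced:

```python
from collections import Counter

# Labels of the nine dataset items defined above, in order.
labels = [
    "entailment", "neutral", "contradiction",
    "entailment", "contradiction", "neutral",
    "contradiction", "neutral", "entailment",
]

print(Counter(labels))  # three of each label
```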
Step 7: Configure an Online Optimizer
Create an online optimizer configuration that specifies the function, the dataset, and the parameters for the optimization process. Here we configure an online optimizer for the textual entailment function using the dataset created above.
```python
optimizer_data = {
    "function": function_id,
    "magical_set": dataset_id,
    "parameters": {
        "shots": 5,
    },
    "api_key": os.environ.get("OPENAI_API_KEY"),
    "model": "gpt-4o-mini",
    "optimizer_type": "dynamic_fewshot",
}

optimizer = api_call("POST", "optimizer-configurations/", optimizer_data)
optimizer_id = optimizer.json()["id"]
```
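With `optimizer_type` set to `dynamic_fewshot` and `shots` set to 5, each request is answered with a prompt that includes the five stored examples most similar to the input. Roughly how retrieved shots might be spliced into the prompt (the formatting below is an illustrative assumption, not Zenbase's actual template):

```python
def build_prompt(base_prompt, shots, query):
    """Prepend retrieved (inputs, outputs) example pairs to the task prompt."""
    parts = [base_prompt.strip()]
    for inputs, outputs in shots:
        parts.append(
            f"Premise: {inputs['premise']}\n"
            f"Hypothesis: {inputs['hypothesis']}\n"
            f"Relationship: {outputs['relationship']}"
        )
    # The query goes last, with the relationship left blank for the model.
    parts.append(
        f"Premise: {query['premise']}\n"
        f"Hypothesis: {query['hypothesis']}\n"
        f"Relationship:"
    )
    return "\n\n".join(parts)

shots = [
    ({"premise": "All roses are flowers.", "hypothesis": "Some flowers are roses."},
     {"relationship": "entailment"}),
]
prompt = build_prompt(
    "Classify the relationship.",
    shots,
    {"premise": "Water freezes at 0°C.", "hypothesis": "Water boils at 0°C."},
)
print(prompt)
```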
Step 8: Use the Optimized Function
Use the optimized function with any input and get the optimized output.
```python
test_input = {
    "premise": "B: I do not know. I wonder where he gets it? You know, you must, I think TV is bad. Because they, uh, show all sorts of violence on, A: That and I do not think a lot of parents, I mean, I do not know how it is in the Air Force base. But, uh, I just do not think a lot of people, because of the economy, both need to work, you know. I just do not think a lot of parents are that involved any more.",
    "hypothesis": "a lot of parents are that involved",
}

result = api_call("POST", f"optimizer-configurations/{optimizer_id}/run/", {"inputs": test_input})
print(f"Optimized model prediction: {result.json()}")
```
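Assuming the run endpoint echoes the output schema as JSON, extracting the predicted label might look like this (the payload below is a made-up placeholder, not a recorded API response):

```python
payload = {"relationship": "contradiction"}  # hypothetical response body

relationship = payload["relationship"]
assert relationship in {"entailment", "contradiction", "neutral"}
print(f"Predicted relationship: {relationship}")
```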