Optimizers
Static Optimizer
A static (or offline) optimizer adjusts prompt templates and model parameters using algorithms before inference time. The process is exportable and aims to improve the function's overall performance without touching the prompt at inference time. A static optimizer accepts the following parameters:

- `function` (int): The id of the function this optimizer is for.
- `external_id` (str): A unique identifier for the optimizer that can be used to call and retrieve it.
- `train_set` (int): The id of the train dataset this optimizer is trained on.
- `validation_set` (int): The id of the validation dataset this optimizer is validated on.
- `test_set` (int): The id of the test dataset this optimizer is evaluated on.
- `magical_set` (int): The id of the magical dataset; if any of the above datasets is missing, this one is used instead.
- `parameters` (dict): The parameters for the optimizer, with the following keys:
  - `shots` (int): The number of shots to use for the optimizer.
  - `samples` (int): The number of samples to use for the optimizer.
  - `model_keywords` (dict): The model keywords for the optimizer, with the following keys:
    - `temperature` (float): The temperature for the optimizer.
- `schedule` (dict): The schedule for the optimizer, with the following keys:
  - `cron` (str): The cron schedule for the optimizer.
- `api_key` (str): The API key for the optimizer.
- `model` (str): The name of the model to use for the optimizer.
- `base_url` (str): Optional base URL for custom LiteLLM server deployments.
- `optimizer_type` (str): The type of optimizer to use, which for the static optimizer is always `fewshot`.
Example
Example of how to create a static optimizer:
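The exact client call is not shown on this page, so the sketch below simply posts the payload to a hypothetical REST endpoint with `requests`. The field names and the `fewshot` value come from the parameter list above; the URL, ids, schedule, and model name are illustrative assumptions.

```python
import requests

# Hypothetical endpoint -- substitute your deployment's optimizer URL.
OPTIMIZERS_URL = "https://api.example.com/optimizers/"

payload = {
    "function": 42,                        # id of the function to optimize
    "external_id": "sentiment-static-v1",  # used later to call/retrieve the optimizer
    "train_set": 1,
    "validation_set": 2,
    "test_set": 3,
    "magical_set": 4,                      # fallback if any dataset above is missing
    "parameters": {
        "shots": 8,                        # few-shot examples to include in the prompt
        "samples": 32,                     # candidate samples to draw during optimization
        "model_keywords": {"temperature": 0.7},
    },
    "schedule": {"cron": "0 3 * * *"},     # re-run the optimizer daily at 03:00
    "api_key": "<PROVIDER_API_KEY>",
    "model": "gpt-4o-mini",                # illustrative model name
    # "base_url": "https://litellm.internal.example.com",  # optional: custom LiteLLM server
    "optimizer_type": "fewshot",           # always "fewshot" for a static optimizer
}

response = requests.post(OPTIMIZERS_URL, json=payload)
response.raise_for_status()
print(response.json())                     # e.g. the created optimizer's id
```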
Dynamic Optimizer

A dynamic (or online) optimizer adjusts prompt templates and model parameters for a specific task using both static and dynamic techniques. It uses input data to optimize the function's parameters and enhance performance during inference time. A dynamic optimizer accepts the following parameters:

- `function` (int): The id of the function this optimizer is for.
- `external_id` (str): A unique identifier for the optimizer that can be used to call and retrieve it.
- `training_set` (int): The id of the training dataset this optimizer is trained on.
- `validation_set` (int): The id of the validation dataset this optimizer is validated on.
- `test_set` (int): The id of the test dataset this optimizer is evaluated on.
- `magical_set` (int): The id of the magical dataset; if any of the above datasets is missing, this one is used instead.
- `parameters` (dict): The parameters for the optimizer, with the following keys:
  - `shots` (int): The number of shots to use for the optimizer.
  - `model_keywords` (dict): The model keywords for the optimizer, with the following keys:
    - `temperature` (float): The temperature for the optimizer.
- `schedule` (dict): The schedule for the optimizer, with the following keys:
  - `cron` (str): The cron schedule for the optimizer.
- `api_key` (str): The API key for the optimizer.
- `model` (str): The name of the model to use for the optimizer.
- `base_url` (str): Optional base URL for custom LiteLLM server deployments.
- `optimizer_type` (str): The type of optimizer to use, which for the dynamic optimizer is always `dynamic_fewshot`.
Example
Example of how to create a dynamic optimizer:
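As with the static case, this is a minimal sketch against a hypothetical endpoint. Note the two differences dictated by the parameter list above: the training dataset key is `training_set`, there is no `samples` key, and `optimizer_type` is `dynamic_fewshot`. All concrete values are assumptions.

```python
import requests

OPTIMIZERS_URL = "https://api.example.com/optimizers/"  # hypothetical endpoint

payload = {
    "function": 42,
    "external_id": "sentiment-dynamic-v1",
    "training_set": 1,                     # note: "training_set", not "train_set"
    "validation_set": 2,
    "test_set": 3,
    "magical_set": 4,                      # fallback if any dataset above is missing
    "parameters": {
        "shots": 8,                        # few-shot examples selected per request
        "model_keywords": {"temperature": 0.2},
    },
    "schedule": {"cron": "*/30 * * * *"},  # refresh the optimizer every 30 minutes
    "api_key": "<PROVIDER_API_KEY>",
    "model": "gpt-4o-mini",                # illustrative model name
    "optimizer_type": "dynamic_fewshot",   # always "dynamic_fewshot" for a dynamic optimizer
}

response = requests.post(OPTIMIZERS_URL, json=payload)
response.raise_for_status()
print(response.json())
```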
Finetune Optimizer

A finetune optimizer can be either static or dynamic. It fine-tunes a model to optimize the performance of the function. A finetune optimizer accepts the following parameters:

- `function` (int): The id of the function this optimizer is for.
- `external_id` (str): A unique identifier for the optimizer that can be used to call and retrieve it.
- `train_set` (int): The id of the train dataset this optimizer is trained on.
- `validation_set` (int): The id of the validation dataset this optimizer is validated on.
- `test_set` (int): The id of the test dataset this optimizer is evaluated on.
- `magical_set` (int): The id of the magical dataset; if any of the above datasets is missing, this one is used instead.
- `parameters` (dict): The parameters for the optimizer, with the following keys:
  - `shots` (int): The number of shots to use for the optimizer.
  - `samples` (int): The number of samples to use for the optimizer (only for the static optimizer).
  - `hyperparameter` (dict): The hyperparameters for fine-tuning the model, with the following keys:
    - `n_epochs` (int): The number of epochs for fine-tuning.
  - `model_keywords` (dict): The model keywords for the optimizer, with the following keys:
    - `temperature` (float): The temperature for the optimizer.
- `schedule` (dict): The schedule for the optimizer, with the following keys:
  - `cron` (str): The cron schedule for the optimizer.
- `api_key` (str): The API key for the optimizer.
- `model` (str): The name of the model to use for the optimizer.
- `base_url` (str): The base URL for the optimizer.
- `base_fine_tuning_model` (str): The base model for fine-tuning the optimizer. For now only OpenAI models are supported; you can find the list of models here.
- `optimizer_type` (str): The type of optimizer to use; currently `fewshot` and `dynamic_fewshot` are supported.
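Example of how a finetune optimizer might be created, again as a sketch against a hypothetical endpoint. The payload combines the static few-shot fields with the fine-tuning keys from the list above (`hyperparameter.n_epochs`, `base_fine_tuning_model`); the endpoint, ids, and model names are assumptions.

```python
import requests

OPTIMIZERS_URL = "https://api.example.com/optimizers/"  # hypothetical endpoint

payload = {
    "function": 42,
    "external_id": "sentiment-finetune-v1",
    "train_set": 1,
    "validation_set": 2,
    "test_set": 3,
    "magical_set": 4,                      # fallback if any dataset above is missing
    "parameters": {
        "shots": 8,
        "samples": 32,                     # only used with the static ("fewshot") variant
        "hyperparameter": {"n_epochs": 3}, # fine-tuning hyperparameters
        "model_keywords": {"temperature": 0.0},
    },
    "schedule": {"cron": "0 0 * * 0"},     # re-train weekly, Sundays at midnight
    "api_key": "<PROVIDER_API_KEY>",
    "model": "gpt-4o-mini",                          # illustrative model name
    "base_fine_tuning_model": "gpt-4o-mini-2024-07-18",  # an OpenAI fine-tunable base model
    "optimizer_type": "fewshot",           # or "dynamic_fewshot" for the dynamic variant
}

response = requests.post(OPTIMIZERS_URL, json=payload)
response.raise_for_status()
print(response.json())
```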