What do we do?
At Zenbase AI, we’re building an ecosystem that automates the workflow of creating LLM applications: building functions with structured output across LLM models, optimizing those functions with automated prompt engineering algorithms, and improving performance over time by incorporating user feedback to close your LLM Data Flywheel.
Who are we?
Our founders are core contributors to DSPy and lifelong builders. We’re on a mission to help people use DSPy and its algorithms in production, reducing the manual work in the LLM development lifecycle. That work is error-prone and time-consuming, so we aim to make it easier.
What do we work on?
Our main focus is open-source development, alongside managed services that provide a better experience. We contribute to DSPy and are committed to its longevity. We’ve also built zenbase-ai/core, an open-source library that extracts DSPy’s optimizer layer so it can be used more easily with existing frameworks. Additionally, we offer hosted and on-premise services to help developers scale and deploy their LLM apps.
Open Source Library
The zenbase-ai/core library lets developers use DSPy’s optimizer layer and experiment locally without rewriting their entire application in DSPy. It’s an excellent option for anyone who wants to try DSPy optimizers quickly.
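For orientation, here is a minimal sketch of the kind of DSPy optimizer run that zenbase-ai/core separates out for reuse. The model name, training data, and metric are illustrative assumptions, and the LM configuration call depends on your DSPy version.

```python
import dspy
from dspy.teleprompt import BootstrapFewShot

# Configure the LM (the exact setup call varies by DSPy version;
# the model name here is just an example).
lm = dspy.LM("openai/gpt-4o-mini")
dspy.settings.configure(lm=lm)

# A tiny program: classify a support ticket into a category.
classify = dspy.Predict("ticket -> category")

# A handful of labeled examples (illustrative data).
trainset = [
    dspy.Example(ticket="My invoice is wrong", category="billing").with_inputs("ticket"),
    dspy.Example(ticket="The app crashes on login", category="bug").with_inputs("ticket"),
]

# Exact-match metric used to score candidate prompts.
def metric(example, prediction, trace=None):
    return example.category == prediction.category

# BootstrapFewShot searches for few-shot demonstrations that improve the metric.
optimized = BootstrapFewShot(metric=metric).compile(classify, trainset=trainset)
print(optimized(ticket="I was charged twice").category)
```

The same idea, compiling a program against a metric and a small training set, is what our hosted service runs and schedules for you.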
API Cloud Hosting
Our API cloud hosting is the service documented here. It’s a hosted version of our library with enhancements that make it easier for developers to use, scale, and operate. It handles the lower layers for you: running and scheduling optimization and closing the loop in building your LLM functions.
On-Prem Service
We offer this for clients who require more privacy and don’t want to share their data with us. We can deploy the same stack inside your own infrastructure to meet your compliance requirements.
FAQs
How do you find the best prompt & model?
We use AI to continuously experiment with prompts and models to determine the most effective way to execute your function. We combine algorithms developed in-house with algorithms from our team’s DSPy research. Automated prompt engineering combined with expert prompting regularly yields double-digit percentage improvements over expert prompting alone.
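As a rough, generic illustration (not Zenbase’s actual search algorithm), finding the best prompt and model boils down to scoring candidate combinations against a metric on labeled examples and keeping the winner. The prompts, model identifiers, and scoring stub below are all hypothetical.

```python
import itertools
import random

# Hypothetical candidates; a real optimizer generates and refines prompts
# automatically rather than enumerating a fixed list.
PROMPTS = [
    "Classify this support ticket as billing, bug, or other:",
    "You are a support triager. Return only the ticket's category:",
]
MODELS = ["model-a", "model-b"]  # placeholder model identifiers

def score(prompt: str, model: str, dataset: list[dict]) -> float:
    """Stand-in for the real metric: in practice this calls the LLM on each
    example and measures accuracy against the labeled category."""
    random.seed(hash((prompt, model)) % 2**32)  # deterministic fake score
    return random.random()

def best_configuration(dataset: list[dict]) -> tuple[str, str]:
    # Evaluate every (prompt, model) pair and keep the highest-scoring one.
    return max(itertools.product(PROMPTS, MODELS),
               key=lambda pm: score(pm[0], pm[1], dataset))

if __name__ == "__main__":
    data = [{"ticket": "I was charged twice", "category": "billing"}]
    print(best_configuration(data))
```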
Do you offer an on-premise solution / SOC 2 / HIPAA compliance?
Yes, schedule a demo to learn more.
Can we start with a small POC and adopt Zenbase incrementally?
Absolutely. You can refactor parts of your codebase to use Zenbase one piece at a time.
Can I export the prompt & model?
Yes! Our backend lets you export the prompt and model configuration at any time.
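To give a concrete sense of what that means, here is a hypothetical shape for an exported configuration; the field names and values are illustrative assumptions, not Zenbase’s actual export schema.

```python
# Hypothetical exported prompt + model configuration.
# Field names and values are illustrative, not Zenbase's actual schema.
exported_config = {
    "model": {"provider": "openai", "name": "gpt-4o-mini", "temperature": 0.0},
    "prompt": {
        "system": "You are a support triager. Return only the ticket's category.",
        "few_shot_examples": [
            {"ticket": "My invoice is wrong", "category": "billing"},
            {"ticket": "The app crashes on login", "category": "bug"},
        ],
    },
}
```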