W&B Weave is an observability and evaluation platform for building reliable LLM applications. Weave helps you understand what your AI application is doing, measure how well it performs, and systematically improve it over time.

Building LLM applications is fundamentally different from traditional software development. LLM outputs are non-deterministic, making debugging harder. Quality is subjective and context-dependent. Small prompt changes can cause unexpected behavior changes. Traditional testing approaches fall short. Weave addresses these challenges by providing:
  • Visibility into every LLM call, input, and output in your application
  • Systematic evaluation to measure performance against curated test cases
  • Version tracking for prompts, models, and data so you can understand what changed
  • Feedback collection to capture human judgments and production signals

The main features of Weave

Traces

Traces track, end to end, how your LLM application arrives at each response.
  • See the inputs and outputs of every application call.
  • See the source documents used to produce a response.
  • See the cost, token count, and latency of each LLM call.
  • Drill down into specific prompts and how answers are produced.
  • Collect feedback on responses from users.
  • In your code, use Weave ops and calls to track what your functions are doing (see the sketch after this list).
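As a minimal sketch of what this looks like in code (the project name and functions below are placeholders), decorating nested functions with weave.op produces a single trace tree with the inputs and outputs of each step:

import weave

weave.init('your-team/your-project-name')

@weave.op
def retrieve_docs(question: str) -> list[str]:
    # Placeholder retrieval step; a real app would query an index or vector store.
    return ["Weave is an observability and evaluation platform for LLM applications."]

@weave.op
def answer(question: str) -> str:
    # Nested ops appear as child calls inside the same trace.
    docs = retrieve_docs(question)
    return f"Based on {len(docs)} document(s): ..."

answer("What is Weave?")

Each call to answer, including its nested retrieve_docs call, is logged with its inputs, outputs, and latency.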
Get started with tracing

Evaluations

Systematically benchmark your application so you know how well it performs and can deploy it to production with confidence.
  • Easily track which model and prompt versions produced which results.
  • Define metrics to evaluate responses using one or more scoring functions (see the sketch after this list).
  • Compare two or more evaluations across multiple metrics, and contrast how specific samples perform.
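A minimal evaluation sketch, assuming a small inline dataset, a placeholder model function, and a simple exact-match scorer. Scorer parameter names follow the dataset columns plus output; check the evaluation docs for the exact signature in your SDK version.

import asyncio
import weave

weave.init('your-team/your-project-name')

examples = [
    {"question": "What is the capital of France?", "expected": "Paris"},
    {"question": "What is 2 + 2?", "expected": "4"},
]

@weave.op
def exact_match(expected: str, output: str) -> dict:
    # Scorer: compares the model output against the expected answer.
    return {"correct": expected.lower() in str(output).lower()}

@weave.op
def my_model(question: str) -> str:
    # Placeholder model; swap in your real prompt and LLM call.
    return "Paris" if "France" in question else "I don't know"

evaluation = weave.Evaluation(dataset=examples, scorers=[exact_match])
asyncio.run(evaluation.evaluate(my_model))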
Build an evaluation pipeline

Version everything

Weave tracks versions of your prompts, datasets, and model configurations. When something breaks, you can see exactly what changed. When something works, you can reproduce it. Learn about versioning
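For example, publishing an object such as a dataset records a version of it; publishing again after a change records a new version that you can fetch later by reference. The names below are illustrative.

import weave

weave.init('your-team/your-project-name')

dataset = weave.Dataset(
    name="support-questions",
    rows=[{"question": "How do I reset my password?", "expected": "..."}],
)
weave.publish(dataset)  # records a version; republishing after edits records a new one

# Later, retrieve the published dataset by reference.
same_dataset = weave.ref("support-questions").get()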

Experiment with prompts and models

Bring your API keys and quickly test prompts and compare responses from various commercial models using the Playground. Experiment in the Weave Playground

Collect feedback

Capture human feedback, annotations, and corrections from production use. Use this data to build better test cases and improve your application. Collect feedback
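A rough sketch of attaching feedback programmatically, assuming the call-level feedback helpers in the Weave Python SDK (the exact return shape of .call() and the helper names can vary between SDK versions):

import weave

weave.init('your-team/your-project-name')

@weave.op
def respond(question: str) -> str:
    return "..."  # placeholder response

# .call() returns both the result and the Call object for the trace.
result, call = respond.call("How do I export my data?")

# Attach human judgments to that specific call.
call.feedback.add_reaction("👍")
call.feedback.add_note("Accurate, but could cite the export docs.")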

Monitor production

Score production traffic with the same scorers you use in evaluation. Set up guardrails to catch issues before they reach users. Set up guardrails and monitors
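One possible pattern, sketched below with an illustrative scorer and a placeholder model call, is to run a guardrail scorer on every production response and substitute a safe answer when it flags a problem:

import weave

weave.init('your-team/your-project-name')

@weave.op
def contains_pii(output: str) -> dict:
    # Toy guardrail scorer; a real one might call a PII detector or an LLM judge.
    flagged = "@" in output  # e.g. treat email-like strings as PII
    return {"flagged": flagged}

@weave.op
def generate_reply(question: str) -> str:
    return "Contact us at support@example.com"  # placeholder LLM output

@weave.op
def guarded_reply(question: str) -> str:
    reply = generate_reply(question)
    if contains_pii(reply)["flagged"]:
        # Block the risky response before it reaches the user.
        return "Sorry, I can't share that. Please contact support."
    return reply

guarded_reply("What's the support email?")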

Get started using Weave

Weave provides SDKs for Python and TypeScript. Both SDKs support tracing, evaluation, datasets, and the core Weave features. Some advanced features like class-based Models and Scorers are currently not available for the Weave TypeScript SDK. To get started using Weave:
  1. Create a Weights & Biases account at https://wandb.ai/site and get your API key from https://wandb.ai/authorize
  2. Install Weave:
pip install weave
  3. In your script, import Weave and initialize a project:
import weave
client = weave.init('your-team/your-project-name')
You’re now ready to start using Weave!
  4. Weave integrates with popular LLM providers and frameworks. When you use a supported integration, Weave automatically traces LLM calls without additional code changes. To trace your own functions, add the one-line weave.op decorator to any function; this works in development and in production.
    # Decorate your function
    @weave.op
    async def my_function():
        ...
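For example, with the OpenAI integration (assuming the openai package is installed and OPENAI_API_KEY is set; the model name below is only an example), any completion made after weave.init is traced automatically:

import weave
from openai import OpenAI

weave.init('your-team/your-project-name')

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": "Hello from Weave!"}],
)
# The completion above is logged as a trace without any decorator.
print(response.choices[0].message.content)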
    
To try it out with a guided tutorial, see Get started with tracing.