LLM observability dashboard (beta)


The LLM observability dashboard provides an overview of your LLM usage and performance. It includes insights on:

  • Users
  • Traces
  • Costs
  • Generations
  • Latency

[Image: LLM observability dashboard]

It can be filtered like any dashboard in PostHog, including by event, person, and group properties. Our observability SDKs autocapture especially useful properties like provider, tokens, cost, model, and more.
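As a rough sketch of what autocapture records, here is a hand-built property dict for a single generation event. The `$ai_*` property names below follow PostHog's AI event convention, but treat the exact keys as assumptions and rely on the SDK's autocapture rather than building these by hand:

```python
# Sketch: the kind of properties the observability SDKs autocapture on an
# LLM generation event. The exact $ai_* keys are assumptions here -- verify
# them against the SDK you install; autocapture sets them for you.

def build_generation_properties(provider, model, input_tokens, output_tokens,
                                latency_s, cost_usd):
    """Assemble the property dict for one LLM generation event."""
    return {
        "$ai_provider": provider,          # e.g. "openai", "anthropic"
        "$ai_model": model,                # model identifier string
        "$ai_input_tokens": input_tokens,  # prompt tokens
        "$ai_output_tokens": output_tokens,
        "$ai_latency": latency_s,          # seconds
        "$ai_total_cost_usd": cost_usd,
    }

props = build_generation_properties("openai", "gpt-4o-mini", 420, 96, 0.8, 0.00031)
# With a PostHog client you would then send it along the lines of:
#   posthog.capture(distinct_id="user_123", event="$ai_generation", properties=props)
```

Because these land as ordinary event properties, the dashboard's filters and breakdowns (by model, provider, cost, and so on) work on them just like on any other PostHog event.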

This dashboard is a great starting point for understanding your LLM usage and performance. You can use it to answer questions like:

  • Are users using our LLM-powered features?
  • What are my LLM costs by customer, model, and in total?
  • Are generations erroring?
  • How many of my users are interacting with my LLM features?
  • Are there generation latency spikes?

To dive into specific events, open the generations or traces tab to see a list of each as captured by PostHog.
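Conceptually, the traces tab groups generation events that share a trace identifier, so a multi-step LLM workflow shows up as one trace. The helper and `$ai_trace_id` key below are illustrative assumptions; the observability SDKs attach the trace id for you during autocapture:

```python
import uuid

# Sketch: generations belonging to one trace share a trace id property
# (shown here as $ai_trace_id -- an assumed key; the SDKs set this
# automatically when they autocapture a multi-step workflow).

def tag_with_trace(properties, trace_id):
    """Attach the shared trace id so related generations group into one trace."""
    return {**properties, "$ai_trace_id": trace_id}

trace_id = str(uuid.uuid4())
step_1 = tag_with_trace({"$ai_model": "gpt-4o-mini"}, trace_id)  # e.g. a planning call
step_2 = tag_with_trace({"$ai_model": "gpt-4o"}, trace_id)       # e.g. the final answer
```

Both events carry the same trace id, which is what lets the traces tab present them as a single end-to-end request rather than two unrelated generations.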

