
Get Started with Prompt Management

This quickstart helps you create your first prompt and use it in your application.

Get API keys

  1. Create a Langfuse account or self-host Langfuse.
  2. Create new API credentials in the project settings.
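
The SDK picks up these credentials from environment variables when the client is initialized. A minimal sketch (the key values are placeholders; substitute your own project keys):

import os

# Placeholder credentials, substitute your own project keys
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # or your self-hosted URL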

Create a prompt

Use the Langfuse UI to create a new prompt or update an existing one.
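
Prompts can also be created or updated programmatically via the SDK. A minimal sketch using create_prompt (the prompt name, template, and labels are illustrative):

from langfuse import get_client

langfuse = get_client()

# Create a new version of a text prompt and promote it to `production`
langfuse.create_prompt(
    name="movie-critic",
    type="text",
    prompt="As a {{criticlevel}} movie critic, do you like {{movie}}?",
    labels=["production"],
)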

Use prompt

At runtime, you can fetch the latest production version from Langfuse. Versions and labels control which prompt version is served; see the prompt management docs for details.

from langfuse import get_client

# Initialize Langfuse client
langfuse = get_client()

Text prompt

# Get current `production` version of a text prompt
prompt = langfuse.get_prompt("movie-critic")

# Insert variables into prompt template
compiled_prompt = prompt.compile(criticlevel="expert", movie="Dune 2")
# -> "As an expert movie critic, do you like Dune 2?"

Chat prompt

# Get current `production` version of a chat prompt
chat_prompt = langfuse.get_prompt("movie-critic-chat", type="chat")  # type arg infers the prompt type (default is 'text')

# Insert variables into chat prompt template
compiled_chat_prompt = chat_prompt.compile(criticlevel="expert", movie="Dune 2")
# -> [{"role": "system", "content": "You are an expert movie critic"}, {"role": "user", "content": "Do you like Dune 2?"}]
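
Because the compiled chat prompt is already a list of role/content messages, it can be passed directly to a chat completion API. A minimal sketch, assuming the OpenAI Python SDK is installed and an OPENAI_API_KEY is set in the environment:

from openai import OpenAI

client = OpenAI()

# compiled_chat_prompt from above is already in the messages format
response = client.chat.completions.create(
    model="gpt-4o",
    messages=compiled_chat_prompt,
)
print(response.choices[0].message.content)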

Optional parameters

# Get specific version
prompt = langfuse.get_prompt("movie-critic", version=1)

# Get specific label
prompt = langfuse.get_prompt("movie-critic", label="staging")

# Get latest prompt version. The 'latest' label is automatically maintained by Langfuse.
prompt = langfuse.get_prompt("movie-critic", label="latest")

Attributes

# Raw prompt including {{variables}}. For chat prompts, this is a list of chat messages.
prompt.prompt

# Config object
prompt.config
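
The config object lets you version model parameters alongside the prompt text. A minimal sketch, assuming this prompt's config was saved with model and temperature keys (your config schema may differ):

# Hypothetical config schema: {"model": "gpt-4o", "temperature": 0.7}
model = prompt.config.get("model", "gpt-4o")
temperature = prompt.config.get("temperature", 0.7)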

You can link the prompt to the LLM generation span that used it. This enables tracking metrics by prompt version and name directly in the Langfuse UI, so you can see which prompt version performed best.

Decorators

from langfuse import observe, get_client

langfuse = get_client()

@observe(as_type="generation")
def nested_generation():
    prompt = langfuse.get_prompt("movie-critic")

    langfuse.update_current_generation(
        prompt=prompt,
    )

@observe()
def main():
    nested_generation()

main()

Context Managers

from langfuse import get_client

langfuse = get_client()

prompt = langfuse.get_prompt("movie-critic")

with langfuse.start_as_current_generation(
    name="movie-generation",
    model="gpt-4o",
    prompt=prompt
) as generation:
    # Your LLM call here
    generation.update(output="LLM response")

A fallback prompt can be passed when fetching, so your application keeps working if Langfuse is unreachable. Note that if the fallback is used, no link will be created.
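
A minimal sketch of fetching with a fallback (the fallback string here is illustrative):

# The fallback is returned if the prompt cannot be fetched from Langfuse
prompt = langfuse.get_prompt(
    "movie-critic",
    fallback="As a {{criticlevel}} movie critic, do you like {{movie}}?",
)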

End-to-end examples

End-to-end examples of prompt management are available as notebooks in the Langfuse documentation.

We also used Prompt Management for our Docs Q&A Chatbot and traced it with Langfuse. You can get view-only access to the project by signing up for the public demo.
