Documentation is available at: https://serafinski.github.io/LangGraph-Compare/
This Python package facilitates the parsing of run logs generated by LangGraph. During execution, logs are stored in an SQLite database in an encoded format (using msgpack). These logs are then decoded and exported to JSON format. Subsequently, the JSON files are transformed into CSV files for further analysis.
Once in CSV format, the data can be analyzed using methods from the pm4py library. These methods calculate statistics related to the multi-agent infrastructure's performance and enable visualizations of the process behavior and execution flow.
This pipeline provides a streamlined approach for extracting, transforming, and analyzing logs, offering valuable insights into multi-agent systems.
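For a sense of what the package automates, the encoded checkpoints can also be inspected by hand. The snippet below is a hypothetical sketch, not part of the langgraph_compare API: the database path and the `checkpoints` table / `checkpoint` column names are assumptions based on LangGraph's SQLite checkpointer schema and may differ between langgraph versions (it also requires the `msgpack` package).

```python
import sqlite3
import msgpack  # pip install msgpack

# Hypothetical database path; langgraph_compare manages this file for you.
conn = sqlite3.connect("experiments/main/database.sqlite")

# Table and column names follow LangGraph's SQLite checkpointer schema
# ("checkpoints" table, msgpack-encoded "checkpoint" blob) and may vary
# between langgraph versions.
for (blob,) in conn.execute("SELECT checkpoint FROM checkpoints LIMIT 3"):
    checkpoint = msgpack.unpackb(blob)
    print(list(checkpoint))  # top-level keys of the decoded checkpoint

conn.close()
```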
This package requires Python 3.9 or higher. See below for more information on creating an environment.
If you would like to develop this package, use Poetry with Python 3.10 or higher, since 3.10 is the minimum version required by Sphinx. Install the needed dependencies with:

```bash
poetry install --with dev,test,docs
```

This package requires Graphviz to be installed on your system.
On Windows, download the Graphviz installer from the Graphviz website.
On macOS, install Graphviz using Homebrew:

```bash
brew install graphviz
```

For Debian or Ubuntu, use the following command:
```bash
sudo apt-get install graphviz
```

For Fedora, Rocky Linux, RHEL, or CentOS, use the following command:
```bash
sudo dnf install graphviz
```
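After installation, you can check that Graphviz is available on your PATH by printing the version of its dot tool:

```bash
dot -V
```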
To create a virtual environment (using conda), use the following commands:

```bash
conda create -n langgraph_compare python=3.9
conda activate langgraph_compare
pip install langgraph_compare
```
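To confirm the package was installed into the environment, you can inspect its metadata with pip:

```bash
pip show langgraph_compare
```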
This example is based on the Building a Basic Chatbot tutorial from the LangGraph documentation. It requires you to install the following packages (besides langgraph_compare):
```bash
pip install python-dotenv langchain-openai
```

Example:
```python
from dotenv import load_dotenv
from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

from langgraph_compare import *

# Create an experiment and use its checkpointer as the graph's memory.
exp = create_experiment("main")
memory = exp.memory

load_dotenv()

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)
llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph_builder.add_node("chatbot_node", chatbot)
graph_builder.add_edge(START, "chatbot_node")
graph_builder.add_edge("chatbot_node", END)

graph = graph_builder.compile(checkpointer=memory)

print()
# Run the graph several times with the same starting message.
run_multiple_iterations(graph, 1, 5, {"messages": [("user", "Tell me a joke")]})
print()

graph_config = GraphConfig(
    nodes=["chatbot_node"]
)

# Decode the SQLite logs and export them to JSON and CSV.
prepare_data(exp, graph_config)
print()

# Load the resulting event log and print performance statistics.
event_log = load_event_log(exp)
print_analysis(event_log)
print()

# Generate visualizations and reports for this architecture.
generate_artifacts(event_log, graph, exp)
```
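The comparison step below assumes several experiments already exist in the experiments directory. As a hypothetical sketch (with graph_builder2 standing in for a second architecture built like the one above), each additional architecture is analyzed the same way under its own experiment name:

```python
# Hypothetical second architecture; graph_builder2 is assumed to be
# another StateGraph built like graph_builder above.
exp2 = create_experiment("other1")
graph2 = graph_builder2.compile(checkpointer=exp2.memory)

run_multiple_iterations(graph2, 1, 5, {"messages": [("user", "Tell me a joke")]})
prepare_data(exp2, GraphConfig(nodes=["chatbot_node"]))  # list this graph's nodes
generate_artifacts(load_event_log(exp2), graph2, exp2)
```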
When you have multiple analyzed architectures, you can use the following code to compare them (by default, it looks in the experiments directory):

```python
from langgraph_compare import compare

infrastructures = ["main", "other1", "other2"]
compare(infrastructures)
```

This should generate a file in the comparison_reports directory named main_vs_other1_vs_other2.html.