Enhancing Our Chatbot with Tools – Part II
In Part I [https://dev.to/sreeni5018/langgraph-uncovered-building-stateful-multi-agent-applications-with-llms-part-i-p86], we added short-term memory to our chatbot, enabling it to retain context within a session. Now, in Part II, we’ll take it a step further by introducing tools to our chatbot.
Tools allow the chatbot to fetch real-time data from external sources, making it more dynamic and useful. Large Language Models (LLMs) are trained on vast amounts of data, but they have a limitation—their knowledge is frozen at training time, so they have no awareness of current events. If you ask an LLM about a recent event or a trending topic, its answer may simply be outdated, or worse, it may hallucinate: generate a plausible-sounding but incorrect response rather than admit it doesn't know.
To overcome this limitation, we can equip our chatbot with tools that fetch real-time data. In this blog, we’ll add a simple tool to retrieve the current time. This is just one example, but the same concept can be applied to access APIs, databases, or even live news feeds.
Beyond tools, there are other strategies to improve LLM responses, such as Retrieval-Augmented Generation (RAG) and Prompt Engineering, but for now, let's focus on integrating tools. Let’s get started! 🚀
```python
# Import necessary libraries and modules
from langgraph.graph import StateGraph, START, END, MessagesState  # Graph, entry/exit sentinels, and message state
from langgraph.prebuilt import ToolNode                            # ToolNode to integrate tools into the workflow
from langchain_openai.chat_models import AzureChatOpenAI           # LangChain's Azure-hosted OpenAI chat model
from langchain_core.tools import tool                              # LangChain's tool decorator
from dotenv import load_dotenv                                     # Load environment variables from a .env file
from datetime import datetime                                      # To get the current date and time

# Load environment variables (Azure OpenAI credentials)
load_dotenv()

# Define a custom tool to fetch the current time
@tool
def get_current_time():
    """Call this tool to get the current time"""
    return datetime.now()  # Returns the current time

# Register the tool by adding it to the tools list
tools = [get_current_time]
tool_node = ToolNode(tools)  # Create a ToolNode wrapping the tool

# Initialize the language model (LLM) and bind the tool to it
llm = AzureChatOpenAI(
    azure_deployment="gpt-4o-mini",    # Deployment name of the model
    api_version="2024-08-01-preview",  # API version
    temperature=0,                     # Deterministic output
    max_tokens=None,                   # No token limit
    timeout=None,                      # No timeout
    max_retries=2,                     # Allow up to 2 retries on failure
).bind_tools(tools)  # Bind the tool(s) to the LLM

# Decide whether the chatbot should invoke a tool or end the conversation
def should_continue(state: MessagesState):
    messages = state["messages"]   # Access the messages in the state
    last_message = messages[-1]    # Get the last message in the conversation
    if last_message.tool_calls:    # If the LLM requested a tool, continue to the "tools" node
        return "tools"
    return END                     # Otherwise, end the conversation

# Define the chat node, which sends messages to the LLM and gets a response
def chat(state: MessagesState):
    messages = state["messages"]     # Access the messages in the state
    response = llm.invoke(messages)  # Send the messages to the LLM for a response
    return {"messages": [response]}  # Return the LLM response wrapped in the "messages" key

# Build the state graph (workflow) to manage the chatbot flow
workflow = StateGraph(MessagesState)   # Create a StateGraph over MessagesState
workflow.add_node("chat", chat)        # "chat" node for regular conversation handling
workflow.add_node("tools", tool_node)  # "tools" node for invoking tools
workflow.add_edge(START, "chat")       # Start the workflow at the "chat" node
workflow.add_conditional_edges("chat", should_continue, ["tools", END])  # Route to "tools" or END
workflow.add_edge("tools", "chat")     # After using a tool, go back to "chat"

# Compile the workflow into an executable app
app = workflow.compile()

# Generate a visual representation of the graph and save it as a PNG image
image = app.get_graph().draw_mermaid_png()
with open("sreeni_chatbot_with_tools.png", "wb") as file:
    file.write(image)

# Interactive loop to handle user input and responses
while True:
    query = input("Enter your query or question? ")
    if query.lower() in ["quit", "exit"]:  # Type 'quit' or 'exit' to terminate
        print("bye bye")
        exit(0)
    # Process the user input through the workflow and print each response
    for chunk in app.stream(
        {"messages": [("human", query)]},
        stream_mode="values",  # Stream full state values as they are produced
    ):
        chunk["messages"][-1].pretty_print()  # Print the formatted response message
```
@tool Decorator: This decorator is used to define a tool (like getting the current time) that can be used within the LangChain workflow. The get_current_time() function is registered as a tool.
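Conceptually, `@tool` turns a plain function into a named, discoverable callable with a description the LLM can read. A minimal stand-in in plain Python shows the registration idea (the `tool` decorator and `TOOL_REGISTRY` below are illustrative sketches, not LangChain's actual implementation):

```python
from datetime import datetime

TOOL_REGISTRY = {}

def tool(fn):
    """Hypothetical stand-in for LangChain's @tool: register the function by name."""
    TOOL_REGISTRY[fn.__name__] = fn
    return fn

@tool
def get_current_time():
    """Call this tool to get the current time"""
    return datetime.now()

# The registry now maps tool names to callables, which is how the
# model's tool-call request ("get_current_time") gets resolved to code.
print(sorted(TOOL_REGISTRY))  # ['get_current_time']
```

The real decorator additionally captures the docstring and argument schema, which is what the LLM uses to decide when (and with what arguments) to call the tool.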
llm.invoke(messages): This method sends the messages to the language model (LLM) and returns the generated response.
should_continue(): This function checks if a tool was called in the last message and decides whether the workflow should continue with the tool or end.
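This routing logic is easy to verify in isolation. Here is a sketch with a stand-in message class (the `Msg` class is hypothetical; in the real code the last message is a LangChain `AIMessage` whose `tool_calls` attribute is populated when the model requests a tool):

```python
END = "__end__"  # Stand-in for langgraph's END sentinel

class Msg:
    """Hypothetical stand-in for an AIMessage with a tool_calls attribute."""
    def __init__(self, content, tool_calls=None):
        self.content = content
        self.tool_calls = tool_calls or []

def should_continue(state):
    last_message = state["messages"][-1]  # Only the latest message matters
    if last_message.tool_calls:           # Tool requested -> route to "tools"
        return "tools"
    return END                            # Plain answer -> stop

route_with_tool = should_continue(
    {"messages": [Msg("", tool_calls=[{"name": "get_current_time"}])]}
)
route_plain = should_continue({"messages": [Msg("It is 3 pm.")]})
print(route_with_tool, route_plain)  # tools __end__
```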
StateGraph Workflow: The StateGraph defines the flow of the chatbot conversation. It has two nodes—chat (for normal conversation) and tools (for invoking external tools). Conditional edges determine when to use tools or finish the conversation.
ToolNode: The ToolNode connects the tool (in this case, get_current_time()) to the workflow, allowing the chatbot to invoke external functionality.
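Putting the pieces together, the chat → tools → chat loop can be sketched without LangGraph at all. The "LLM" and tool below are faked (the canned `"12:00"` result and the node functions are purely illustrative), but the control flow mirrors the graph above:

```python
END = "__end__"

def chat(state):
    # Fake LLM node: request the tool once, then answer using its result.
    msgs = state["messages"]
    if any(m["role"] == "tool" for m in msgs):
        msgs.append({"role": "ai",
                     "content": f"The time is {msgs[-1]['content']}",
                     "tool_calls": []})
    else:
        msgs.append({"role": "ai", "content": "",
                     "tool_calls": ["get_current_time"]})
    return state

def tools(state):
    # Fake ToolNode: execute the requested tool and append its result.
    state["messages"].append({"role": "tool", "content": "12:00"})
    return state

def should_continue(state):
    return "tools" if state["messages"][-1]["tool_calls"] else END

def run_graph(state, max_steps=10):
    """Drive the chat/tools loop until should_continue returns END."""
    node = "chat"
    for _ in range(max_steps):
        state = {"chat": chat, "tools": tools}[node](state)
        nxt = should_continue(state) if node == "chat" else "chat"
        if nxt == END:
            return state
        node = nxt
    return state

final = run_graph({"messages": [{"role": "human", "content": "What time is it?"}]})
print(final["messages"][-1]["content"])  # The time is 12:00
```

The message history ends up as human → ai (tool request) → tool (result) → ai (final answer), which is exactly the trace you will see from `app.stream()` when the real graph runs.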
Graph Visualization: The workflow graph is rendered with draw_mermaid_png() and saved as a PNG image, giving you a visual map of the chatbot's flow.
This code demonstrates how to integrate external tools into a LangChain-powered chatbot and manage conversational flow using LangGraph. The example tool is a simple one that provides the current time, but this can be extended to more complex API calls or data retrieval operations.
In this part, we successfully enhanced our chatbot by integrating tools to fetch real-time information, such as the current time, addressing the limitations of LLMs and improving the chatbot's responsiveness. Tools like these help bridge the gap between the static knowledge of LLMs and the dynamic, real-time world. By leveraging external APIs, we can keep the chatbot informed with up-to-date information, mitigating hallucinations and improving the user experience.
Next, in Part III, we will take our chatbot a step further by introducing Human-in-the-Loop (HIL). This addition will allow for human oversight and intervention at key points in the conversation, ensuring more accurate, reliable, and contextually appropriate responses. Stay tuned to see how we integrate HIL and improve our chatbot's decision-making process!
Thanks
Sreeni Ramadorai