New cookbook - Supply-Chain Copilot with OpenAI Agent SDK and Databricks MCP Servers #1935
Conversation
"\n", | ||
"The easiest way is to add a profile to `~/.databrickscfg`. The snippet’s `WorkspaceClient(profile=...)` call will pick that up. It tells the SDK which of those stored credentials to load, so your code never needs to embed tokens. Another option would be to create environment variables such as `DATABRICKS_HOST` and `DATABRICKS_TOKEN`, but using `~/.databrickscfg` is recommended.\n", | ||
"\n", | ||
"`pip install openai databricks-sdk`" |
I would have this runnable in a separate cell:
! pip install openai databricks-sdk
Done, thanks
I think there are quite a few other dependencies I would add here too, as you get further along:
! pip install openai databricks-sdk databricks-cli databricks-mcp openai-agents
Done!
Ah I would revert to:
! pip install databricks-cli
because, as you pointed out yesterday, we don't actually run code in the notebook, just the databricks commands to make sure the CLI works properly.
@@ -377,6 +377,11 @@ alexl-oai:
website: "https://www.linkedin.com/in/alex-lowden01/"
My main comment would be that I found notebook 06_Vector_Search failed because I needed to set up my OpenAI API key. I was following these instructions - https://github.com/lararachidi/agent-supply-chain/blob/main/README.md - and notebook 6 fails straight away; should it include an instruction to set `openai_api_key` before we run the notebook?
A comment for outside of this PR, but I would update https://github.com/lararachidi/agent-supply-chain/blob/main/README.md to say the OpenAI API key should be set before running any of the notebooks.
Good catch, I’ve moved the OpenAI API Key definition to notebook 1 instead of notebook 6, and will update the README to reflect this change.
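As a sketch of the kind of cell that could sit at the top of notebook 1, assuming the standard `OPENAI_API_KEY` environment variable is used (the exact handling in the notebook may differ):

```python
import os
from getpass import getpass

# Prompt for the key only if it isn't already set in the environment,
# so the notebook runs both locally and in automated environments without edits.
if not os.environ.get("OPENAI_API_KEY"):
    os.environ["OPENAI_API_KEY"] = getpass("OpenAI API key: ")
```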
"\n", | ||
"The easiest way is to add a profile to `~/.databrickscfg`. The snippet’s `WorkspaceClient(profile=...)` call will pick that up. It tells the SDK which of those stored credentials to load, so your code never needs to embed tokens. Another option would be to create environment variables such as `DATABRICKS_HOST` and `DATABRICKS_TOKEN`, but using `~/.databrickscfg` is recommended.\n", | ||
"\n", | ||
"`pip install openai databricks-sdk`" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Ah I would revert to:
! pip install databricks-cli
because as you pointed out yesterday we don't actually run code in the notebook, just the databricks commands to ensure cli works properly
}
},
"source": [
"## Trace Agent calls in the OpenAI API Dashboard\n",
You could add a link to open the OpenAI API dashboard in the text to make this simpler.
Done!
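For context, the Agents SDK sends traces to the OpenAI dashboard (https://platform.openai.com/traces) by default. A minimal sketch of grouping calls under a named trace might look like the following; the agent name, instructions, and prompt are placeholders, not taken from the cookbook:

```python
import asyncio
from agents import Agent, Runner, trace

async def main():
    # Placeholder agent; in the cookbook the agent is wired to Databricks MCP tools.
    agent = Agent(
        name="Supply-Chain Copilot",
        instructions="Answer supply-chain questions.",
    )

    # Group the run under a named workflow so it appears as a single
    # trace in the OpenAI dashboard.
    with trace("supply_chain_copilot"):
        result = await Runner.run(agent, "Can we meet demand for product X?")
        print(result.final_output)

asyncio.run(main())
```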
Summary
This walkthrough demonstrates how to build a supply‑chain copilot using the OpenAI Agent SDK and Databricks Managed MCP, highlighting Tracing, Guardrails, and seamless integration with enterprise data sources. It shows how an agent can blend structured inventory tables, forecasting models, graph‑based BOM logic, and vector‑indexed e‑mails behind a single conversational interface, then surface the results through FastAPI and a React chat UI. Readers learn to authenticate against Databricks, expose Unity Catalog functions and Vector Search (Databricks vector index) as MCP tools, trace every call in the OpenAI dashboard, and enforce guardrails, all while keeping the underlying data governance intact.
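As an illustration of the pattern the summary describes, connecting an agent to a Databricks-managed MCP server from the Agents SDK might look roughly like the sketch below. The MCP endpoint path, catalog, and schema names are assumptions for illustration only; the real values depend on your workspace and the cookbook itself:

```python
import asyncio
from databricks.sdk import WorkspaceClient
from agents import Agent, Runner
from agents.mcp import MCPServerStreamableHttp

async def main():
    # Reuse Databricks credentials from ~/.databrickscfg for the MCP endpoint.
    w = WorkspaceClient(profile="DEFAULT")
    token = w.config.token
    # Hypothetical managed-MCP URL exposing Unity Catalog functions as tools;
    # the actual path depends on your workspace, catalog, and schema.
    mcp_url = f"{w.config.host}/api/2.0/mcp/functions/supply_chain/default"

    async with MCPServerStreamableHttp(
        params={"url": mcp_url, "headers": {"Authorization": f"Bearer {token}"}}
    ) as uc_tools:
        agent = Agent(
            name="Supply-Chain Copilot",
            instructions="Answer supply-chain questions using the available tools.",
            mcp_servers=[uc_tools],
        )
        result = await Runner.run(agent, "How much inventory of part 123 do we have?")
        print(result.final_output)

asyncio.run(main())
```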
Motivation
We currently don’t have a cookbook that explains how to integrate with Databricks data sources, nor an example that shows how to connect an agent to Databricks using Databricks‑managed MCP servers. This guide fills that gap by delivering an agent capable of tackling supply‑chain questions such as “Can we meet demand?” and “How much revenue is at risk if we can’t produce the forecasted amount of a certain product?”.
For new content
When contributing new content, read through our contribution guidelines, and mark the following action items as completed:
We will rate each of these areas on a scale from 1 to 4, and will only accept contributions that score 3 or higher on all areas. Refer to our contribution guidelines for more details.