Commit c407daf

Merge pull request #1207 from guardrails-ai/docs/other-llms-details
Update using_llms.md
2 parents 932790c + a9c06d3 commit c407daf

1 file changed: +13 −1 lines changed


docs/how_to_guides/using_llms.md

Lines changed: 13 additions & 1 deletion
@@ -287,8 +287,20 @@ for chunk in stream_chunk_generator
 ```
 
 ## Other LLMs
+As mentioned at the top of this page, over 100 LLMs are supported through our litellm integration, including (but not limited to)
 
-See LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers) for details on many other llms.
+- Anthropic
+- AWS Bedrock
+- Anyscale
+- Huggingface
+- Mistral
+- Predibase
+- Fireworks
+
+
+Find your LLM in LiteLLM’s documentation [here](https://docs.litellm.ai/docs/providers). Then, follow those same steps and set the same environment variables they guide you to use, but invoke a `Guard` object instead of the litellm object.
+
+Guardrails will wire through the arguments to litellm, run the Guarding process, and return a validated outcome.
 
 ## Custom LLM Wrappers
 In case you're using an LLM that isn't natively supported by Guardrails and you don't want to use LiteLLM, you can build a custom LLM API wrapper. In order to use a custom LLM, create a function that accepts a positional argument for the prompt as a string and any other arguments that you want to pass to the LLM API as keyword args. The function should return the output of the LLM API as a string.
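For context on the guidance added in this diff, here is a minimal sketch of calling a LiteLLM-supported model through a `Guard` object instead of calling litellm directly. It assumes a Guardrails version whose `Guard` call accepts litellm-style `model` and `messages` arguments, uses an Anthropic model purely as an example, and uses `ANTHROPIC_API_KEY` as the environment variable LiteLLM's Anthropic page asks for; substitute the model string and variable your provider's page documents.

```python
import os

from guardrails import Guard

# LiteLLM reads provider credentials from the environment; ANTHROPIC_API_KEY
# is the variable its Anthropic provider page documents (adjust for yours).
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."  # placeholder key

guard = Guard()

# Invoke the Guard object with the same litellm-style arguments you would
# pass to litellm.completion. Guardrails forwards them to litellm, runs any
# configured validators, and returns a validation outcome.
result = guard(
    model="anthropic/claude-3-haiku-20240307",  # any LiteLLM model string
    messages=[{"role": "user", "content": "Name three French cheeses."}],
)

print(result.validated_output)
```

In a real setup you would typically attach validators to the guard first (for example via `Guard().use(...)`) so the returned outcome reflects actual validation rather than a pass-through.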
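Similarly, the custom LLM wrapper described in the unchanged "Custom LLM Wrappers" section above can be sketched roughly as follows. The wrapper body is a hypothetical stand-in rather than a real model call, and passing the wrapper as the first argument to the `Guard` call follows the usual custom-LLM examples; the exact call signature may vary across Guardrails versions.

```python
from guardrails import Guard


def my_llm_api(prompt: str, **kwargs) -> str:
    """Custom LLM wrapper: takes the prompt as a positional string, accepts
    arbitrary keyword args, and returns the completion as a plain string."""
    # Hypothetical stand-in for a real client call; replace with your LLM API.
    temperature = kwargs.get("temperature", 0.0)
    return f"(stub completion at temperature={temperature}) {prompt}"


guard = Guard()

# The wrapper is passed as the LLM callable; remaining keyword args are
# forwarded to it, and its string output is what gets validated.
result = guard(
    my_llm_api,
    prompt="Summarize the plot of Hamlet in one sentence.",
    temperature=0.2,
)

print(result.validated_output)
```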

0 commit comments
