{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "ef55abc9",
   "metadata": {},
   "source": [
|  | 8 | + "[](https://colab.research.google.com/github/openlayer-ai/examples-gallery/blob/main/monitoring/llms/monitoring-llms.ipynb)\n", | 
|  | 9 | + "\n", | 
|  | 10 | + "\n", | 
|  | 11 | + "# <a id=\"top\">Monitoring LLMs</a>\n", | 
|  | 12 | + "\n", | 
|  | 13 | + "This notebook illustrates a typical monitoring flow for LLMs using Openlayer. For more details, refer to the [How to set up monitoring guide](https://docs.openlayer.com/docs/set-up-monitoring) from the documentation.\n", | 
    "\n",
    "\n",
    "## <a id=\"toc\">Table of contents</a>\n",
    "\n",
    "1. [**Creating a project and an inference pipeline**](#inference-pipeline)\n",
    "\n",
    "2. [**Publishing production data**](#publish-batches)\n",
    "\n",
    "3. [(Optional) **Uploading a reference dataset**](#reference-dataset)\n",
    "\n",
    "4. [(Optional) **Publishing ground truths**](#ground-truths)\n",
    "\n",
    "Before we start, let's download the sample data and import pandas."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d193436",
   "metadata": {},
   "outputs": [],
   "source": [
    "%%bash\n",
    "\n",
    "if [ ! -e \"fine_tuning_dataset.csv\" ]; then\n",
    "    curl \"https://openlayer-static-assets.s3.us-west-2.amazonaws.com/examples-datasets/monitoring/llms/fine_tuning_dataset.csv\" --output \"fine_tuning_dataset.csv\"\n",
    "fi\n",
    "\n",
    "if [ ! -e \"prod_data_no_ground_truths.csv\" ]; then\n",
    "    curl \"https://openlayer-static-assets.s3.us-west-2.amazonaws.com/examples-datasets/monitoring/llms/prod_data_no_ground_truths.csv\" --output \"prod_data_no_ground_truths.csv\"\n",
    "fi\n",
    "\n",
    "if [ ! -e \"prod_ground_truths.csv\" ]; then\n",
    "    curl \"https://openlayer-static-assets.s3.us-west-2.amazonaws.com/examples-datasets/monitoring/llms/prod_ground_truths.csv\" --output \"prod_ground_truths.csv\"\n",
    "fi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9dce8f60",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c4ea849d",
   "metadata": {},
   "source": [
    "## <a id=\"inference-pipeline\"> 1. Creating a project and an inference pipeline </a>\n",
    "\n",
    "[Back to top](#top)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "05f27b6c",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install openlayer"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8504e063",
   "metadata": {},
   "outputs": [],
   "source": [
    "import openlayer\n",
    "\n",
    "client = openlayer.OpenlayerClient(\"YOUR_API_KEY_HERE\")"
   ]
  },
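  {
   "cell_type": "markdown",
   "id": "a1f0b2c3",
   "metadata": {},
   "source": [
    "If you prefer not to hard-code the API key in the notebook, you can read it from an environment variable instead. The cell below is a minimal sketch: the variable name `OPENLAYER_API_KEY` is just the convention used in this example, not something the SDK requires."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "# Assumes the key was exported beforehand, e.g. `export OPENLAYER_API_KEY=...`.\n",
    "# Falls back to the placeholder so the cell still runs if the variable is unset.\n",
    "api_key = os.environ.get(\"OPENLAYER_API_KEY\", \"YOUR_API_KEY_HERE\")\n",
    "client = openlayer.OpenlayerClient(api_key)"
   ]
  },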
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5377494b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from openlayer.tasks import TaskType\n",
    "\n",
    "project = client.create_project(\n",
    "    name=\"Python QA\",\n",
    "    task_type=TaskType.LLM,\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ed0c9bf6",
   "metadata": {},
   "source": [
|  | 113 | + "Now that you are authenticated and have a project on the platform, it's time to create an inference pipeline. Creating an inference pipeline is what enables the monitoring capabilities in a project." | 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "147b5294",
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_pipeline = project.create_inference_pipeline()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3c8608ea",
   "metadata": {},
   "source": [
    "## <a id=\"publish-batches\"> 2. Publishing production data </a>\n",
    "\n",
    "[Back to top](#top)\n",
    "\n",
|  | 135 | + "In production, as the model makes predictions, the data can be published to Openlayer. This is done with the `publish_batch_data` method. \n", | 
|  | 136 | + "\n", | 
|  | 137 | + "The data published to Openlayer can have a column with **inference ids** and another with **timestamps** (UNIX sec format). These are both optional and, if not provided, will receive default values. The inference id is particularly important if you wish to publish ground truths at a later time. " | 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "918da1f7",
   "metadata": {},
   "outputs": [],
   "source": [
    "production_data = pd.read_csv(\"prod_data_no_ground_truths.csv\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "deec9e95",
   "metadata": {},
   "outputs": [],
   "source": [
|  | 157 | + "batch_1 = production_data.loc[:9]\n", | 
|  | 158 | + "batch_2 = production_data.loc[9:18]\n", | 
|  | 159 | + "batch_3 = production_data.loc[18:]" | 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "25b66229",
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_1.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1bcf399a",
   "metadata": {},
   "source": [
|  | 177 | + "### <a id=\"publish-batches\"> Publish to Openlayer </a>\n", | 
|  | 178 | + "\n", | 
|  | 179 | + "Here, we're simulating three calls to `publish_batch_data`. In practice, this is a code snippet that lives in your inference pipeline and that gets called after the model predictions." | 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "1b8f28f8",
   "metadata": {},
   "outputs": [],
   "source": [
    "batch_config = {\n",
    "    \"inputVariableNames\": [\"question\"],\n",
    "    \"outputColumnName\": \"answer\",\n",
    "    \"inferenceIdColumnName\": \"inference_id\",\n",
    "}\n"
   ]
  },
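  {
   "cell_type": "markdown",
   "id": "c3d4e5f6",
   "metadata": {},
   "source": [
    "The config above maps the inference id column. If your production data also carries the optional timestamp column (UNIX seconds), you can map it as well. The sketch below is an illustration only: it fabricates a `timestamp` column with the current time, and it assumes the config key is `timestampColumnName` -- check the Openlayer docs for the exact key expected by your SDK version."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7",
   "metadata": {},
   "outputs": [],
   "source": [
    "import time\n",
    "\n",
    "# Illustrative only: stamp every row of this batch with the current UNIX time (seconds).\n",
    "batch_1_with_ts = batch_1.copy()\n",
    "batch_1_with_ts[\"timestamp\"] = int(time.time())\n",
    "\n",
    "# Assumed config key for the timestamp column -- verify against the docs before relying on it.\n",
    "batch_config_with_ts = {\n",
    "    **batch_config,\n",
    "    \"timestampColumnName\": \"timestamp\",\n",
    "}"
   ]
  },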
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bde01a2b",
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_pipeline.publish_batch_data(\n",
    "    batch_df=batch_1,\n",
    "    batch_config=batch_config\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bfc3dea6",
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_pipeline.publish_batch_data(\n",
    "    batch_df=batch_2,\n",
    "    batch_config=batch_config\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "159b4e24",
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_pipeline.publish_batch_data(\n",
    "    batch_df=batch_3,\n",
    "    batch_config=batch_config\n",
    ")"
   ]
  },
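  {
   "cell_type": "markdown",
   "id": "e5f6a7b8",
   "metadata": {},
   "source": [
    "For reference, the three calls above could also be written as a single loop over the batches. The cell below is only an equivalent formulation; running it in addition to the cells above would publish the same rows a second time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6a7b8c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Equivalent to the three explicit calls above.\n",
    "for batch in (batch_1, batch_2, batch_3):\n",
    "    inference_pipeline.publish_batch_data(\n",
    "        batch_df=batch,\n",
    "        batch_config=batch_config,\n",
    "    )"
   ]
  },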
  {
   "cell_type": "markdown",
   "id": "d00f6e8e",
   "metadata": {},
   "source": [
    "**That's it!** You're now able to set up goals and alerts for your production data. The next sections are optional and enable some features on the platform."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "39592b32",
   "metadata": {},
   "source": [
    "## <a id=\"reference-dataset\"> 3. Uploading a reference dataset </a>\n",
    "\n",
    "[Back to top](#top)\n",
    "\n",
|  | 252 | + "A reference dataset is optional, but it enables drift monitoring. Ideally, the reference dataset is a representative sample of the training/fine-tuning set used to train the deployed model. In this section, we first load the dataset and then we upload it to Openlayer using the `upload_reference_dataframe` method." | 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "31809ca9",
   "metadata": {},
   "outputs": [],
   "source": [
    "fine_tuning_data = pd.read_csv(\"./fine_tuning_dataset.csv\")"
   ]
  },
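  {
   "cell_type": "markdown",
   "id": "a7b8c9d0",
   "metadata": {},
   "source": [
    "In this example we upload the fine-tuning data directly. If your own fine-tuning set is large, a representative sample is usually enough for drift comparisons; the sketch below uses an arbitrary sample size and seed purely for illustration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8c9d0e1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Optional: downsample a large fine-tuning set before using it as the reference.\n",
    "# The 1,000-row cap and the fixed seed are arbitrary choices for illustration.\n",
    "reference_sample = fine_tuning_data.sample(\n",
    "    n=min(1000, len(fine_tuning_data)), random_state=42\n",
    ").reset_index(drop=True)"
   ]
  },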
  {
   "cell_type": "markdown",
   "id": "a6336802",
   "metadata": {},
   "source": [
    "### <a id=\"upload-reference\"> Uploading the dataset to Openlayer </a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0f8e23e3",
   "metadata": {},
   "outputs": [],
   "source": [
    "dataset_config = {\n",
    "    \"inputVariableNames\": [\"question\"],\n",
    "    \"groundTruthColumnName\": \"ground_truth\",\n",
    "    \"label\": \"reference\"\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f6cf719f",
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_pipeline.upload_reference_dataframe(\n",
    "    dataset_df=fine_tuning_data,\n",
    "    dataset_config=dataset_config\n",
    ")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fbc1fca3",
   "metadata": {},
   "source": [
    "## <a id=\"ground-truths\"> 4. Publishing ground truths for past batches </a>\n",
    "\n",
    "[Back to top](#top)\n",
    "\n",
|  | 309 | + "The ground truths are needed to create Performance goals. The `publish_ground_truths` method can be used to update the ground truths for batches of data already published to the Openlayer platform. The inference id is what gets used to merge the ground truths with the corresponding rows." | 
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "03355dcf",
   "metadata": {},
   "outputs": [],
   "source": [
    "ground_truths = pd.read_csv(\"prod_ground_truths.csv\")"
   ]
  },
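  {
   "cell_type": "markdown",
   "id": "c9d0e1f2",
   "metadata": {},
   "source": [
    "Before publishing, it can be worth checking that the inference ids in the ground-truth file match ids that were actually published earlier, since the merge happens on the inference id. The quick check below uses only pandas and the column names from the sample CSVs in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d0e1f2a3",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check: every ground truth should reference an inference id we already published.\n",
    "published_ids = set(production_data[\"inference_id\"])\n",
    "unmatched = ground_truths.loc[~ground_truths[\"inference_id\"].isin(published_ids)]\n",
    "print(f\"{len(unmatched)} ground-truth rows have no matching published inference id\")"
   ]
  },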
  {
   "cell_type": "markdown",
   "id": "903480c8",
   "metadata": {},
   "source": [
    "### <a id=\"publish-truth\">Publish ground truths</a>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ccd906c2",
   "metadata": {},
   "outputs": [],
   "source": [
    "inference_pipeline.publish_ground_truths(\n",
    "    df=ground_truths,\n",
    "    ground_truth_column_name=\"ground_truth\",\n",
    "    inference_id_column_name=\"inference_id\",\n",
    ")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f3749495",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}