144 changes: 144 additions & 0 deletions samples/oci-monitoring-metric-export-python/README.md
## Export Metrics to Object Storage

This sample Python function demonstrates how to submit a Monitoring Query Language (MQL) query to the OCI Monitoring service, extract the resulting collection of historical metrics, and push them to Object Storage as a JSON file.

![Export Process](./images/Export-Function.png)

This sample is intended to get you up and running quickly by using default values for most parameters. Only one parameter is required: **compartmentId**, which specifies the compartment in which to create the Object Storage bucket (if it does not already exist). Supply values for the optional parameters to select target resources, choose specific metrics to collect, define time ranges, and so on, per your requirements. Advanced users can also replace the export function in the code to push metrics to a downstream application API.

The full list of *optional* parameters and their default values is presented below in the **Deploy** and **Test** sections. But first, let's cover the prerequisites and set up the function development environment.


As you make your way through this tutorial, look out for this icon ![user input icon](./images/userinput.png).
Whenever you see it, it's time for you to perform an action.


## Prerequisites

Before you deploy this sample function, make sure you have completed steps A, B,
and C of the [Oracle Functions Quick Start Guide for Cloud Shell](https://www.oracle.com/webfolder/technetwork/tutorials/infographics/oci_functions_cloudshell_quickview/functions_quickview_top/functions_quickview/index.html):
* A - Set up your tenancy
* B - Create application
* C - Set up your Cloud Shell dev environment


## List Applications

Assuming you have successfully completed the prerequisites, you should see your
application in the list of applications.

```
fn ls apps
```


## Create or Update your Dynamic Group

In order to use other OCI Services, your function must be part of a dynamic
group. For information on how to create a dynamic group, refer to the
[documentation](https://docs.cloud.oracle.com/iaas/Content/Identity/Tasks/managingdynamicgroups.htm#To).

When specifying the *Matching Rules*, we suggest matching all functions in a compartment with:

```
ALL {resource.type = 'fnfunc', resource.compartment.id = 'ocid1.compartment.oc1..aaaaaxxxxx'}
```


## Create or Update IAM Policies

Create a new policy that allows the dynamic group to read metrics and manage object storage.


![user input icon](./images/userinput.png)

Your policy should look something like this:
```
Allow dynamic group <group-name> to read metrics in compartment <compartment-name>
Allow dynamic group <group-name> to manage buckets in compartment <compartment-name>
Allow dynamic group <group-name> to manage objects in compartment <compartment-name>
```
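The three statements differ only in their verb/resource pair, so they are easy to generate. The helper below is purely illustrative and not part of this sample; `render_policy_statements` and its arguments are hypothetical names:

```python
# Render the IAM policy statements shown above for a given dynamic group and
# compartment. Illustrative helper only; the group and compartment names are
# placeholders, not values taken from this sample.

def render_policy_statements(group_name, compartment_name):
    grants = [
        ("read", "metrics"),
        ("manage", "buckets"),
        ("manage", "objects"),
    ]
    return [
        f"Allow dynamic group {group_name} to {verb} {resource} "
        f"in compartment {compartment_name}"
        for verb, resource in grants
    ]

for stmt in render_policy_statements("fn-dynamic-group", "my-compartment"):
    print(stmt)
```

Paste the rendered statements into a new policy in the OCI Console, substituting your own dynamic group and compartment names.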

For more information on how to create policies, go [here](https://docs.cloud.oracle.com/iaas/Content/Identity/Concepts/policysyntax.htm).


## Review and customize the function

Review the following files in the current folder:
* the code of the function, [func.py](./func.py)
* its dependencies, [requirements.txt](./requirements.txt)
* the function metadata, [func.yaml](./func.yaml)


## Deploy the function

In Cloud Shell, run the `fn deploy` command to build *this* function and its dependencies as a Docker image,
push the image to the specified Docker registry, and deploy *this* function to Oracle Functions
in the application created earlier:

![user input icon](./images/userinput.png)
```
fn -v deploy --app <app-name>
```
For example,
```
fn -v deploy --app myapp
```

### Test

You can test the function with default values by running the following command, supplying the only mandatory parameter (**compartmentId**). This creates an Object Storage bucket named *metrics-export* (if it does not already exist) and uploads a JSON file containing per-minute average CPU utilization over the preceding hour for all VMs in the specified compartment.

![user input icon](./images/userinput.png)
```
echo '{"compartmentId":"<compartment-ocid>"}' | fn invoke <app-name> <function-name>
```
For example:
```
echo '{"compartmentId":"<your ocid1.compartment.oc1..aaaaxxxxxxx>"}' | fn invoke myapp oci-monitoring-metric-export-python
```
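The uploaded object serializes the Monitoring API's summarize-metrics response. The record below is invented for illustration; its field names follow the SDK's `SummarizeMetricsData` model, and the exact on-disk format is an assumption (the function writes the response via `str()`):

```python
import json

# A single (invented) record in the shape of a summarize_metrics_data
# response; real exports contain one such record per monitored resource.
sample = json.dumps([{
    "namespace": "oci_computeagent",
    "name": "CpuUtilization",
    "dimensions": {"resourceId": "ocid1.instance.oc1..exampleuniqueID"},
    "resolution": "1m",
    "aggregated_datapoints": [
        {"timestamp": "2021-06-01T12:00:00Z", "value": 4.2},
        {"timestamp": "2021-06-01T12:01:00Z", "value": 5.0},
    ],
}])

# After downloading the object, average the datapoints per resource.
for record in json.loads(sample):
    values = [p["value"] for p in record["aggregated_datapoints"]]
    print(record["name"], sum(values) / len(values))
```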

There are several optional parameters for customizing the query and time range. In addition to **compartmentId**, pass any combination of them in the following format:

```
echo '{"compartmentId":"ocid1.compartment.oc1..aaaaxxxxxxx", "parameter_name_1":"<your parameter_value_1>", "parameter_name_2":"<your parameter_value_2>"}' | fn invoke myapp oci-monitoring-metric-export-python
```

For example, to select a specific namespace and a custom query, use this format:

```
echo '{"compartmentId":"ocid1.compartment.oc1..aaaaxxxxxxx", "namespace":"<your namespace>", "query":"<your query>"}' | fn invoke myapp oci-monitoring-metric-export-python
```
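Quoting a JSON body inside single quotes on the command line is error-prone; one way to avoid it is to build the payload with `json.dumps` and pipe the result to `fn invoke`. The OCID below is a placeholder:

```python
import json

# Build the invoke payload programmatically; every key except compartmentId
# is optional and can be omitted to fall back to the documented defaults.
payload = {
    "compartmentId": "ocid1.compartment.oc1..aaaaxxxxxxx",  # placeholder OCID
    "namespace": "oci_computeagent",
    "query": "CpuUtilization[1m].mean()",
}
body = json.dumps(payload)
print(body)
```

Save this as a script (any name you like) and pipe its output: `python build_payload.py | fn invoke myapp oci-monitoring-metric-export-python`.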

The following parameters are supported, along with their default values when not specified.



**namespace**
* Description: Category of metrics (see OCI Monitoring Documentation)
* Default: *oci_computeagent*

**resource_group**
* Description: Subcategory of metrics (see OCI Monitoring Documentation)
* Default: *none*

**query**
* Description: MQL query to filter metrics (see OCI Monitoring Documentation)
* Default: *CpuUtilization[1m].mean()*

**startdtm**
* Description: Timestamp defining start of collection period
* Default: *1 hour prior to current time*

**enddtm**
* Description: Timestamp defining end of collection period
* Default: *current time*

**resolution**
* Description: Aggregation resolution (see OCI Monitoring Documentation)
* Default: *1m*
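When `startdtm` and `enddtm` are omitted, the function builds a one-hour window ending at the current time, rendered with the format string used in func.py. A minimal sketch of the same computation (using the UTC clock, since the trailing `Z` in the format denotes UTC):

```python
from datetime import datetime, timedelta, timezone

# Reproduce the default collection window: one hour ending now, in the
# "%Y-%m-%dT%H:%M:%S.%fZ" format the function passes to the Monitoring API.
FMT = "%Y-%m-%dT%H:%M:%S.%fZ"
end = datetime.now(timezone.utc)
start = end - timedelta(hours=1)
print(start.strftime(FMT))
print(end.strftime(FMT))
```

Strings in this format can be passed directly as the `startdtm` and `enddtm` parameters to re-run an export for a specific window.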


## Monitoring Functions
Learn how to configure basic observability for your function using metrics, alarms and email alerts:
* [Basic Guidance for Monitoring your Functions](../basic-observability/functions.md)
183 changes: 183 additions & 0 deletions samples/oci-monitoring-metric-export-python/func.py
#
# oci-monitoring-metric-export-python version 1.0.
#
# Copyright (c) 2021 Oracle, Inc. All rights reserved.
# Licensed under the Universal Permissive License v 1.0 as shown at https://oss.oracle.com/licenses/upl.
#


import io
import oci
import requests
import logging
import json
from datetime import datetime, timedelta
from fdk import response



"""
Create Object Storage bucket 'metrics-export' in the specified Compartment, if it does not already exist.
Default access: Private
"""
def createBucketIfNotExists(_source_namespace, _compartmentId, _bucketname, _object_storage_client, logger):
try:
LiveBucketList = set()
LiveBucketResponse = _object_storage_client.list_buckets(_source_namespace, _compartmentId)
for bucket in LiveBucketResponse.data:
LiveBucketList.add(bucket.name)
if _bucketname not in LiveBucketList:
request = oci.object_storage.models.CreateBucketDetails()
request.compartment_id = _compartmentId
request.name = _bucketname
bucket = _object_storage_client.create_bucket(_source_namespace, request)
except Exception as e:
logger.error("Error in createBucketIfNotExists(): {}".format(str(e)))
raise

"""
Delete target objectname if it already exists.
Due to filenames containing embedded timestamps this scenario is rare, but could occur if re-executing a previous export with specific start/end timestamps.
"""
def deleteObjectIfExists(_source_namespace, _bucketname, _objectname, _object_storage_client, logger):
liveObjects = set()
try:
response = _object_storage_client.list_objects(namespace_name=_source_namespace, delimiter='/', bucket_name=_bucketname, fields='name,timeCreated,size')
for obj in response.data.objects:
if (obj.name == _objectname):
_object_storage_client.delete_object(_source_namespace, _bucketname, _objectname)
except Exception as e:
logger.error("Error in deleteObjectIfExists(): {}".format(str(e)))
raise

"""
Perform api call to pull metrics
"""
def export_metrics(monitoring_client, _compartmentId, _namespace, _resource_group, _query, _startdtm, _enddtm, _resolution, logger):
try:
        _dataDetails = oci.monitoring.models.SummarizeMetricsDataDetails()
        _dataDetails.namespace = _namespace
        _dataDetails.query = _query
        if (_resource_group.strip() != ""):
            _dataDetails.resource_group = _resource_group
        _dataDetails.start_time = _startdtm
        _dataDetails.end_time = _enddtm
        _dataDetails.resolution = _resolution

        logger.debug("resource_group: {}".format(_resource_group))
        logger.debug("SummarizeMetricsDataDetails: {}".format(_dataDetails))

        summarize_metrics_data_response = monitoring_client.summarize_metrics_data(
            compartment_id=_compartmentId, summarize_metrics_data_details=_dataDetails)
return summarize_metrics_data_response.data
except Exception as e:
logger.error("Error in export_metrics(): {}".format(str(e)))
raise
"""
Upload (put) metrics json file to bucket 'metrics-export'
"""
def putObject(_source_namespace, _bucketname, _objectname, _content, _object_storage_client, logger):
try:
put_object_response = _object_storage_client.put_object(_source_namespace, _bucketname, _objectname, _content)
except Exception as e:
logger.error("Error in putObject(): {}".format(str(e)))
raise


"""
Entrypoint and initialization
"""
def handler(ctx, data: io.BytesIO=None):
logger = logging.getLogger()
logger.info("function start")
signer = oci.auth.signers.get_resource_principals_signer()
configinfo = {'region': signer.region, 'tenancy': signer.tenancy_id}
monitoring_client = oci.monitoring.MonitoringClient(config={}, signer=signer)
object_storage_client = oci.object_storage.ObjectStorageClient(config={}, signer=signer)
source_namespace = object_storage_client.get_namespace(retry_strategy=oci.retry.DEFAULT_RETRY_STRATEGY).data

# Retrieve the Function configuration values
# Parse input parameters and assign default values as needed
try:
cfg = dict(ctx.Config())
try:
_compartmentId = cfg["compartmentId"]
except Exception as e:
logger.error('Mandatory key compartmentId not defined')
raise

try:
_namespace = cfg["namespace"]
except:
logger.info('Optional configuration key namespace unavailable. Will assign default value')
_namespace = "oci_computeagent"

try:
_resource_group = cfg["resource_group"]
except:
logger.info('Optional configuration key resource_group unavailable. Will assign default value')
_resource_group = ""

try:
_query = cfg["query"]
except:
logger.info('Optional configuration key query unavailable. Will assign default value')
_query = "CpuUtilization[1m].mean()"

try:
_startdtm = cfg["startdtm"]
except:
logger.info('Optional configuration key startdtm unavailable. Will assign default value')
            _startdtm = (datetime.utcnow() - timedelta(hours=1)).strftime("%Y-%m-%dT%H:%M:%S.%fZ")

try:
_enddtm = cfg["enddtm"]
except:
logger.info('Optional configuration key enddtm unavailable. Will assign default value')
            _enddtm = datetime.utcnow().strftime("%Y-%m-%dT%H:%M:%S.%fZ")

try:
_resolution = cfg["resolution"]
except:
logger.info('Optional configuration key resolution unavailable. Will assign default value')
_resolution = "1m"


bucketname = "metrics-export"
dt_string = _enddtm

if (_resource_group.strip()):
objectname = _namespace + "-" + _resource_group + "-" + dt_string[0:19] + ".json"
else:
objectname = _namespace + "-" + dt_string[0:19] + ".json"

logger.info("compartmentId: {}".format(str(_compartmentId)))
logger.info("namespace: {}".format(str(_namespace)))
logger.info("resource_group: {}".format(str(_resource_group)))
logger.info("query: {}".format(str(_query)))
logger.info("startdtm: {}".format(str(_startdtm)))
logger.info("enddtm: {}".format(str(_enddtm)))
logger.info("resolution: {}".format(str(_resolution)))
logger.info("source_namespace: {}".format(str(source_namespace)))

except Exception as e:
logger.error("Error in retrieving and assigning configuration values")
raise

# Main tasks
try:
createBucketIfNotExists(source_namespace, _compartmentId, bucketname, object_storage_client, logger)
deleteObjectIfExists(source_namespace, bucketname, objectname, object_storage_client, logger)
listContent = export_metrics(monitoring_client, _compartmentId, _namespace, _resource_group, _query, _startdtm, _enddtm, _resolution, logger)
putObject(source_namespace, bucketname, objectname, str(listContent), object_storage_client, logger)
except Exception as e:
logger.error("Error in main process: {}".format(str(e)))
raise

    # The function is complete; return an empty JSON response
logger.info("function end")
return response.Response(
ctx,
response_data="",
headers={"Content-Type": "application/json"}
)
7 changes: 7 additions & 0 deletions samples/oci-monitoring-metric-export-python/func.yaml
schema_version: 20180708
name: oci-monitoring-metric-export-python
version: 0.0.8
runtime: python
entrypoint: /python/bin/fdk /function/func.py handler
memory: 256

3 changes: 3 additions & 0 deletions samples/oci-monitoring-metric-export-python/requirements.txt
fdk
requests
oci