@@ -390,15 +389,15 @@ An example event for `executor` looks as following:
         "version": "8.5.1"
     },
     "elastic_agent": {
-        "id": "c5e2a51e-e10a-4561-9861-75b38aa09f4b",
+        "id": "a6bdbb4a-4bac-4243-83cb-dba157f24987",
         "snapshot": false,
-        "version": "8.1.0"
+        "version": "8.8.0"
     },
     "event": {
         "agent_id_status": "verified",
         "dataset": "apache_spark.executor",
-        "duration": 32964497,
-        "ingested": "2022-04-11T08:29:59Z",
+        "duration": 2849184715,
+        "ingested": "2023-09-28T09:26:49Z",
        "kind": "metric",
         "module": "apache_spark",
         "type": "info"
@@ -407,21 +406,18 @@ An example event for `executor` looks as following:
         "architecture": "x86_64",
         "containerized": true,
         "hostname": "docker-fleet-agent",
-        "ip": [
-            "172.23.0.7"
-        ],
-        "mac": [
-            "02:42:ac:17:00:07"
-        ],
+        "id": "e8978f2086c14e13b7a0af9ed0011d19",
+        "ip": "172.20.0.7",
+        "mac": "02-42-AC-14-00-07",
         "name": "docker-fleet-agent",
         "os": {
             "codename": "focal",
             "family": "debian",
-            "kernel": "5.4.0-107-generic",
+            "kernel": "3.10.0-1160.90.1.el7.x86_64",
             "name": "Ubuntu",
             "platform": "ubuntu",
             "type": "linux",
-            "version": "20.04.3 LTS (Focal Fossa)"
+            "version": "20.04.6 LTS (Focal Fossa)"
         }
     },
     "metricset": {
@@ -440,6 +436,7 @@ An example event for `executor` looks as following:
 | Field | Description | Type |
 |---|---|---|
 |@timestamp| Event timestamp. | date |
+| agent.id | Unique identifier of this agent (if one exists). Example: For Beats this would be beat.id. | keyword |
 | apache_spark.executor.application_name | Name of application. | keyword |
 | apache_spark.executor.bytes.read | Total number of bytes read. | long |
 | apache_spark.executor.bytes.written | Total number of bytes written. | long |
@@ -470,6 +467,7 @@ An example event for `executor` looks as following:
 | apache_spark.executor.id | ID of executor. | keyword |
 | apache_spark.executor.jvm.cpu_time | Elapsed CPU time the JVM spent. | long |
 | apache_spark.executor.jvm.gc_time | Elapsed time the JVM spent in garbage collection while executing this task. | long |
+| apache_spark.executor.mbean | The name of the jolokia mbean. | keyword |
 | apache_spark.executor.memory.direct_pool | Peak memory that the JVM is using for direct buffer pool. | long |
 | apache_spark.executor.memory.jvm.heap | Peak memory usage of the heap that is used for object allocation. | long |
 | apache_spark.executor.memory.jvm.off_heap | Peak memory usage of non-heap memory that is used by the Java virtual machine. | long |
@@ -509,6 +507,12 @@ An example event for `executor` looks as following:
 | apache_spark.executor.threadpool.current_pool_size | The size of the current thread pool of the executor. | long |
 | apache_spark.executor.threadpool.max_pool_size | The maximum size of the thread pool of the executor. | long |
 | apache_spark.executor.threadpool.started_tasks | The number of tasks started in the thread pool of the executor. | long |
+| cloud.account.id | The cloud account or organization id used to identify different entities in a multi-tenant environment. Examples: AWS account id, Google Cloud ORG Id, or other unique identifier. | keyword |
+| cloud.availability_zone | Availability zone in which this host, resource, or service is located. | keyword |
+| cloud.instance.id | Instance ID of the host machine. | keyword |
+| cloud.provider | Name of the cloud provider. Example values are aws, azure, gcp, or digitalocean. | keyword |
+| cloud.region | Region in which this host, resource, or service is located. | keyword |
+| container.id | Unique container id. | keyword |
 | data_stream.dataset | Data stream dataset. | constant_keyword |
 | data_stream.namespace | Data stream namespace. | constant_keyword |
 | data_stream.type | Data stream type. | constant_keyword |
@@ -519,6 +523,7 @@ An example event for `executor` looks as following:
 | event.module | Name of the module this data is coming from. If your monitoring agent supports the concept of modules or plugins to process events of a given source (e.g. Apache logs), `event.module` should contain the name of this module. | keyword |
 | event.type | This is one of four ECS Categorization Fields, and indicates the third level in the ECS category hierarchy. `event.type` represents a categorization "sub-bucket" that, when used along with the `event.category` field values, enables filtering events down to a level appropriate for single visualization. This field is an array. This will allow proper categorization of some events that fall in multiple event types. | keyword |
 | host.ip | Host ip addresses. | ip |
+| host.name | Name of the host. It can contain what `hostname` returns on Unix systems, the fully qualified domain name, or a name specified by the user. The sender decides which value to use. | keyword |
 | service.address | Address where data about this service was collected from. This should be a URI, network address (ipv4:port or [ipv6]:port) or a resource path (sockets). | keyword |
 | service.type | The type of the service data is collected from. The type can be used to group and correlate logs and metrics from one service type. Example: If logs or metrics are collected from Elasticsearch, `service.type` would be `elasticsearch`. | keyword |
 | tags | List of keywords used to tag each event. | keyword |
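The field additions above (`agent.id`, `cloud.*`, `container.id`, `host.name`) follow ECS dotted naming, which maps to nested objects in the stored documents. As a minimal sketch of what a consumer of this data stream might do with the new fields — the field values below are illustrative, copied from the example event in this diff, and the helper function is hypothetical, not part of the integration:

```python
# Minimal sketch: filtering documents from the apache_spark.executor data
# stream by the newly documented ECS fields. Sample values are taken from
# the example event in this diff; executor_events() is a hypothetical helper.
events = [
    {
        "event": {"dataset": "apache_spark.executor", "kind": "metric"},
        "agent": {"id": "a6bdbb4a-4bac-4243-83cb-dba157f24987"},
        "host": {"name": "docker-fleet-agent", "ip": "172.20.0.7"},
    },
    {
        # A document from a different dataset, to show the filter excluding it.
        "event": {"dataset": "apache_spark.application", "kind": "metric"},
        "agent": {"id": "a6bdbb4a-4bac-4243-83cb-dba157f24987"},
        "host": {"name": "docker-fleet-agent", "ip": "172.20.0.7"},
    },
]

def executor_events(docs, host_name):
    """Select apache_spark.executor events emitted by a given host.name."""
    return [
        d for d in docs
        if d["event"]["dataset"] == "apache_spark.executor"
        and d["host"]["name"] == host_name
    ]

print(len(executor_events(events, "docker-fleet-agent")))  # 1
```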