
I have a cluster in VirtualBox to learn Kubernetes. I have a deployment that contains MySQL and phpMyAdmin. I created a DaemonSet that runs the fluentd image and collects the logs to forward them to Elasticsearch at IP 10.0.2.11.

I don't understand why it doesn't connect to Elasticsearch and this log appears:

2023-08-28 11:28:49 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-08-28T11:28:48.393333576Z stdout F 2023-08-28 11:28:48.393 [INFO][64] monitor-addresses/autodetection_methods.go 103: Using autodetected IPv4 address on interface enp0s8: 192.168.88.34/24"
2023-08-28 11:29:03 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-08-28T11:29:02.751461094Z stdout F 2023-08-28 11:29:02.751 [INFO][60] felix/summary.go 100: Summarising 22 dataplane reconciliation loops over 1m1.5s: avg=19ms longest=172ms (resync-filter-v4,resync-mangle-v4,update-filter-v4)"
2023-08-28 11:29:32 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-08-28T11:29:31.817507758Z stdout F 2023-08-28 11:29:31.817 [INFO][60] felix/int_dataplane.go 1836: Received *proto.HostMetadataV4V6Update update from calculation graph msg=hostname:\"k8s-worker1\" ipv4_addr:\"192.168.88.34/24\" labels:<key:\"beta.kubernetes.io/arch\" value:\"amd64\" > labels:<key:\"beta.kubernetes.io/os\" value:\"linux\" > labels:<key:\"kubernetes.io/arch\" value:\"amd64\" > labels:<key:\"kubernetes.io/hostname\" value:\"k8s-worker1\" > labels:<key:\"kubernetes.io/os\" value:\"linux\" > "
2023-08-28 11:29:49 +0000 [warn]: #0 [in_tail_container_logs] pattern not matched: "2023-08-28T11:29:48.394164263Z stdout F 2023-08-28 11:29:48.393 [INFO][64] monitor-addresses/autodetection_methods.go 103: Using autodetected IPv4 address on interface enp0s8: 192.168.88.34/24"
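The warnings can be reproduced outside fluentd. The ConfigMap below defaults the tail parser to `json` (docker's json-file log driver format), while the quoted lines have the containerd/CRI-O shape `<timestamp> <stream> <P|F> <message>`. A minimal sketch checking one of the quoted lines against both shapes (the CRI regex here is an assumption modeled on the common CRI parser pattern, not taken from the question):

```python
import json
import re

# One of the lines fluentd reports as "pattern not matched"
sample = (
    "2023-08-28T11:28:48.393333576Z stdout F "
    "2023-08-28 11:28:48.393 [INFO][64] monitor-addresses/autodetection_methods.go 103: "
    "Using autodetected IPv4 address on interface enp0s8: 192.168.88.34/24"
)

# The ConfigMap's parser defaults to @type json, so fluentd tries to parse
# each line as a JSON document -- which this line is not
try:
    json.loads(sample)
    parsed_as_json = True
except json.JSONDecodeError:
    parsed_as_json = False
print(parsed_as_json)  # -> False: hence the "pattern not matched" warning

# containerd/CRI-O runtimes write "<time> <stream> <P|F> <log>" lines instead;
# a regex of this shape matches them
cri_line = re.compile(
    r"^(?P<time>\S+) (?P<stream>stdout|stderr) (?P<logtag>[FP]) (?P<log>.*)$"
)
match = cri_line.match(sample)
print(match.group("stream"), match.group("logtag"))  # -> stdout F
```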

configmap.yaml

kind: ConfigMap
metadata:
  name: fluentd-config
  namespace: fluentd
data:
  fluent.conf: |-
    @include ignore_fluent_logs.conf
    @include containers.conf
    @include kubernetes.conf
    @include pods-with-annotation.conf
    #@include file-fluent.conf
    @include elasticsearch.conf
  ignore_fluent_logs.conf: |-
    # Do not collect fluentd logs
    <label @FLUENT_LOG>
      <match fluent.**>
        @type null
        @id ignore_fluent_logs
      </match>
    </label>
  tail_container_parse.conf: |-
    <parse>
      @type "#{ENV['FLUENT_CONTAINER_TAIL_PARSER_TYPE'] || 'json'}"
      time_format "#{ENV['FLUENT_CONTAINER_TAIL_PARSER_TIME_FORMAT'] || '%Y-%m-%dT%H:%M:%S.%NZ'}"
    </parse>
  containers.conf: |-
    <source>
      @type tail
      @id in_tail_container_logs
      path "#{ENV['FLUENT_CONTAINER_TAIL_PATH'] || '/var/log/containers/*.log'}"
      pos_file /var/log/fluentd-containers.log.pos
      tag "#{ENV['FLUENT_CONTAINER_TAIL_TAG'] || 'kubernetes.*'}"
      exclude_path "#{ENV['FLUENT_CONTAINER_TAIL_EXCLUDE_PATH'] || use_default}"
      read_from_head true
      @include tail_container_parse.conf
    </source>
  kubernetes.conf: |-
    <filter kubernetes.**>
      @type kubernetes_metadata
      @id filter_kube_metadata
      annotation_match [ "fluentd.active"]
      de_dot false
      kubernetes_url "#{ENV['FLUENT_FILTER_KUBERNETES_URL'] || 'https://' + ENV.fetch('KUBERNETES_SERVICE_HOST') + ':' + ENV.fetch('KUBERNETES_SERVICE_PORT') + '/api'}"
      verify_ssl "#{ENV['KUBERNETES_VERIFY_SSL'] || true}"
      ca_file "#{ENV['KUBERNETES_CA_FILE']}"
      skip_labels "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_LABELS'] || 'false'}"
      skip_container_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_CONTAINER_METADATA'] || 'false'}"
      skip_master_url "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_MASTER_URL'] || 'false'}"
      skip_namespace_metadata "#{ENV['FLUENT_KUBERNETES_METADATA_SKIP_NAMESPACE_METADATA'] || 'false'}"
      watch "#{ENV['FLUENT_KUBERNETES_WATCH'] || 'true'}"
    </filter>
  pods-with-annotation.conf: |-
    # Filter records with annotation fluentd.active=true
    <filter kubernetes.**>
      @type grep
      <regexp>
        key $["kubernetes"]["annotations"]["fluentd.active"]
        pattern "^true$"
      </regexp>
    </filter>
    <filter kubernetes.**>
      @type record_transformer
      remove_keys $.docker.container_id,$.kubernetes.container_image_id,$.kubernetes.pod_id,$.kubernetes.namespace_id,$.kubernetes.master_url,$.kubernetes.labels.pod-template-hash
    </filter>
  file-fluent.conf: |-
    <match **>
      @type file
      path /tmp/file-test.log
    </match>
  elasticsearch.conf: |-
    <match **>
      @type elasticsearch
      @id out_es
      @log_level info
      include_tag_key true
      host "#{ENV['FLUENT_ELASTICSEARCH_HOST']}"
      port "#{ENV['FLUENT_ELASTICSEARCH_PORT']}"
      path "#{ENV['FLUENT_ELASTICSEARCH_PATH']}"
      scheme "#{ENV['FLUENT_ELASTICSEARCH_SCHEME'] || 'http'}"
      ssl_verify "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERIFY'] || 'true'}"
      ssl_version "#{ENV['FLUENT_ELASTICSEARCH_SSL_VERSION'] || 'TLSv1_2'}"
      user "#{ENV['FLUENT_ELASTICSEARCH_USER'] || use_default}"
      password "#{ENV['FLUENT_ELASTICSEARCH_PASSWORD'] || use_default}"
      reload_connections "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_CONNECTIONS'] || 'false'}"
      reconnect_on_error "#{ENV['FLUENT_ELASTICSEARCH_RECONNECT_ON_ERROR'] || 'true'}"
      reload_on_failure "#{ENV['FLUENT_ELASTICSEARCH_RELOAD_ON_FAILURE'] || 'true'}"
      log_es_400_reason "#{ENV['FLUENT_ELASTICSEARCH_LOG_ES_400_REASON'] || 'false'}"
      logstash_prefix "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_PREFIX'] || 'fluentd'}"
      logstash_dateformat "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_DATEFORMAT'] || '%Y.%m.%d'}"
      logstash_format "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_FORMAT'] || 'true'}"
      index_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_INDEX_NAME'] || 'fluentd'}"
      target_index_key "#{ENV['FLUENT_ELASTICSEARCH_TARGET_INDEX_KEY'] || use_nil}"
      type_name "#{ENV['FLUENT_ELASTICSEARCH_LOGSTASH_TYPE_NAME'] || 'fluentd'}"
      include_timestamp "#{ENV['FLUENT_ELASTICSEARCH_INCLUDE_TIMESTAMP'] || 'false'}"
      template_name "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_NAME'] || use_nil}"
      template_file "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_FILE'] || use_nil}"
      template_overwrite "#{ENV['FLUENT_ELASTICSEARCH_TEMPLATE_OVERWRITE'] || use_default}"
      sniffer_class_name "#{ENV['FLUENT_SNIFFER_CLASS_NAME'] || 'Fluent::Plugin::ElasticsearchSimpleSniffer'}"
      request_timeout "#{ENV['FLUENT_ELASTICSEARCH_REQUEST_TIMEOUT'] || '5s'}"
      application_name "#{ENV['FLUENT_ELASTICSEARCH_APPLICATION_NAME'] || use_default}"
      suppress_type_name "#{ENV['FLUENT_ELASTICSEARCH_SUPPRESS_TYPE_NAME'] || 'true'}"
      enable_ilm "#{ENV['FLUENT_ELASTICSEARCH_ENABLE_ILM'] || 'false'}"
      ilm_policy_id "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_ID'] || use_default}"
      ilm_policy "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY'] || use_default}"
      ilm_policy_overwrite "#{ENV['FLUENT_ELASTICSEARCH_ILM_POLICY_OVERWRITE'] || 'false'}"
      <buffer>
        flush_thread_count "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_THREAD_COUNT'] || '8'}"
        flush_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_FLUSH_INTERVAL'] || '5s'}"
        chunk_limit_size "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_CHUNK_LIMIT_SIZE'] || '2M'}"
        queue_limit_length "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_QUEUE_LIMIT_LENGTH'] || '32'}"
        retry_max_interval "#{ENV['FLUENT_ELASTICSEARCH_BUFFER_RETRY_MAX_INTERVAL'] || '30'}"
        retry_forever true
      </buffer>
    </match>
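Since `tail_container_parse.conf` reads the parser type from `FLUENT_CONTAINER_TAIL_PARSER_TYPE` (defaulting to `json`), one way to accommodate containerd's CRI log lines is to override that variable on the DaemonSet. A hypothetical excerpt of the DaemonSet container spec, under the assumption that the image ships a `cri` parser (the fluentd-kubernetes-daemonset images do via fluent-plugin-parser-cri) and that Elasticsearch listens on its default port:

```yaml
# Hypothetical excerpt of the fluentd DaemonSet container spec -- adjust to your manifest.
env:
  - name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
    value: "cri"            # containerd/CRI-O write "<time> <stream> <F|P> <log>" lines, not JSON
  - name: FLUENT_ELASTICSEARCH_HOST
    value: "10.0.2.11"      # the Elasticsearch IP from the question
  - name: FLUENT_ELASTICSEARCH_PORT
    value: "9200"           # assumed default Elasticsearch port
```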
  • Welcome to Server Fault! Please format console output / settings / log lines as "code" using Markdown and/or the formatting options in the edit menu to properly typeset your posts. That improves readability and attracts better answers, and it will allow people willing to answer your question to copy-and-paste your commands/settings/error messages into their answer. Commented Aug 29, 2023 at 9:28
  • Thank you for understanding and for the correction Commented Aug 29, 2023 at 10:56

