
I'm trying to set up an ELK stack (Kibana, Logstash and Elasticsearch) in Portainer, which should receive logs from PCs around the world.

What I'm not sure about is what a proper setup should look like so that performance is good for the clients.

Let's say Portainer is running in Europe in a Docker container and there are users in America, Europe, Australia and Asia.

What would be a proper setup? I guess I need a server on each continent in my stack, but how do I redirect the logs to the "fastest" endpoint?

It would be great if anyone could point me to some keywords and articles that show what this setup could look like.

Right now I have the ELK stack on my local computer with Docker and send UDP messages (JSON content) to my stack. The frequency of these messages can be several per second per client, and there are around 600-700 clients (maybe more in the future) online at the same time around the world.
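The client side boils down to something like this (a simplified Python sketch; the host and application values are placeholders, and the field names match the rename mappings in my logstash.conf below):

    import json
    import socket
    from datetime import datetime, timezone

    # Placeholder endpoint; in my current setup this is the local Logstash host.
    LOGSTASH_HOST = "127.0.0.1"
    LOGSTASH_PORT = 5044  # matches the udp input in logstash.conf

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_log(message: str, level: str = "INFO") -> None:
        # Field names mirror what the logstash filter expects.
        payload = {
            "message": message,
            "logLevel": level,
            "application": "my-app",  # placeholder
            "username": "jdoe",       # placeholder
            "computer": socket.gethostname(),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        # UDP is fire-and-forget: no delivery guarantee and no backpressure.
        sock.sendto(json.dumps(payload).encode("utf-8"), (LOGSTASH_HOST, LOGSTASH_PORT))

    send_log("application started")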

My elasticsearch.yml

    cluster.name: docker-cluster
    network.host: 0.0.0.0
    xpack.license.self_generated.type: trial
    xpack.security.enabled: true

Here is my docker-compose.yml

    services:
      setup:
        profiles:
          - setup
        build:
          context: setup/
          args:
            ELASTIC_VERSION: ${ELASTIC_VERSION}
        init: true
        volumes:
          - ./setup/entrypoint.sh:/entrypoint.sh:ro,Z
          - ./setup/lib.sh:/lib.sh:ro,Z
          - ./setup/roles:/roles:ro,Z
        environment:
          ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
          LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
          KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
          METRICBEAT_INTERNAL_PASSWORD: ${METRICBEAT_INTERNAL_PASSWORD:-}
          FILEBEAT_INTERNAL_PASSWORD: ${FILEBEAT_INTERNAL_PASSWORD:-}
          HEARTBEAT_INTERNAL_PASSWORD: ${HEARTBEAT_INTERNAL_PASSWORD:-}
          MONITORING_INTERNAL_PASSWORD: ${MONITORING_INTERNAL_PASSWORD:-}
          BEATS_SYSTEM_PASSWORD: ${BEATS_SYSTEM_PASSWORD:-}
        networks:
          - elk
        depends_on:
          - elasticsearch

      elasticsearch:
        build:
          context: elasticsearch/
          args:
            ELASTIC_VERSION: ${ELASTIC_VERSION}
        volumes:
          - ./elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro,Z
          - elasticsearch:/usr/share/elasticsearch/data:Z
        ports:
          - 9200:9200
          - 9300:9300
        environment:
          node.name: elasticsearch
          ES_JAVA_OPTS: -Xms512m -Xmx512m
          ELASTIC_PASSWORD: ${ELASTIC_PASSWORD:-}
          discovery.type: single-node
        networks:
          - elk
        restart: unless-stopped

      logstash:
        build:
          context: logstash/
          args:
            ELASTIC_VERSION: ${ELASTIC_VERSION}
        volumes:
          - ./logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml:ro,Z
          - ./logstash/pipeline:/usr/share/logstash/pipeline:ro,Z
        ports:
          - 5044:5044/udp
          - 50000:50000/tcp
          - 9600:9600
        environment:
          LS_JAVA_OPTS: -Xms256m -Xmx256m
          LOGSTASH_INTERNAL_PASSWORD: ${LOGSTASH_INTERNAL_PASSWORD:-}
        networks:
          - elk
        depends_on:
          - elasticsearch
        restart: unless-stopped

      kibana:
        build:
          context: kibana/
          args:
            ELASTIC_VERSION: ${ELASTIC_VERSION}
        volumes:
          - ./kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml:ro,Z
        ports:
          - 5601:5601
        environment:
          KIBANA_SYSTEM_PASSWORD: ${KIBANA_SYSTEM_PASSWORD:-}
        networks:
          - elk
        depends_on:
          - elasticsearch
        restart: unless-stopped

    networks:
      elk:
        driver: bridge

    volumes:
      elasticsearch:

my logstash.conf

    input {
      tcp {
        port => 50000
      }
      udp {
        port => 5044
        codec => json
      }
    }

    filter {
      json {
        source => "message"
        target => "parsed_message"
        skip_on_invalid_json => true
      }
      mutate {
        rename => {
          "[parsed_message][message]" => "message"
          "[parsed_message][logLevel]" => "logLevel"
          "[parsed_message][application]" => "application"
          "[parsed_message][username]" => "username"
          "[parsed_message][computer]" => "computer"
          "[parsed_message][timestamp]" => "timestamp"
        }
      }
    }

    output {
      elasticsearch {
        #index => "logstash-%{+YYYY.MM.dd}"
        #hosts => "elasticsearch:9200"
        hosts => ["elasticsearch:9200"]
        data_stream => "true"
        data_stream_type => "logs"
        data_stream_dataset => "logstash"
        data_stream_namespace => "default"
        user => "logstash_internal"
        password => "${LOGSTASH_INTERNAL_PASSWORD}"
      }
    }

my kibana.yml

    xpack.fleet.agents.fleet_server.hosts: [ http://fleet-server:8220 ]

    xpack.fleet.outputs:
      - id: fleet-default-output
        name: default
        type: elasticsearch
        hosts: [ http://elasticsearch:9200 ]
        is_default: true
        is_default_monitoring: true

    xpack.fleet.packages:
      - name: fleet_server
        version: latest
      - name: system
        version: latest
      - name: elastic_agent
        version: latest
      - name: docker
        version: latest
      - name: apm
        version: latest

    xpack.fleet.agentPolicies:
      - name: Fleet Server Policy
        id: fleet-server-policy
        description: Static agent policy for Fleet Server
        monitoring_enabled:
          - logs
          - metrics
        package_policies:
          - name: fleet_server-1
            package:
              name: fleet_server
          - name: system-1
            package:
              name: system
          - name: elastic_agent-1
            package:
              name: elastic_agent
          - name: docker-1
            package:
              name: docker
      - name: Agent Policy APM Server
        id: agent-policy-apm-server
        description: Static agent policy for the APM Server integration
        monitoring_enabled:
          - logs
          - metrics
        package_policies:
          - name: system-1
            package:
              name: system
          - name: elastic_agent-1
            package:
              name: elastic_agent
          - name: apm-1
            package:
              name: apm
            # See the APM package manifest for a list of possible inputs.
            # https://github.com/elastic/apm-server/blob/v8.5.0/apmpackage/apm/manifest.yml#L41-L168
            inputs:
              - type: apm
                vars:
                  - name: host
                    value: 0.0.0.0:8200
                  - name: url
                    value: http://apm-server:8200

1 Answer


For a self-hosted or cloud-hosted clustered setup, you would want geo-region DNS (GeoDNS) in front of your main front-end, so that each client resolves to the region closest to it. Most higher-level DNS providers can do this; see https://www.cloudns.net/geodns/ for example.
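If you also want a client-side fallback (or you don't control the DNS), the clients can pick the "fastest" region themselves. A minimal sketch, assuming hypothetical per-region hostnames and using a TCP connect to the Logstash TCP input (port 50000 in your logstash.conf) as a latency proxy:

    import socket
    import time

    # Hypothetical regional Logstash endpoints; UDP logs then go to the winner on 5044.
    ENDPOINTS = [
        ("logs-eu.example.com", 50000),
        ("logs-us.example.com", 50000),
        ("logs-ap.example.com", 50000),
    ]

    def connect_time(host: str, port: int, timeout: float = 2.0) -> float:
        """TCP connect time to host:port, or +inf if unreachable."""
        start = time.monotonic()
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return time.monotonic() - start
        except OSError:
            return float("inf")

    # Probe every region once (e.g. at client startup) and keep the lowest latency.
    host, port = min(ENDPOINTS, key=lambda ep: connect_time(*ep))
    print(f"sending logs to {host}")

GeoDNS keeps that logic out of the clients entirely, which is usually preferable; the probing approach only makes sense when you control the client code anyway.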

Caution! It is not advisable to expose your ELK stack, much less Kibana, on public IP space.

It is far wiser to keep it behind a VPN. If you don't use a VPN and choose a Zero Trust-style delivery instead, then ZT providers like Cloudflare can route users to the geo region closest to them; see https://developers.cloudflare.com/data-localization/how-to/zero-trust/.

In either case, the clustered setup is highly specific, not only to the feature being clustered but even more so to how you balance the load on it (what each part is doing when, how users/jobs are shared, and how recovery/upgrades/backups happen).

Could you expand your vision?

Of interest: how and where you are going to host it, how you expect your users to reach it, and what kind of perimeter/login surface you plan to put around your environment(s).

There are ways to add a container with config that could start getting you close to your vision, but more information about the packaging and expectations would help.
