spujadas/elk-docker

License is not available

seefor opened this issue · 19 comments

{"type":"log","@timestamp":"2021-04-11T14:38:57+00:00","tags":["error","plugins","security","authentication"],"pid":395,"message":"License is not available, authentication is not possible."}


Could you please provide steps to reproduce your issue?
My initial thought is that you’re using features that require a licence.
Also linking to elastic/kibana#89828 which may or may not be related.

I just ran the Docker image with "sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elkstack sebp/elk",
and it just returned that license message.
I will test again :)

I waited about 30 minutes and still nothing.
Here is what shows up in the logs:
==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2021-04-13T20:04:55+00:00","tags":["warning","elasticsearch"],"pid":394,"message":"Unable to revive connection: http://localhost:9200/"}
{"type":"log","@timestamp":"2021-04-13T20:04:55+00:00","tags":["warning","elasticsearch"],"pid":394,"message":"No living connections"}
{"type":"log","@timestamp":"2021-04-13T20:04:55+00:00","tags":["warning","plugins","licensing"],"pid":394,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}

==> /var/log/logstash/logstash-plain.log <==
[2021-04-13T20:04:59,783][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:05:05,759][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:05:10,767][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:05:15,776][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:05:20,789][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:05:25,798][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Looks as if Elasticsearch isn’t running.
Could you post your start-up logs?
From your earlier comment you seem to be using the image in its default configuration, so there really shouldn’t be any errors related to licensing.
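
For reference, a couple of quick checks from the host can surface those start-up logs and show whether Elasticsearch is listening at all (the container name elkstack comes from the run command quoted above; this is only a rough sketch):

# Follow the container's console output (the image tails its service logs there)
sudo docker logs -f elkstack

# Check whether Elasticsearch answers on the published port
curl http://localhost:9200/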

Here are the logs I think you are referring to. You are correct, I did not change anything; I just tried the "latest" image.

sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elkstack sebp/elk
Unable to find image 'sebp/elk:latest' locally
latest: Pulling from sebp/elk
23884877105a: Pull complete
bc38caa0f5b9: Pull complete
2910811b6c42: Pull complete
36505266dcc6: Pull complete
07a923053093: Pull complete
776b7eaf0a03: Pull complete
bc700a6f34b5: Pull complete
3a90de61e394: Pull complete
86ce4e2869dd: Pull complete
89a474a625c0: Pull complete
6173f35c87b9: Pull complete
6445d0fade5a: Pull complete
feb799dbe978: Pull complete
5623dd8d72e6: Pull complete
18f3ecd7bb17: Pull complete
42ced68f3e66: Pull complete
d53386b238a3: Pull complete
b42b5336ef04: Pull complete
448dee6985dc: Pull complete
e7add3131257: Pull complete
07da939aa0f7: Pull complete
f031f46a52e1: Pull complete
ade96c1e7721: Pull complete
9d2659306801: Pull complete
0057f4f22e69: Pull complete
dd8bc91e2d4f: Pull complete
7936850402fd: Pull complete
8efe4dc2228b: Pull complete
484fba852b76: Pull complete
b88138ce4de4: Pull complete
efeeb6006cca: Pull complete
620521285c0e: Pull complete
7c775f64a465: Pull complete
22c0f80ce63f: Pull complete
4e5901bd4f69: Pull complete
Digest: sha256:c1c950c97b6c8cf03eb6551cbd8a3d72e0fc3b53717be4d70a4c13f3e105ece7
Status: Downloaded newer image for sebp/elk:latest

  • Starting periodic command scheduler cron [ OK ]
  • Starting Elasticsearch Server [ OK ]
    waiting for Elasticsearch to be up (1/30)
    waiting for Elasticsearch to be up (2/30)
    waiting for Elasticsearch to be up (3/30)
    waiting for Elasticsearch to be up (4/30)
    waiting for Elasticsearch to be up (5/30)
    waiting for Elasticsearch to be up (6/30)
    waiting for Elasticsearch to be up (7/30)
    waiting for Elasticsearch to be up (8/30)
    waiting for Elasticsearch to be up (9/30)
    waiting for Elasticsearch to be up (10/30)
    waiting for Elasticsearch to be up (11/30)
    waiting for Elasticsearch to be up (12/30)
    waiting for Elasticsearch to be up (13/30)
    waiting for Elasticsearch to be up (14/30)
    Waiting for Elasticsearch cluster to respond (1/30)
    logstash started.
  • Starting Kibana5 [ OK ]
    ==> /var/log/elasticsearch/elasticsearch.log <==
    [2021-04-13T20:01:20,876][INFO ][o.e.c.m.MetadataIndexTemplateService] [elk] adding template [.monitoring-kibana] for index patterns [.monitoring-kibana-7-*]
    [2021-04-13T20:01:20,913][INFO ][o.e.c.m.MetadataIndexTemplateService] [elk] adding template [.monitoring-logstash] for index patterns [.monitoring-logstash-7-*]
    [2021-04-13T20:01:20,950][INFO ][o.e.c.m.MetadataIndexTemplateService] [elk] adding template [.monitoring-beats] for index patterns [.monitoring-beats-7-*]
    [2021-04-13T20:01:20,983][INFO ][o.e.c.m.MetadataIndexTemplateService] [elk] adding index template [synthetics] for index patterns [synthetics-*-*]
    [2021-04-13T20:01:21,015][INFO ][o.e.c.m.MetadataIndexTemplateService] [elk] adding index template [metrics] for index patterns [metrics-*-*]
    [2021-04-13T20:01:21,049][INFO ][o.e.c.m.MetadataIndexTemplateService] [elk] adding index template [logs] for index patterns [logs-*-*]
    [2021-04-13T20:01:21,081][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [ml-size-based-ilm-policy]
    [2021-04-13T20:01:21,142][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [metrics]
    [2021-04-13T20:01:21,173][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [logs]
    [2021-04-13T20:01:21,211][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [synthetics]

==> /var/log/logstash/logstash-plain.log <==

==> /var/log/kibana/kibana5.log <==

==> /var/log/elasticsearch/elasticsearch.log <==
[2021-04-13T20:01:21,255][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [watch-history-ilm-policy]
[2021-04-13T20:01:21,299][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [ilm-history-ilm-policy]
[2021-04-13T20:01:21,332][INFO ][o.e.x.i.a.TransportPutLifecycleAction] [elk] adding index lifecycle policy [slm-history-ilm-policy]
[2021-04-13T20:01:21,440][INFO ][o.e.l.LicenseService ] [elk] license [dc879de3-8f65-4815-be68-2e95f3c7e425] mode [basic] - valid
[2021-04-13T20:01:21,442][INFO ][o.e.x.s.s.SecurityStatusChangeListener] [elk] Active license is now [BASIC]; Security is disabled

==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2021-04-13T20:01:34+00:00","tags":["info","plugins-service"],"pid":394,"message":"Plugin "osquery" is disabled."}
{"type":"log","@timestamp":"2021-04-13T20:01:35+00:00","tags":["warning","config","deprecation"],"pid":394,"message":"Config key [monitoring.cluster_alerts.email_notifications.email_address] will be required for email notifications to work in 8.0.""}
{"type":"log","@timestamp":"2021-04-13T20:01:37+00:00","tags":["info","plugins-system"],"pid":394,"message":"Setting up [100] plugins: [taskManager,licensing,globalSearch,globalSearchProviders,banners,code,usageCollection,xpackLegacy,telemetryCollectionManager,telemetry,telemetryCollectionXpack,kibanaUsageCollection,newsfeed,securityOss,share,mapsLegacy,kibanaLegacy,translations,esUiShared,legacyExport,embeddable,uiActionsEnhanced,expressions,charts,bfetch,data,home,observability,console,consoleExtensions,apmOss,searchprofiler,painlessLab,grokdebugger,management,indexPatternManagement,advancedSettings,fileUpload,savedObjects,visualizations,visTypeVega,visTypeVislib,visTypeTimelion,timelion,features,licenseManagement,graph,watcher,canvas,visTypeTable,visTypeTagcloud,visTypeMetric,visTypeMarkdown,tileMap,regionMap,visTypeXy,dashboard,dashboardEnhanced,visualize,visTypeTimeseries,inputControlVis,discover,discoverEnhanced,savedObjectsManagement,spaces,security,savedObjectsTagging,maps,lens,reporting,lists,encryptedSavedObjects,cloud,upgradeAssistant,snapshotRestore,fleet,indexManagement,remoteClusters,crossClusterReplication,rollup,indexLifecycleManagement,enterpriseSearch,dataEnhanced,dashboardMode,beatsManagement,transform,ingestPipelines,eventLog,actions,alerts,triggersActionsUi,stackAlerts,ml,securitySolution,case,infra,monitoring,logstash,apm,uptime]"}
{"type":"log","@timestamp":"2021-04-13T20:01:37+00:00","tags":["info","plugins","taskManager"],"pid":394,"message":"TaskManager is identified by the Kibana UUID: ea79b47b-09cb-4fad-9afb-9809fa8693fd"}
{"type":"log","@timestamp":"2021-04-13T20:01:49+00:00","tags":["warning","plugins","security","config"],"pid":394,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-04-13T20:01:49+00:00","tags":["warning","plugins","security","config"],"pid":394,"message":"Session cookies will be transmitted over insecure connections. This is not recommended."}
{"type":"log","@timestamp":"2021-04-13T20:01:53+00:00","tags":["warning","plugins","reporting","config"],"pid":394,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.reporting.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-04-13T20:01:54+00:00","tags":["info","plugins","reporting","config"],"pid":394,"message":"Chromium sandbox provides an additional layer of protection, and is supported for Linux Ubuntu 18.04 OS. Automatically enabling Chromium sandbox."}
{"type":"log","@timestamp":"2021-04-13T20:01:54+00:00","tags":["warning","plugins","encryptedSavedObjects"],"pid":394,"message":"Saved objects encryption key is not set. This will severely limit Kibana functionality. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-04-13T20:01:55+00:00","tags":["warning","plugins","fleet"],"pid":394,"message":"Fleet APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-04-13T20:01:55+00:00","tags":["warning","plugins","actions","actions"],"pid":394,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-04-13T20:01:55+00:00","tags":["warning","plugins","alerts","plugins","alerting"],"pid":394,"message":"APIs are disabled because the Encrypted Saved Objects plugin is missing encryption key. Please set xpack.encryptedSavedObjects.encryptionKey in the kibana.yml or use the bin/kibana-encryption-keys command."}
{"type":"log","@timestamp":"2021-04-13T20:01:55+00:00","tags":["info","plugins","monitoring","monitoring"],"pid":394,"message":"config sourced from: production cluster"}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["info","savedobjects-service"],"pid":394,"message":"Waiting until all Elasticsearch nodes are compatible with Kibana before starting saved objects migrations..."}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["error","elasticsearch"],"pid":394,"message":"Request error, retrying\nGET http://localhost:9200/_xpack?accept_enterprise=true => connect ECONNREFUSED 127.0.0.1:9200"}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["warning","elasticsearch"],"pid":394,"message":"Unable to revive connection: http://localhost:9200/"}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["warning","elasticsearch"],"pid":394,"message":"No living connections"}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["warning","plugins","licensing"],"pid":394,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["warning","plugins","monitoring","monitoring"],"pid":394,"message":"X-Pack Monitoring Cluster Alerts will not be available: No Living connections"}
{"type":"log","@timestamp":"2021-04-13T20:01:56+00:00","tags":["error","savedobjects-service"],"pid":394,"message":"Unable to retrieve version information from Elasticsearch nodes."}

==> /var/log/logstash/logstash-plain.log <==
[2021-04-13T20:02:04,560][INFO ][logstash.runner ] Log4j configuration path used is: /opt/logstash/config/log4j2.properties
[2021-04-13T20:02:04,580][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.12.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.10+9 on 11.0.10+9 +indy +jit [linux-x86_64]"}
[2021-04-13T20:02:04,609][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/opt/logstash/data/queue"}
[2021-04-13T20:02:04,624][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/opt/logstash/data/dead_letter_queue"}
[2021-04-13T20:02:05,212][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"d60cf349-24fe-4baf-b5b0-ca27a3ea37bb", :path=>"/opt/logstash/data/uuid"}
[2021-04-13T20:02:06,432][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2021-04-13T20:02:08,302][INFO ][org.reflections.Reflections] Reflections took 69 ms to scan 1 urls, producing 23 keys and 47 values
[2021-04-13T20:02:09,387][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2021-04-13T20:02:09,567][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:09,583][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["//localhost"]}
[2021-04-13T20:02:09,960][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>8, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1000, "pipeline.sources"=>["/etc/logstash/conf.d/02-beats-input.conf", "/etc/logstash/conf.d/10-syslog.conf", "/etc/logstash/conf.d/11-nginx.conf", "/etc/logstash/conf.d/30-output.conf"], :thread=>"#<Thread:0x79ef1861 run>"}
[2021-04-13T20:02:11,179][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.21}
[2021-04-13T20:02:11,221][INFO ][logstash.inputs.beats ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2021-04-13T20:02:11,585][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2021-04-13T20:02:11,671][INFO ][org.logstash.beats.Server][main][3345ebe133ac5535dc322adb01ae100ae72e039cc30f0496e6819a92c88c4745] Starting server on port: 5044
[2021-04-13T20:02:11,671][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2021-04-13T20:02:14,594][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:19,607][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:24,616][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

==> /var/log/kibana/kibana5.log <==
{"type":"log","@timestamp":"2021-04-13T20:02:26+00:00","tags":["warning","elasticsearch"],"pid":394,"message":"Unable to revive connection: http://localhost:9200/"}
{"type":"log","@timestamp":"2021-04-13T20:02:26+00:00","tags":["warning","elasticsearch"],"pid":394,"message":"No living connections"}
{"type":"log","@timestamp":"2021-04-13T20:02:26+00:00","tags":["warning","plugins","licensing"],"pid":394,"message":"License information could not be obtained from Elasticsearch due to Error: No Living connections error"}

==> /var/log/logstash/logstash-plain.log <==
[2021-04-13T20:02:29,625][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:35,604][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:40,614][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:45,625][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:50,639][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}
[2021-04-13T20:02:55,650][WARN ][logstash.outputs.elasticsearch][main] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://localhost:9200/][Manticore::SocketException] Connection refused (Connection refused)"}

Seems that Elasticsearch stopped running after starting up normally.
Could you check if you can connect to http://localhost:9200/ using a browser?
Could you provide details of your set-up? (It’s possible you’re running out of memory.)
I’d also suggest trying to run Elasticsearch manually within the container to see what happens; that may provide clues that don’t show up in the logs.
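
A rough sketch of that manual check, assuming the image's usual layout (the /opt/elasticsearch install path and the gosu elasticsearch invocation are assumptions; adjust to whatever the container actually contains):

# Open a shell in the running container
sudo docker exec -it elkstack bash

# Inside the container: look for a fatal error in the Elasticsearch log
# (this path matches the log output shown earlier in this thread)
tail -n 100 /var/log/elasticsearch/elasticsearch.log

# Then try starting Elasticsearch in the foreground to watch it fail (or not) live
# (install path and user are assumptions about the image's layout)
gosu elasticsearch /opt/elasticsearch/bin/elasticsearch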

You are correct, it works fine on my Linux box but not on my MacBook. Thanks for pointing me in that direction.

That's my issue: it works great on Linux but not on the Mac. Thanks again.

Glad that helped you understand your issue, but why this image isn’t working well on Macs remains a mystery at this point.

I have the same issue on a Mac.

Same issue with Docker for Windows

Still a mystery :')

So it still doesn't work on Macs?
I am not using this specific Docker image, just docker-compose with the latest Elasticsearch and Kibana, and it's fine.

I can share my docker-compose snippet:

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    container_name: elasticsearch
    environment:
      - cluster.name=integration-test
      - discovery.type=single-node
      - 'ES_JAVA_OPTS=-Xms512m -Xmx512m'
      - logger.level=ERROR
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 65536
        hard: 65536
    ports:
      - '9200:9200'
      - '9300:9300'
    networks:
      - elastic

  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.0
    ports:
      - 5601:5601    
    environment:
      ELASTICSEARCH_URL: http://elasticsearch:9200
      ELASTICSEARCH_HOSTS: http://elasticsearch:9200      
    networks:
      - elastic

networks:
  elastic:
    driver: bridge

It actually works. I allocated 16 GB of RAM and 10 CPU cores to make this image work!

Elasticsearch is very resource-intensive (I'm considering using a managed service from AWS in production).
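
If memory is the bottleneck, capping the JVM heaps the image starts with may also help, rather than simply allocating more RAM to Docker. The elk-docker documentation describes ES_HEAP_SIZE and LS_HEAP_SIZE overrides; taking those variable names as an assumption, a run could look like this:

# Start the stack with smaller Elasticsearch and Logstash heaps
# (variable names assumed from the image's documentation; verify against the version you run)
sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it \
  -e ES_HEAP_SIZE="1g" -e LS_HEAP_SIZE="512m" \
  --name elkstack sebp/elk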

@manjotsk I have it running fine on 2 GB of RAM and 2 CPUs on Ubuntu Server. I don't have tons of data going through it, about 50K DNS queries an hour.

Do you mean standalone Elasticsearch or the elk-docker image?

I guess I should think before jumping to conclusions! It is strange that Elasticsearch fell over with 8 GB of RAM and 8 CPUs on my MacBook Pro i9! However, increasing the limits helped make the elk-docker image stable on my MacBook!


I had a similar issue with the elk-docker image: license not available, not working correctly, stopping after a while. Then I read this thread and #350 and found out it could be a memory issue, and indeed it was.

MBP Specs:

MacBook Pro (16-inch, 2019)
Processor: 2.4 GHz 8-Core Intel Core i9
Memory: 64 GB 2667 MHz DDR4
Graphics: Intel UHD Graphics 630 1536 MB
OS: macOS Big Sur Version 11.5

Start command:
sudo docker run -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elkstack sebp/elk
Dashboard link: http://localhost:5601/app/home#/
API link: http://localhost:9200/

Solution:
I set my Docker setup to 8 GB of memory and 2 GB of swap.
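
As a quick way to confirm how much memory Docker actually has available, and how much the container is using, something like this should do (purely a verification sketch; elkstack matches the container name used above):

# Total memory available to the Docker engine / Docker Desktop VM
docker info | grep -i "total memory"

# Live memory and CPU usage of the ELK container
docker stats elkstack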

Spring cleaning: closing this issue due to inactivity.

For me this was resolved by adding the following under the elasticsearch container in docker-compose.yml:

environment:
    ES_JAVA_OPTS: "-Xmx256m -Xms256m"
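
For context, here is roughly how that fragment slots into a compose file like the one shared earlier (a minimal sketch; the image tag and the 256m heap are simply the values mentioned in the comments above):

version: '3'
services:
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    environment:
      # single-node mode plus a small, explicit JVM heap
      discovery.type: single-node
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
    ports:
      - '9200:9200'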