CiscoSecurity/fp-05-firepower-cef-connector-arcsight

Writer ERROR Message data too large. Enable debug if asked to do so.

15U12U opened this issue · 20 comments

Hi,

The eStreamer client is shutting down after a few minutes with the following ERROR:

2020-07-14 09:40:25,143 Controller INFO eNcore version: 3.5.3
2020-07-14 09:40:25,144 Controller INFO Python version: 2.7.17 (default, Jun 5 2020, 03:38:32) \n[GCC 8.3.1 20191121 (Red Hat 8.3.1-5)]
2020-07-14 09:40:25,144 Controller INFO Platform version: Linux-4.18.0-193.6.3.el8_2.x86_64-x86_64-with-centos-8.2.2004-Core
2020-07-14 09:40:25,144 Controller INFO Starting client (pid=468886).
2020-07-14 09:40:25,144 Controller INFO Sha256: 103589f561d67da40d32d37c9249a996982065083e66358600ee44e5e44294c2
2020-07-14 09:40:25,144 Controller INFO Processes: 4
2020-07-14 09:40:25,144 Controller INFO Settings: KGRwMApWbG9nZ2luZwpwMQooZHAyClZsZXZAY29tbWVudApwMwpWTGV2ZWxzIGluY2x1ZGUgRkFUQUwsIEVSUk9SLCBXQVJOSU5HLCBJTkZPLCBERUJVRywgVkVSQk9TRSBhbmQgVFJBQ0UKcDQKc1ZsZXZlbApwNQpWSU5GTwpwNgpzVmZvcm1hdApwNwpWJShhc2N0aW1lKXMgJShuYW1lKS0xMnMgJShsZXZlbG5hbWUpLThzICUobWVzc2FnZSlzCnA4CnNWc3RkT3V0CnA5CkkwMQpzVmZpbGVwYXRoCnAxMApWZXN0cmVhbWVyLmxvZwpwMTEKc3NWY29ubmVjdFRpbWVvdXQKcDEyCkkxMApzVnN0YXJAY29tbWVudApwMTMKVjAgZm9yIGdlbmVzaXMsIDEgZm9yIG5vdywgMiBmb3IgYm9va21hcmsKcDE0CnNWZW5hYmxlZApwMTUKSTAxCnNWd29ya2VyUHJvY2Vzc2VzCnAxNgpJNApzVnN0YXJ0CnAxNwpJMgpzVmhhbmRsZXIKcDE4CihkcDE5ClZvdXRwdXR0ZXJzCnAyMAoobHAyMQooZHAyMgpWYWRhcHRlcgpwMjMKVmNlZgpwMjQKc1ZlbmFibGVkCnAyNQpJMDEKc1ZuYW1lCnAyNgpWRk1DZXN0cmVhbWVyCnAyNwpzVnN0cmVhbQpwMjgKKGRwMjkKVnVyaQpwMzAKVnVkcDovLzEwLjEwLjE1NC4xMjM6NTE0CnAzMQpzc2FzVm91dHB1dEBjb21tZW50CnAzMgpWSWYgeW91IGRpc2FibGUgYWxsIG91dHB1dHRlcnMgaXQgYmVoYXZlcyBhcyBhIHNpbmsKcDMzCnNWcmVjb3JkcwpwMzQKKGRwMzUKVmNvcmUKcDM2CkkwMQpzVmludHJ1c2lvbgpwMzcKSTAxCnNWaW5jQGNvbW1lbnQKcDM4ClZUaGVzZSByZWNvcmRzIHdpbGwgYmUgaW5jbHVkZWQgcmVnYXJkbGVzcyBvZiBhYm92ZQpwMzkKc1ZybmEKcDQwCkkwMQpzVnJ1YQpwNDEKSTAxCnNWcGFja2V0cwpwNDIKSTAxCnNWY29ubmVjdGlvbnMKcDQzCkkwMQpzVmV4Y2x1ZGUKcDQ0CihscDQ1CnNWaW5jbHVkZQpwNDYKKGxwNDcKc1ZleGNsQGNvbW1lbnQKcDQ4CihscDQ5ClZUaGVzZSByZWNvcmRzIHdpbGwgYmUgZXhjbHVkZWQgcmVnYXJkbGVzcyBvZiBhYm92ZSAob3ZlcnJpZGVzICdpbmNsdWRlJykKcDUwCmFWZS5nLiB0byBleGNsdWRlIGZsb3cgYW5kIElQUyBldmVudHMgdXNlIFsgNzEsIDQwMCBdCnA1MQphc1ZtZXRhZGF0YQpwNTIKSTAxCnNzc1ZyZXNwb25zZVRpbWVvdXQKcDUzCkkyCnNWc3Vic2NyaXB0aW9uCnA1NAooZHA1NQpWcmVjb3JkcwpwNTYKKGRwNTcKVmV4dGVuZGVkCnA1OApJMDEKc1ZhcmNoaXZlVGltZXN0YW1wcwpwNTkKSTAxCnNWaW50cnVzaW9uCnA2MApJMDEKc1ZwYWNrZXREYXRhCnA2MQpJMDEKc1ZAY29tbWVudApwNjIKKGxwNjMKVkp1c3QgYmVjYXVzZSB3ZSBzdWJzY3JpYmUgZG9lc24ndCBtZWFuIHRoZSBzZXJ2ZXIgaXMgc2VuZGluZy4gTm9yIGRvZXMgaXQgbWVhbgpwNjQKYVZ3ZSBhcmUgd3JpdGluZyB0aGUgcmVjb3JkcyBlaXRoZXIuIFNlZSBoYW5kbGVyLnJlY29yZHNbXQpwNjUKYXNWZXZlbnRFeHRyYURhdGEKcDY2CkkwMQpzVmltcGFjdEV2ZW50QWxlcnRzCnA2NwpJMDEKc1ZtZXRhZGF0YQpwNjgKSTAxCnNzVnNlcnZlcnMKcDY5CihscDcwCihkcDcxClZwa2NzMTJGaWxlcGF0aApwNzIKVjEwLjEwLjE1NC4xMjNfNjkucGtjczEyCnA3MwpzVmhvc3QKcDc0ClYxMC4xMC4yNDkuMjQ5CnA3NQpzVnRsc1ZlcnNpb24KcDc2CkYxLjIKc1Zwb3J0CnA3NwpJODMwMgpzVnRsc0Bjb21tZW50CnA3OApWVmFsaWQgdmFsdWVzIGFyZSAxLjAgYW5kIDEuMgpwNzkKc2Fzc1Ztb25pdG9yCnA4MAooZHA4MQpWYm9va21hcmsKcDgyCkkwMApzVnZlbG9jaXR5CnA4MwpJMDAKc1ZoYW5kbGVkCnA4NApJMDEKc1ZzdWJzY3JpYmVkCnA4NQpJMDEKc1ZwZXJpb2QKcDg2CkkxMjAKc3Mu
2020-07-14 09:40:25,145 Diagnostics INFO Check certificate
2020-07-14 09:40:25,145 Diagnostics INFO Creating connection
2020-07-14 09:40:25,145 Connection INFO Connecting to X.X.X.X:8302
2020-07-14 09:40:25,145 Connection INFO Using TLS v1.2
2020-07-14 09:40:25,399 Diagnostics INFO Creating request message
2020-07-14 09:40:25,400 Diagnostics INFO Request message=0001000200000008ffffffff48900061
2020-07-14 09:40:25,400 Diagnostics INFO Sending request message
2020-07-14 09:40:25,400 Diagnostics INFO Receiving response message
2020-07-14 09:40:25,413 Connection INFO Connecting to X.X.X.X:8302
2020-07-14 09:40:25,414 Connection INFO Using TLS v1.2
2020-07-14 09:40:25,414 Transformer INFO Starting process.
2020-07-14 09:40:25,415 Writer INFO Starting process.
2020-07-14 09:40:25,415 Diagnostics INFO Response message=KGRwMApTJ2xlbmd0aCcKcDEKSTQ4CnNTJ3ZlcnNpb24nCnAyCkkxCnNTJ2RhdGEnCnAzClMnXHgwMFx4MDBceDEzXHg4OVx4MDBceDAwXHgwMFx4MDhceDAwXHgwMFx4MDBceDAwXHgwMFx4MDBceDAwXHgwMFx4MDBceDAwXHgxM1x4ODhceDAwXHgwMFx4MDBceDA4XHgwMFx4MDBceDAwXHgwMFx4MDBceDAwXHgwMFx4MDBceDAwXHgwMFx4MWFceDBiXHgwMFx4MDBceDAwXHgwOFx4MDBceDAwXHgwMFx4MDBceDAwXHgwMFx4MDBceDAwJwpwNApzUydtZXNzYWdlVHlwZScKcDUKSTIwNTEKcy4=
2020-07-14 09:40:25,415 Decorator INFO Starting process.
2020-07-14 09:40:25,416 Diagnostics INFO Streaming info response
2020-07-14 09:40:25,416 Diagnostics INFO Connection successful
2020-07-14 09:40:25,416 Monitor INFO Starting Monitor.
2020-07-14 09:40:25,416 Monitor INFO Starting. 0 handled; average rate 0 ev/sec;
2020-07-14 09:40:25,900 Bookmark INFO Opening bookmark file /opt/estreamer/abc123_bookmark.dat.
2020-07-14 09:40:25,900 Settings INFO Timestamp: Start = 2 (Bookmark = 1594680731)
2020-07-14 09:40:25,900 Receiver INFO EventStreamRequestMessage: 00010002000000085f0ce59b48900061
2020-07-14 09:40:25,900 SubscriberParser INFO Starting process.
2020-07-14 09:40:25,910 Bookmark INFO Opening bookmark file /opt/estreamer/abc123_bookmark.dat.
2020-07-14 09:40:25,911 Settings INFO Timestamp: Start = 2 (Bookmark = 1594680731)
2020-07-14 09:40:25,911 Receiver INFO StreamingRequestMessage: 000108010000003800001a0b00000038489000615f0ce59b0009000c000400150009001f000b003d000e00470004005b000700650006006f0002008300000000
2020-07-14 09:40:26,328 Cache INFO Loading cache from /opt/estreamer/abc123_cache.dat
2020-07-14 09:40:26,369 Bookmark INFO Opening bookmark file /opt/estreamer/abc123_bookmark.dat.
2020-07-14 09:42:25,502 Monitor INFO Running. 371200 handled; average rate 3085.35 ev/sec;
2020-07-14 09:44:25,620 Monitor INFO Running. 758000 handled; average rate 3153.06 ev/sec;
2020-07-14 09:46:25,633 Monitor INFO Running. 1146400 handled; average rate 3180.8 ev/sec;
2020-07-14 09:48:25,512 Monitor INFO Running. 1532700 handled; average rate 3191.07 ev/sec;
2020-07-14 09:50:25,667 Monitor INFO Running. 1920700 handled; average rate 3198.58 ev/sec;
2020-07-14 09:52:25,498 Monitor INFO Running. 2306300 handled; average rate 3201.81 ev/sec;
2020-07-14 09:54:25,701 Monitor INFO Running. 2688700 handled; average rate 3198.94 ev/sec;
2020-07-14 09:56:25,623 Monitor INFO Running. 3070000 handled; average rate 3196.54 ev/sec;
2020-07-14 09:58:25,561 Monitor INFO Running. 3451600 handled; average rate 3194.86 ev/sec;
2020-07-14 10:00:06,774 Writer ERROR error: \nTraceback (most recent call last):\n File "/opt/estreamer/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/estreamer/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/estreamer/estreamer/pipeline.py", line 403, in onEvent\n write( item, self.settings )\n File "/opt/estreamer/estreamer/pipeline.py", line 228, in write\n streams[ index ].write( event['payloads'][index] + delimiter )\n File "/opt/estreamer/estreamer/streams/udp.py", line 63, in write\n self.socket.send( data.encode( self.encoding ) )\nerror: [Errno 111] Connection refused\n
2020-07-14 10:00:06,774 Writer ERROR Message data too large. Enable debug if asked to do so.
2020-07-14 10:00:06,774 Writer INFO Error state. Clearing queue
2020-07-14 10:00:06,855 Writer ERROR error: \nTraceback (most recent call last):\n File "/opt/estreamer/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/estreamer/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/estreamer/estreamer/pipeline.py", line 403, in onEvent\n write( item, self.settings )\n File "/opt/estreamer/estreamer/pipeline.py", line 228, in write\n streams[ index ].write( event['payloads'][index] + delimiter )\n File "/opt/estreamer/estreamer/streams/udp.py", line 63, in write\n self.socket.send( data.encode( self.encoding ) )\nerror: [Errno 111] Connection refused\n
2020-07-14 10:00:06,855 Writer ERROR Message data too large. Enable debug if asked to do so.
2020-07-14 10:00:06,958 Writer ERROR error: \nTraceback (most recent call last):\n File "/opt/estreamer/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/estreamer/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/estreamer/estreamer/pipeline.py", line 403, in onEvent\n write( item, self.settings )\n File "/opt/estreamer/estreamer/pipeline.py", line 228, in write\n streams[ index ].write( event['payloads'][index] + delimiter )\n File "/opt/estreamer/estreamer/streams/udp.py", line 63, in write\n self.socket.send( data.encode( self.encoding ) )\nerror: [Errno 111] Connection refused\n
2020-07-14 10:00:06,958 Writer ERROR Message data too large. Enable debug if asked to do so.
2020-07-14 10:00:07,141 Writer ERROR error: \nTraceback (most recent call last):\n File "/opt/estreamer/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/estreamer/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/estreamer/estreamer/pipeline.py", line 403, in onEvent\n write( item, self.settings )\n File "/opt/estreamer/estreamer/pipeline.py", line 228, in write\n streams[ index ].write( event['payloads'][index] + delimiter )\n File "/opt/estreamer/estreamer/streams/udp.py", line 63, in write\n self.socket.send( data.encode( self.encoding ) )\nerror: [Errno 111] Connection refused\n
2020-07-14 10:00:07,141 Writer ERROR Message data too large. Enable debug if asked to do so.
2020-07-14 10:00:07,212 Writer ERROR error: \nTraceback (most recent call last):\n File "/opt/estreamer/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/estreamer/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/estreamer/estreamer/pipeline.py", line 403, in onEvent\n write( item, self.settings )\n File "/opt/estreamer/estreamer/pipeline.py", line 228, in write\n streams[ index ].write( event['payloads'][index] + delimiter )\n File "/opt/estreamer/estreamer/streams/udp.py", line 63, in write\n self.socket.send( data.encode( self.encoding ) )\nerror: [Errno 111] Connection refused\n
2020-07-14 10:00:13,409 Writer ERROR Message data too large. Enable debug if asked to do so.
2020-07-14 10:00:25,644 Controller INFO Process writer state is Error.
2020-07-14 10:00:25,644 Monitor INFO Running. 0 handled; average rate 0 ev/sec;
2020-07-14 10:00:25,686 Controller INFO Stopping...
2020-07-14 10:00:25,692 SubscriberParser INFO Stop message received
2020-07-14 10:00:25,697 SubscriberParser INFO Exiting
2020-07-14 10:00:25,702 Controller INFO Process 468892 (Process-1) exit code: 0
2020-07-14 10:00:25,708 Decorator INFO Stop message received
2020-07-14 10:00:25,713 Decorator INFO Error state. Clearing queue
2020-07-14 10:00:25,713 Cache INFO Saving cache to /opt/estreamer/abc123_cache.dat
2020-07-14 10:00:25,728 Decorator INFO Exiting
2020-07-14 10:00:25,729 Controller INFO Process 468893 (Process-2) exit code: 0
2020-07-14 10:00:25,734 Transformer INFO Stop message received
2020-07-14 10:00:25,739 Transformer INFO Error state. Clearing queue
2020-07-14 10:00:25,739 Transformer INFO Exiting
2020-07-14 10:00:25,740 Controller INFO Process 468894 (Process-3) exit code: 0
2020-07-14 10:00:25,750 Writer INFO Stop message received
2020-07-14 10:00:25,760 Writer INFO Exiting
2020-07-14 10:00:25,760 Controller INFO Process 468895 (Process-4) exit code: 0
2020-07-14 10:00:25,760 Monitor INFO Stopping Monitor.
2020-07-14 10:00:25,894 Controller INFO Goodbye

@15U12U did you find out what the problem was? I have the same issue.

Hi @akilbekov,

"Message data too large" error occurs when the length is exceeding in the message. (Default 4096)

This error will prompt if the message length is greater than the defined maximum length or the log level is not set to DEBUG.

(you can find it in lestreamer/baseproc.py)

As a workaround, I changed self.levelName to 'DEBUG' in estreamer/settings/logging.py.
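
For reference, the guard that prints this message looks roughly like the sketch below. This is a reconstruction from the behaviour described above, not the verbatim eNcore source, and the constant and function names are assumptions.

```python
import logging

MAX_MESSAGE_LENGTH = 4096  # default maximum mentioned above (the name is an assumption)

def log_error(logger, message):
    # Sketch of the check in estreamer/baseproc.py: an oversized payload is only
    # written out in full when the logger is running at DEBUG; otherwise the
    # placeholder "Message data too large" is logged in its place.
    if len(message) > MAX_MESSAGE_LENGTH and not logger.isEnabledFor(logging.DEBUG):
        logger.error('Message data too large. Enable debug if asked to do so.')
    else:
        logger.error(message)
```

Note that in the log above the placeholder always follows a real traceback (the UDP [Errno 111] Connection refused), so it is the suppressed payload of that failure rather than the root cause itself.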

I have a similar problem. Has anyone found a solution to this?

2020-12-11 11:39:27,829 Monitor INFO Running. 59125200 handled; average rate 807.72 ev/sec;
2020-12-11 11:41:20,901 Transformer ERROR KeyError: lambdas\nTraceback (most recent call last):\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/pipeline.py", line 397, in onEvent\n data = transform( item, self.settings )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/pipeline.py", line 205, in transform\n output = adapters[ index ].dumps( event['record'] )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/adapters/cef.py", line 870, in dumps\n return cefAdapter.dumps()\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/adapters/cef.py", line 860, in dumps\n self.__convert()\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/adapters/cef.py", line 783, in __convert\n for target in self.mapping['lambdas']:\nKeyError: 'lambdas'\n
2020-12-11 11:41:20,962 Transformer ERROR Message data too large. Enable debug if asked to do so.
2020-12-11 11:41:20,962 Transformer INFO Error state. Clearing queue
2020-12-11 11:41:27,628 Controller INFO Process transformer state is Error.
2020-12-11 11:41:27,628 Monitor INFO Running. 0 handled; average rate 0 ev/sec;
2020-12-11 11:41:28,047 Controller INFO Stopping...
2020-12-11 11:41:28,952 SubscriberParser INFO Stop message received
2020-12-11 11:41:28,957 SubscriberParser INFO Exiting

Try eNcore client version 3.6.0. It seems to be the only version that does not run into this problem.

Where can we get Version 3.6.0?

I am having the same issue, did you resolve it?

Changing self.levelName to 'DEBUG' in estreamer/settings/logging.py did not resolve the problem for me.

Unfortunately, 3.6.0 is not working either. We have an ongoing case at the moment.

I can see data in the cache file, but it seems the data is not being transformed.

What is your FMC version, btw?

3.5.7, downloaded from ArcSight themselves

Oh sorry, I misunderstood; the FMC version is 6.2.0.5, I believe.

I think the problem is in one of the FMC modules and not its version.
I have worked around the script crash by writing a service in /etc/systemd/system that monitors the process and restarts it if it crashes.

Yes, I did the same.

```ini
[Unit]
Description=Cisco eStreamer eNcore CLI Service
After=abc.service
StartLimitIntervalSec=0

[Service]
Type=forking
Restart=always
RestartSec=1
User=root
# WorkingDirectory already runs encore.sh from /opt/estreamer; systemd Exec*
# lines need absolute paths, so avoid shell builtins like "cd" here.
WorkingDirectory=/opt/estreamer
ExecStart=/bin/bash /opt/estreamer/encore.sh start

[Install]
WantedBy=multi-user.target
```

I don't think this solves the issue though. The eNcore client is shutting down because of a transformer error; you will keep getting the error and just keep forcing it to start again.

Yes, this is just a cheeky workaround !!!

A workaround for what though? :D Restarting the client? If you're getting the transformer error then no data is being passed through, right?

My problem is the same as yours and I'm getting zero data out. I can see in the cache files that data is being pulled from FMC, but it's not being converted to CEF correctly. I just get the lambda error over and over again.

I have no problem outputting data; I can send the logs via syslog.

Have you configured the outputters properly?

"outputters": [
{
"adapter": "cef",
"enabled": true,
"stream": {
"uri": "udp://X.X.X.X:514"
}
},
{
"adapter": "cef",
"enabled": true,
"name": "fmc-cef-file",
"stream": {
"options": {
"rotate": true
},
"uri": "file:///:logs/fmc-estreamer.{0}.cef"
}
}

Here is my config
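
Beyond the outputter JSON, it is worth checking that something is actually listening on the UDP syslog target. The very first log in this thread dies with [Errno 111] Connection refused on udp://10.10.154.123:514, which for a connected UDP socket simply means the collector answered with ICMP port unreachable. A quick standalone check (a sketch; the address is the one from the first post's settings, substitute your own collector):

```python
import socket
import time

# Send a test datagram to the syslog target and see whether the kernel reports
# ICMP "port unreachable". On a connected UDP socket that surfaces as
# [Errno 111] Connection refused on the *next* send, which is exactly the error
# in the Writer traceback above.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.connect(('10.10.154.123', 514))   # collector address from the first post; change as needed
try:
    sock.send(b'<134>eNcore connectivity test\n')
    time.sleep(1)                      # give the ICMP error time to come back
    sock.send(b'<134>eNcore connectivity test\n')
    print('No ICMP error reported; the collector is probably listening.')
except socket.error as err:
    print('Nothing listening on the target: %s' % err)
finally:
    sock.close()
```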

Yes, the outputters are correct. This is my error, which I think is slightly different from yours:

2020-12-11 11:41:20,901 Transformer ERROR KeyError: lambdas\nTraceback (most recent call last):\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/baseproc.py", line 208, in receiveInput\n self.onReceive( item )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/baseproc.py", line 313, in onReceive\n self.onEvent( item )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/pipeline.py", line 397, in onEvent\n data = transform( item, self.settings )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/pipeline.py", line 205, in transform\n output = adapters[ index ].dumps( event['record'] )\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/adapters/cef.py", line 870, in dumps\n return cefAdapter.dumps()\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/adapters/cef.py", line 860, in dumps\n self.__convert()\n File "/opt/arcsight/eNcore_357/eStreamer-eNcore/estreamer/adapters/cef.py", line 783, in __convert\n for target in self.mapping['lambdas']:\nKeyError: 'lambdas'\n
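
For what it's worth, that traceback points at `for target in self.mapping['lambdas']:` in estreamer/adapters/cef.py, i.e. the CEF adapter has hit a record type whose field mapping has no 'lambdas' key. Below is a minimal standalone illustration of the failure and of a defensive `.get()` guard; the mapping layout is assumed from the traceback, not copied from the eNcore source, so treat it as a sketch rather than a patch.

```python
# Toy reconstruction of the per-record CEF mapping (layout assumed).
mapping_ok = {
    'lambdas': {
        # computed CEF fields: target field name -> function of the parsed record
        'deviceCustomString1': lambda rec: rec.get('sensor', ''),
    }
}
mapping_broken = {}  # a record type whose mapping ships without a 'lambdas' key

def convert(mapping, record):
    fields = {}
    # mapping['lambdas'] raises KeyError: 'lambdas' for mapping_broken, which is
    # what kills the Transformer; mapping.get('lambdas', {}) degrades to
    # "no computed fields" instead.
    for target, fn in mapping.get('lambdas', {}).items():
        fields[target] = fn(record)
    return fields

print(convert(mapping_ok, {'sensor': 'fmc01'}))      # {'deviceCustomString1': 'fmc01'}
print(convert(mapping_broken, {'sensor': 'fmc01'}))  # {} instead of a crash
```

A guard like that only stops the Transformer process from dying; whichever record type is missing its mapping would still come out with fewer CEF fields, so it is a stop-gap, not a fix.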