Invalid string; unexpected character: 253 hex: fd
I am getting this error:
```
java.io.IOException: Invalid string; unexpected character: 253 hex: fd
    at org.elasticsearch.common.io.stream.StreamInput.readString(StreamInput.java:372) ~[elasticsearch-5.6.0.jar:5.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:362) ~[elasticsearch-5.6.0.jar:5.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext$ThreadContextStruct.<init>(ThreadContext.java:352) ~[elasticsearch-5.6.0.jar:5.6.0]
    at org.elasticsearch.common.util.concurrent.ThreadContext.readHeaders(ThreadContext.java:186) ~[elasticsearch-5.6.0.jar:5.6.0]
    at org.elasticsearch.transport.TcpTransport.messageReceived(TcpTransport.java:1372) ~[elasticsearch-5.6.0.jar:5.6.0]
    at org.elasticsearch.transport.netty4.Netty4MessageChannelHandler.channelRead(Netty4MessageChannelHandler.java:74) ~[transport-netty4-5.6.0.jar:5.6.0]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:297) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:413) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) [netty-codec-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458) [netty-transport-4.1.13.Final.jar:4.1.13.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.13.Final.jar:4.1.13.Final]
    at java.lang.Thread.run(Thread.java:748) [?:1.8.0_131]
```
My YAML file for the master deployment is:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: es5-master
  labels:
    component: elasticsearch5
    role: master
spec:
  replicas: 3
  template:
    metadata:
      labels:
        component: elasticsearch5
        role: master
    spec:
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
        securityContext:
          privileged: true
      containers:
      - name: es5-master
        securityContext:
          privileged: false
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        image: quay.io/pires/docker-elasticsearch-kubernetes:5.6.0
        imagePullPolicy: Always
        env:
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: "CLUSTER_NAME"
          value: "myes5db"
        - name: "NUMBER_OF_MASTERS"
          value: "2"
        - name: NODE_MASTER
          value: "true"
        - name: NODE_INGEST
          value: "false"
        - name: NODE_DATA
          value: "false"
        - name: HTTP_ENABLE
          value: "false"
        - name: "ES_JAVA_OPTS"
          value: "-Xms256m -Xmx256m"
        - name: "NETWORK_HOST"
          value: "_eth0_"
        ports:
        - containerPort: 9300
          name: transport
          protocol: TCP
        livenessProbe:
          tcpSocket:
            port: 9300
        volumeMounts:
        - name: storage
          mountPath: /data
      volumes:
      - emptyDir:
          medium: ""
        name: "storage"
```
I am already running Elasticsearch 1.7, which is why I renamed this new cluster to elasticsearch5. I hope the naming is not the cause of the problem.
I initially didn't have _eth0_ for NETWORK_HOST; after reviewing the Troubleshooting section of the README, I added it, but now I'm getting the 253 hex: fd error. Other network host values didn't work either. Any ideas?
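The other values I mean are the Elasticsearch 5.x special network-host values, along these lines (a sketch, since the right one depends on the pod's network interfaces):

```yaml
# Illustrative NETWORK_HOST variants; ES 5.x resolves special values at startup.
- name: "NETWORK_HOST"
  value: "_eth0:ipv4_"   # the IPv4 address bound to eth0
# other special values: "_local_" (loopback), "_site_" (any site-local address),
# "_global_" (any globally-routable address)
```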
Getting the same error from Elasticsearch when trying to start a new SonarQube 6.7 instance on RHEL 6.6.
We were getting this error because DISCOVERY_SERVICE, as used in the config, defaults to elasticsearch-discovery. I added a DISCOVERY_SERVICE env var pointing to elasticsearch5-discovery for us, and it worked.
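For reference, a minimal sketch of the two pieces, assuming the manifests from this repo (the name elasticsearch5-discovery is ours and only has to match whatever you renamed your resources to):

```yaml
# Excerpt from the master Deployment's env: override the default discovery
# service name (which otherwise defaults to elasticsearch-discovery).
- name: DISCOVERY_SERVICE
  value: "elasticsearch5-discovery"
---
# A matching Service exposing the masters' transport port for discovery.
apiVersion: v1
kind: Service
metadata:
  name: elasticsearch5-discovery
  labels:
    component: elasticsearch5
    role: master
spec:
  selector:
    component: elasticsearch5
    role: master
  ports:
  - name: transport
    port: 9300
    protocol: TCP
```

Presumably this is also where the 253 hex: fd error comes from: with the default name, the new 5.6 nodes were discovering whatever still answered at elasticsearch-discovery, and an incompatible node on the transport port shows up as exactly this kind of string-decoding failure.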
I don't see a 'DISCOVERY_SERVICE' entry in the Elasticsearch properties with SonarQube 6.7.1, or anything similar. Setting it in the environment doesn't change the error for me.
@sawatts Are you using the Docker images / Kubernetes resources in this repo?
@zoidbergwill No Docker or Kubernetes involved -- so probably not of interest to this thread. I was just looking for other occurrences of the same issue to find a common cause.
Sorry, but I'm closing this issue, as it's a source of confusion for people coming here looking for issues with Elasticsearch on Kubernetes as documented in this repo.