ONSdigital/address-index-data

Caused by: org.datanucleus.exceptions.NucleusUserException: Persistence process has been specified to use a ClassLoaderResolver of name "datanucleus" yet this has not been found by the DataNucleus plugin mechanism. Please check your CLASSPATH and plugin specification.


I hit this error while building a Docker image.
My Dockerfile is as follows:

FROM ubuntu:latest

# Install OpenJDK-8
RUN apt-get update -y && \
    apt-get install -y openjdk-8-jdk ant && \
    apt-get clean
    
# Fix certificate issues
RUN apt-get update && \
    apt-get install -y ca-certificates-java && \
    apt-get clean && \
    update-ca-certificates -f

# Setup JAVA_HOME -- useful for docker commandline
# (ENV persists into later build steps and the running container; a RUN export is a no-op)
ENV JAVA_HOME /usr/lib/jvm/java-8-openjdk-amd64/

# Install sbt and scala
RUN apt-get -qq -y install curl wget gnupg
RUN echo "deb https://repo.scala-sbt.org/scalasbt/debian all main" | tee /etc/apt/sources.list.d/sbt.list && \
    echo "deb https://repo.scala-sbt.org/scalasbt/debian /" | tee /etc/apt/sources.list.d/sbt_old.list && \
    curl -sL "https://keyserver.ubuntu.com/pks/lookup?op=get&search=0x2EE0EA64E40A89B84B2DF73499E82A75642AC823" | apt-key add && \
    apt-get update && \
    apt-get install sbt -y
RUN apt-get install scala -y

# Install spark
RUN wget https://apache.claz.org/spark/spark-3.1.2/spark-3.1.2-bin-hadoop3.2.tgz
RUN tar xvf spark-*
RUN mv spark-3.1.2-bin-hadoop3.2 /opt/spark

ENV SPARK_HOME /opt/spark
ENV PATH="${PATH}:${SPARK_HOME}/bin:${SPARK_HOME}/sbin"

# NOTE: a process started in a RUN step does not survive into the image;
# start the Spark daemons when the container runs (CMD/ENTRYPOINT) instead:
# start-master.sh
# start-slave.sh spark://localhost:7077

# Install Elasticsearch 7
RUN curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | apt-key add -
RUN echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-7.x.list
RUN apt-get update
RUN apt-get install -y elasticsearch

# Install hadoop
ENV HADOOP_HOME /opt/hadoop
RUN wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz && \
    tar -xzf hadoop-3.3.1.tar.gz && \
    mv hadoop-3.3.1 $HADOOP_HOME

ENV PATH="${PATH}:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin"
ENV HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native \
    HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib" \
    HADOOP_MAPRED_HOME=$HADOOP_HOME \
    HADOOP_COMMON_HOME=$HADOOP_HOME \
    HADOOP_HDFS_HOME=$HADOOP_HOME \
    YARN_HOME=$HADOOP_HOME \
    HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop \
    LD_LIBRARY_PATH="$HADOOP_HOME/lib/native/"

# NOTE: daemons cannot be started in a RUN step either;
# run start-all.sh at container runtime instead

WORKDIR /opt/address-index
COPY ./ ./
RUN sbt clean assembly
RUN java -Dconfig.file=application.conf -jar batch/target/scala-2.11/ons-ai-batch-assembly-0.0.1.jar
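One way to check which Scala version a Spark distribution bundles is to inspect the `scala-library` jar it ships; the sketch below mocks a `SPARK_HOME` layout so it runs standalone (on a real install, point `SPARK_HOME` at e.g. `/opt/spark` instead):

```shell
# Mock a Spark install layout for demonstration
# (assumption: on a real machine, SPARK_HOME would be /opt/spark)
mkdir -p /tmp/spark-demo/jars
touch /tmp/spark-demo/jars/scala-library-2.12.10.jar
SPARK_HOME=/tmp/spark-demo

# Extract the Scala major.minor version from the bundled scala-library jar name
scala_ver=$(basename "$SPARK_HOME"/jars/scala-library-*.jar | sed 's/scala-library-\(2\.[0-9]*\)\..*/\1/')
echo "$scala_ver"   # → 2.12 for this mocked layout
```

If this prints 2.12 but the assembly jar lives under `target/scala-2.11/`, the two are incompatible.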

At a minimum you'll need to use the versions listed here: https://github.com/ONSdigital/address-index-data/blob/develop/README.md. Note that the assembly jar path (`batch/target/scala-2.11/`) shows it targets Scala 2.11, while Spark 3.1.2 is built against Scala 2.12, and a mismatched classpath can produce errors like this one.
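For reference, a minimal sketch of the Spark install step pinned to a Scala 2.11 build (assuming Spark 2.4.7, whose default binaries ship with Scala 2.11 — check the README for the exact supported version):

```dockerfile
# Sketch only: pin a Spark release built against Scala 2.11 to match
# the ons-ai-batch assembly (exact version per the project README)
RUN wget https://archive.apache.org/dist/spark/spark-2.4.7/spark-2.4.7-bin-hadoop2.7.tgz && \
    tar -xzf spark-2.4.7-bin-hadoop2.7.tgz && \
    mv spark-2.4.7-bin-hadoop2.7 /opt/spark
ENV SPARK_HOME /opt/spark
ENV PATH="${PATH}:${SPARK_HOME}/bin:${SPARK_HOME}/sbin"
```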