sakserv/hadoop-mini-clusters

ERROR ShareLibService:517 - org.apache.oozie.service.ServiceException

Closed this issue · 7 comments

Hi Shane,

First of all, thanks a lot for sharing and working on this project. It is something we have needed for a long time.

I am trying to use the Oozie mini cluster for our unit testing and ran into the following issue (I am using version 0.1.11):

2017-06-24 14:15:25 ERROR ShareLibService:517 - org.apache.oozie.service.ServiceException: E0104: Could not fully initialize service [org.apache.oozie.service.ShareLibService], Not able to cache sharelib. An Admin needs to install the sharelib with oozie-setup.sh and issue the 'oozie admin' CLI command to update the sharelib
org.apache.oozie.service.ServiceException: E0104: Could not fully initialize service [org.apache.oozie.service.ShareLibService], Not able to cache sharelib. An Admin needs to install the sharelib with oozie-setup.sh and issue the 'oozie admin' CLI command to update the sharelib
at org.apache.oozie.service.ShareLibService.init(ShareLibService.java:132)
at org.apache.oozie.service.Services.setServiceInternal(Services.java:386)
at org.apache.oozie.service.Services.setService(Services.java:372)
at org.apache.oozie.service.Services.loadServices(Services.java:305)
at org.apache.oozie.service.Services.init(Services.java:213)
at org.apache.oozie.local.LocalOozie.start(LocalOozie.java:64)
at com.github.sakserv.minicluster.impl.OozieLocalServer.start(OozieLocalServer.java:255)
at uk.co.nokia.ana.deployer.JobDeployerUnitTest.setUp(JobDeployerUnitTest.java:117)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:24)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.run(ParentRunner.java:309)
at org.eclipse.jdt.internal.junit4.runner.JUnit4TestReference.run(JUnit4TestReference.java:86)
at org.eclipse.jdt.internal.junit.runner.TestExecution.run(TestExecution.java:38)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:459)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.runTests(RemoteTestRunner.java:678)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.run(RemoteTestRunner.java:382)
at org.eclipse.jdt.internal.junit.runner.RemoteTestRunner.main(RemoteTestRunner.java:192)
Caused by: java.lang.IllegalArgumentException: Wrong FS: hdfs://localhost/tmp/share_lib, expected: file:///
at org.apache.hadoop.fs.FileSystem.checkPath(FileSystem.java:645)
at org.apache.hadoop.fs.RawLocalFileSystem.pathToFile(RawLocalFileSystem.java:80)
at org.apache.hadoop.fs.RawLocalFileSystem.listStatus(RawLocalFileSystem.java:372)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
at org.apache.hadoop.fs.ChecksumFileSystem.listStatus(ChecksumFileSystem.java:570)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1485)
at org.apache.hadoop.fs.FileSystem.listStatus(FileSystem.java:1525)
at org.apache.oozie.service.ShareLibService.getLatestLibPath(ShareLibService.java:657)
at org.apache.oozie.service.ShareLibService.updateShareLib(ShareLibService.java:525)
at org.apache.oozie.service.ShareLibService.init(ShareLibService.java:122)
... 23 more

The settings used are:

oozie.test.dir=embedded_oozie
oozie.home.dir=oozie
oozie.username=oozie
oozie.groupname=oozie
oozie.hdfs.share.lib.dir=/tmp/share_lib
oozie.share.lib.create=true
oozie.local.share.lib.cache.dir=./share_lib_cache
oozie.purge.local.share.lib.cache=true

The code to initialize the Oozie server is:

oozieLocalServer = new OozieLocalServer.Builder()
        .setOozieTestDir(propertyParser.getProperty(ConfigVars.OOZIE_TEST_DIR_KEY))
        .setOozieHomeDir(propertyParser.getProperty(ConfigVars.OOZIE_HOME_DIR_KEY))
        .setOozieUsername(System.getProperty("user.name"))
        .setOozieGroupname(propertyParser.getProperty(ConfigVars.OOZIE_GROUPNAME_KEY))
        .setOozieYarnResourceManagerAddress(propertyParser.getProperty(
                ConfigVars.YARN_RESOURCE_MANAGER_ADDRESS_KEY))
        .setOozieHdfsDefaultFs(hdfsLocalCluster.getHdfsConfig().get("fs.defaultFS"))
        .setOozieConf(hdfsLocalCluster.getHdfsConfig())
        .setOozieHdfsShareLibDir(propertyParser.getProperty(ConfigVars.OOZIE_HDFS_SHARE_LIB_DIR_KEY))
        .setOozieShareLibCreate(Boolean.parseBoolean(
                propertyParser.getProperty(ConfigVars.OOZIE_SHARE_LIB_CREATE_KEY)))
        .setOozieLocalShareLibCacheDir(propertyParser.getProperty(
                ConfigVars.OOZIE_LOCAL_SHARE_LIB_CACHE_DIR_KEY))
        .setOoziePurgeLocalShareLibCache(Boolean.parseBoolean(propertyParser.getProperty(
                ConfigVars.OOZIE_PURGE_LOCAL_SHARE_LIB_CACHE_KEY)))
        .build();
oozieLocalServer.start();

Hello @rajesh-kumar - sorry for the delay here. I'd like to make this more seamless, but currently, you have to explicitly initialize the share lib setup.

See the example code below for how to do that.
https://github.com/sakserv/hadoop-mini-clusters/blob/master/hadoop-mini-clusters-oozie/src/test/java/com/github/sakserv/minicluster/impl/OozieLocalServerIntegrationTest.java#L112

Let me know if you still have issues after adding that code. Thanks!
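For readers who cannot follow the link: the linked integration test stages the share lib before calling start(). The real test copies the Oozie share lib into HDFS via the Hadoop FileSystem API; the sketch below is a filesystem-only stand-in for that pattern (the class and method names here are illustrative, not the project's actual API), showing only the ordering that matters: populate the share lib location, then start the server.

```java
import java.io.IOException;
import java.nio.file.*;

// Illustrative only: stages a local "share lib" directory before server start.
// The real integration test does the equivalent copy into HDFS.
public class ShareLibSetupSketch {

    // Copy every jar from a source directory into the share lib location
    // that the Oozie server will read at init time.
    static void stageShareLib(Path sourceJarDir, Path shareLibDir) throws IOException {
        Files.createDirectories(shareLibDir);
        try (DirectoryStream<Path> jars = Files.newDirectoryStream(sourceJarDir, "*.jar")) {
            for (Path jar : jars) {
                Files.copy(jar, shareLibDir.resolve(jar.getFileName()),
                        StandardCopyOption.REPLACE_EXISTING);
            }
        }
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempDirectory("jars");
        Files.createFile(src.resolve("oozie-sharelib-oozie.jar"));
        Path dest = Files.createTempDirectory("share_lib_root").resolve("share_lib");

        stageShareLib(src, dest); // must happen before oozieLocalServer.start()
        System.out.println(Files.exists(dest.resolve("oozie-sharelib-oozie.jar")));
    }
}
```

The point is the sequencing: if the share lib is not in place when ShareLibService initializes, you get the E0104 failure above.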

Thank you. It worked with the latest version. I just want to highlight two points:

Could you please add proxy support in the HttpUtils class? Something like:

// Requires java.net.Proxy and java.net.InetSocketAddress
private static final boolean USE_PROXY = Boolean.parseBoolean(propertyParser.getProperty("USE_PROXY"));
private static final String PROXY_IP = propertyParser.getProperty("PROXY_IP");
// Parse the port, since propertyParser.getProperty() returns a String
private static final int PROXY_PORT = Integer.parseInt(propertyParser.getProperty("PROXY_PORT"));
private static final Proxy PROXY = new Proxy(Proxy.Type.HTTP, new InetSocketAddress(PROXY_IP, PROXY_PORT));

if (USE_PROXY) {
    httpURLConnection = (HttpURLConnection) new URL(url).openConnection(PROXY);
} else {
    httpURLConnection = (HttpURLConnection) new URL(url).openConnection();
}
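As a self-contained sketch of the same selection logic (independent of this project; the host and ports are placeholders), the JDK's Proxy.NO_PROXY constant lets the caller always pass one value to URL.openConnection(Proxy) instead of branching:

```java
import java.net.InetSocketAddress;
import java.net.Proxy;

// Minimal, JDK-only sketch of the proxy-selection logic above.
public class ProxySelectionSketch {

    // Returns an HTTP proxy when enabled, otherwise the JDK's "no proxy" marker,
    // so callers can unconditionally use URL.openConnection(Proxy).
    static Proxy chooseProxy(boolean useProxy, String host, int port) {
        return useProxy
                ? new Proxy(Proxy.Type.HTTP, new InetSocketAddress(host, port))
                : Proxy.NO_PROXY;
    }

    public static void main(String[] args) {
        System.out.println(chooseProxy(true, "proxy.example.com", 3128).type());
        System.out.println(chooseProxy(false, null, 0).type());
    }
}
```

This avoids the if/else at every call site; Proxy.NO_PROXY has type DIRECT, which openConnection treats as a direct connection.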

I had to explicitly set the system property "hdp.release.version" to "2.6.1.0" this time. Earlier, it picked up the latest default value without my setting it explicitly. I haven't checked in detail why this is happening, but thought I would let you know.
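Until the cause is found, one defensive workaround is to read the property with a pinned fallback (the default shown is simply the version mentioned in this thread, not a value the project defines):

```java
// Reads hdp.release.version, falling back to a pinned default when unset.
public class HdpVersionSketch {

    static String hdpReleaseVersion() {
        // "2.6.1.0" is the version from this thread; adjust to your stack.
        return System.getProperty("hdp.release.version", "2.6.1.0");
    }

    public static void main(String[] args) {
        System.out.println(hdpReleaseVersion());
    }
}
```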

Cheers!

@rajesh-kumar - thanks for circling back. It might be a bit before I can get to number 1. If you'd like to open a pull request, I'll work on getting it merged. I'll take a look at number 2. I have a thought on the cause.

@rajesh-kumar - Thanks for submitting the PR! The patch doesn't apply cleanly, but I'll work on that part and will try to get it merged today.

Regarding needing to set the hdp.release.version system property: I've done some tests and I'm not seeing that issue; the variable is properly populated by Maven. One change I made several releases back was to remove the IntelliJ-related artifacts from the git repo, and the run configuration used to explicitly set this variable, so I wonder if that is related. Could you try a fresh pull and import into your IDE to see if the issue goes away? I'm not able to recreate it.

The PR has been merged. Let me know what you find on the hdp.release.version issue. Thanks!

0.1.13 has been released and includes the proxy support PR.

Closing this for now. Please open a new issue with what you are seeing if hdp.release.version continues to be an issue. Thanks!