I wasn't able to run oozie-mini-cluster with a "real" workflow
Closed this issue · 3 comments
seregasheypak commented
Root cause: something weird with the share lib.
Here is the code for com.github.sakserv.minicluster.impl.OozieLocalServerIntegrationTest
that reproduces the issue.
@Test
public void testSubmitWorkflow() throws Exception {
    LOG.info("OOZIE: Test Submit Workflow Start");

    FileSystem hdfsFs = hdfsLocalCluster.getHdfsFileSystemHandle();
    OozieClient oozie = oozieLocalServer.getOozieClient();

    Path appPath = new Path(hdfsFs.getHomeDirectory(), "testApp");
    hdfsFs.mkdirs(new Path(appPath, "lib"));
    Path workflow = new Path(appPath, "workflow.xml");

    // write workflow.xml
    String wfApp =
        "<workflow-app name=\"sugar-option-decision\" xmlns=\"uri:oozie:workflow:0.5\">\n" +
        "    <global>\n" +
        "        <job-tracker>${jobTracker}</job-tracker>\n" +
        "        <name-node>${nameNode}</name-node>\n" +
        "    </global>\n" +
        "    <start to=\"first\"/>\n" +
        "    <action name=\"first\">\n" +
        "        <map-reduce> </map-reduce>\n" +
        "        <ok to=\"decision-second-option\"/>\n" +
        "        <error to=\"kill\"/>\n" +
        "    </action>\n" +
        "    <decision name=\"decision-second-option\">\n" +
        "        <switch>\n" +
        "            <case to=\"option\">${doOption}</case>\n" +
        "            <default to=\"second\"/>\n" +
        "        </switch>\n" +
        "    </decision>\n" +
        "    <action name=\"option\">\n" +
        "        <map-reduce> </map-reduce>\n" +
        "        <ok to=\"second\"/>\n" +
        "        <error to=\"kill\"/>\n" +
        "    </action>\n" +
        "    <action name=\"second\">\n" +
        "        <map-reduce> </map-reduce>\n" +
        "        <ok to=\"end\"/>\n" +
        "        <error to=\"kill\"/>\n" +
        "    </action>\n" +
        "    <kill name=\"kill\">\n" +
        "        <message>Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]</message>\n" +
        "    </kill>\n" +
        "    <end name=\"end\"/>\n" +
        "</workflow-app>";

    Writer writer = new OutputStreamWriter(hdfsFs.create(workflow));
    writer.write(wfApp);
    writer.close();

    // write job.properties
    Properties conf = oozie.createConfiguration();
    conf.setProperty(OozieClient.APP_PATH, workflow.toString());
    conf.setProperty(OozieClient.USER_NAME, UserGroupInformation.getCurrentUser().getUserName());
    conf.setProperty("nameNode", "hdfs://localhost:" + hdfsLocalCluster.getHdfsNamenodePort());
    conf.setProperty("jobTracker", mrLocalCluster.getResourceManagerAddress());
    conf.setProperty("doOption", "true");

    // submit and check
    final String jobId = oozie.run(conf);
    WorkflowJob wf = oozie.getJobInfo(jobId);
    assertNotNull(wf);
    assertEquals(WorkflowJob.Status.RUNNING, wf.getStatus());

    // poll until the workflow leaves the RUNNING state
    while (true) {
        Thread.sleep(1000);
        wf = oozie.getJobInfo(jobId);
        if (wf.getStatus() == WorkflowJob.Status.FAILED
                || wf.getStatus() == WorkflowJob.Status.KILLED
                || wf.getStatus() == WorkflowJob.Status.PREP
                || wf.getStatus() == WorkflowJob.Status.SUCCEEDED) {
            break;
        }
    }

    wf = oozie.getJobInfo(jobId);
    assertEquals(WorkflowJob.Status.SUCCEEDED, wf.getStatus());
    LOG.info("OOZIE: Workflow: {}", wf.toString());

    hdfsFs.close();
}
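As a side note, the while(true) poll above can spin forever if the job never leaves RUNNING (which is exactly what happens when the share lib is broken). A bounded polling helper would fail the test after a timeout instead of hanging. Here is a plain-Java sketch with hypothetical names, written without the Oozie dependency so it can stand alone; in the test, statusSource would be () -> oozie.getJobInfo(jobId).getStatus():

```java
import java.util.Iterator;
import java.util.List;
import java.util.function.Supplier;

public class WorkflowPoller {
    // Mirrors the subset of WorkflowJob.Status the test cares about.
    public enum Status { PREP, RUNNING, SUCCEEDED, FAILED, KILLED }

    // Polls statusSource until a non-RUNNING status is seen or the
    // deadline passes, and returns the last observed status.
    public static Status awaitTerminal(Supplier<Status> statusSource,
                                       long timeoutMillis,
                                       long pollIntervalMillis) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        Status status = statusSource.get();
        while (status == Status.RUNNING) {
            if (System.currentTimeMillis() >= deadline) {
                break; // give up instead of hanging the test forever
            }
            Thread.sleep(pollIntervalMillis);
            status = statusSource.get();
        }
        return status;
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulated status sequence standing in for oozie.getJobInfo(jobId).getStatus()
        Iterator<Status> fake =
            List.of(Status.RUNNING, Status.RUNNING, Status.SUCCEEDED).iterator();
        Status last = awaitTerminal(fake::next, 5_000, 10);
        System.out.println(last); // prints SUCCEEDED
    }
}
```

The caller can then assert on the returned status directly, and a stuck job fails fast with whatever status it was last seen in.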
I would like to try to debug it. Where do the Oozie logs go?
sakserv commented
@seregasheypak - sorry for the delay. I broke notifications for this repo and missed this issue. Thanks for reporting this.
I found the issue with logging and will include the fix in the 0.1.9 release I am cutting today. The fix will set LocalOozie logging to the console. Please give that a try to see if it uncovers more details on the issue above.
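For reference, directing log4j 1.x output (which Oozie uses) to the console typically looks like the fragment below. This is only an illustrative sketch of a log4j.properties, not the actual fix shipped in 0.1.9, and the DEBUG level on org.apache.oozie is an assumption for debugging purposes:

```properties
# Route all logging to stdout
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{ISO8601} %-5p %c{1} - %m%n

# Turn up verbosity for the embedded Oozie server while debugging
log4j.logger.org.apache.oozie=DEBUG
```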
I'll report back when 0.1.9 is released.
sakserv commented
0.1.9 has been released. Please let me know what you find out.
sakserv commented
Closing for now. Feel free to reopen if necessary.