OpenNTF/org.openntf.nsfodp

SSJS libraries are cut off around 32K


Most likely, the export code isn't reading the "this record is followed by X structures" marker properly and is only processing the first of the following structures.
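
If that's the shape of the bug, the fix is to honor the declared follower count while walking the composite data. A minimal sketch of that loop, with hypothetical signatures and field layouts standing in for the real CD record definitions (and with bounds checks omitted):

import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class CdScanSketch {
	// Illustrative stand-in only, not a real CD record signature.
	static final int SIG_HEADER = 0x0001;

	static byte[] extractScript(ByteBuffer cd) {
		cd.order(ByteOrder.LITTLE_ENDIAN);
		ByteArrayOutputStream script = new ByteArrayOutputStream();
		while (cd.remaining() >= 4) {
			int sig = cd.getShort() & 0xFFFF; // record signature
			int len = cd.getShort() & 0xFFFF; // total record length
			if (sig == SIG_HEADER) {
				// The header declares how many data-bearing records follow.
				// The suspected bug is the equivalent of running this loop
				// exactly once, which would truncate the script near 32K.
				int followers = cd.getShort() & 0xFFFF;
				for (int i = 0; i < followers; i++) {
					int partLen = cd.getShort() & 0xFFFF;
					byte[] data = new byte[partLen];
					cd.get(data);
					script.write(data, 0, data.length);
				}
			} else if (len > 4) {
				cd.position(cd.position() + (len - 4)); // skip other records
			}
		}
		return script.toByteArray();
	}
}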

Though my test exported the whole file, I hit an exception when compiling:

Caused by: java.lang.RuntimeException: Input length = 1
	at com.ibm.commons.util.FastStringBuffer.append(FastStringBuffer.java:792)
	at com.ibm.commons.util.FastStringBuffer.load(FastStringBuffer.java:778)
	at com.ibm.commons.util.io.StreamUtil.readString(StreamUtil.java:205)
	at org.openntf.nsfodp.commons.odp.util.DXLNativeUtil.getJavaScriptLibraryData(DXLNativeUtil.java:53)
	at org.openntf.nsfodp.commons.odp.JavaScriptLibrary.getCompositeData(JavaScriptLibrary.java:47)
	at org.openntf.nsfodp.commons.odp.JavaScriptLibrary.attachFileData(JavaScriptLibrary.java:52)
	at org.openntf.nsfodp.commons.odp.AbstractSplitDesignElement.getDxl(AbstractSplitDesignElement.java:71)
	at org.openntf.nsfodp.compiler.ODPCompiler.lambda$12(ODPCompiler.java:635)
	at org.openntf.nsfodp.compiler.ODPCompiler$$Lambda$137/0x0000000000000000.apply(Unknown Source)
	at java.util.stream.Collectors.lambda$toMap$58(Collectors.java:1321)
	at java.util.stream.Collectors$$Lambda$170/0x0000000000000000.accept(Unknown Source)
	at java.util.stream.ReduceOps$3ReducingSink.accept(ReduceOps.java:169)
	at java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
	at java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1384)
	at java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:497)
	at java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:487)
	at java.util.stream.ReduceOps$ReduceOp.evaluateSequential(ReduceOps.java:708)
	at java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:241)
	at java.util.stream.ReferencePipeline.collect(ReferencePipeline.java:566)
	at org.openntf.nsfodp.compiler.ODPCompiler.importFileResources(ODPCompiler.java:631)
	at org.openntf.nsfodp.compiler.ODPCompiler.compile(ODPCompiler.java:367)
	at org.openntf.nsfodp.compiler.ODPCompiler.compile(ODPCompiler.java:301)
	at org.openntf.nsfodp.compiler.servlet.ODPCompilerServlet.lambda$3(ODPCompilerServlet.java:213)
	at org.openntf.nsfodp.compiler.servlet.ODPCompilerServlet$$Lambda$39/0x0000000000000000.call(Unknown Source)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at lotus.domino.NotesThread.run(Unknown Source)

That failure comes from malformed Unicode sequences a good chunk of the way through the exported file. My hunch is that this stems from improper reading of the data: miscellaneous struct bytes being decoded as if they were string content.
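
For what it's worth, "Input length = 1" is the message java.nio.charset.MalformedInputException carries when a strict decoder hits an invalid byte sequence, which fits the struct-garbage theory. A standalone reproduction of that exception (using UTF-8 here, since LMBCS has no standard Java charset):

import java.nio.ByteBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.CodingErrorAction;
import java.nio.charset.StandardCharsets;

public class MalformedDemo {
	public static void main(String[] args) throws CharacterCodingException {
		// A lone continuation byte (0x80) is malformed UTF-8; a decoder set
		// to REPORT throws MalformedInputException: "Input length = 1".
		byte[] garbage = { 'o', 'k', (byte) 0x80, 'x' };
		StandardCharsets.UTF_8.newDecoder()
			.onMalformedInput(CodingErrorAction.REPORT)
			.onUnmappableCharacter(CodingErrorAction.REPORT)
			.decode(ByteBuffer.wrap(garbage)); // throws here
	}
}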

If I replace the content on disk with a clean >32K file from elsewhere, the import works, but the export is oddly chopped up in the middle: the start and end are both present, but a large stretch of text in the middle is missing. It could be that iteration within each item breaks silently.
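
If per-item iteration is the culprit, the export would need to accumulate every item carrying the script data instead of bailing out partway through the sequence. A rough sketch of that accumulation, with RawItem as a hypothetical stand-in for however the item values are actually surfaced:

import java.io.ByteArrayOutputStream;
import java.util.List;

public class ItemConcatSketch {
	/** Hypothetical abstraction over a document's items. */
	interface RawItem {
		String name();
		byte[] value();
	}

	// Large design data is split across several items sharing one name.
	// Silently dropping out of this loop partway and resuming at a later
	// item would produce exactly the "start and end present, middle
	// missing" symptom described above.
	static byte[] gather(List<RawItem> items, String itemName) {
		ByteArrayOutputStream out = new ByteArrayOutputStream();
		for (RawItem item : items) {
			if (itemName.equalsIgnoreCase(item.name())) {
				byte[] v = item.value();
				out.write(v, 0, v.length);
			}
		}
		return out.toByteArray();
	}
}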

It seems like this might be encoding-related, on either the read or the write side. The first chunk of data, which is written in 20,000-byte increments during compilation, comes back as 20,000 bytes when read raw, but as only 7233 bytes when read back through LMBCS. It's not just a matter of dropping the LMBCS path, though, since I believe that is indeed the proper encoding. It could be that there's a null character in there somewhere... the break landing at 7233 rules out chunked CD records, at least, which is helpful.
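
One cheap way to test the null-character theory: dump that first 20,000-byte chunk to a file and scan it for 0x00 bytes, since a native call treating the buffer as a NUL-terminated string would stop at the first one, presumably around offset 7233. A throwaway check (the dump path is hypothetical):

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class NullScan {
	public static void main(String[] args) throws IOException {
		// chunk0.bin = the first 20,000-byte chunk, dumped separately
		byte[] chunk = Files.readAllBytes(Paths.get("chunk0.bin"));
		for (int i = 0; i < chunk.length; i++) {
			if (chunk[i] == 0) {
				System.out.println("NUL at offset " + i);
			}
		}
	}
}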

Modifying the example to emit functions numbered 1-499 in their names makes it clear that both problems seem to be happening at once: the output covers 1-140, then 303-337, then stops.
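
For anyone reproducing this, an equivalent numbered-function test library can be generated with something like:

import java.io.IOException;
import java.io.PrintWriter;

public class GenTestLib {
	public static void main(String[] args) throws IOException {
		// Emit 499 trivially numbered SSJS functions so that gaps in the
		// exported file are visible at a glance (e.g. 1-140, then 303-337).
		try (PrintWriter out = new PrintWriter("testLibrary.jss")) {
			for (int i = 1; i <= 499; i++) {
				out.println("function test" + i + "() {");
				out.println("\treturn " + i + ";");
				out.println("}");
			}
		}
	}
}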