donmccurdy/glTF-Transform

Can't compress a model with draco

Closed this issue · 6 comments

Describe the bug
We have a very large model (we can't attach it because of an NDA).

When applying the draco command, it apparently doesn't change anything, and the output is the following:

debug: dedup: Merged 150 of 1538 accessors.
debug: dedup: Complete.
debug: weld: Complete.
debug: [KHR_draco_mesh_compression] Compression options: {"method":1,"encodeSpeed":5,"decodeSpeed":5,"quantizationBits":{"POSITION":14,"NORMAL":10,"COLOR":8,"TEX_COORD":12,"GENERIC":12},"quantizationVolume":"mesh"}
warn: [KHR_draco_mesh_compression]: Error applying Draco compression. Skipping primitive compression.
warn: [KHR_draco_mesh_compression]: Error applying Draco compression. Skipping primitive compression.
debug: [KHR_draco_mesh_compression] Compressed 321 primitives.

error: Cannot read properties of undefined (reading 'getParent')

I spent a few hours trying to figure out what is going on in the minified code, and the only thing I was able to understand is that the error originates from here: https://github.com/donmccurdy/glTF-Transform/blob/main/packages/core/src/io/writer.ts#L430.

I am not familiar enough with your codebase to investigate further.

To Reproduce

Unfortunately I cannot share the model, but the command was just:
npx @gltf-transform/cli draco model.gltf model.gltf --verbose

Expected behavior
No error, and at least partially compressed model.

Versions:

  • Version: v3.10.1
  • Environment: Node.js v20.11.0

Could you test some of the following?

  1. Install from the v4 alpha:
npx @gltf-transform/cli@next draco in.glb out.glb --verbose
  2. Run the 'optimize' suite rather than just draco:
npx @gltf-transform/cli@next optimize in.glb out.glb --compress draco --no-simplify --no-instance --no-palette --no-join --no-flatten
  3. What is the output of gltf-transform validate scene.gltf?

Here is the output for the commands (for 1 and 2, I used model.gltf model.gltf instead of in.glb out.glb):
output.zip

Commands 1 and 2 did not change model.gltf, and did not produce any new files.
(The model is stored in a git repository; I checked git status.)

It may be worth trying (2), npx @gltf-transform/cli@next optimize ... once more. That error appears to have been a regression in an upstream dependency, I've pinned the dependency to a working version and republished the CLI.

As for the draco command failing, I'm not sure, it's difficult to guess what might cause that exception at that line. If you'd by any chance be willing to share a redacted .gltf file — perhaps deleting any object names, materials, textures, URIs, and buffer data — that may be enough. I don't need it to be a working file, but need to see whether something unusual might be going on with the data structure.
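One way to produce such a redacted file is a small script over the parsed JSON. This is a minimal sketch, not part of glTF-Transform: the `redact` helper and the sample document are hypothetical, though the keys it touches ("name", "uri") are standard glTF 2.0 properties. The result is intentionally non-renderable, matching the "doesn't need to be a working file" caveat above.

```python
# Hypothetical redaction helper: strips object names and blanks out URIs
# from a parsed .gltf JSON document, leaving its structure intact.
import json

def redact(gltf: dict) -> dict:
    """Remove identifying data from a glTF JSON dict, in place."""
    def walk(node):
        if isinstance(node, dict):
            node.pop("name", None)   # drop object names (meshes, nodes, materials, ...)
            if "uri" in node:
                node["uri"] = ""     # blank external buffer/image URIs
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)
    walk(gltf)
    return gltf

# Example on a tiny (non-renderable) glTF fragment.
doc = {
    "asset": {"version": "2.0"},
    "meshes": [{"name": "SecretPart", "primitives": []}],
    "buffers": [{"uri": "proprietary-model.bin", "byteLength": 1024}],
}
print(json.dumps(redact(doc)))
```

Structural fields like byteLength, accessor types, and primitive layouts survive, which is exactly what matters for debugging this kind of writer error.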

> It may be worth trying (2), npx @gltf-transform/cli@next optimize ... once more. That error appears to have been a regression in an upstream dependency, I've pinned the dependency to a working version and republished the CLI.

It worked this time. No errors, meshes got compressed.

Also, when the original file is imported into Blender and then exported back, the draco command works fine.

> As for the draco command failing, I'm not sure, it's difficult to guess what might cause that exception at that line. If you'd by any chance be willing to share a redacted .gltf file — perhaps deleting any object names, materials, textures, URIs, and buffer data — that may be enough. I don't need it to be a working file, but need to see whether something unusual might be going on with the data structure.

Got the permission, the full model is available on the web anyway. Here is the slightly stripped file: bad-gltf.zip

Ok, thanks! I think that confirms there's something about this glTF file's internal structure that is causing draco compression to fail. The 'optimize' pipeline, or import/export from Blender, must be restructuring the file enough to avoid the issue. But since the file doesn't have any major validation issues, it should be supported as-is. I'll look more into the file you've shared and try to figure out what's missing. :)