Enhancement Ideas 2
mikedh opened this issue · 10 comments
PR's are always super welcome! This is a relatively small open source project and really benefits from the bugfixes, features, and other stuff the 100+ contributors have PR'd, so thanks!
The previous issue (#199) got pretty out of date and a lot of the stuff on it was done so here's a new thread. Feel free to suggest things!
- Implement an actually correct and maybe even faster OBB algorithm (#1544)
- Finally get the `embree4` stuff over the line to be merged
  - The `mikedh/embreex` fork now has wheels for Mac/Linux/Win Python 3.6-3.11
  - A number of people have worked on this ray branch #1108
- Add texture preservation to `mesh.slice_plane` #1920
- Wishlist: find some way to make `scene.save_image` always work. We get a ton of issues about this and the answer is always pretty much "sorry your graphics driver hates you," or "try the docker image." I guess it's just a hard problem that depends a lot on platform and I'm not sure there's any way to solve it from trimesh
  - Apparently vulkan treats "screen vs offscreen" basically the same. Maybe we just need an additional viewer window that uses a vulkan backend, wgpu-native seems super interesting: https://github.com/pygfx/wgpu-py . BGFX also probably does this.
- Polish the wart where OBJ sometimes returns a `Scene`. Maybe now that multi-material single meshes were PR'd in #1246 we could have OBJ always return a `Trimesh`.
  - Probably will be as fixed as it can be by #2241
- Fix `trimesh.remesh.subdivide` / `subdivide_to_size` so that if you specify a subset of faces it splits their neighbors in a way to maintain watertightness.
- Building on the relative success (measured by the lack of an immediate cavalcade of issues) of the deprecation procedure used in #1693 refactor the `trimesh.path.Path` API. Specifically:
  - `path.polygons_closed` -> `path.linestrings`
  - `path.polygons_full` -> `path.polygons`
  - Maybe additional logic so it isn't quite so "closed curves" specific and matches the Shapely data model a little better. Not sure about `path.path` and `path.discrete`.
- Make `pyright trimesh` pass
  - trimesh predates both pyright/mypy and type hints. Fixing a few of these at a time would be a great place to start contributing, i.e. `pyright trimesh/graph.py` (see the sketch after this list).
  - Tracked in #2192
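For the typing item, here is a minimal illustration of the kind of change involved, using a made-up helper rather than actual `trimesh.graph` code; running `pyright trimesh/graph.py` before and after a change like this shows the error count drop:

```python
import numpy as np
from numpy.typing import ArrayLike, NDArray


def vertex_use_count(faces: ArrayLike) -> NDArray[np.intp]:
    # count how many faces reference each vertex index;
    # the annotations let pyright check callers without changing behavior
    faces = np.asanyarray(faces, dtype=np.int64)
    return np.bincount(faces.ravel())
```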
Firstly, I want to say thank you @mikedh for the simply amazing library! It's incredibly impressive how much functionality trimesh contains.
Just wanted to suggest: it'd be nice to see some of the functionality, particularly under proximity and collisions, leverage primitives when possible rather than treating everything as an arbitrary mesh. I've been using the primitive generation trimesh provides, but as far as I know, things like signed distance queries and collision queries are not taking advantage of the metadata associated with the primitive geometries. Please correct me if I'm wrong! I've written my own set of SDF queries for the primitives I'm working with right now (boxes, cylinders, spheres) and found the speedup for my use case considerable (just adapted the very comprehensive set of examples from Inigo Quilez's wonderful site).
```python
import numpy as np
import trimesh as tm


def invert_transform(matrix):
    # inverse of a 4x4 homogeneous transformation matrix
    # (trimesh.transformations.inverse_matrix would also work here)
    return np.linalg.inv(matrix)


def sdf_box(locations, world_tf_box, box_half_dims):
    # locations is an (n, 3) array of query points
    # world_tf_box is a 4x4 transformation matrix
    # box_half_dims is a (3,) array of half-extents: [length, width, height] / 2
    # returns an (n,) array of sdf values (positive outside, negative inside)
    # get the inverse transformation matrix
    box_tf_world = invert_transform(world_tf_box)
    # transform the locations to the box frame
    locations_box_frame = tm.transformations.transform_points(locations, box_tf_world)
    # per-axis distance to the box surface (broadcasts to (n, 3))
    diff = np.abs(locations_box_frame) - box_half_dims
    # first term handles the case where the point is outside the box
    # second term handles the case where the point is inside the box
    sdf_values = np.linalg.norm(np.maximum(diff, 0.0), axis=1) + np.minimum(
        np.max(diff, axis=1), 0.0
    )
    return sdf_values


def sdf_cylinder(locations, world_tf_cylinder, cylinder_dims):
    # locations is an (n, 3) array of query points
    # world_tf_cylinder is a 4x4 transformation matrix
    # cylinder_dims is a (2,) array of [radius, total_height / 2]
    # note that the second entry is half the total height
    # returns an (n,) array of sdf values
    # get the inverse transformation matrix
    cylinder_tf_world = invert_transform(world_tf_cylinder)
    # transform the locations to the cylinder frame
    locations_cylinder_frame = tm.transformations.transform_points(
        locations, cylinder_tf_world
    )
    # radial and axial distance to the cylinder surface
    r_diff = np.linalg.norm(locations_cylinder_frame[:, :2], axis=1) - cylinder_dims[0]
    z_diff = np.abs(locations_cylinder_frame[:, 2]) - cylinder_dims[1]
    rz_diff = np.stack([r_diff, z_diff], axis=1)
    # first term handles the case where the point is inside the cylinder
    # second term handles the case where the point is outside the cylinder
    sdf_values = np.minimum(np.max(rz_diff, axis=1), 0.0) + np.linalg.norm(
        np.maximum(rz_diff, 0.0), axis=1
    )
    return sdf_values


def sdf_sphere(locations, world_tf_sphere, sphere_radius):
    # locations is an (n, 3) array of query points
    # world_tf_sphere is a 4x4 transformation matrix
    # sphere_radius is a scalar radius
    # returns an (n,) array of sdf values
    # get the inverse transformation matrix
    sphere_tf_world = invert_transform(world_tf_sphere)
    # transform the locations to the sphere frame
    locations_sphere_frame = tm.transformations.transform_points(
        locations, sphere_tf_world
    )
    # distance from the center minus the radius
    sdf_values = np.linalg.norm(locations_sphere_frame, axis=1) - sphere_radius
    return sdf_values
```
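For context, here is a quick sanity check of `sdf_box` against a `trimesh.primitives.Box` and its mesh-based `contains`; note these helpers use the graphics convention (positive outside), the opposite sign of `trimesh.proximity.signed_distance`:

```python
import numpy as np
import trimesh

# a box somewhere off the origin
box = trimesh.primitives.Box(
    extents=[2.0, 1.0, 0.5],
    transform=trimesh.transformations.translation_matrix([0.3, -0.1, 0.2]),
)

# random query points in a region around the box
points = np.random.uniform(-2.0, 2.0, size=(1000, 3))

# analytic SDF using the primitive's transform and half-extents
values = sdf_box(points, box.primitive.transform, np.asarray(box.primitive.extents) / 2.0)

# compare the sign against the mesh-based containment check
agreement = np.mean(box.contains(points) == (values < 0.0))
print(f"sign agreement with mesh-based contains: {agreement:.3f}")
```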
I imagine it should also be possible to leverage primitives in collision queries, since FCL already has objects for common primitives (rough sketch below)...
But perhaps it's simple enough for users like me to just add this functionality on an as-needed basis for themselves rather than offer full support.
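For reference, a rough sketch of what that could look like directly against `python-fcl` (the package trimesh's `CollisionManager` wraps); this bypasses trimesh entirely and just shows FCL accepting analytic primitives rather than BVH meshes:

```python
import numpy as np
import fcl

# analytic primitives instead of triangle meshes
box = fcl.Box(1.0, 1.0, 1.0)
sphere = fcl.Sphere(0.5)

# place the sphere just outside the box along +x
obj_box = fcl.CollisionObject(box, fcl.Transform(np.eye(3), np.zeros(3)))
obj_sphere = fcl.CollisionObject(sphere, fcl.Transform(np.eye(3), np.array([1.2, 0.0, 0.0])))

request = fcl.CollisionRequest()
result = fcl.CollisionResult()
fcl.collide(obj_box, obj_sphere, request, result)
print("in collision:", result.is_collision)
```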
Dear @mikedh, trimesh is awesome, thank you!
As an educator, my feature request would be for the community of trimesh users to have somewhere to discuss and publish other forms of documentation. We have a nice reference and a few examples, but, according to Laing's chart I copy below, we could also have tutorials, how-to guides, and explanations.
Maybe turning on the discussions feature of the repo could be a way in that direction... We could build a gallery of user examples, tutorials & etc.
Hi @mikedh
Very impressed with this library; it's fast and supports a wide variety of exchange formats. I have been having a blast using it so far - thank you for your work here and for keeping it under what looks like active development.
I wonder how you feel about the current state of mesh visual data representation. So far I am under the impression that `Trimesh` objects can only represent either vertex color data via `ColorVisuals` or texture coordinates via `TextureVisuals`, but never both. It also seems that texture coordinates will only be available if a valid material is present, at least via the GLTF exchange implementation.
This presents a serious challenge; it's common in real-time graphics to have both - this allows colors sampled from textures to be tinted, overlaid, or modified by vertex color values further down a shader pipeline. It is also common to encode a variety of useful mesh data in both vertex color and texture coordinate buffers without the need for materials or textures.
What are your thoughts on this, and are you open to changes in this area? Or perhaps I've got the wrong impression here?
@RealDanTheMan glad you've been happy with it!
> I wonder how you feel about the current state of mesh visual data representation. So far I am under the impression that `Trimesh` objects can only represent either vertex color data via `ColorVisuals` or texture coordinates via `TextureVisuals`, but never both. It also seems that texture coordinates will only be available if a valid material is present, at least via the GLTF exchange implementation.
Yeah, the library started for my mechanical work and visual info was only used for debugging initially haha. Although it's evolved quite a bit since then! I agree that it really seems like `ColorVisuals` and `TextureVisuals` could be combined into a single `Visuals` object that collected "all the things": `materials` as just a list of materials, `uv`, `face_colors`, `vertex_colors`.
I don't think this would necessarily even require a ton of API breakage as `Visuals.kind` is already there and needs to be checked by a user. The colors objects are probably a little more magical than I'd write today (i.e. checking hashes for modified arrays like `mesh.visual.face_colors[10] = [255, 0, 0, 255]` then changing state based on that) but I have found that sort of thing pretty useful for debugging. The other thing (I don't totally remember what it's currently doing) is to use `mesh.vertex_attributes` and `mesh.face_attributes` in the visual objects to store the actual data if reasonably possible. If you want to take a go at it, PR's would be very welcome!
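Purely as a strawman (the names here are hypothetical, not existing trimesh API), the combined object might look something like:

```python
from dataclasses import dataclass, field
from typing import Optional

import numpy as np


@dataclass
class CombinedVisuals:
    # hypothetical container collecting "all the things"
    materials: list = field(default_factory=list)  # zero or more materials
    uv: Optional[np.ndarray] = None                # (n_vertices, 2) texture coordinates
    vertex_colors: Optional[np.ndarray] = None     # (n_vertices, 4) RGBA
    face_colors: Optional[np.ndarray] = None       # (n_faces, 4) RGBA

    @property
    def kind(self) -> str:
        # rough analogue of the existing `Visuals.kind` check
        if self.uv is not None and len(self.materials) > 0:
            return "texture"
        if self.vertex_colors is not None:
            return "vertex"
        if self.face_colors is not None:
            return "face"
        return "none"
```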
Hi Mike, thanks for this great library. I added 2 feature requests: adding 3D text generation, and adding bevel/chamfer/fillet support.
I know these are not easy or necessarily applicable asks for Trimesh. But I hope you can consider them, thanks.
Hi @mikedh,
Are you still planning on writing a systems paper as mentioned in #346?
Trimesh is used by thousands of people now, and I believe a software paper would add value to the library (visibility, DOI citation, research-oriented usage).
I think a publication in JOSS would be a great fit, as their papers typically include a summary and a gallery of features.
Hi @mikedh,
It seems that the decision has been made to use pyvista's implementation of the fast simplification algorithm. However, this choice introduces additional dependencies, particularly on PyVista, as outlined in their requirements documentation.
On the other hand, the package we're working on, pyfqmr, has been around for a while and shares the same foundational code (based on sp4cerat's C++ implementation). We prioritize minimal dependencies, relying solely on NumPy (and Cython for compilation). While our implementation might be slightly slower, it exposes an additional algorithm, `simplify_mesh_lossless`, which uses a threshold-based stopping criterion, and we have also added a preserve-open-borders option in this implementation.
If there's interest, I would be glad to work on a PR to explore integrating this approach, which could provide similar functionality while keeping dependencies light.
Looking forward to your thoughts.
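For anyone curious what that looks like in practice, pyfqmr's documented usage is roughly the following (from memory, so treat the exact keyword names as an assumption):

```python
import pyfqmr
import trimesh

mesh = trimesh.load_mesh("model.stl")

simplifier = pyfqmr.Simplify()
simplifier.setMesh(mesh.vertices, mesh.faces)

# target-count based simplification, optionally preserving open borders
simplifier.simplify_mesh(target_count=1000, aggressiveness=7, preserve_border=True)
# the lossless variant mentioned above uses a threshold-based stopping criterion:
# simplifier.simplify_mesh_lossless()

vertices, faces, normals = simplifier.getMesh()
simplified = trimesh.Trimesh(vertices=vertices, faces=faces)
```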
Hey @Kramer84, I think `pip install fast-simplification` also only requires `numpy`? If the package required anything beyond `numpy` yeah we would have to drop it from our extra. But their latest setup.py only requires numpy, and a quick check in docker of `pip install fast-simplification` also only pulls in numpy:
```
mikedh@orion:~$ docker run -it python:3.12-slim-bookworm /bin/bash
root@860d12c3a205:/# pip install fast-simplification
Collecting fast-simplification
  Downloading fast_simplification-0.1.9-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (9.7 kB)
Collecting numpy>=2.0 (from fast-simplification)
  Downloading numpy-2.1.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.metadata (60 kB)
Downloading fast_simplification-0.1.9-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (1.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.8/1.8 MB 22.4 MB/s eta 0:00:00
Downloading numpy-2.1.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (16.0 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 16.0/16.0 MB 31.7 MB/s eta 0:00:00
Installing collected packages: numpy, fast-simplification
Successfully installed fast-simplification-0.1.9 numpy-2.1.2
```
That being said, I'm definitely open to multiple backends behind `simplify_quadric_decimation`! It's currently implemented in a pretty simple way, and missing things like vertex attribute combining (i.e. UV's).
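Not trimesh code, just a sketch of the kind of thin dispatch layer a multi-backend decimation function could use (backend names and exact signatures here are assumptions, not the current API):

```python
def decimate(mesh, face_count, backend="fast_simplification"):
    # dispatch to whichever decimation backend is requested and installed
    if backend == "fast_simplification":
        import fast_simplification
        vertices, faces = fast_simplification.simplify(
            mesh.vertices, mesh.faces,
            target_reduction=1.0 - face_count / len(mesh.faces),
        )
    elif backend == "pyfqmr":
        import pyfqmr
        simplifier = pyfqmr.Simplify()
        simplifier.setMesh(mesh.vertices, mesh.faces)
        simplifier.simplify_mesh(target_count=face_count)
        vertices, faces, _ = simplifier.getMesh()
    else:
        raise ValueError(f"unknown backend: {backend}")
    return type(mesh)(vertices=vertices, faces=faces)
```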
Thanks for the correction @mikedh, I should have double-checked that! The C++ code is identical, but we're currently focusing on exposing more functionality at the Python level. UV tracking is already present in the source, so it seems like a good fit for trimesh! And since `fast_simplification` doesn't have any dependencies beyond `numpy`, we could look at merging the additional features from `pyfqmr` into `fast_simplification`. Given that they share the same underlying algorithm, it might not make sense to maintain two separate implementations.
> Maybe turning on the discussions feature of the repo could be a way in that direction... We could build a gallery of user examples, tutorials & etc.
Please consider creating a place for people to share materials related to how they use and teach trimesh. I work daily with a library that has trimesh integrations (https://py5coding.org/integrations/trimesh.html) and I wish there were more examples to show my students.
I'm talking about examples like this that are really hard to find: