30350n/pcb2blender

What does this add?

marcdraco opened this issue · 9 comments

I know I might sound like an old fart here, but I failed to get this to work until I downloaded your importer, and after all the fooling around with text markers, what came out of Blender was nothing like what came out of Pcbnew.

But more than that, Pcbnew already has this function. While it's buggy, it works directly in Blender with the existing import/export, it's faster as a result, and you just export the board directly.

I'm a developer myself, so I'm cognisant of people just jumping in and complaining. I'm honestly confused why you did this rather than extend the existing exporter? Multi-stacks, perhaps?

The only issue I had with the existing export in Pcbnew is that it's necessary to join all the parts into one to do transforms, because each object has its own origin. Honestly man, I'm not taking a pop, I'm just confused.

KiCad 3D Viewer:

[screenshot]

Blender import (without pcb2blender, VRML format):

[screenshot]

Blender import using pcb2blender:

[screenshot]

(Both Blender screenshots are straight out of the import; the only thing I added is an HDRI for lighting. The pcb2blender render took approximately twice as long to run compared to the KiCad render (~3 s vs ~6 s on a 9-year-old GPU), which for the increase in quality is more than fine.)

I downloaded your importer and after all the fooling around with text markers

If you are only exporting a single (non-panelized) PCB, you can just skip that.

I'm honestly confused why you did this rather than extend the existing exporter?

What "exporter"?

The only issue I had with the existing export in Pcbnew is that it's necessary to join all the parts into one to do transforms, because each object has its own origin.

That's more of a feature than an issue. The main problem here is that Blender's VRML importer doesn't join each individual part (which might consist of multiple face sets) together. pcb2blender fixes this.
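
For illustration only, a minimal Blender Python sketch of that kind of join step could look like the following. This is not pcb2blender's actual implementation, and the name prefix used here is purely hypothetical:

```python
import bpy

def join_parts_by_prefix(prefix):
    """Join all imported mesh objects whose names share a prefix into one object,
    so the whole part transforms around a single origin."""
    parts = [obj for obj in bpy.context.scene.objects
             if obj.type == 'MESH' and obj.name.startswith(prefix)]
    if not parts:
        return None
    bpy.ops.object.select_all(action='DESELECT')
    for obj in parts:
        obj.select_set(True)
    bpy.context.view_layer.objects.active = parts[0]
    bpy.ops.object.join()  # merge all selected meshes into the active object
    return bpy.context.view_layer.objects.active

# Example usage with a made-up part name:
# join_parts_by_prefix("R_0805")
```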

Did that sufficiently answer your questions?

Yeah, thanks, it did. Sorry, I don't get in much.

I did notice your shaders absolutely killing my older machines during compilation, but that's just Blender.

All good, yeah they are pretty heavy. Optimizing them is on the TODO list (and has been for a while ...).

It's something the Blender devs need to look at too - not really a bug as such, but even a "small" model can bring my smaller i5 machines to their knees. I don't blame shader authors for that.

I rendered some tests on the i7 (the one I use for proper Blender work), but it costs a fortune to run that thing. That "solder" shader is really nice - far better than the VRML one, which I think just makes some basic copper.

I can literally see on my electricity bill the cost of rendering some AI-enhanced photos (2,500 of them), so it had to run overnight. The machine barely broke a sweat, but man, it put my bill up by almost 10% for the month!

If you are rendering on your CPU, there's your mistake. It will generally work, but it'll be very slow (and inefficient too) compared to using a dedicated GPU.
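
As a rough sketch of what that switch involves (assuming a CUDA-capable NVIDIA card and the bundled Cycles add-on; OptiX is the other common choice on newer cards), enabling GPU rendering from a script looks roughly like this:

```python
import bpy

# Point Cycles at the GPU backend and enable the detected devices.
prefs = bpy.context.preferences.addons["cycles"].preferences
prefs.compute_device_type = "CUDA"   # or "OPTIX" on RTX-class cards
prefs.get_devices()                  # refresh the detected device list
for device in prefs.devices:
    device.use = True                # enable every detected compute device

scene = bpy.context.scene
scene.render.engine = "CYCLES"
scene.cycles.device = "GPU"
```

The same settings are available in the UI under Edit > Preferences > System > Cycles Render Devices, plus the Device dropdown in the render properties.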

even a "small" model

What matters here is detail, not size.

I meant the shader compilation, not the rendering. That's what brings the thing to its knees; even just switching into the Eevee preview starts all that messing around. Not your fault, just the way Blender works.

Funny thing about CPU vs. GPU: it's actually more efficient to use multiple CPUs with loads of memory than a dedicated graphics card for larger scenes using Cycles. I didn't believe it either, but my 16-core Xeon machine (which is pretty ancient now) rendered in CPU mode almost as fast as the 1080 Ti (I've got a lot of tech just lying around, much of it quite old). Smaller scenes are great on the GPU of course.

I meant the shader compilation, not the rendering. That's what brings the thing to its knees; even just switching into the Eevee preview starts all that messing around. Not your fault, just the way Blender works.

Partially my fault, because the custom shaders are really not optimized for Eevee, but yeah.

It's actually more efficient to use multiple CPUs with loads of memory than a dedicated graphics card for larger scenes using Cycles. I didn't believe it either, but my 16-core Xeon machine (which is pretty ancient now) rendered in CPU mode almost as fast as the 1080 Ti (I've got a lot of tech just lying around, much of it quite old).

Huh, are you using CUDA or OptiX with the 1080 Ti?

CUDA for the Ti, but that machine is so power-hungry that it rarely gets switched on these days. The advice came from (Blender's) Ton himself at Blendercon a few years back.

I thought he'd lost his chips, to be honest, but the limited memory of GPUs (mine is 12 GB) was the issue, I believe. I wear rather more hats than my poor little noodle can comfortably handle, so I've got to manage with a limited skillset on each, if that makes sense.