luxonis/depthai-hardware

Cameraless OAK-USB

madgrizzle opened this issue · 8 comments

I'm looking at using one of my spare OAK-D-Lites to 'extend' my pipeline and hopefully increase the FPS, but I think it's a bit of a waste of resources when I'm not using the cameras on it. As I understand it, the NCS isn't compatible with DepthAI (not sure if it could be made to be), but a cameraless OAK with a USB interface (stick form factor) would be nice.
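For reference, something close to this is already possible with a second OAK whose cameras sit idle: the host sends frames over XLink and the spare device runs only a neural network node. Below is a minimal, untested sketch using the DepthAI Python API; the blob path, the MX ID, and the MobileNet network type are all placeholders, not anything recommended in this thread.

```python
# Minimal sketch: using a spare OAK purely as an inference stick by feeding it
# frames from the host over XLink (its on-board cameras stay unused).
# "model.blob" and the MX ID are hypothetical placeholders.
import depthai as dai
import numpy as np

pipeline = dai.Pipeline()

# Host -> device frame input
xin = pipeline.create(dai.node.XLinkIn)
xin.setStreamName("frames")

# Example detection network; any compiled blob could be used here
nn = pipeline.create(dai.node.MobileNetDetectionNetwork)
nn.setBlobPath("model.blob")          # placeholder blob path
nn.setConfidenceThreshold(0.5)
xin.out.link(nn.input)

# Device -> host detections output
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

# Pick the spare device by MX ID so the main OAK keeps running its own pipeline
found, device_info = dai.Device.getDeviceByMxId("REPLACE_WITH_MXID")  # placeholder
if not found:
    raise RuntimeError("spare OAK not found")

with dai.Device(pipeline, device_info) as device:
    q_in = device.getInputQueue("frames")
    q_out = device.getOutputQueue("detections", maxSize=4, blocking=False)

    # Send one 300x300 BGR frame (noise here, real frames in practice)
    frame = np.random.randint(0, 255, (300, 300, 3), dtype=np.uint8)
    img = dai.ImgFrame()
    img.setData(frame.transpose(2, 0, 1).flatten())  # planar BGR, as the NN expects
    img.setType(dai.ImgFrame.Type.BGR888p)
    img.setWidth(300)
    img.setHeight(300)
    q_in.send(img)
    print([d.label for d in q_out.get().detections])
```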

Hi @madgrizzle ,

Thanks! So actually you could use one of these if you'd like:
https://shop.luxonis.com/collections/modular-cameras/products/dm1090ffc.

You can order it without cameras. That said, it's lower volume and as a result is actually the same price as the OAK-D-Lite despite having no cameras, so I'm not sure it's necessarily better in this case.

Thanks,
Brandon

Yeah, I agree. That would be the challenge: getting the volume high enough to make it cost effective. Is there a reason (i.e., an incompatibility) that DepthAI could not be made to work on an NCS?

It could probably be made to work with the NCS. It would probably take some real work though, and since we don't control the NCS there would be a support risk: its internal hardware permutations are unknown to us and can change.

So our focus right now is on the next-generation successor to the Myriad X, which is about 10x faster in terms of neural inference. It's a minimum of 3.5x faster and a maximum of about 80x faster, depending on which models/permutations are being run. And it's slightly lower power.

So bang-for-buck wise this will be best for you @madgrizzle: the finished product will be somewhat more expensive, but definitely not 3.5x as expensive, and it will deliver a bare minimum of 3.5x the AI performance, more typically 10x, and up to 80x for models that are well suited to quantization and sparsity support (which models those are is TBD; we haven't evaluated that yet, but @tersekmatija will likely be doing so soon).

Thoughts?

Thanks,
Brandon

I just saw that mentioned on Discord... sounds exciting. Honestly, the fact that I can run face detection plus face recognition plus object detection with spatial coordinates plus object tracking, all on device at about 10 FPS, is pretty impressive. Being able to do it on device is important for my application because as soon as host processing is involved, the fan on the Intel NUC starts running, and that's a good indicator that my robot's battery is being used up quickly.
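For anyone curious what "all on device" looks like in practice, here is a minimal sketch in the spirit of that pipeline: spatial object detection feeding an object tracker, both running on the OAK, with only tracklet metadata crossing to the host. The blob path is a placeholder, and the face detection/recognition stages are omitted to keep it short.

```python
# Sketch of an on-device pipeline: spatial detections + object tracking on the OAK,
# only lightweight tracklet metadata sent back to the host.
import depthai as dai

pipeline = dai.Pipeline()

# RGB camera feeding the detection network
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)
cam.setInterleaved(False)

# Stereo pair -> depth, needed for spatial (XYZ) detections
mono_left = pipeline.create(dai.node.MonoCamera)
mono_right = pipeline.create(dai.node.MonoCamera)
mono_left.setBoardSocket(dai.CameraBoardSocket.LEFT)
mono_right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
stereo = pipeline.create(dai.node.StereoDepth)
mono_left.out.link(stereo.left)
mono_right.out.link(stereo.right)

# Spatial detection network (runs on the Myriad X)
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")   # placeholder blob path
nn.setConfidenceThreshold(0.5)
cam.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)

# Object tracker consuming the detections, also on device
tracker = pipeline.create(dai.node.ObjectTracker)
tracker.setTrackerType(dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM)
nn.passthrough.link(tracker.inputTrackerFrame)
nn.passthrough.link(tracker.inputDetectionFrame)
nn.out.link(tracker.inputDetections)

# Only tracklet metadata goes to the host
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("tracklets")
tracker.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("tracklets", maxSize=4, blocking=False)
    while True:
        for t in q.get().tracklets:
            c = t.spatialCoordinates
            print(t.id, t.status.name, c.x, c.y, c.z)
```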

Oh, and the next generation will also have a quad-core ARM built in, so no other hardware is required. :-)

Don't make it too expensive please :)

Might be a dumb question, but is there a new Myriad chip releasing? I thought the OAKs already used the Myriad X chip?

@justin-larking Yes, check out our OAK Series 3 docs. It will be based on the new Movidius VPU, called Keem Bay.