License for this software?
chrberger opened this issue · 6 comments
I found your nice work and would like to integrate it in an open source project for video processing on RPi. Unfortunately, I could not find any license information in your repository. What is the applicable license for your work?
Hello,
There's actually no real license; it's free to use, do whatever you want with it :) . But since it's based on OpenMAX, I think we should comply with their license. (I'll have to add this to my Readme.)
By the way, since I haven't written any documentation so far, would you like a small example of how to extract image buffers and pass them to OpenCV or anything else?
Hi,
thanks for the feedback. I think explanatory examples are always helpful. Next, we will test the integrated software on our RPi boards in the lab; in case of questions or PRs, I might come back :-)
Here you go https://github.com/dridri/OpenMaxIL-cpp/blob/master/samples/camera_process.cpp
I don't have any RPi left right now to test it, but since I simply assembled it from existing code, it should work as is.
Excellent, thank you very much. My use case is to grab the I420-formatted frame from the camera, store it in a certain memory area, and afterwards pass it on to the OMX H.264 encoder. This is what my logic would look like:
bcm_host_init();
OMX_Init();
std::unique_ptr<IL::Camera> camera{new IL::Camera(WIDTH, HEIGHT, 0 /* device number */, false /* high speed */, 4 /* sensor mode */, true /* verbose */)};
std::unique_ptr<IL::VideoEncode> encoder{new IL::VideoEncode(4 * 1024 /* kbps */, IL::VideoEncode::CodingAVC, true /* live */, true /* verbose */)};
camera->setFramerate(static_cast<uint32_t>(FREQ));
camera->DisableProprietaryTunnels(71); // This disables image slicing, without it we would receive only WIDTH*16 image slices in getOutputData
camera->AllocateOutputBuffer(71);
camera->SetState(IL::Component::StateIdle);
encoder->SetState(IL::Component::StateIdle);
std::vector<char> h264Buffer;
h264Buffer.resize(WIDTH * HEIGHT); // Large enough to hold one encoded frame.
// Start capturing.
camera->SetState(IL::Component::StateExecuting);
encoder->SetState(IL::Component::StateExecuting);
camera->SetCapturing(true);
while (1) {
    uint8_t *i420Buffer = camera->outputPorts()[71].buffer->pBuffer;
    ssize_t length = camera->getOutputData(71, nullptr);
    // Do something with the i420 buffer.
    // ...
    // Encode I420 frame into h264.
    encoder->fillInput(200 /* encoder port */, i420Buffer, length);
    length = encoder->getOutputData(reinterpret_cast<uint8_t*>(&h264Buffer[0]));
    if (0 < length) {
        std::clog << "Received " << length << " bytes from h264 encoder." << std::endl;
    }
}
Does the use of the OMX ports look right to you?
Since you use the encoder ports manually without tunneling, you have to allocate their buffers (before going to the Idle state) using:
encoder->AllocateInputBuffer(200);
encoder->AllocateOutputBuffer(201);
You also have to disable proprietary tunnels on port 200 of the encoder so you can pass entire frames (as OpenMAX works with 16-pixel-high slices by default).
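Put together, the extra encoder setup in your snippet would look roughly like this, before the SetState calls (just a sketch; I'm assuming here that the encoder exposes the same DisableProprietaryTunnels(port) member as the camera):
encoder->DisableProprietaryTunnels(200); // Accept whole frames on port 200 instead of 16-pixel-high slices.
encoder->AllocateInputBuffer(200);       // Input buffer for the raw I420 frames.
encoder->AllocateOutputBuffer(201);      // Output buffer for the encoded h264 data.
camera->SetState(IL::Component::StateIdle);
encoder->SetState(IL::Component::StateIdle); // Allocation must happen before this Idle transition.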
By the way, I think I should rename (and probably move) the last parameter of the fillInput member, as it actually means EndOfFrame. Since we process entire frames here, you always have to pass true for this argument, thus calling encoder->fillInput(200 /* encoder port */, i420Buffer, length, false, true);
Finally, the encoder->getOutputData call should be in a loop of this kind:
while ( ( length = encoder->getOutputData( reinterpret_cast<uint8_t*>(&h264Buffer[0]) ) ) > 0 ) {
because the encoder may output several buffers at once, generally when returning h264 headers.
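Putting both points together, the inner loop of your snippet would then look roughly like this (a sketch assembled from the calls above, not a tested implementation):
while (1) {
    uint8_t *i420Buffer = camera->outputPorts()[71].buffer->pBuffer;
    ssize_t length = camera->getOutputData(71, nullptr);

    // Pass the whole frame to the encoder, marking it as an end of frame.
    encoder->fillInput(200 /* encoder port */, i420Buffer, length, false, true);

    // Drain every buffer the encoder has ready (it may return several at once).
    while ((length = encoder->getOutputData(reinterpret_cast<uint8_t*>(&h264Buffer[0]))) > 0) {
        std::clog << "Received " << length << " bytes from h264 encoder." << std::endl;
    }
}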
To get better performance, and more time to process your image, you can also put the encoder->getOutputData call inside a loop in another thread (everything is thread-safe).
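As a minimal sketch of that threaded variant (the std::atomic stop flag, the local buffer, and the thread itself are only my additions for illustration):
// Requires <thread> and <atomic>.
std::atomic<bool> running{true};
std::thread drainThread([&]() {
    // Separate buffer so the capture loop keeps its own storage.
    std::vector<char> localBuffer(WIDTH * HEIGHT);
    while (running) {
        ssize_t len = encoder->getOutputData(reinterpret_cast<uint8_t*>(&localBuffer[0]));
        if (len > 0) {
            // Consume the encoded h264 data here (write it to a file, send it over the network, ...).
        }
    }
});

// ... the main loop keeps calling camera->getOutputData(71, nullptr) and encoder->fillInput(...) ...

running = false;
drainThread.join();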
Great, thanks for your feedback; we will try that on our boards.