hzeller/flaschen-taschen

flaschen-taschen Full screen Image

beikimajid opened this issue · 7 comments

According to hzeller/rpi-rgb-led-matrix#372,
(My image and my led screen are 384x96)

I run (./ft-server) and use the -g parameter (./send-image -g384x96+0+0 some-image.png)
Result: Attempting to send image larger than fits in a UDP packet (110623 > 65507 bytes)

I run (./ft-server) and use the -g parameter (./send-image -g384x54+0+0 some-image.png)
Result: nothing is shown.

Then I run (./ft-server --led-chain=36) and (./ft-server --led-chain=36 -D 384x54)
Result: the image is displayed, but not full screen.

My problem is with the height (96 pixels high triggers the error).
What is the problem, and how do I get a full-screen image?

384 * 96 pixels at 3 bytes per pixel is 110592 bytes, but the UDP packet size limit is 65507 bytes.

So just send it in two halves (-g192x96+0+0 and -g192x96+192+0), or write a network protocol that uses TCP.

Do you mean splitting the image into two parts? How?

Any image processing tool will do.

How about a video file instead of an image?
If we divide the video into two parts, the two halves (for example, the left and right side) might not be displayed in sync.

I honestly don't understand why you make your life so complicated. From what I can tell from the flood of messages, you want to show some animation on a fairly large display, and you have spent the last couple of days trying to do so.

Why don't you just use rpi-rgb-led-matrix directly and write a program that does that with double-buffering, or implement a compressed stream? It will take you an afternoon to implement and you're done.

I'm really sorry for annoying you so much.
I use the led-image-viewer utility, and just as you said, I want to show an animation on a fairly large display; the speed and size parameters are important to me.
Can you point me to which part of the code I should start with to write the double-buffering or the compressed stream?
I'm sorry again.

I would start by just using the -O option to write the preprocessed stream and then play that stream. The output file might be large, but storage is cheap, so it might not really be a big deal. Make sure to have a sufficiently fast, good-quality SD card for that, and you can easily play an animation of this size at more than 100 fps.

If the file is too large, then consider implementing on-the-fly compression and decompression in content-streamer.{h,cc} and using that in led-image-viewer. There might be some low-hanging fruit in just using e.g. snappy to compress each frame; or you can take differentials between frames and compress those, which typically yields a better compression ratio unless the animation is very busy.

You can also skip the preprocessing entirely: take the RGB images, set the pixels directly, and implement simple double-buffering. This is what the video-viewer does internally, so you can get some inspiration from there. (You could also use the video-viewer directly, but make sure your video is already the size of the output, otherwise you will see flickering. Also, videos have compression artifacts, so this will not work well for line art.)

Using double-buffering in the led-image-viewer will actually simplify it, but it will constantly use CPU to fill frames at runtime. This is limited to roughly 3-4 megapixels/s on a Raspberry Pi 3, so with your panel size it will use a full core to reach roughly 80-100 fps (which might be OK; the Raspberry Pi 3 has 4 cores, one of which is reserved for the display output).