Image drawing is incredibly slow
The original Adafruit_ILI9341 library used NumPy to speed up the conversion from pixels to an SPI-conformant byte array (see here).
The new library in this repository uses the following loop:
# Iterate through the pixels
for x in range(self.width):  # yes this double loop is slow,
    for y in range(self.height):  # but these displays are small!
        pix = color565(img.getpixel((x, y)))
        pixels[2*(y * self.width + x)] = pix >> 8
        pixels[2*(y * self.width + x) + 1] = pix & 0xFF
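For context, color565 packs an (R, G, B) pixel into a single 16-bit 565 value, which the loop then splits into two bytes. Here is a minimal sketch of that packing, assuming the helper accepts the (R, G, B) tuple returned by getpixel() (the library's actual color565 may take its arguments differently):

def pack565(rgb):
    """Illustrative stand-in for color565: pack an (R, G, B) tuple into RGB565."""
    r, g, b = rgb
    # Keep the top 5 bits of red, 6 of green and 5 of blue: RRRRRGGG GGGBBBBB
    return ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)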
The comment about slowness is no exaggeration: on a Raspberry Pi Zero, drawing a black image with a single line of text takes 8 seconds in my testing.
When I used the old code
pixelbytes = list(image_to_data(image))
self.display._block(0, 0, self.display.width - 1, self.display.height - 1, pixelbytes)
with
import numpy as np

def image_to_data(image):
    """Convert a PIL image to a flat list of 16-bit 565 RGB byte values."""
    # NumPy is much faster at doing this. NumPy code provided by:
    # Keith (https://www.blogger.com/profile/02555547344016007163)
    pb = np.array(image.convert('RGB')).astype('uint16')
    color = ((pb[:,:,0] & 0xF8) << 8) | ((pb[:,:,1] & 0xFC) << 3) | (pb[:,:,2] >> 3)
    return np.dstack(((color >> 8) & 0xFF, color & 0xFF)).flatten().tolist()
the conversion time dropped to below half a second. Unless there was a technical reason not to depend on NumPy, I think it'd be worth bringing back the old code. 8 seconds per screen update rules out a lot of use cases.
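For anyone who wants to reproduce the comparison, here is a rough timing and equivalence check, assuming a 240x320 display-sized image, the image_to_data function quoted above, and the same 565 packing inlined for the loop path (the size and variable names here are illustrative, not taken from the library):

import time
from PIL import Image

img = Image.new('RGB', (240, 320))  # black, display-sized test image (size assumed)

# NumPy path, using image_to_data as quoted above
t0 = time.monotonic()
fast = image_to_data(img)
print('NumPy path: %.3f s' % (time.monotonic() - t0))

# Per-pixel path, packing each pixel the same way as the double loop above
t0 = time.monotonic()
width, height = img.size
slow = bytearray(2 * width * height)
for x in range(width):
    for y in range(height):
        r, g, b = img.getpixel((x, y))
        pix = ((r & 0xF8) << 8) | ((g & 0xFC) << 3) | (b >> 3)
        slow[2 * (y * width + x)] = pix >> 8
        slow[2 * (y * width + x) + 1] = pix & 0xFF
print('loop path:  %.3f s' % (time.monotonic() - t0))

# Both conversions should produce the identical byte stream
assert bytearray(fast) == slow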
oh yeah good idea - would you like to submit a PR to re-implement numpy speedups? :)
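If it helps, the PR could amount to little more than swapping the double loop for the NumPy conversion; a rough sketch of what the drawing method might become (the method name, attributes and _block signature are assumed from the snippets in this issue, not checked against the current sources):

def image(self, img):
    """Sketch: draw a PIL image by sending its RGB565 byte stream to the display."""
    # Vectorized NumPy conversion instead of the per-pixel double loop
    pixelbytes = image_to_data(img)
    self._block(0, 0, self.width - 1, self.height - 1, pixelbytes)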
Totally! Just meant to test the waters first. 🙂
👍