monatis/clip.cpp

Python bindings 🐍: support accepting a list of inputs in the encoding methods

Yossef-Dawoad opened this issue · 7 comments

Iterating over a path that contains images and calculating the image embeddings in Python is really expensive.
I was thinking it could maybe be offloaded to the C code and exposed through the Python bindings.
As an example, from the docs notebook:

image_files = [...]  # list of image paths
# ⚠️ it takes about ~30 min to embed 5000 images of the fashion dataset
image_embeddings = [model.load_preprocess_encode_image(im) for im in tqdm(image_files)]
image_embeddings = np.array(image_embeddings, dtype=np.float16)

The preferred behavior might look like this:

image_files = [...]  # list of image paths
# accepts a list of image files;
# load_preprocess_encode_images iterates and processes them in C, exposed through the bindings
image_embeddings = model.load_preprocess_encode_images(image_files)
image_embeddings = np.array(image_embeddings, dtype=np.float16)

Yes, there was a bug in batch inference. I'll expose it to Python after I make sure that it's completely fixed.

We can also support writing directly to a Numpy file (*.npy). The Numpy file structure is quite simple, and I can implement it in C++ without any dependency on Numpy. In this case, it could look something like:

model.load_preprocess_encode_images(
    image_paths,  # List[str]
    to_numpy_file='image_embeddings.npy'  # str, optional
)

WDYT?

Mmm, I don't think it's a great idea, at least not right now. Here's why: first, once you support saving in a specific format, you're also expected to support loading that same format. Second, saving the embeddings is not the real issue and is beyond the core of clip.cpp; getting them in the first place is the deal breaker and the expensive part in Python. See the Better Solution section below.

If it's really not that hard, you could do that (I don't recommend it, as it's out of scope for now), i.e.:

# embeddings as an object that has save_to_npy() and to_list() methods
embeddings = model.load_preprocess_encode_images(image_files)
embeddings.save_to_npy('image_embeddings.npy')
embeddings.to_list()  # returns a plain Python list
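
For concreteness, a minimal sketch of what such a wrapper could look like on the Python side (the class and its internals are hypothetical; only the two method names come from the proposal above):

class Embeddings:
    def __init__(self, flat_vector, dim):
        self._flat = flat_vector  # flat list of floats handed over from the native side
        self._dim = dim

    def to_list(self):
        # reshape the flat buffer into one embedding list per image
        d = self._dim
        return [self._flat[i:i + d] for i in range(0, len(self._flat), d)]

    def save_to_npy(self, path):
        import numpy as np
        np.save(path, np.array(self.to_list(), dtype=np.float16))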

Better Solution

I am planning to refactor the Python interface (nothing major, just some naming conventions and some new methods) to be as close as possible to that of the sentence_transformers package, for ease of use and familiarity for devs. I will raise an issue showing how the interface will look so it can be reviewed and we can start right into it, but it needs batch inference (a rough sketch follows below).
Also, let me know if you have something else in mind.
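
Sketching what a sentence_transformers-style call could look like (the encode name and parameters mirror that package and are assumptions here, not a settled clip.cpp API):

from clip_cpp import Clip  # import path is an assumption

model = Clip('path/to/model.gguf')
image_embeddings = model.encode(
    image_files,           # List[str], analogous to sentences in sentence_transformers
    batch_size=32,         # processed batch by batch on the native side
    show_progress_bar=True,
)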

embeddings as an object that has save_to_npy() and to_list() methods

Sounds reasonable. Let me have a look.

Also, let me know if you have something else in mind.

Zero-shot image labelling will also be exposed to Python soon. It's currently implemented in the C++ examples, but I'll move it to the main lib and then expose it to Python.
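
In embedding terms, zero-shot labelling boils down to comparing one image embedding against a text embedding per candidate label. A sketch under the assumption that the bindings expose an encode_text helper (that name, and the prompt template, are illustrative):

import numpy as np

def zero_shot_label(model, image_path, labels):
    img = np.array(model.load_preprocess_encode_image(image_path))
    txt = np.array([model.encode_text(f"a photo of a {label}") for label in labels])
    # cosine similarity between the image and each candidate label
    img = img / np.linalg.norm(img)
    txt = txt / np.linalg.norm(txt, axis=1, keepdims=True)
    return labels[int(np.argmax(txt @ img))]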

In fact, all this started just for fun, but there seems to be community interest in it, so I'm thinking of pushing more features over time and optimizing performance.

Zero-shot image labelling will also be exposed to Python soon. It's currently implemented in the C++ examples, but I'll move it to the main lib and then expose it to Python.

Keep up the good work ❤️

In fact, all this started just for fun, but there seems to be community interest in it, so I'm thinking of pushing more features over time and optimizing performance.

Isn't that how 90% of great projects and frameworks started? 😊

Hey @Yossef-Dawoad, with #75 we're now using GGUF, and older model files are no longer usable. Can you please update the notebook?

Sure, will do it tomorrow.
Any luck 🤞 getting batch inference fixed?

Batch inference is working on the C/C++ side; it now needs to be exposed to Python. I still have some considerations about how to expose it. The issue is that if we try to read all the embeddings at once from the native side into Python, it might be a bad user experience, so a generator or a custom data structure that reads embeddings lazily could be preferable; see the sketch below. I'll try to pack it ASAP. After the GGUF support in clip.cpp, I started to work on lmm.cpp, a sister project of this one. So I can try to release batch inference in Python tomorrow, but if it takes longer than I expect, it might be delayed until after the release of lmm.cpp.
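
A minimal sketch of the lazy-reading idea, assuming a batched load_preprocess_encode_images call as proposed earlier in the thread (the batch size and helper name are illustrative):

def iter_image_embeddings(model, image_files, batch_size=16):
    # yield embeddings batch by batch instead of materializing all of them at once
    for i in range(0, len(image_files), batch_size):
        batch = image_files[i:i + batch_size]
        yield from model.load_preprocess_encode_images(batch)

# callers can stream results, e.g. to write them out incrementally
for embedding in iter_image_embeddings(model, image_files):
    ...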