How exactly does the model work off the shelf?
bhushanpant opened this issue · 2 comments
Amazing paper!
Thanks for publishing the code.
I wanted to check whether it is possible to try out TaBERT off the shelf on my documents/images. If yes, could you share some steps/guidelines on how to do that?
Thanks for your interest! What kinds of tasks are you considering? If you'd like to use TaBERT as the encoder of text and tables in your end-to-end task-specific model, you could just fine-tune TaBERT together with the rest of the parameters in the network.
If you'd like to pre-train TaBERT on your in-domain text and tabular data, sorry, we haven't released the pre-training code yet. I will prepare a minimal working example ASAP. Thanks!
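For the encoder use case described above, a minimal sketch based on the usage example in the TaBERT repository's README might look like the following. The checkpoint path and the table contents are placeholders; `model.encode` returns encodings for the context tokens and for each column, which you can feed into your task-specific layers.

```python
from table_bert import Table, Column, TableBertModel

# Placeholder path: point this at a downloaded pretrained checkpoint.
model = TableBertModel.from_pretrained('path/to/pretrained/model/checkpoint.bin')

# Build a table with typed columns and sample rows, then tokenize it
# with the model's tokenizer.
table = Table(
    id='List of countries by GDP (PPP)',
    header=[
        Column('Nation', 'text', sample_value='United States'),
        Column('Gross Domestic Product', 'real', sample_value='21,439,453'),
    ],
    data=[
        ['United States', '21,439,453'],
        ['China', '27,308,857'],
    ],
).tokenize(model.tokenizer)

context = 'show me countries ranked by GDP'

# The model takes parallel lists of contexts and tables as input.
context_encoding, column_encoding, info_dict = model.encode(
    contexts=[model.tokenizer.tokenize(context)],
    tables=[table],
)
```

Since the model is a regular PyTorch module, fine-tuning it end-to-end just means including `model.parameters()` in your optimizer alongside your task-specific layers.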
I was wondering how to use the pretrained model for other tasks. I want to build a column classifier based on TaBERT, using the column representations with a few additional layers and a classifier on top. I figure this could be trained (fine-tuned) end-to-end from a pretrained model, but I'm not sure how to put it together. I see that https://github.com/pcyin/pytorch_neural_symbolic_machines says training with pretrained TaBERT models is to be released; is there any other example or resource you could point me to that might help me get started? Thanks again for the paper and code!
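One way to put this together, sketched below under the assumption that the column encodings TaBERT produces have shape `(batch_size, num_columns, hidden_size)`: wrap a small classification head in an `nn.Module` and apply it per column. The `ColumnClassifier` name, the head architecture, and the toy shapes are all illustrative, not from the paper or the repo.

```python
import torch
import torch.nn as nn


class ColumnClassifier(nn.Module):
    """Hypothetical classifier head over TaBERT column encodings.

    Assumes the column encodings have shape
    (batch_size, num_columns, hidden_size); the head itself
    (dropout -> linear -> tanh -> linear) is just one common choice.
    """

    def __init__(self, hidden_size: int, num_classes: int, dropout: float = 0.1):
        super().__init__()
        self.head = nn.Sequential(
            nn.Dropout(dropout),
            nn.Linear(hidden_size, hidden_size),
            nn.Tanh(),
            nn.Linear(hidden_size, num_classes),
        )

    def forward(self, column_encoding: torch.Tensor) -> torch.Tensor:
        # (batch, num_columns, hidden) -> (batch, num_columns, num_classes)
        return self.head(column_encoding)


# Toy tensor standing in for TaBERT output (hidden size 768 assumed).
head = ColumnClassifier(hidden_size=768, num_classes=5)
fake_columns = torch.randn(2, 4, 768)
logits = head(fake_columns)
print(tuple(logits.shape))
```

For end-to-end fine-tuning, one optimizer over both parameter sets should work, e.g. `torch.optim.Adam(list(model.parameters()) + list(head.parameters()))`, with a cross-entropy loss over the per-column logits.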