naver/sqlova

Training on custom data

Opened this issue · 4 comments

I did the following things in order to train SQLova using train.py:

  1. Ran add_csv.py on custom data from a CSV.
  2. Ran add_question.py on some questions about that same custom data.
  3. Ran annotate_ws.py on the .jsonl file created in step 2.
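For context, step 1 above turns each CSV into a WikiSQL-style table record in a .tables.jsonl file. A minimal sketch of that conversion, assuming the WikiSQL table format ("id", "header", "types", "rows"); the id scheme and treating every column as "text" are simplifying assumptions, not what add_csv.py necessarily does:

```python
import csv
import io
import json

def csv_to_table_entry(table_id, csv_text):
    """Convert raw CSV text into a WikiSQL-style table record.

    The "header"/"types"/"rows" keys follow the WikiSQL table format;
    typing every column as "text" is a simplification for illustration.
    """
    reader = csv.reader(io.StringIO(csv_text))
    header = next(reader)          # first CSV row is the column header
    rows = [row for row in reader] # remaining rows become table contents
    return {
        "id": table_id,            # hypothetical id scheme
        "header": header,
        "types": ["text"] * len(header),
        "rows": rows,
    }

entry = csv_to_table_entry("custom_1", "name,city\nAlice,Paris\nBob,Tokyo")
print(json.dumps(entry))
```

Each such record would then be written as one line of the .tables.jsonl file.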

Could someone kindly share what the ideal next step should be?

  1. Should I run train.py on it?
  2. Should I run the other two training files from the repo?
  3. I tried predict.py and the accuracy is good. However, my problem is that I am adding around 50 tables to SQLova and taking queries from users, and without knowing the table name I can't use SQLova.

One solution would be to build a separate query classifier to identify the table name. Instead of that, though, I wanted to check whether SQLova can be trained on my data so that it works on a split name rather than requiring a table name with each query.
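The query-classifier idea could be sketched with a simple header-overlap heuristic. This is purely illustrative; the table schemas and scoring function are assumptions, not anything from SQLova itself:

```python
def route_question(question, tables):
    """Pick the table whose column names overlap most with the question.

    `tables` maps table id -> list of column names. This bag-of-words
    heuristic is only a stand-in for a real trained query classifier.
    """
    q_tokens = set(question.lower().split())

    def score(headers):
        # Count how many words from the column names appear in the question.
        return sum(tok in q_tokens
                   for col in headers
                   for tok in col.lower().split())

    return max(tables, key=lambda tid: score(tables[tid]))

tables = {
    "employees": ["name", "salary", "department"],
    "flights": ["origin", "destination", "departure time"],
}
print(route_question("what is the salary of the sales department head", tables))
# -> employees
```

The chosen table id could then be passed to SQLova alongside the question, so the user never has to name the table themselves.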

Kindly assist.

Regards,
PS

How did you run annotate? I am getting a 'PermanentlyFailedException: Timed out waiting for service to come alive.' error in the NLP client.

Hey,

Are you sure your Stanford CoreNLP server is running and listening on port 9000?

Hi, I just realised that on Colab it should run on port 9001. Thanks!
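When debugging this kind of timeout, it can help to check whether anything is actually listening on the port before running annotate_ws.py. A small socket probe (the port numbers here are just the ones discussed above):

```python
import socket

def port_is_open(host, port, timeout=2.0):
    """Return True if a TCP server is accepting connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, DNS failure, or timeout -> nothing listening.
        return False

# Probe the default CoreNLP port and the Colab one mentioned above.
for p in (9000, 9001):
    print(p, port_is_open("localhost", p))
```

If both come back False, the CoreNLP server never started, and the client timeout is expected.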

Hello, how good a prediction are you getting on custom data?
Did you train your model further on the new tables, or did you just use the pre-trained model to predict the SQL for your questions?