Model Availability In Inference API
jordanyan93 commented
Hello, I have a question about a specific model's availability on the Serverless Inference API. I want to use the twitter-roberta-base-offensive model, but whenever I view its page on your website, it alternates between being listed as available and unavailable on the Inference API. Why is this the case?
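For reference, this is roughly how I check its status programmatically (a sketch using huggingface_hub's `InferenceClient.get_model_status`; I am assuming the full repo id is cardiffnlp/twitter-roberta-base-offensive):

```python
from huggingface_hub import InferenceClient

# Sketch: poll the serverless Inference API status of the model.
# Repo id assumed to be cardiffnlp/twitter-roberta-base-offensive.
client = InferenceClient()
status = client.get_model_status("cardiffnlp/twitter-roberta-base-offensive")

# `loaded` says whether the model is currently warm; `state` is a
# coarse status string such as "Loaded", "Loadable", "TooBig", or "Error".
print(f"loaded={status.loaded}, state={status.state}, framework={status.framework}")
```

Sometimes this reports the model as loaded and sometimes it does not, which matches what I see on the model page.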
I want to use this model in an application, so I need it to be consistently available. Do you recommend deploying it with Inference Endpoints? If so, is there a way to pause the endpoint so that I am not charged continuously? Something along the lines of the sketch below is what I have in mind.
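(A sketch with huggingface_hub's Inference Endpoints helpers; the endpoint name "offensive-classifier" is just a placeholder for whatever I would actually deploy:)

```python
from huggingface_hub import get_inference_endpoint

# Sketch: pause a deployed Inference Endpoint when it is not needed,
# and resume it on demand. "offensive-classifier" is a placeholder name.
endpoint = get_inference_endpoint("offensive-classifier")

endpoint.pause()   # a paused endpoint is stopped and should not be billed

# ...later, when traffic is expected again...
endpoint.resume()
endpoint.wait()    # block until the endpoint is ready to serve requests
```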
Thank you and I look forward to hearing from you.
abhishekkrthakur commented
it doesn't seem like an autotrain issue. is it?
jordanyan93 commented
oh sorry, it's not