vizwiz-vqa

There are 7 repositories under the vizwiz-vqa topic.

  • RachanaJayaram/Cross-Attention-VizWiz-VQA

    A self-evident application of the VQA task is to design systems that aid blind people with sight-reliant queries. The VizWiz VQA dataset originates from images and questions compiled by members of the visually impaired community and, as such, highlights some of the challenges presented by this particular use case.

    Language: Python
  • yousefkotp/Visual-Question-Answering

    A lightweight deep learning model with a web application that answers image-based questions for the VizWiz Grand Challenge 2023 using a non-generative approach: the answer vocabulary is carefully curated and a linear layer is added on top of OpenAI's CLIP model, which serves as both the image and text encoder (a minimal sketch of this setup follows this entry).

    Language: Jupyter Notebook
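The non-generative setup described above can be sketched roughly as follows: frozen CLIP encoders produce image and question features, and a single linear layer maps their concatenation to logits over a curated answer vocabulary. The model name, vocabulary size, and concatenation-based fusion below are assumptions for illustration, not details taken from the repository.

```python
import torch
import torch.nn as nn
import clip  # OpenAI's CLIP (pip install git+https://github.com/openai/CLIP.git)


class ClipVqaClassifier(nn.Module):
    """VQA treated as classification over a fixed, curated answer vocabulary."""

    def __init__(self, num_answers: int = 5000, clip_name: str = "ViT-B/32"):
        super().__init__()
        # clip.load returns the model and its image preprocessing transform.
        self.clip_model, self.preprocess = clip.load(clip_name)
        embed_dim = self.clip_model.visual.output_dim  # 512 for ViT-B/32
        # A single linear layer on top of the concatenated CLIP features.
        self.head = nn.Linear(2 * embed_dim, num_answers)

    def forward(self, images: torch.Tensor, question_tokens: torch.Tensor) -> torch.Tensor:
        with torch.no_grad():  # keep CLIP frozen; only the linear head is trained
            img_feat = self.clip_model.encode_image(images).float()
            txt_feat = self.clip_model.encode_text(question_tokens).float()
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.head(fused)  # logits over the answer vocabulary


# question_tokens would typically come from clip.tokenize(["what color is this shirt?"]).
```

A head like this is usually trained with cross-entropy against the annotators' most common answer, which is what makes the approach non-generative.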
  • Konic-NLP/5922-deep-learning

    A repo for CSCI 5922 deep learning coursework, including MNIST classification, CIFAR-10 classification, and the VizWiz VQA challenge

    Language: Jupyter Notebook
  • MohEsmail143/vizwiz-visual-question-answering

    An implementation of the paper "Less is More", used to attempt the VizWiz visual question answering and answerability challenge tasks (a hedged sketch of pairing answer prediction with an answerability head follows this entry).

    Language: Jupyter Notebook
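The answerability side of the challenge asks whether a question can be answered from the image at all. The sketch below assumes the "Less is More" style of putting small linear heads on shared, frozen CLIP features; the feature dimension and head layout are illustrative, not taken from the repository.

```python
import torch
import torch.nn as nn


class VqaWithAnswerability(nn.Module):
    """Two linear heads over shared image+question features (illustrative)."""

    def __init__(self, clip_dim: int = 512, num_answers: int = 5000):
        super().__init__()
        fused_dim = 2 * clip_dim  # concatenated CLIP image and question features
        self.answer_head = nn.Linear(fused_dim, num_answers)  # VQA task
        self.answerable_head = nn.Linear(fused_dim, 1)        # answerability task

    def forward(self, img_feat: torch.Tensor, txt_feat: torch.Tensor):
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        answer_logits = self.answer_head(fused)
        # Pass this logit through a sigmoid (or BCEWithLogitsLoss) for answerability.
        answerable_logit = self.answerable_head(fused).squeeze(-1)
        return answer_logits, answerable_logit
```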
  • nanom/textMining2021

    Code for the 2021 Text Mining course, taught by Dr. Laura Alonso Alemanny

    Language: Jupyter Notebook
  • atharva-naik/MMML-TermProject-VizWiz-VQA-Challenge

    VizWiz Challenge term project for Multimodal Machine Learning @ CMU (11-777)

    Language: Python
  • reshalfahsi/vqa-clip-lstm

    Visual Question Answering using CLIP + LSTM (a rough sketch of this wiring follows this entry)

    Language: Jupyter Notebook
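For the CLIP + LSTM entry above, a plausible wiring is a frozen CLIP image encoder combined with an LSTM over the question tokens; the embedding size, hidden size, and concatenation-based fusion below are assumptions, not details read from the repository.

```python
import torch
import torch.nn as nn


class ClipLstmVqa(nn.Module):
    """Fuse a precomputed CLIP image feature with an LSTM question encoding."""

    def __init__(self, vocab_size: int, num_answers: int,
                 clip_dim: int = 512, embed_dim: int = 300, hidden_dim: int = 512):
        super().__init__()
        self.word_embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Sequential(
            nn.Linear(clip_dim + hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_answers),
        )

    def forward(self, clip_image_features: torch.Tensor, question_ids: torch.Tensor) -> torch.Tensor:
        # clip_image_features: (B, clip_dim), precomputed with a frozen CLIP image encoder.
        embedded = self.word_embed(question_ids)      # (B, T, embed_dim)
        _, (h_n, _) = self.lstm(embedded)             # h_n: (1, B, hidden_dim)
        question_feat = h_n.squeeze(0)                # final hidden state as the question feature
        fused = torch.cat([clip_image_features, question_feat], dim=-1)
        return self.classifier(fused)                 # logits over the answer vocabulary
```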