lwachowiak
PhD student at King's College & Imperial. My research interests include explainable AI for collaborative robots & NLP for cognitive linguistics.
King's College London
Pinned Repositories
Emotion-Recognition-with-ViT
A Jupyter notebook showing how to fine-tune a Vision Transformer (ViT) on a facial expression dataset (FER-2013)
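FER-2013 images are 48×48 grayscale, while common ViT checkpoints expect 224×224 RGB input, so the notebook's pipeline needs a resize-and-replicate preprocessing step. A minimal sketch of that step (the function name `to_vit_input` is illustrative, not from the repo; real code would use the model's image processor):

```python
import numpy as np

def to_vit_input(img48, size=224):
    """Convert a (48, 48) uint8 FER-2013 image to a (3, 224, 224) float tensor."""
    # Nearest-neighbour resize via index mapping (48 -> 224)
    idx = np.arange(size) * img48.shape[0] // size
    resized = img48[np.ix_(idx, idx)]
    # Replicate the grayscale channel to fake RGB, channel-first layout
    rgb = np.stack([resized] * 3, axis=0)
    return rgb.astype(np.float32) / 255.0  # scale pixels to [0, 1]

sample = np.full((48, 48), 128, dtype=np.uint8)
print(to_vit_input(sample).shape)  # → (3, 224, 224)
```

In practice a library image processor would also normalise with the checkpoint's mean/std; the sketch only shows the shape and dtype conversion.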
Evaluating-Crowdsourced-Annotations
Verifying user annotations for images provided by museums and archives
HRI-Error-Detection-STAI
Winning entry for the ERR@HRI competition, https://sites.google.com/cam.ac.uk/err-hri/
LLMs-for-Social-Robotics
Code and data for our IROS paper: "Are Large Language Models Aligned with People's Social Intuitions for Human–Robot Interactions?"
Metaphor-Extraction-With-GPT-3
Code for our ACL'23 paper on how to identify metaphor mappings with the help of GPT-3
Multilingual-Metaphor-Detection
The multilingual language model XLM-R fine-tuned for token-level metaphor detection using Hugging Face
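A key step in token-level fine-tuning is aligning word-level metaphor labels to the model's subword tokens: only a word's first subword keeps the label, and the rest are masked with -100 so the loss ignores them (the standard Hugging Face convention). A sketch of that alignment, with a toy splitter standing in for the real XLM-R SentencePiece tokenizer:

```python
def align_labels(words, labels, subtokenize):
    """Map word-level labels (1 = metaphorical, 0 = literal) onto subword tokens."""
    token_labels = []
    for word, label in zip(words, labels):
        pieces = subtokenize(word)
        token_labels.append(label)              # first subword carries the label
        token_labels.extend([-100] * (len(pieces) - 1))  # rest are masked
    return token_labels

# Stand-in splitter; the real model uses the XLM-R tokenizer
toy = lambda w: [w[:3], w[3:]] if len(w) > 3 else [w]
print(align_labels(["he", "devoured", "books"], [0, 1, 0], toy))
# → [0, 1, -100, 0, -100]
```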
overcooked-demo
Fork of overcooked-demo for HRI experiments on human gaze patterns during collaboration
Term-Extraction-With-Language-Models
Extracting terms from text using XLM-R for token and sequence classification
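Token classification for term extraction typically emits BIO tags, which then have to be decoded back into term spans. A minimal decoder sketch (tag names `B-TERM`/`I-TERM` are assumed, not taken from the repo):

```python
def bio_to_terms(tokens, tags):
    """Decode BIO tag predictions into a list of multi-word term strings."""
    terms, current = [], []
    for tok, tag in zip(tokens, tags):
        if tag == "B-TERM":
            if current:                      # close the previous span
                terms.append(" ".join(current))
            current = [tok]                  # start a new span
        elif tag == "I-TERM" and current:
            current.append(tok)              # extend the open span
        else:
            if current:
                terms.append(" ".join(current))
            current = []
    if current:
        terms.append(" ".join(current))
    return terms

print(bio_to_terms(["neural", "language", "model", "extracts", "terms"],
                   ["B-TERM", "I-TERM", "I-TERM", "O", "B-TERM"]))
# → ['neural language model', 'terms']
```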
Towards-Learning-Terminological-Concept-Systems
A pipeline approach for automatically extracting terminological concept systems from text. We use multilingual neural language models to extract terms and their relations at an intra-sentence level.
Transrelation
The winning system submitted by the Text2TCS project team to the CogALex VI shared task.
lwachowiak's Repositories
lwachowiak/Systematic-Analysis-of-Image-Schemas-through-Explainable-Multilingual-Language-Models
A multilingual image schema dataset and an explainable extraction model
lwachowiak/When-and-What-to-Explain
Time-series classification code for detecting confusion and agent errors. Published in IEEE Transactions on Affective Computing.
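Time-series classifiers of this kind usually segment the input signal into overlapping windows before feature extraction. A minimal windowing sketch (window and step sizes here are illustrative, not the paper's settings):

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Split a 1-D signal into overlapping windows of length `win`."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

x = np.arange(10)
print(sliding_windows(x, win=4, step=2).shape)  # → (4, 4)
```

Each window would then be featurised (or fed directly to a sequence model) and labelled, e.g. as confusion vs. no confusion.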
lwachowiak/explainable_overcooked_ai
A benchmark environment for fully cooperative human-AI performance.
lwachowiak/Explanation-Types-and-Need-Indicators-in-HAI
Collection of papers used for a scoping review of explanation types and need indicators in human–agent interaction, robotics, and human–agent collaboration
lwachowiak/ISCMs
lwachowiak/open-sesame
A frame-semantic parsing system based on a softmax-margin SegRNN.
lwachowiak/HRI-Video-Survey-on-Preferred-Robot-Responses
Code and data for the paper "When Do People Want an Explanation?", presented at HRI'24