taeyoungyeon
Computer Science and Engineering Undergraduate Student at Seoul National University, Seoul, Korea
taeyoungyeon's Stars
MarilynKeller/SKEL
Release for the SIGGRAPH Asia 2023 SKEL paper "From Skin to Skeleton: Towards Biomechanically Accurate 3D Digital Humans".
XinhaoMei/ACT
Source code for the paper 'Audio Captioning Transformer'
lucidrains/axial-positional-embedding
Axial Positional Embedding for PyTorch
lucidrains/x-transformers
A concise but complete full-attention transformer with a set of promising experimental features from various papers
bytedance/1d-tokenizer
This repo contains the code for the 1D tokenizer and generator
RicherMans/SAT
Streaming Audio Transformers for online audio tagging
SPICExLAB/MobilePoser
Open-source implementation of MobilePoser: Real-Time Full-Body Pose Estimation and 3D Human Translation from IMUs in Mobile Consumer Devices.
THU-MIG/RepViT
RepViT: Revisiting Mobile CNN From ViT Perspective [CVPR 2024] and RepViT-SAM: Towards Real-Time Segmenting Anything
rafat/wavelib
C implementation of 1D and 2D Wavelet Transforms (DWT, SWT and MODWT) along with the 1D Wavelet Packet Transform and 1D Continuous Wavelet Transform.
KevinCoble/AIToolbox
A toolbox of AI modules written in Swift: Graphs/Trees, Support Vector Machines, Neural Networks, PCA, K-Means, Genetic Algorithms
jindongwang/transferlearning
Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, tutorials.
OxWearables/ssl-wearables
Self-supervised learning for wearables using the UK-Biobank (>700,000 person-days)
microsoft/Exercise-Recognition-from-Wearable-Sensors
This dataset contains accelerometer and gyroscope recordings from over 200 participants performing various gym exercises. It is described in more detail in the associated manuscript: Morris, D., Saponas, T. S., Guillory, A., & Kelner, I. (2014, April). RecoFit: Using a wearable sensor to find, recognize, and count repetitive exercises.
locuslab/TCN
Sequence modeling benchmarks and temporal convolutional networks
getalp/Lightweight-Transformer-Models-For-HAR-on-Mobile-Devices
Human Activity Recognition Transformer (HART) is a transformer-based architecture specifically adapted for IMU sensing devices. Findings show that HART uses fewer parameters and FLOPs while achieving state-of-the-art results.
scrapfly/scrapfly-scrapers
Scalable Python web scraping scripts for 40+ popular domains
fschmid56/EfficientAT
This repository aims to provide efficient CNNs for audio tagging. We provide AudioSet pre-trained models ready for downstream training and extraction of audio embeddings.
AdelaideAuto-IDLab/Attend-And-Discriminate
Attend and Discriminate: Beyond the State-of-the-Art for Human Activity Recognition Using Wearable Sensors
yeyupiaoling/AudioClassification-Pytorch
A PyTorch implementation of sound classification supporting EcapaTdnn, PANNs, TDNN, Res2Net, ResNetSE, and other models, as well as a variety of preprocessing methods.
google/lyra
A Very Low-Bitrate Codec for Speech Compression
facebookresearch/encodec
State-of-the-art deep learning based audio codec supporting both mono 24 kHz audio and stereo 48 kHz audio.
OxWearables/Oxford_Wearables_Activity_Recognition
Notebooks for Oxford CDT Wearables Data Challenge
haoranD/Awesome-Human-Activity-Recognition
An up-to-date & curated list of Awesome IMU-based Human Activity Recognition(Ubiquitous Computing) papers, methods & resources. Please note that most of the collections of researches are mainly based on IMU data.
NVlabs/FasterViT
[ICLR 2024] Official PyTorch implementation of FasterViT: Fast Vision Transformers with Hierarchical Attention
swsnu/swppfall2022-team16