higuseonhye
"Success is not final, failure is not fatal: it is the courage to continue that counts." - Winston Churchill
South Korea
Pinned Repositories
abductive-commonsense-reasoning
Advanced-Computer-Vision-with-TensorFlow
My solutions to the course Advanced Computer Vision with TensorFlow
AI-basketball-analysis
:basketball::robot::basketball: AI web app and API to analyze basketball shots and shooting pose.
albert
ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
AutoCrawler
Google, Naver image web crawler
awesome-computer-vision
A curated list of awesome computer vision resources
bert
TensorFlow code and pre-trained models for BERT
caffe
Caffe: a fast open framework for deep learning.
ml-powered-applications
Companion repository for the book Building Machine Learning Powered Applications
higuseonhye's Repositories
higuseonhye/aria-practices
WAI-ARIA Authoring Practices Guide (APG)
higuseonhye/awesome-production-llm
A curated list of awesome open-source libraries for production LLM
higuseonhye/awesome-production-machine-learning
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning
higuseonhye/BioGPT
higuseonhye/chatgpt-advanced
WebChatGPT: A browser extension that augments your ChatGPT prompts with web results.
higuseonhye/Clinical-Longformer
higuseonhye/coding_test_leetcode
Collection of LeetCode questions to ace the coding interview! - Created using [LeetHub](https://github.com/QasimWani/LeetHub)
higuseonhye/copilot-in-codespace
Pair programming in GitHub Codespaces with GitHub Copilot, the developer's coding partner
higuseonhye/courses
Anthropic's educational courses
higuseonhye/deepeval
The LLM Evaluation Framework
higuseonhye/evals
Evals is a framework for evaluating LLMs and LLM systems, and an open-source registry of benchmarks.
higuseonhye/evalverse
The Universe of Evaluation. All about evaluation for LLMs.
higuseonhye/FineSurE-ACL24
The official repo of FineSurE (ACL 2024)
higuseonhye/KoChatLLaMA.cpp
Port of Facebook's LLaMA model in C/C++, with fine-tuning in Korean.
higuseonhye/lighteval
Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends
higuseonhye/lm-evaluation-harness
A framework for few-shot evaluation of language models.
higuseonhye/m3l2_forking_lab
higuseonhye/m4l1_managing_a_project
higuseonhye/openai-connector
This is a Power Platform custom connector project for OpenAI API and Azure OpenAI Service API.
higuseonhye/opencompass
OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, etc.) over 100+ datasets.
higuseonhye/ragas
Evaluation framework for your Retrieval Augmented Generation (RAG) pipelines
higuseonhye/raptor
The official implementation of RAPTOR: Recursive Abstractive Processing for Tree-Organized Retrieval
higuseonhye/react
The library for web and native user interfaces
higuseonhye/sam
higuseonhye/semikong
First Open-Source Industry-Specific Model for Semiconductors
higuseonhye/SpeechT5
Unified-Modal Speech-Text Pre-Training for Spoken Language Processing
higuseonhye/stanford_alpaca
Code and documentation to train Stanford's Alpaca models, and generate the data.
higuseonhye/talk-to-chatgpt
Talk to ChatGPT AI using your voice and hear its answers spoken aloud
higuseonhye/visprog
Official code for VisProg (CVPR 2023)
higuseonhye/WeeklyArxivTalk
[Club House] Weekly Arxiv Casual Talk