Pinned Repositories
earth-forecasting-transformer
Official implementation of Earthformer
mxnet
Lightweight, portable, flexible distributed/mobile deep learning with a dynamic, mutation-aware dataflow dependency scheduler; for Python, R, Julia, Scala, Go, JavaScript, and more
tvm
Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators
autogluon
Fast and Accurate ML in 3 Lines of Code
gluon-nlp
NLP made easy
automl_multimodal_benchmark
Repository for Multimodal AutoML Benchmark
aws-summit-2017-seoul
Demo codes in our presentation about MXNet in AWS Seoul Summit 2017
CodeBERT
CodeBERT
gluonnlp-gpt2
HKO-7
Source code of paper "[NIPS2017] Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model"
sxjscience's Repositories
sxjscience/HKO-7
Source code of paper "[NIPS2017] Deep Learning for Precipitation Nowcasting: A Benchmark and A New Model"
sxjscience/sxjscience.github.io
https://sxjscience.github.io
sxjscience/earth-forecasting-transformer
sxjscience/autogluon
AutoGluon: AutoML Toolkit for Deep Learning
sxjscience/release06_notebook_verification
sxjscience/alpaca_eval
An automatic evaluator for instruction-following language models. Human-validated, high-quality, cheap, and fast.
sxjscience/arena-hard-auto
Arena-Hard-Auto: An automatic LLM benchmark.
sxjscience/bigcode-evaluation-harness
A framework for evaluating autoregressive code generation language models.
sxjscience/blanc
Human-free quality estimation of document summaries
sxjscience/BOON
Code for results presented in the BOON paper
sxjscience/DeepSpeed
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
sxjscience/donut
Official Implementation of OCR-free Document Understanding Transformer (Donut) and Synthetic Document Generator (SynthDoG), ECCV 2022
sxjscience/earthnet-model-intercomparison-suite
sxjscience/evals
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
sxjscience/FastChat
The release repo for "Vicuna: An Open Chatbot Impressing GPT-4"
sxjscience/FasterTransformer
Transformer related optimization, including BERT, GPT
sxjscience/flash-attention
Fast and memory-efficient exact attention
sxjscience/helm-pr-fork
Holistic Evaluation of Language Models (HELM), a framework to increase the transparency of language models (https://arxiv.org/abs/2211.09110).
sxjscience/langchain
⚡ Building applications with LLMs through composability ⚡
sxjscience/layout_diffuse
Code release for LayoutDiffuse
sxjscience/LeMDA
Code Example for Learning Multimodal Data Augmentation in Feature Space
sxjscience/llama
Inference code for LLaMA models
sxjscience/lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
sxjscience/natural-instructions
Expanding natural instructions
sxjscience/neurips2022-autogluon-workshop
NeurIPS 2022 AutoGluon Workshop. See website: https://autogluon.github.io/neurips2022-autogluon-workshop/
sxjscience/PreDiff
Official implementation of PreDiff
sxjscience/promptsource
Toolkit for creating, sharing and using natural language prompts.
sxjscience/skillful_nowcasting
Implementation of DeepMind's Deep Generative Model of Radar (DGMR) https://arxiv.org/abs/2104.00954
sxjscience/stable-diffusion
sxjscience/text-generation-inference-pr-fork
Large Language Model Text Generation Inference