Pinned Repositories
e-puck
e-puck and pi-puck robots: Wi-Fi, ROS, and image processing
evol-teacher
Open Source WizardCoder Dataset
lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
Megatron-LM
Ongoing research training transformer models at scale
newhope
NewHope: Harnessing 99% of GPT-4's Programming Capabilities
PPO-PyTorch
Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
Pytorch_Examples_Test
ray-webots
Use the Ray framework to call the Webots simulation environment for reinforcement learning sampling and training
Skywork
Skywork series models are pre-trained on 3.2 TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model weights, training data, evaluation data, and evaluation methods.
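The PPO-PyTorch entry above refers to the clipped surrogate objective from the PPO paper. A minimal, framework-agnostic sketch of that loss is below; the function name and plain-list inputs are illustrative only, while the actual repository computes the same quantity over PyTorch tensors with autograd.

```python
import math

def ppo_clip_loss(new_logp, old_logp, advantages, clip_eps=0.2):
    """Clipped PPO surrogate loss, averaged over samples (hypothetical helper).

    new_logp / old_logp: per-action log-probabilities under the new and old
    policies; advantages: per-sample advantage estimates.
    """
    losses = []
    for nlp, olp, adv in zip(new_logp, old_logp, advantages):
        # Probability ratio r = pi_new(a|s) / pi_old(a|s)
        ratio = math.exp(nlp - olp)
        # Clip the ratio to [1 - eps, 1 + eps]
        clipped = max(min(ratio, 1.0 + clip_eps), 1.0 - clip_eps)
        # PPO maximizes the minimum of the two surrogates; negate for a loss
        losses.append(-min(ratio * adv, clipped * adv))
    return sum(losses) / len(losses)
```

When the new and old policies agree, the ratio is 1 and the loss reduces to the negated mean advantage; when the ratio drifts outside the clip range, the clipped term caps the incentive to move further.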
young-chao's Repositories
young-chao/ray-webots
Use the Ray framework to call the Webots simulation environment for reinforcement learning sampling and training
young-chao/e-puck
e-puck and pi-puck robots: Wi-Fi, ROS, and image processing
young-chao/evol-teacher
Open Source WizardCoder Dataset
young-chao/lm-evaluation-harness
A framework for few-shot evaluation of autoregressive language models.
young-chao/Megatron-LM
Ongoing research training transformer models at scale
young-chao/newhope
NewHope: Harnessing 99% of GPT-4's Programming Capabilities
young-chao/PPO-PyTorch
Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
young-chao/pytorch
Tensors and Dynamic neural networks in Python with strong GPU acceleration
young-chao/Pytorch_Examples_Test
young-chao/Skywork
Skywork series models are pre-trained on 3.2 TB of high-quality multilingual (mainly Chinese and English) and code data. We have open-sourced the model weights, training data, evaluation data, and evaluation methods.