Pinned Repositories
Alex_Tensor-Fusion-Network
apachecn-algo-zh
ApacheCN's collection of translated data structures and algorithms texts
ARM_Cortex-M3
Built for the "ARM Cup" track of the National College Student Integrated Circuit Innovation and Entrepreneurship Competition, this project implements a Cortex-M3 soft core and an image coprocessor on an FPGA, captures license-plate images with an OV5640 camera, and recognizes and displays the plate contents. The Cortex-M3 soft core runs on an Altera DE1 FPGA, with peripherals such as the LCD1602, RAM, and the image coprocessor attached via the AHB-Lite bus protocol. The video-capture pipeline consists of a write-FIFO module, SDRAM storage and readout, a read-FIFO module, grayscale conversion, binarization, and VGA display modules. The final 400-bit result data (covering 20 license plates) is stored in RAM and output to the AHB bus, where the Cortex-M3 reads it and displays the recognition results.
ASEBO
Code to run the ASEBO algorithm from the paper "From Complexity to Simplicity: Adaptive ES-Active Subspaces for Blackbox Optimization". Please get in touch if interested!
awesome-model-quantization
A list of papers, docs, and code about model quantization. This repo aims to provide resources for model quantization research and is continuously improved. PRs adding works (papers, repositories) the repo has missed are welcome.
awesome-public-datasets
A topic-centric list of high-quality open datasets.
awesome-tensorial-neural-networks
A thorough survey of tensorial neural networks.
AWG
BackRazor_Neurips22
[NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huang, Xianzhi Du, Denny Zhou, Zhangyang Wang
HUSTLYRM-Training-Documents
Training materials from the algorithm group of the Huazhong University of Science and Technology Langya RoboMaster team
olokevin's Repositories
olokevin/awesome-model-quantization
A list of papers, docs, and code about model quantization. This repo aims to provide resources for model quantization research and is continuously improved. PRs adding works (papers, repositories) the repo has missed are welcome.
olokevin/awesome-tensorial-neural-networks
A thorough survey of tensorial neural networks.
olokevin/BackRazor_Neurips22
[NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huang, Xianzhi Du, Denny Zhou, Zhangyang Wang
olokevin/brain-segmentation-pytorch
U-Net implementation in PyTorch for FLAIR abnormality segmentation in brain MRI
olokevin/EAOT
The open source implementation of "Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers"
olokevin/Early-Cropression-via-Gradient-Flow-Preservation
Code for Winning the Lottery Ahead of Time: Efficient Early Network Pruning (ICML 2022)
olokevin/ece278a
olokevin/ece_594_final
Multi-voxel pattern analysis methods based on ML and DL that decode the category of visual stimuli viewed by a human subject from their recorded fMRI brain activity
olokevin/fwdgrad
Implementation of "Gradients without backpropagation" paper (https://arxiv.org/abs/2202.08587) using functorch
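The forward-gradient idea behind this repo can be sketched in a few lines: sample a random direction v, compute the directional derivative ∇f·v, and step along (∇f·v)·v, which is an unbiased estimate of the gradient. The paper computes ∇f·v exactly with forward-mode AD (a JVP, via functorch); the minimal sketch below substitutes a central finite difference so it needs only NumPy, and all names and hyperparameters are illustrative rather than taken from the repo.

```python
import numpy as np

def loss(theta):
    # Toy quadratic objective standing in for a network's training loss.
    return float(np.sum(theta ** 2))

def forward_gradient(theta, rng, eps=1e-6):
    """Forward-gradient estimate g = (grad_f . v) * v for a random direction v.

    The paper evaluates the directional derivative grad_f . v exactly with
    forward-mode AD (a JVP); here it is approximated with a central finite
    difference so the sketch has no autodiff dependency.
    """
    v = rng.standard_normal(theta.shape)
    directional = (loss(theta + eps * v) - loss(theta - eps * v)) / (2 * eps)
    return directional * v

rng = np.random.default_rng(0)
theta = rng.standard_normal(8)
for _ in range(300):
    # Plain SGD using the forward-gradient estimate; no backward pass.
    theta = theta - 0.05 * forward_gradient(theta, rng)
```

Because E[vvᵀ] = I for v ~ N(0, I), the estimate is unbiased, and descent works in expectation despite each step using only one random direction.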
olokevin/generalized-smoothing
Companion code for the ICML 2022 paper "Generalizing Gaussian Smoothing for Random Search"
olokevin/GraSP_ZO
olokevin/Hyper-LR-PINN
olokevin/llama4micro
A "large" language model running on a microcontroller
olokevin/local-bo-mpd
Bayesian optimization via maximizing probability of descent
olokevin/MeZO
[NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333
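MeZO's core move is a zeroth-order SPSA estimate: two forward passes at θ ± εz give a scalar projected gradient, and the random direction z is regenerated from a saved seed instead of being stored, keeping memory at the level of inference. The sketch below illustrates that pattern on a toy objective, with hedged caveats: function and argument names are made up for this example and do not reflect the repo's actual API.

```python
import numpy as np

def mezo_step(loss_fn, theta, seed, eps=1e-3, lr=0.02):
    """One MeZO-style SPSA step using only two forward passes.

    The perturbation z is regenerated from `seed` before every use rather
    than kept in memory, mirroring MeZO's in-place perturbation trick.
    `theta` is updated in place.
    """
    def z():
        # Same seed -> same direction every time it is regenerated.
        return np.random.default_rng(seed).standard_normal(theta.shape)

    theta += eps * z()                        # theta -> theta + eps*z (in place)
    loss_plus = loss_fn(theta)
    theta -= 2 * eps * z()                    # theta -> theta - eps*z
    loss_minus = loss_fn(theta)
    theta += eps * z()                        # restore original theta
    g = (loss_plus - loss_minus) / (2 * eps)  # scalar estimate of grad . z
    theta -= lr * g * z()                     # descend along z

def quadratic(t):
    # Toy objective standing in for a language-model loss.
    return float(np.sum((t - 1.0) ** 2))

theta = np.zeros(16)
for step in range(500):
    mezo_step(quadratic, theta, seed=step)
```

At no point is a full gradient vector materialized: only two scalar losses and the regenerable direction z are needed per step, which is what lets MeZO fine-tune models far larger than backpropagation-based tuning would fit in the same memory.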
olokevin/multiplexed-gradient-descent-paper
olokevin/PETL-ViT
[ICCV 2023] Binary Adapters, [AAAI 2023] FacT, [Tech report] Convpass
olokevin/PINN-without-Stacked-BP
The official implementation of Learning Physics-Informed Neural Networks without Stacked Back-propagation (AISTATS 2023).
olokevin/poster_template
Some academic posters for reference. May we have in-person poster sessions again soon!
olokevin/Prior-Guided-RGF
olokevin/proxylessnas
[ICLR 2019] ProxylessNAS: Direct Neural Architecture Search on Target Task and Hardware
olokevin/pytorch-onn
A PyTorch Library for Photonic Integrated Circuit Simulation and Photonic AI Computing
olokevin/SGES
olokevin/SPINN
Source code for Separable PINN
olokevin/SSF
[NeurIPS'22] This is an official implementation for "Scaling & Shifting Your Features: A New Baseline for Efficient Model Tuning".
olokevin/tiny-training
On-Device Training Under 256KB Memory [NeurIPS'22]
olokevin/Torch-Pruning
[CVPR-2023] Towards Any Structural Pruning; LLaMA / YOLOv8 / CNNs / Transformers
olokevin/tuning_playbook
A playbook for systematically maximizing the performance of deep learning models.
olokevin/ZO_TONN
olokevin/ZORO
Zeroth-Order Regularized Optimization (ZORO): Approximately Sparse Gradients and Adaptive Sampling