Pinned Repositories
AsymMirai
B_Pref
BIONLP_final
BIONLP_PROJ1
EECS438Project
ICNN
A PyTorch implementation of an interpretable convolutional neural network.
JBI2023_TCAV_debiasing
Niffler
Niffler: A DICOM Framework for Machine Learning and Processing Pipelines.
quip_classification
My copy of quip_classification
VIMA
Official Algorithm Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts"
ramon349's Repositories
ramon349/BIONLP_final
ramon349/ICNN
A PyTorch implementation of an interpretable convolutional neural network.
ramon349/AsymMirai
ramon349/B_Pref
ramon349/BMI500_EEG_lab
ramon349/BMI500_medImagingLabe
ramon349/BMI_lab11
ramon349/breast_density_classifier
Breast density classification with deep convolutional neural networks
ramon349/CSE574_Final_project
ramon349/JBI2023_TCAV_debiasing
ramon349/Niffler
Niffler: A DICOM Framework for Machine Learning and Processing Pipelines.
ramon349/VIMA
Official Algorithm Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts"
ramon349/BSSLecture
Simplified illustration of blind-source separation algorithms
ramon349/BSSLectureExamplesForStudents
ramon349/ColonCancer_survival
ramon349/cs534
Repository for my homework code, used mainly to transfer work between my computers.
ramon349/CS584FinalProject
ramon349/CS584HWs
ramon349/CS584ImageProcessing
ramon349/DSE598_HW
ramon349/fair_survival
ramon349/kits21_spatial_channel_attention
Code for a spatial and channel attention enhanced U-Net for the KiTS21 dataset.
ramon349/MICCAI23-ProtoContra-SFDA
This is the official code of MICCAI23 paper "Source-Free Domain Adaptation for Medical Image Segmentation via Prototype-Anchored Feature Alignment and Contrastive Learning"
ramon349/Mirai
This repository was used to develop Mirai, the risk model described in "Towards Robust Mammography-Based Models for Breast Cancer Risk."
ramon349/nnUNet
ramon349/OncoNet_Public
Developing Deep Learning Models for Mammography
ramon349/quip_cancer_segmentation
ramon349/redcap-em-aimi
The study team presents Stanford ML models as downloadable, plug-and-play modules for all REDCap users, removing the need to integrate external models or code. The customer aims to write a paper describing the "how to" of combining REDCap and ML models.
ramon349/VIMABench
Official Task Suite Implementation of ICML'23 Paper "VIMA: General Robot Manipulation with Multimodal Prompts"
ramon349/YOLOX
YOLOX is a high-performance anchor-free YOLO detector that exceeds YOLOv3–v5, with MegEngine, ONNX, TensorRT, ncnn, and OpenVINO support. Documentation: https://yolox.readthedocs.io/