HuHaigen's Stars
horrible-dong/TeamDETR
[ICIP 2023 (oral)] Team DETR: Guide Queries as a Professional Team in Detection Transformers
horrible-dong/DNRT
[ICLR 2024] Dynamic Neural Response Tuning
HuHaigen/COVID-19-Lung-Infection-Segmentation
Due to the irregular shapes, various sizes, and indistinguishable boundaries between normal and infected tissues, accurately segmenting the infected lesions of COVID-19 on CT images remains a challenging task. In this paper, a novel segmentation scheme is proposed for COVID-19 infections by enhancing supervised information and fusing multi-scale feature maps of different levels based on an encoder-decoder architecture. To this end, a deep collaborative supervision (Co-supervision) scheme is proposed to guide the network in learning both edge and semantic features. More specifically, an Edge Supervised Module (ESM) is first designed to highlight low-level boundary features by incorporating edge supervision into the initial stage of down-sampling. Meanwhile, an Auxiliary Semantic Supervised Module (ASSM) is proposed to strengthen high-level semantic information by integrating mask supervision into the later stage. Then an Attention Fusion Module (AFM) is developed to fuse multi-scale feature maps of different levels with an attention mechanism, reducing the semantic gap between high-level and low-level feature maps. Finally, the effectiveness of the proposed scheme is demonstrated on four different COVID-19 CT datasets. The results show that all three proposed modules are promising. On the ResUnet baseline, using ESM, ASSM, or AFM alone increases the Dice metric by 1.12%, 1.95%, and 1.63%, respectively, on our dataset, while integrating the three modules together raises it by 3.97%. Compared with existing approaches on various datasets, the proposed method obtains better segmentation performance on several main metrics and achieves the best generalization and overall performance.
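The attention-based fusion of multi-scale features described above can be illustrated with a minimal sketch; the module name, channel counts, and fusion-by-addition choice below are assumptions for illustration, not the repository's actual implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusionSketch(nn.Module):
    """Minimal sketch of attention-based fusion of a low-level and a
    high-level feature map (shapes and layers are illustrative assumptions)."""

    def __init__(self, low_channels: int, high_channels: int, out_channels: int):
        super().__init__()
        # Project both feature maps to a common channel dimension.
        self.low_proj = nn.Conv2d(low_channels, out_channels, kernel_size=1)
        self.high_proj = nn.Conv2d(high_channels, out_channels, kernel_size=1)
        # Channel attention derived from the high-level (semantic) features.
        self.attention = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(out_channels, out_channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low_feat: torch.Tensor, high_feat: torch.Tensor) -> torch.Tensor:
        low = self.low_proj(low_feat)
        # Upsample high-level features to the low-level spatial resolution.
        high = self.high_proj(
            F.interpolate(high_feat, size=low_feat.shape[-2:],
                          mode="bilinear", align_corners=False)
        )
        # Re-weight low-level (boundary) features with semantic attention,
        # then fuse by addition to narrow the semantic gap.
        return high + low * self.attention(high)


if __name__ == "__main__":
    low = torch.randn(1, 64, 128, 128)   # low-level feature map
    high = torch.randn(1, 256, 32, 32)   # high-level feature map
    fused = AttentionFusionSketch(64, 256, 128)(low, high)
    print(fused.shape)  # torch.Size([1, 128, 128, 128])
```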
jinzcdev/vscode-pintia
Practice 拼题A (PTA) programming problems in VS Code.
horrible-dong/QTClassification
A lightweight and extensible toolbox for image classification
HuHaigen/Adaptively-Customizing-Activation-Functions
To enhance the nonlinearity of neural networks and increase their ability to map inputs to response variables, activation functions play a crucial role in modeling complex relationships and patterns in the data. In this work, a novel methodology is proposed to adaptively customize activation functions by adding only a very few parameters to traditional activation functions such as Sigmoid, Tanh, and ReLU. To verify the effectiveness of the proposed methodology, theoretical and experimental analyses on accelerating convergence and improving performance are presented, and a series of experiments are conducted on various network models (such as AlexNet, VGGNet, GoogLeNet, ResNet, and DenseNet) and various datasets (such as CIFAR10, CIFAR100, miniImageNet, PASCAL VOC, and COCO). To further verify its validity and suitability across optimization strategies and usage scenarios, comparison experiments are also carried out among different optimization strategies (such as SGD, Momentum, AdaGrad, AdaDelta, and ADAM) and different recognition tasks such as classification and detection. The results show that the proposed methodology is very simple yet delivers significant gains in convergence speed, precision, and generalization, and it surpasses popular methods like ReLU and adaptive functions like Swish in almost all experiments in terms of overall performance.
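The idea of adding a handful of trainable parameters to a standard activation can be sketched as follows; the specific parameterization shown (a learnable scale and shift around ReLU) is an illustrative assumption, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class AdaptiveReLUSketch(nn.Module):
    """Minimal sketch: a ReLU-like activation with two trainable parameters
    (the paper's actual parameterization may differ)."""

    def __init__(self):
        super().__init__()
        # A few extra parameters, learned jointly with the network weights.
        self.alpha = nn.Parameter(torch.ones(1))   # output scale
        self.beta = nn.Parameter(torch.zeros(1))   # input shift

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Scaled and shifted ReLU; reduces to the standard ReLU when
        # alpha == 1 and beta == 0.
        return self.alpha * torch.relu(x + self.beta)


if __name__ == "__main__":
    act = AdaptiveReLUSketch()
    x = torch.randn(4, 8)
    print(act(x).shape)  # torch.Size([4, 8])
```

Because the module defaults to the standard ReLU, it can be dropped into an existing network (e.g., replacing `nn.ReLU()` layers) without changing the initial behavior, and the extra parameters are tuned by the optimizer during training.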
jinzcdev/occupied-gpus
The program used to occupy GPUs.
jinzcdev/ohmyzsh-with-plugins
A shell script that quickly installs ohmyzsh together with the zsh-autosuggestions and zsh-syntax-highlighting plugins, for efficiently setting up a terminal environment.
horrible-dong/CCT-Net
[IJCAI 2022] Comparison Knowledge Translation for Generalizable Image Classification
HuHaigen/CTI-UNet
[ICIP 2023] CTI-Unet: Hybrid Local Features and Global Representations Efficiently
HuHaigen/TDRConv-Exploring-the-Trade-off-Between-Feature-Diversity-and-Redundancy-for-a-Compact-CNN-Module
HuHaigen/HDConv