vesselSeg

This study presents an imbalanced bidirectional-scaling enhanced attention model for liver vessel segmentation. Its shallow down-scaling module enlarges the receptive field and suppresses intense pixel-level noise; its deep up-scaling module is a super-resolution architecture that zooms in on vessel details; and its attention module captures structural connections.
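The repository's implementation is not reproduced here, but the attention module configured by the `heads`, `dim`, and `mlp_dim` parameters below is built on scaled dot-product attention. As a rough, pure-Python toy sketch of that core operation (a single head over toy vectors, not the model's actual code):

```python
import math

def softmax(xs):
    # numerically stable softmax over a list of scores
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    """Scaled dot-product attention over lists of vectors (toy sketch)."""
    d = len(queries[0])  # key/query dimension, used for the 1/sqrt(d) scaling
    out = []
    for q in queries:
        # similarity of this query to every key, scaled by sqrt(d)
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # weighted average of the value vectors
        out.append([sum(w * v[j] for w, v in zip(weights, values))
                    for j in range(len(values[0]))])
    return out

# Identical keys give uniform weights, so the output is the mean of the values.
print(attention([[1.0, 0.0]], [[1.0, 0.0], [1.0, 0.0]], [[1.0, 0.0], [3.0, 0.0]]))
# → [[2.0, 0.0]]
```

A multi-head layer runs `heads` copies of this in parallel on learned projections of dimension `dim` and concatenates the results; the `mlp_dim` feed-forward layer then mixes them.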

Usage

Parameters

  • num_workers: int
    Number of worker threads used to load data.
  • ckpt: str
    Weight path. Sets the directory in which model weights are saved.
  • w: str
    Path of the model weight to test or reload.
  • heads: int
    Number of heads in the multi-head attention layer.
  • mlp_dim: int
    Dimension of the MLP (feed-forward) layer.
  • channels: int, default 3
    Number of image channels.
  • dim: int
    Last dimension of the output tensor after the linear transformation nn.Linear(..., dim).
  • dropout: float in [0, 1], default 0
    Dropout rate.
  • emb_dropout: float in [0, 1], default 0
    Embedding dropout rate.
  • patch_h and patch_w: int
    Patch height and width.
  • dataset_path: str
    Relative path of the training and validation sets.
  • batch_size: int
    Batch size.
  • max_epoch: int
    Maximum number of epochs for the current training run.
  • lr: float
    Learning rate. Sets the initial learning rate of the model.
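The parameters above are typically passed on the command line. A hypothetical `argparse` sketch is shown below; the flag names mirror the list above, but the entry-point script and all default values here are assumptions, not the repository's actual configuration:

```python
import argparse

# Hypothetical CLI for vesselSeg; defaults are placeholders, not the repo's.
parser = argparse.ArgumentParser(description="vesselSeg training/testing")
parser.add_argument("--num_workers", type=int, default=4)
parser.add_argument("--ckpt", type=str, default="checkpoints/")
parser.add_argument("--w", type=str, default=None)
parser.add_argument("--heads", type=int, default=8)
parser.add_argument("--mlp_dim", type=int, default=1024)
parser.add_argument("--channels", type=int, default=3)
parser.add_argument("--dim", type=int, default=512)
parser.add_argument("--dropout", type=float, default=0.0)
parser.add_argument("--emb_dropout", type=float, default=0.0)
parser.add_argument("--patch_h", type=int, default=16)
parser.add_argument("--patch_w", type=int, default=16)
parser.add_argument("--dataset_path", type=str, default="data/")
parser.add_argument("--batch_size", type=int, default=8)
parser.add_argument("--max_epoch", type=int, default=100)
parser.add_argument("--lr", type=float, default=1e-4)

# Example invocation, overriding two of the assumed defaults
args = parser.parse_args(["--batch_size", "4", "--lr", "3e-4"])
print(args.batch_size, args.lr)
```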