original paper: "CPTNN: CROSS-PARALLEL TRANSFORMER NEURAL NETWORK FOR TIME-DOMAIN SPEECH ENHANCEMENT"
step 1: copy cptnn.py, TRANSFORMER.py, and process_for_cptnn.py into your model directory.
step 2: import cptnn in your training framework and you are ready to go.
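For reference, a minimal usage sketch, assuming cptnn.py exposes a PyTorch module named CPTNN that maps a batch of noisy waveforms to enhanced waveforms of the same shape (the class name, default constructor, and I/O shapes are assumptions; check cptnn.py for the actual API):

```python
import torch
from cptnn import CPTNN  # assumed class name; see cptnn.py for the actual export

model = CPTNN()                 # default configuration (~1.1M parameters)
noisy = torch.randn(2, 16000)   # batch of two 1-second waveforms at 16 kHz
enhanced = model(noisy)         # time-domain output, same shape as the input
```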
current model size: 1.1M parameters
frame_len, hop_size: control how the input waveform is split into overlapping segments
feat_dim, hidden_size, num_heads, cptm_layers: tune these hyperparameters for your task
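As an illustration of how these knobs could be wired together, a hedged sketch that passes them as constructor keyword arguments; the keyword names mirror the variables above, but the exact signature and the example values are assumptions, not the repo's defaults:

```python
from cptnn import CPTNN  # assumed class name

model = CPTNN(
    frame_len=512,    # samples per segment cut from the waveform (assumed value)
    hop_size=256,     # hop between consecutive segments, 50% overlap here (assumed value)
    feat_dim=64,      # feature dimension of the encoded segments (assumed value)
    hidden_size=128,  # hidden size inside the cross-parallel transformer module (assumed value)
    num_heads=4,      # attention heads per transformer block (assumed value)
    cptm_layers=2,    # number of stacked cross-parallel transformer modules (assumed value)
)
```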