SHI-Labs/OneFormer

Swin large backbone warning: "norm.bias will not be loaded. Please double check and see if this is desired."

GYPM4H opened this issue · 1 comment

GYPM4H commented

I am fine-tuning OneFormer on my own dataset (in COCO format).
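For completeness, the custom COCO-format splits referenced in the config below are registered with detectron2 roughly like this (a sketch; the annotation and image paths are placeholders for my local data layout):

```python
from detectron2.data.datasets import register_coco_instances

# Register the custom COCO-format instance splits used under DATASETS below.
# Paths are placeholders for my local data layout.
register_coco_instances(
    "instance_cells_train_v1", {}, "datasets/cells/annotations/train.json", "datasets/cells/train"
)
register_coco_instances(
    "instance_cells_val_v1", {}, "datasets/cells/annotations/val.json", "datasets/cells/val"
)
```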
I used the pre-trained Swin weights (swin_large_patch4_window12_384_22kto1k.pth), which I downloaded from here:
https://detrex.readthedocs.io/en/latest/tutorials/Download_Pretrained_Weights.html
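Since MODEL.WEIGHTS in my config points at a .pkl file, I first repacked the downloaded .pth into detectron2's pickle format with a small script along the lines of the converter shipped with Mask2Former/OneFormer (a sketch, assuming the usual convert-pretrained-model-to-d2 layout, not the exact tool):

```python
import pickle as pkl
import sys

import torch

# Repack a timm-style Swin .pth checkpoint into a detectron2-compatible .pkl:
# keep the raw state dict and let the checkpointer's matching heuristics map
# the backbone keys onto D2SwinTransformer.
if __name__ == "__main__":
    input_path, output_path = sys.argv[1], sys.argv[2]
    obj = torch.load(input_path, map_location="cpu")["model"]
    res = {"model": obj, "__author__": "third_party", "matching_heuristics": True}
    with open(output_path, "wb") as f:
        pkl.dump(res, f)
```

Usage: python convert.py swin_large_patch4_window12_384_22kto1k.pth swin_large_patch4_window12_384_22kto1k.pkl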
Here is my config:
```yaml
MODEL:
  BACKBONE:
    FREEZE_AT: 0
    NAME: "D2SwinTransformer"
  SWIN:
    EMBED_DIM: 192
    DEPTHS: [2, 2, 18, 2]
    NUM_HEADS: [6, 12, 24, 48]
    WINDOW_SIZE: 12
    APE: False
    DROP_PATH_RATE: 0.3
    PATCH_NORM: True
    PRETRAIN_IMG_SIZE: 384
  META_ARCHITECTURE: "OneFormer"
  SEM_SEG_HEAD:
    NAME: "OneFormerHead"
    IGNORE_VALUE: 255
    NUM_CLASSES: 1
    LOSS_WEIGHT: 1.0
    CONVS_DIM: 256
    MASK_DIM: 256
    NORM: "GN"
    # pixel decoder
    PIXEL_DECODER_NAME: "MSDeformAttnPixelDecoder"
    IN_FEATURES: ["res2", "res3", "res4", "res5"]
    DEFORMABLE_TRANSFORMER_ENCODER_IN_FEATURES: ["res3", "res4", "res5"]
    COMMON_STRIDE: 4
    TRANSFORMER_ENC_LAYERS: 6
  ONE_FORMER:
    TRANSFORMER_DECODER_NAME: "ContrastiveMultiScaleMaskedTransformerDecoder"
    TRANSFORMER_IN_FEATURE: "multi_scale_pixel_decoder"
    DEEP_SUPERVISION: True
    NO_OBJECT_WEIGHT: 0.1
    CLASS_WEIGHT: 2.0
    MASK_WEIGHT: 5.0
    DICE_WEIGHT: 5.0
    CONTRASTIVE_WEIGHT: 0.5
    CONTRASTIVE_TEMPERATURE: 0.07
    HIDDEN_DIM: 256
    NUM_OBJECT_QUERIES: 150
    USE_TASK_NORM: True
    NHEADS: 8
    DROPOUT: 0.1
    DIM_FEEDFORWARD: 2048
    ENC_LAYERS: 0
    PRE_NORM: False
    ENFORCE_INPUT_PROJ: False
    SIZE_DIVISIBILITY: 32
    CLASS_DEC_LAYERS: 2
    DEC_LAYERS: 10  # 9 decoder layers, add one for the loss on learnable query
    TRAIN_NUM_POINTS: 12544
    OVERSAMPLE_RATIO: 3.0
    IMPORTANCE_SAMPLE_RATIO: 0.75
  WEIGHTS: "./swin_large_patch4_window12_384_22kto1k.pkl"
  PIXEL_MEAN: [123.675, 116.280, 103.530]
  PIXEL_STD: [58.395, 57.120, 57.375]
  TEXT_ENCODER:
    WIDTH: 256
    CONTEXT_LENGTH: 77
    NUM_LAYERS: 6
    VOCAB_SIZE: 49408
    PROJ_NUM_LAYERS: 2
    N_CTX: 16
  TEST:
    SEMANTIC_ON: False
    INSTANCE_ON: True
    PANOPTIC_ON: False
    DETECTION_ON: False
    OVERLAP_THRESHOLD: 0.8
    OBJECT_MASK_THRESHOLD: 0.8
    TASK: "instance"
DATASETS:
  TRAIN: ("instance_cells_train_v1",)
  TEST_INSTANCE: ("instance_cells_val_v1",)
SOLVER:
  IMS_PER_BATCH: 2
  BASE_LR: 0.0001
  STEPS: (19580, 21120)
  MAX_ITER: 22000
  WARMUP_FACTOR: 1.0
  WARMUP_ITERS: 10
  WEIGHT_DECAY: 0.05
  OPTIMIZER: "ADAMW"
  BACKBONE_MULTIPLIER: 0.1
  CLIP_GRADIENTS:
    ENABLED: True
    CLIP_TYPE: "full_model"
    CLIP_VALUE: 0.01
    NORM_TYPE: 2.0
  AMP:
    ENABLED: False
INPUT:
  IMAGE_SIZE: 512
  MIN_SCALE: 0.1
  MAX_SCALE: 2.0
  FORMAT: "RGB"
  DATASET_MAPPER_NAME: "coco_instance"
  MAX_SEQ_LEN: 77
  TASK_SEQ_LEN: 77
  TASK_PROB:
    SEMANTIC: 0.0
    INSTANCE: 1.0
TEST:
  EVAL_PERIOD: 5000
  DETECTIONS_PER_IMAGE: 150
DATALOADER:
  FILTER_EMPTY_ANNOTATIONS: True
VERSION: 2
```

Running:

```bash
python train_net.py --dist-url 'tcp://127.0.0.1:50163' \
    --num-gpus 1 \
    --config-file ./config.yaml \
    OUTPUT_DIR ./cells_swin_large
```

I got these warnings:
WARNING [11/09 13:00:23 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.adapter_1.norm.bias in model is torch.Size([256]).
WARNING [11/09 13:00:23 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [11/09 13:00:23 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.adapter_1.norm.weight in model is torch.Size([256]).
WARNING [11/09 13:00:23 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [11/09 13:00:23 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.layer_1.norm.bias in model is torch.Size([256]).
[11/09 13:00:23 d2.checkpoint.c2_model_loading]: Following weights matched with submodule backbone - Total num: 128
WARNING [11/09 13:00:23 fvcore.common.checkpoint]: Some model parameters or buffers are not found in the checkpoint:
backbone.norm0.{bias, weight}
backbone.norm1.{bias, weight}
backbone.norm2.{bias, weight}
backbone.norm3.{bias, weight}
criterion.{empty_weight, logit_scale}
prompt_ctx.weight
sem_seg_head.pixel_decoder.adapter_1.norm.{bias, weight}
sem_seg_head.pixel_decoder.adapter_1.weight
sem_seg_head.pixel_decoder.input_proj.0.0.{bias, weight}
sem_seg_head.pixel_decoder.input_proj.0.1.{bias, weight}
sem_seg_head.pixel_decoder.input_proj.1.0.{bias, weight}
sem_seg_head.pixel_decoder.input_proj.1.1.{bias, weight}
sem_seg_head.pixel_decoder.input_proj.2.0.{bias, weight}
sem_seg_head.pixel_decoder.input_proj.2.1.{bias, weight}
sem_seg_head.pixel_decoder.layer_1.norm.{bias, weight}
sem_seg_head.pixel_decoder.layer_1.weight
sem_seg_head.pixel_decoder.mask_features.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.linear1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.linear2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.norm1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.norm2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.self_attn.attention_weights.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.self_attn.output_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.self_attn.sampling_offsets.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.0.self_attn.value_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.linear1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.linear2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.norm1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.norm2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.self_attn.attention_weights.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.self_attn.output_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.self_attn.sampling_offsets.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.1.self_attn.value_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.linear1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.linear2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.norm1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.norm2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.self_attn.attention_weights.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.self_attn.output_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.self_attn.sampling_offsets.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.2.self_attn.value_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.linear1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.linear2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.norm1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.norm2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.self_attn.attention_weights.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.self_attn.output_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.self_attn.sampling_offsets.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.3.self_attn.value_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.linear1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.linear2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.norm1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.norm2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.self_attn.attention_weights.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.self_attn.output_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.self_attn.sampling_offsets.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.4.self_attn.value_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.linear1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.linear2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.norm1.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.norm2.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.self_attn.attention_weights.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.self_attn.output_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.self_attn.sampling_offsets.{bias, weight}
sem_seg_head.pixel_decoder.transformer.encoder.layers.5.self_attn.value_proj.{bias, weight}
sem_seg_head.pixel_decoder.transformer.level_embed
sem_seg_head.predictor.class_embed.{bias, weight}
sem_seg_head.predictor.class_input_proj.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.linear1.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.linear2.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.norm1.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.norm2.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.norm3.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.0.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.linear1.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.linear2.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.norm1.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.norm2.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.norm3.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.class_transformer.decoder.layers.1.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.class_transformer.decoder.norm.{bias, weight}
sem_seg_head.predictor.decoder_norm.{bias, weight}
sem_seg_head.predictor.level_embed.weight
sem_seg_head.predictor.mask_embed.layers.0.{bias, weight}
sem_seg_head.predictor.mask_embed.layers.1.{bias, weight}
sem_seg_head.predictor.mask_embed.layers.2.{bias, weight}
sem_seg_head.predictor.query_embed.weight
sem_seg_head.predictor.transformer_cross_attention_layers.0.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.0.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.0.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.1.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.1.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.1.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.2.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.2.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.2.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.3.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.3.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.3.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.4.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.4.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.4.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.5.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.5.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.5.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.6.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.6.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.6.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.7.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.7.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.7.norm.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.8.multihead_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_cross_attention_layers.8.multihead_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_cross_attention_layers.8.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.0.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.0.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.0.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.1.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.1.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.1.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.2.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.2.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.2.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.3.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.3.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.3.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.4.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.4.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.4.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.5.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.5.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.5.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.6.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.6.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.6.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.7.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.7.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.7.norm.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.8.linear1.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.8.linear2.{bias, weight}
sem_seg_head.predictor.transformer_ffn_layers.8.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.0.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.0.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.0.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.1.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.1.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.1.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.2.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.2.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.2.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.3.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.3.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.3.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.4.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.4.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.4.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.5.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.5.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.5.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.6.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.6.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.6.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.7.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.7.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.7.self_attn.{in_proj_bias, in_proj_weight}
sem_seg_head.predictor.transformer_self_attention_layers.8.norm.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.8.self_attn.out_proj.{bias, weight}
sem_seg_head.predictor.transformer_self_attention_layers.8.self_attn.{in_proj_bias, in_proj_weight}
task_mlp.layers.0.{bias, weight}
task_mlp.layers.1.{bias, weight}
text_encoder.ln_final.{bias, weight}
text_encoder.positional_embedding
text_encoder.token_embedding.weight
text_encoder.transformer.resblocks.0.attn.out_proj.{bias, weight}
text_encoder.transformer.resblocks.0.attn.{in_proj_bias, in_proj_weight}
text_encoder.transformer.resblocks.0.ln_1.{bias, weight}
text_encoder.transformer.resblocks.0.ln_2.{bias, weight}
text_encoder.transformer.resblocks.0.mlp.c_fc.{bias, weight}
text_encoder.transformer.resblocks.0.mlp.c_proj.{bias, weight}
text_encoder.transformer.resblocks.1.attn.out_proj.{bias, weight}
text_encoder.transformer.resblocks.1.attn.{in_proj_bias, in_proj_weight}
text_encoder.transformer.resblocks.1.ln_1.{bias, weight}
text_encoder.transformer.resblocks.1.ln_2.{bias, weight}
text_encoder.transformer.resblocks.1.mlp.c_fc.{bias, weight}
text_encoder.transformer.resblocks.1.mlp.c_proj.{bias, weight}
text_encoder.transformer.resblocks.2.attn.out_proj.{bias, weight}
text_encoder.transformer.resblocks.2.attn.{in_proj_bias, in_proj_weight}
text_encoder.transformer.resblocks.2.ln_1.{bias, weight}
text_encoder.transformer.resblocks.2.ln_2.{bias, weight}
text_encoder.transformer.resblocks.2.mlp.c_fc.{bias, weight}
text_encoder.transformer.resblocks.2.mlp.c_proj.{bias, weight}
text_encoder.transformer.resblocks.3.attn.out_proj.{bias, weight}
text_encoder.transformer.resblocks.3.attn.{in_proj_bias, in_proj_weight}
text_encoder.transformer.resblocks.3.ln_1.{bias, weight}
text_encoder.transformer.resblocks.3.ln_2.{bias, weight}
text_encoder.transformer.resblocks.3.mlp.c_fc.{bias, weight}
text_encoder.transformer.resblocks.3.mlp.c_proj.{bias, weight}
text_encoder.transformer.resblocks.4.attn.out_proj.{bias, weight}
text_encoder.transformer.resblocks.4.attn.{in_proj_bias, in_proj_weight}
text_encoder.transformer.resblocks.4.ln_1.{bias, weight}
text_encoder.transformer.resblocks.4.ln_2.{bias, weight}
text_encoder.transformer.resblocks.4.mlp.c_fc.{bias, weight}
text_encoder.transformer.resblocks.4.mlp.c_proj.{bias, weight}
text_encoder.transformer.resblocks.5.attn.out_proj.{bias, weight}
text_encoder.transformer.resblocks.5.attn.{in_proj_bias, in_proj_weight}
text_encoder.transformer.resblocks.5.ln_1.{bias, weight}
text_encoder.transformer.resblocks.5.ln_2.{bias, weight}
text_encoder.transformer.resblocks.5.mlp.c_fc.{bias, weight}
text_encoder.transformer.resblocks.5.mlp.c_proj.{bias, weight}
text_projector.layers.0.{bias, weight}
text_projector.layers.1.{bias, weight}

Training continued despite these warnings.
What is the problem? Is this expected behaviour?
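For reference, this is how I double-check what the backbone-only .pkl actually contains (a quick sketch; the path is the WEIGHTS entry from my config):

```python
import pickle as pkl

# Inspect the converted checkpoint: it should only contain Swin backbone
# tensors, including the 1536-dim final "norm" that the shape warnings
# above refer to.
with open("./swin_large_patch4_window12_384_22kto1k.pkl", "rb") as f:
    ckpt = pkl.load(f)

keys = sorted(ckpt["model"].keys())
print(len(keys), "tensors in checkpoint")
print([k for k in keys if k.startswith("norm")])
```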

@GYPM4H How did you solve this issue? Could you let me know? I am getting the same warnings.