{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"name":"README.ipynb","provenance":[],"collapsed_sections":[],"authorship_tag":"ABX9TyMX33OqCRsYpCS04Ne5DC+S"},"kernelspec":{"name":"python3","display_name":"Python 3"},"language_info":{"name":"python"}},"cells":[{"cell_type":"markdown","source":["# UWMGI\n","\n","* reffernce\n","---\n","\n","\n","1.   * [AW-Madison: EDA & In Depth Mask Exploration](https://www.kaggle.com/code/andradaolteanu/aw-madison-eda-in-depth-mask-exploration)\n","     * nb001, nb002\n","     * EDA, kaggle notebook\n","     * done\n","\n","---\n","\n","2. * [UWMGI: Mask Data](https://www.kaggle.com/code/awsaf49/uwmgi-mask-data)\n","   * nb003\n","   * create 2.5D images set\n","   * 前後のスライスも考慮した2.5D dataの方が精度いいみたい\n","   * baselineつくって試してみたい\n","   * doing\n","\n","---\n","\n"],"metadata":{"id":"Gu20BiaSrXxJ"}},{"cell_type":"code","source":[""],"metadata":{"id":"ptHHi84Kf82r"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["* SCORE\n","\n","\n","---\n","* nb004\n","    * fold0 Dice: 0.902, Jaccard 0.875\n","    * fold1 Dice: 0.892, Jaccard 0.864\n","    * fold2 Dice: 0.898, Jaccard 0.865\n","    * PB: 0.838\n","---\n","\n","* nb008\n","    * fold0 Dice: 0.903, Jaccard 0.875\n","    * fold1 Dice: 0.895, Jaccard 0.865\n","    * fold2 Dice: 0.896, Jaccard 0.864\n","    * PB: 0.847\n"],"metadata":{"id":"Yvwi7ApIf9bh"}},{"cell_type":"markdown","source":["### 2022/05/27\n","* nb001\n","    * EDAやってる。segmentationないのが多い\n","        * そもそも写っていないのか、それともannotationミスか\n","        * **そもそも写ってないのが多い**\n","    * 画像の明るさが結構違う、一応画像読み込むときに正規化はしてると思うが"],"metadata":{"id":"0_rKKveOrWew"}},{"cell_type":"markdown","source":["### 2022/05/28\n","\n","   * nb002\n","\n","        * maskしたdataづくり\n","        * [UWMGI: Mask Data](https://www.kaggle.com/code/awsaf49/uwmgi-mask-data) (これは結局参考にしてない)\n","        * [AW-Madison: EDA & In Depth Mask Exploration](https://www.kaggle.com/code/andradaolteanu/aw-madison-eda-in-depth-mask-exploration)を参照\n","        * 2D mask data\n","        * visualization\n","\n"],"metadata":{"id":"8wKUI7wGGrp9"}},{"cell_type":"markdown","source":["### 2022/06/01\n","   * nb003\n","      * 2.5D data つくっている\n","      * [UWMGI: Mask Data](https://www.kaggle.com/code/awsaf49/uwmgi-mask-data)\n","      * **chennls = 3, stride = 2, 後ろへずれていく**"],"metadata":{"id":"tLey92rYIHDV"}},{"cell_type":"markdown","source":["### 2022/06/10\n","### 2022/06/11\n","   * nb002, nb003, nb005\n","      * [hdf5](https://qiita.com/simonritchie/items/23db8b4cb5c590924d95) でデータ管理したい\n","      * colabのデータ読み込みが遅い\n","      * hdf5はバイナリファイルの形で管理・ファイルの中で階層構造をつくる(DBみたいなイメージ)\n","      * **ただcompressionを指定して圧縮しないとファイルがばかでかくなる** [リンクテキスト](https:// [リンクテキスト](https://))"],"metadata":{"id":"GNurJ34ox3fY"}},{"cell_type":"markdown","source":["### 2022/06/14 \n","### 2022/06/15\n","   * nb004\n","      * **lossがnanになる**\n","        * 原因はimageのピクセル値が大きすぎたこと(2000以上の値とか入っていた)\n","        * **imageはちゃんと正規化やスケーリングをする**\n","      * 学習が進まない\n","        * 正解データをGPUに送る段階で画像をGPUに送っていた\n","        * to(device)でのミス多い気がするから注意する"],"metadata":{"id":"GXbgzgI314ce"}},{"cell_type":"markdown","source":["### 2022/06/16 2022/06/20\n","* nb004\n","    * Unet 2D data\n","\n","* nb008\n","    * Unet 2.5D data\n","    * PB: 0.847\n","        * Public notebookの0.86に及ばない(foldの数 or epoch数 が原因か)\n","* nb009 nb010 nb011 PB: 0.848\n","    * nb009\n","        * nb008と同じbackboneのUnet\n","        * large-bowel専門\n","    * nb010\n","        * nb008と同じbackboneのUnet\n","        * small bowel 専門\n","    * nb011\n","        * 
### 2022/06/10
### 2022/06/11

* nb002, nb003, nb005
    * Want to manage the data with [HDF5](https://qiita.com/simonritchie/items/23db8b4cb5c590924d95).
    * Data loading on Colab is slow.
    * HDF5 keeps everything in a single binary file with a hierarchical structure inside (think of it like a DB).
    * **However, unless you specify compression, the file gets huge** (see the sketch below).
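A minimal h5py sketch of the compression point above; the dataset names, shapes, and gzip level are illustrative, not the layout actually used in nb002/nb003/nb005.

```python
import h5py
import numpy as np

# toy data standing in for the image stack and its 3-class masks
images = np.random.randint(0, 2**16, size=(100, 266, 266), dtype=np.uint16)
masks = np.zeros((100, 266, 266, 3), dtype=np.uint8)

with h5py.File("uwmgi_train.h5", "w") as f:
    # without compression="gzip" the file can blow up to many GB
    f.create_dataset("images", data=images, compression="gzip", compression_opts=4)
    f.create_dataset("masks", data=masks, compression="gzip", compression_opts=4)

with h5py.File("uwmgi_train.h5", "r") as f:
    img = f["images"][0]  # datasets are read lazily, slice by slice
```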
### 2022/06/14
### 2022/06/15

* nb004
    * **Loss becomes NaN.**
        * The cause was overly large pixel values in the images (values above 2000 were present).
        * **Properly normalize and scale the images.**
    * Training doesn't progress.
        * At the step where the ground truth should be sent to the GPU, the image was being sent instead.
        * I seem to make a lot of `to(device)` mistakes, so be careful.

### 2022/06/16 - 2022/06/20

* nb004
    * Unet, 2D data
* nb008
    * Unet, 2.5D data
    * PB: 0.847
        * Falls short of the public notebook's 0.86 (number of folds or number of epochs is probably the cause).
* nb009, nb010, nb011 PB: 0.848
    * nb009
        * Unet with the same backbone as nb008
        * large bowel only
    * nb010
        * Unet with the same backbone as nb008
        * small bowel only
    * nb011
        * Unet with the same backbone as nb008
        * stomach only

### 2022/06/21

* nb012
    * Increase nb008's folds to 5. PB: 0.852
    * Is the gain from the folds or from the epochs?
    * **Increasing the folds improved accuracy (more training data and more variety).**
* nb013
    * Create the channel-5, stride-1 data.

### 2022/06/22 - 2022/06/23

* nb014 PB: 0.854
    * nb008 with the folds increased to 5 (PB: 0.852)
    * Uses the channel-5, stride-1 2.5D images.
* nb015 PB: 0.846
    * **strong augmentation**
    * Apply heavier data augmentation.
    * Epoch count is unchanged, so it might be worth raising it.
    * Dropped below nb008 (probably not enough epochs).
* nb016 PB: 0.861
    * Raised nb008's epoch count from 7 to 15.
    * Also save the final model's predictions.
        * Model inference is quite slow.
            * So only a subset of the predictions is written out.
            * Inference runs sample by sample, so maybe this is expected.
            * Or could the computation be organized better on the GPU? (The reference inference notebook was also around 2.81 s/iter, so probably unavoidable.)
* backbone
    1. efficientnet-b4: 0.874
    2. resnet50: probably needs around 30 epochs

### 2022/06/24

* nb017 PB: 0.847 -> 0.854 !!!
    * Change nb008's loss function to Dice + BCE (see the loss sketch after this section); per-fold Dice improves by roughly 0.002-0.007 over nb008.
    * nb008 fold0 dice: 0.9032, jaccard: 0.8748
    * nb017 fold0 dice: 0.9049, jaccard: 0.8767
    * nb008 fold1 dice: 0.8948, jaccard: 0.865
    * nb017 fold1 dice: 0.8987, jaccard: 0.8692
    * nb008 fold2 dice: 0.8959, jaccard: 0.8645
    * nb017 fold2 dice: 0.9023, jaccard: 0.8715

* Also want to try a Dice-only loss (nb023).
* Create the channel-5, stride-2 data and train on it (nb019).
* Backbone efficientnet-b4 (if Colab compute is not enough, at least run large bowel and small bowel) (nb018).
* Backbone resnet50 [discussion](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/320692)
* TransUnet [HuBMAP](https://www.kaggle.com/code/elcaiseri/hubmap-pytorch-vit-segmentation-starter-train/notebook)
* Scheduler ReduceLROnPlateau [discussion](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/323885)
* Try strong augmentation (nb015) with the 15-epoch setting (nb020).
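A minimal sketch of the Dice + BCE combination used from nb017 onward, assuming `segmentation_models_pytorch` is available; the 0.5/0.5 weighting is a placeholder, not necessarily the weighting used in the notebooks.

```python
import torch.nn as nn
import segmentation_models_pytorch as smp

# soft Dice over the three organ channels + pixel-wise BCE, both taking raw logits
dice_loss = smp.losses.DiceLoss(mode="multilabel", from_logits=True)
bce_loss = nn.BCEWithLogitsLoss()

def dice_bce_loss(logits, targets, w_dice=0.5, w_bce=0.5):
    """Weighted Dice + BCE for (N, 3, H, W) logits and float {0, 1} targets."""
    return w_dice * dice_loss(logits, targets) + w_bce * bce_loss(logits, targets)
```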
* nb018 PB: 0.857 CV: fold0 0.913-0.887, fold1 0.898-0.868
    * model: efficientnet-b4
    * epoch 15
    * Compare with nb016 (PB: 0.861, CV: fold0 0.909-0.887, fold1 0.903-0.87).
    * Could only train fold0 and fold1.
* nb019
    * Create the channel-5, stride-2 data.
* nb020
    * Try strong augmentation (nb015) at 15 epochs.
    * Compare with nb016.
    * nb016 fold0 dice: 0.9094, jaccard: 0.8818
    * nb020 fold0 dice: 0.9119, jaccard: 0.8846
    * **nb020 has a slightly better CV score.**
* nb021 (do later)
    * resnext50_32x4d
    * epoch 30

### 2022/06/25

* nb022
    * Uses the channel-5, stride-2 dataset created in nb019.
    * Compare with nb014.
    * nb019-data model: fold0 0.903-0.875
    * nb014: fold0 0.907-0.879
    * Crashed due to running out of memory.
    * **channel-5, stride-1 seems to perform better.**
* nb023
    * Loss function: Dice only.
    * Compare with nb008 (Dice + Tversky) and nb017 (Dice + BCE).
    * fold0 0.905-0.876, fold1 0.897-0.868, fold2 0.903-0.872

### Wouldn't it help to add the slice number and slice max as an extra channel, giving the model positional information?

### 2022/06/26 - 2022/06/27

* nb024
    * Filled half of the 4th channel with 0.slice_num and the other half with 0.slice_max.
    * Training wasn't progressing well, so changed the learning rate.
* nb025
    * In the 4th channel, put 0.slice_num into the top-left 9 pixels and 0.slice_max into the top-right 9 pixels, leaving the rest 0.
* nb026 PB: 0.857 (channel3, epoch 16, BCE + Tversky loss: PB 0.861)
    * **Didn't improve as much as expected.**
    * CV nb026 vs nb014 (dice): fold0 0.9121-0.9093, fold1 0.8977-0.9032, fold2 0.9080-0.9065
    * **CV is better for nb026; trust CV.**
    * nb020: strong augmentation is good.
    * nb017: BCE + Dice is good.
    * nb014: the 5-channel, stride-1 data is good.

### 2022/06/27

* nb027
    * **efficientnet-b3**
    * channel5
    * moderate augmentation
        * Affine
        * ElasticTransform
        * RandomBrightness
* nb028
    * **efficientnet-b4**
    * channel3
* [FiLM](https://proceedings.mlr.press/v143/lemay21a/lemay21a.pdf)
    * Integrates metadata into the segmentation model.
    * What if the slice number goes in through this?
* [Unet from scratch](https://www.youtube.com/watch?v=u1loyDCoGbE)
    * Could be useful for implementing FiLM?

### 2022/06/30

* nb029
    * Try implementing FiLM.
    * **Was converting to 0-1 with a sigmoid in FiLM's final layer; removing that made training fly!! (It actually wasn't learning.)**
* nb030 PB: 0.870 — is the location-information channel working, or is it the effect of channel5?
    * CV fold0 0.9172, fold1 0.9101, fold2 0.9099, PB 0.870
    * (nb028) fold0 0.9162, fold1 0.9103, fold2 0.9089, PB: 0.867
    * Adds the location-information data to nb028's setup.
        * efficientnet-b3
        * channel5 + channel1 with location information
    * Combined with nb025's location encoding.

### 2022/06/30 - 2022/07/05

* nb031 0.857
    * **The power of channel5** -> channel3 can't compete at all.
    * Following up on nb030: is the location information actually helping?
    * Try with efficientnet-b4.
    * channel3 + channel1 (location information)
    * epoch 20
    * nb018 (efficientnet-b4) vs nb031 (efficientnet-b4)
        * nb018 PB: 0.857 CV: fold0 0.913-0.887, fold1 0.898-0.868
        * nb031 PB: 0.857 CV: fold0 0.9159-0.8895, fold1 0.906-0.878
        * nb018 used 15 epochs and nb031 used 20, so that may explain the difference, but the location information could also be contributing.
* nb027, nb028
    * The ensemble notebook for these two hits a computation-limit error.
    * efficientnet-b4 + efficientnet-b3, 6 models in total.
    * **A 6-model ensemble may be too heavy.**
    * **Memory blows up because the mask values are stored as float32.**
* nb032
    * efficientnet-b4, channel5 + channel1 (location information)
    * Computation is heavy, probably too much.
    * **Changed to channel5, epoch 30, efficientnet-b4.**
    * CV fold0 0.9202, fold1 0.9138, fold2 0.9151, PB: 0.871
* nb033
    * efficientnet-b4
    * epoch 30
    * channel5 + channel1 (location information)
    * stride 2, forward and back -> no good (a problem with how the ensemble was done)
    * Dice + BCE loss
    * fold0 0.9184 (lr 2e-3), fold1 0.9084, fold2 0.9098, PB: 0.873
    * Want to compare with nb030 without the location information.
    * Relying on this is risky shake-up-wise (the CV isn't that strong).
* nb034
    * efficientnet-b4
    * channel5 + location information
    * Dice only
    * CV fold0: 0.9166 (fold1, fold2 pending)
* nb030 (efficientnet-b3, channel5 + location information) + nb032 (efficientnet-b4, channel5 only)
    * Try this ensemble.
* Also want to try a channel-5, stride-2 model, different loss functions, and strong augmentation.
* channel5 is settled.
* nb035
    * efficientnet-b3
    * channel5
    * strong augmentation
    * CV fold0: 0.9193, fold1: 0.9151 (fold2 pending)

### 2022/07/08

* Ensemble candidates (weighting sketch after this list)
    * nb030
        * efficientnet-b3
        * channel5 + channel1 with location information
        * CV: fold0 0.9172, fold1 0.9101, fold2 0.9106
        * PB: 0.870
    * nb031
        * efficientnet-b4
        * channel3 + channel1 (location information), epoch 20
        * CV: fold0 0.9159, fold1 0.906, fold2 0.9154
        * PB: 0.857
        * Retrain at 30 epochs?
    * **nb032**
        * efficientnet-b4
        * channel5, epoch 30
        * CV: fold0 0.9202, fold1 0.9138, fold2 0.9151
        * PB: 0.871
    * **nb033**
        * efficientnet-b4
        * epoch 30
        * channel5 + channel1 (location information)
        * Dice + BCE loss
        * CV: fold0 0.9184, fold1 0.9084, fold2 0.9098
        * PB: 0.873
        * **The PB score is unusually high; beware of a shake-up.**
    * nb034
        * efficientnet-b4
        * channel5 + location information
        * Dice only
        * CV: fold0 0.9166 (fold1, fold2 pending)
    * **nb035**
        * efficientnet-b3
        * channel5 + location information (channel1)
        * strong augmentation
        * CV: fold0 0.9193, fold1 0.9151, fold2 0.9129
        * PB: 0.867
    * **nb036**
        * efficientnet-b4
        * channel5 only
        * Dice + BCE
        * CV: fold0 0.9197, fold1 0.9119, fold2 0.9147
        * PB: 0.872
    * nb037
        * Determine the ensemble weights.
        * Optimize on the soft predictions and decide the threshold afterwards.
        * Reuse code from the inference notebook.
    * Preprocessing would be worth adding.
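A minimal sketch of the soft-probability weighted averaging that nb037 is aiming at; the weights and threshold are placeholders to be optimized, and the model names in the comment are only examples.

```python
import numpy as np

def ensemble_masks(prob_maps, weights, threshold=0.5):
    """Weighted average of per-model sigmoid probabilities, thresholded at the end.

    prob_maps : list of arrays, each (3, H, W), already passed through sigmoid.
    weights   : one non-negative weight per model (normalized here).
    """
    weights = np.asarray(weights, dtype=np.float32)
    weights = weights / weights.sum()
    blended = sum(w * p for w, p in zip(weights, prob_maps))  # soft ensemble
    return (blended > threshold).astype(np.uint8)             # binarize afterwards

# e.g. predictions for one slice from three candidate models (placeholder arrays):
# mask = ensemble_masks([p_nb032, p_nb033, p_nb036], weights=[1.0, 1.0, 1.0])
```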
### 2022/07/16

**Solution notes**

* [3rd place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337468)
    * Referenced a [solution from another competition](https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/107981)
        * First extract the region of interest (Unet), then run the actual prediction.
    * And [yet another competition's solution](https://www.kaggle.com/competitions/rsna-str-pulmonary-embolism-detection/discussion/194145)
        * Use object detection to crop out only the target region as preprocessing.
    * Preprocessing: extract the main area with EfficientDet, size 256, 5 epochs (for the object detection, did they annotate the bboxes themselves?).
    * Cls part: decide whether the slice should be segmented at all.
    * Seg part: train the model that actually does the segmentation.
        * This saves resources.
* [2nd place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337400)
    * 2-stage pipeline
        * positive/negative detection (stage 1) -> segmentation (stage 2)
            * First predict whether a slice contains any segmentation at all, then move on to the next prediction.
    * Crop by YOLOv5
        * For training the 2.5D model (manual annotations for 200 cases).
        * Removes useless background.
        * The bright signal from the arms can break min-max scaling, and cropping prevents that.
    * Combination of 2.5D and 3D models.
* [8th place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337359)
    * Classification (whether an annotation exists) + segmentation.
    * Trained as a multi-task model.
        * The encoder's final layer outputs whether an annotation exists, alongside the usual segmentation prediction.
    * Ensemble of 2.5D and 3D models.
        * Also implements the deep supervision loss from the Unet++ paper.
* [23rd place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337327)
    * 2.5D images
        * Unet++, regnety-160
        * Uses mixup.
    * Deep supervision loss was very effective.
    * Mixup also helped.
    * Ensemble of 2.5D and 3D.
* [15th place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337326)
    * Says augmentation matters a great deal.
        * In this competition, lighter augmentation reportedly worked better.
    * Annotations are cut off toward the bottom of the scans, so predict that boundary and discard predictions outside it.
        * In the end this is also a strategy of identifying regions without annotations so the segmentation model can focus purely on segmentation.
* [1st place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337197)
    * Built three groups of models: CLS, SEG, and 3D.
        * CLS: predicts whether a slice contains anything to segment.
        * SEG and 3D do the segmentation.
        * The CLS models are applied after ensembling SEG and 3D.
    * Uses the MMSegmentation library instead of segmentation_models_pytorch.
    * CLS runs ordinary segmentation and marks a slice positive if the sum of the predictions is 12 or more (see the sketch at the end of these notes).
    * The SEG-group models use only the data that has annotations.
    * [Another competition](https://www.kaggle.com/c/siim-acr-pneumothorax-segmentation/discussion/108060)
        * Rather than a plain binary classification model, it apparently worked better to judge whether a slice contains an annotation from the pixel size of the predicted segmentation.
* [5th place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337268)
    * First build a 3D dataset, then create datasets sliced along the x, y, and z directions.
    * The ensemble gave a big jump in score.
* [10th place solution](https://www.kaggle.com/competitions/uw-madison-gi-tract-image-segmentation/discussion/337195)
    * Used nnU-Net.
    * Reportedly quite effective.
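A minimal sketch of the slice-level filtering idea quoted from the 1st place solution above; treating each organ channel separately and reusing the 12-pixel cut-off as a post-processing threshold are my assumptions, not the winners' exact code.

```python
import numpy as np

def filter_small_predictions(pred_mask, min_pixels=12):
    """Zero out a predicted slice mask when it has fewer than `min_pixels` positive
    pixels, mimicking the positive/negative slice call described in the 1st place
    solution.

    pred_mask : (3, H, W) binary array, one channel per organ.
    """
    out = pred_mask.copy()
    for c in range(out.shape[0]):
        if out[c].sum() < min_pixels:  # too few pixels -> treat the slice as negative
            out[c] = 0
    return out
```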