PaddlePaddle/PaddleSeg

Model deployment with API-based training code

nwjun opened this issue · 0 comments

Search before asking

  • I have searched for this question and found no related answer.

Please ask your question

I've successfully trained a PPLiteSeg model with an STDC2 backbone using the PaddleSeg API, following the official API example. However, I'm having trouble deploying the model, since most deployment examples in the documentation assume a YAML configuration file rather than API-based training code.

Questions:

  1. Is there a way to deploy a model trained with API-based code without converting it to a YAML configuration? (I've included a rough sketch of what I mean right after this list.)
  2. If conversion to YAML is necessary, what is the most efficient way to convert my API-based training code into a YAML configuration file for model export and deployment?
  3. Is there an official YAML config file for PPLiteSeg with STDC2 backbone that uses the same default values as in the API example? I couldn't find a similar one to use as a starting point.
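
To make question 1 more concrete, this is roughly the export step I was hoping to run without any YAML file, using plain paddle.jit.to_static / paddle.jit.save. The checkpoint path and the input shape are only my guesses based on the training code below, so please treat it as a sketch of what I mean rather than something that works:

import paddle

from paddleseg.models import PPLiteSeg
from paddleseg.models.backbones import STDC2

# Rebuild the same network used for training and load the trained weights.
# I am assuming train() leaves the best checkpoint at this path.
model = PPLiteSeg(num_classes=2, backbone=STDC2())
state_dict = paddle.load("output/initial_test/best_model/model.pdparams")
model.set_state_dict(state_dict)
model.eval()

# Convert to a static graph and save an inference model. The NCHW shape
# assumes my resize target (1960, 1280) means width 1960 and height 1280
# (this is a guess on my part).
input_spec = [paddle.static.InputSpec(shape=[None, 3, 1280, 1960], dtype="float32")]
static_model = paddle.jit.to_static(model, input_spec=input_spec)
paddle.jit.save(static_model, "output/initial_test/inference/model")

If there is a recommended way to get from train()'s output to an inference model without going through the config-based export script, that is exactly what I am asking about in question 1. I also don't know whether the config-based export adds extra output post-processing that this sketch would miss.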

Current API-based training code:

import os

import paddle
import paddleseg.transforms as T
from paddleseg.core import train
from paddleseg.datasets import Dataset
from paddleseg.models import PPLiteSeg
from paddleseg.models.backbones import STDC2
from paddleseg.models.losses import OhemCrossEntropyLoss

folder = "initial_test"
root = os.path.join("data", folder)

# PP-LiteSeg with an STDC2 backbone, initialized from PaddleSeg's pretrained STDC2 weights.
model = PPLiteSeg(
    num_classes=2,
    backbone=STDC2(
        pretrained="https://bj.bcebos.com/paddleseg/dygraph/PP_STDCNet2.tar.gz"
    ),
)

# Same resize and normalization for both the train and val splits.
size = (1960, 1280)
T_size = T.Resize(target_size=size)
train_transforms = [
    T_size,
    T.Normalize(),
]

train_dataset = Dataset(
    dataset_root=root,
    transforms=train_transforms,
    mode="train",
    num_classes=2,
    train_path=os.path.join(root, "train.txt"),
)
val_transforms = [T_size, T.Normalize()]

val_dataset = Dataset(
    dataset_root=root,
    transforms=val_transforms,
    mode="val",
    num_classes=2,
    val_path=os.path.join(root, "val.txt"),
)

# Polynomial learning-rate decay with a momentum (SGD) optimizer.
base_lr = 0.01
lr = paddle.optimizer.lr.PolynomialDecay(base_lr, power=0.9, decay_steps=1000, end_lr=0)
optimizer = paddle.optimizer.Momentum(
    lr, parameters=model.parameters(), momentum=0.9, weight_decay=4.0e-5
)

# PP-LiteSeg returns three logits during training, so three loss terms
# (with equal weights) are needed.
losses = {}
losses["types"] = [OhemCrossEntropyLoss(min_kept=200000)] * 3
losses["coef"] = [1] * 3

# Launch training with PaddleSeg's high-level training loop.
train(
    model=model,
    train_dataset=train_dataset,
    val_dataset=val_dataset,
    optimizer=optimizer,
    save_dir=os.path.join("output", folder),
    iters=2000,
    batch_size=1,
    save_interval=10,
    log_iters=10,
    num_workers=4,
    losses=losses,
    use_vdl=True,
)
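
For completeness, this is roughly how I imagined running the exported model afterwards with the Paddle Inference API. The file names assume paddle.jit.save produces model.pdmodel and model.pdiparams; again, this is only a sketch of the deployment I am aiming for, not something I have working:

import numpy as np
import paddle.inference as paddle_infer

# Load the exported static-graph model (paths follow the export sketch above).
config = paddle_infer.Config(
    "output/initial_test/inference/model.pdmodel",
    "output/initial_test/inference/model.pdiparams",
)
predictor = paddle_infer.create_predictor(config)

# Feed one preprocessed image (same resize/normalize as the val transforms).
input_handle = predictor.get_input_handle(predictor.get_input_names()[0])
dummy = np.random.rand(1, 3, 1280, 1960).astype("float32")
input_handle.copy_from_cpu(dummy)

predictor.run()

# Expecting per-class logits; argmax over the class axis would give the mask.
output_handle = predictor.get_output_handle(predictor.get_output_names()[0])
logits = output_handle.copy_to_cpu()
print(logits.shape)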

Any guidance on converting this API-based configuration to a YAML file or finding a suitable starting point would be greatly appreciated. Thank you!