xuelunshen/gim

Different performance between online and offline modes

zgdjcls opened this issue · 28 comments

Hi, I was impressed by the precision of gim when I ran the test on Hugging Face, but when I tested the same images on my machine using the provided model weights, I found a huge performance difference: the offline version couldn't return a correct result.
I chose the following settings:

model: gim
match threshold: 0.1
max features: 1000
keypoints threshold: 0.015

RANSAC method: USAC_DEFAULT
Ransac Reproj threshold: 8
Ransac Confidence: 0.99999
Ransac Iterations: 10000

Reconstruct Geometry: Homography

I used the same images for both versions and changed the RANSAC parameters in demo.py (lines 18-22) to match the online version. I couldn't find where to change the matching settings, so I didn't touch them.
Both the matches and the warped pairs are worse than the online version, but I don't know where to fix this.
Online version:
Screenshot from 2024-06-03 19-44-25

Offline version:
Screenshot from 2024-06-03 19-51-30
Screenshot from 2024-06-03 19-51-37

Congratulations, and thanks for your work! Same question: is the online version's model the best one, GIM_DKM, while the released model is GIM_LightGlue?

If that's the case, will you release the GIM_DKM pre-trained model, and when?

I downloaded the provided GIM_DKM weights, but I ran into this problem even when using those weights.

I'll try it and reply back.

During my testing, some image pairs produced good results with the DKM weights, so I suspect the problem depends on the specific image pair. I'll test more pairs and check the success rate.

From my testing, the offline model hits this problem mostly on rotated images. I tested 28 image pairs in each condition. Here are the success counts:

origin: 27/28
highlight: 27/28
perspective: 26/28
rotate: 22/28

I ran all the failed cases on the online model and all of them produced good results. However, for some failed pairs I got two completely different results across two runs. I'm not sure whether this is related to the environment settings.
Here is the difference between my environment and the one in the guideline:

torch                     2.3.0+cu118              pypi_0    pypi
torchaudio                2.3.0+cu118              pypi_0    pypi
torchvision               0.18.0+cu118   

@zgdjcls


The online model and the network weights on GitHub are the same file. Logically, the performance should be identical. What I can think of is that perhaps the robust fitting after image matching is different. Actually, you can also view the source code of the online demo on HuggingFace. You might want to check if there are any differences between the source code of the online demo and the code on my GitHub.

Thank you! Regarding which model the online demo uses: the "gim" entry in its model dropdown is gim_dkm_100h. The other methods in the dropdown are the models from their original papers, provided so everyone can compare results. gim_lightglue_100h is not deployed in the online demo; you need to clone this repository to try gim_lightglue_100h.

@zgdjcls


Bro, if you find gim useful, don't forget to give it a 🌟 star! 🤝


Already starred 🌟
Which file exactly contains the online version's DKM robust fitting (gim-online/common/utils.py?)? My local implementation is:

_, mask = cv2.findFundamentalMat(kpts0.cpu().detach().numpy(),
                                     kpts1.cpu().detach().numpy(),
                                     cv2.USAC_DEFAULT, ransacReprojThreshold=8.0,
                                     confidence=0.99999, maxIters=10000)

For the online version, my settings on the web page were:

RANSAC method: USAC_DEFAULT
Ransac Reproj threshold: 8
Ransac Confidence: 0.99999
Ransac Iterations: 10000

Reconstruct Geometry: Homography

Can I assume the parameters are the same?
Finally, where should I set the matcher parameters for the local version? In gim-online/hloc/match_dense the defaults are

"model": {
            "name": "gim",
            "weights": "gim_dkm_100h.ckpt",
            "max_keypoints": 4096,
            "match_threshold": 0.2,
        },

These differ from the web version's initial settings.
Thanks 🙇‍♂
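
For reference, one way to override those defaults locally is a minimal sketch along the lines of the matcher_zoo / get_model pattern that also appears in the test script shared later in this thread; the values below are simply the ones from the web UI, not an official recommendation:

from common.utils import get_model, matcher_zoo

match_conf = matcher_zoo["gim"]["config"]
match_conf["model"]["match_threshold"] = 0.1   # "match threshold" from the web UI
match_conf["model"]["max_keypoints"] = 1000    # "max features" from the web UI
matcher = get_model(match_conf)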

@zgdjcls

It seems the image matching part is also a bit different:
https://huggingface.co/spaces/xuelunshen/gim-online/blob/main/hloc/matchers/gim.py
In the online model I added some padding operations, which the GitHub code here does not have.

Also, the online model's robust fitting is probably here:
https://huggingface.co/spaces/xuelunshen/gim-online/blob/8b53ab6e398081d33fab94ca6b57c50318affbef/common/utils.py#L187
It estimates a Homography directly.
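
For comparison, a minimal sketch of homography-based robust fitting with the web-UI parameters, assuming mkpts0 / mkpts1 are the matched keypoints as float32 NumPy arrays; it mirrors the idea of the linked common/utils.py call rather than copying it:

import cv2

# Map image1 keypoints onto image0 with a robustly fitted homography.
H, inlier_mask = cv2.findHomography(
    mkpts1, mkpts0,
    method=cv2.USAC_DEFAULT,      # "RANSAC method"
    ransacReprojThreshold=8.0,    # "Ransac Reproj threshold"
    confidence=0.99999,           # "Ransac Confidence"
    maxIters=10000,               # "Ransac Iterations"
)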

Thanks, I'll read through it carefully.

You're welcome 🤝

Hi, I have a very basic question: does the gim-dkm model require a huge amount of GPU memory? Which module consumes the most, and is there room for optimization? I tried both an 8 GB and a 12 GB GPU and ran out of memory (OOM) on both. With the default parameter settings, how much GPU memory is needed to run it? Are there any parameters that can be tuned to reduce the memory footprint?
I also tried the DKM model on its own, and it likewise needs a lot of GPU memory. I had to reduce the model input size to 128x128 to run it on the 12 GB card, but the results were a mess. Do these models inherently require this much GPU memory?

Quick question: are you running this on a GPU? How much GPU memory does gim-dkm need with the default parameters? The largest card I have here is 12 GB, and it keeps going OOM.

It needed about 16 GB when I ran it.

Thanks. That really is a bit too resource-hungry; it seems a lot of engineering work is still needed before practical deployment.

I made some more adjustments:
[screenshot of the adjusted h and w input-size settings]
With these it runs on an RTX 3070 Ti: GPU memory usage peaks at a bit over 7 GB, and image loading, preprocessing, and matching together take about 0.8 s. Raising the input size to 1024x1024 needs roughly 10+ GB of memory, so it is still quite resource-hungry.
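
As a side note, peak GPU memory for a single matching call can be measured with the standard torch.cuda utilities; run_matching below is just a placeholder for whatever gim-dkm inference call you use:

import torch

torch.cuda.reset_peak_memory_stats()
_ = run_matching(image0, image1)  # placeholder for your gim-dkm inference call
peak_gb = torch.cuda.max_memory_allocated() / 1024 ** 3
print(f"peak GPU memory: {peak_gb:.2f} GB")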

Our gim gives existing image matching networks much stronger generalization, but it does not change the original model's memory footprint. DKM itself already consumes a lot of GPU memory, and gim cannot reduce that. During inference, the most direct factor affecting memory usage is the input image size; as you said yourself, the only option is to shrink the input images. Alternatively, you can set upsample_preds to False in the code (do a global search for that variable).
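
A minimal sketch of that second option; where exactly upsample_preds lives depends on the gim/DKM code you have checked out, so load_gim_dkm below is a hypothetical loader and the attribute should be located with the global search suggested above:

model = load_gim_dkm("gim_dkm_100h.ckpt")  # hypothetical: however you construct the gim-dkm model
model.upsample_preds = False               # disable the upsampled prediction pass to save GPU memory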

I only noticed after posting my reply that you had already set upsample_preds to False 👍

@letmejoin @zgdjcls

[screenshot: the h and w settings from the comment above]

These h and w are best set to a proportional scaling of 480 × 640 (height × width), because the network was trained with that image aspect ratio. Using an aspect ratio at inference time that differs from training may hurt performance. This is also why, as I mentioned earlier, I pad the images in the HuggingFace online model. In practice it depends on how it behaves for you; if it already meets your business needs as is, don't worry about it.
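
A hedged sketch of one way to follow this advice: scale the image so it fits a 480 × 640 (height × width) canvas without changing its aspect ratio, then pad the remainder, roughly in the spirit of the padding applied in the online demo (not a copy of that code):

import cv2
import numpy as np

def resize_and_pad(img, target_hw=(480, 640)):
    # Proportionally resize a BGR image to fit target_hw (h, w), then zero-pad the rest.
    th, tw = target_hw
    h, w = img.shape[:2]
    scale = min(th / h, tw / w)                      # keep the original aspect ratio
    nh, nw = int(round(h * scale)), int(round(w * scale))
    resized = cv2.resize(img, (nw, nh))              # cv2.resize takes (width, height)
    canvas = np.zeros((th, tw, 3), dtype=img.dtype)  # black padding at the bottom/right
    canvas[:nh, :nw] = resized
    return canvas, scale                             # scale is needed to map keypoints back

Keep the returned scale (and the fact that padding sits at the bottom/right) if you need to map matched keypoints back to the original image coordinates.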

Thanks. For now my initial use case is fairly simple and the original images are square anyway; if the performance isn't good enough later, I'll adjust to this ratio.

Hello, have you managed to reproduce the online demo's results on your side?

wintercat1994

After using the online version, the results are indeed consistent with the web version.

Thanks, I'd like to ask about some details: when you use the online version, do you run the online version's code directly on your server, or do you replace the padding and robust fitting in the GitHub code with the online version's? Thank you very much for your reply.

@wintercat1994 When I have time later I will align the GitHub results with the HuggingFace version, but you'll have to wait a bit.

At the time I downloaded his online version and wrote my own test script, placed directly in the root directory (the same folder as app.py). You can try whether it runs; the code was written quite a while ago.

import os
import random

import matplotlib.pyplot as plt
import numpy as np
import torch
from itertools import combinations
import cv2
import gradio as gr
from pathlib import Path
import time
import pandas

from hloc import match_dense, match_features, extract_features
from common.utils import get_model, matcher_zoo, ransac_zoo


device = "cuda" if torch.cuda.is_available() else "cpu"
DEFAULT_SETTING_THRESHOLD = 0.1
DEFAULT_SETTING_MAX_FEATURES = 4096
DEFAULT_DEFAULT_KEYPOINT_THRESHOLD = 0.01
DEFAULT_ENABLE_RANSAC = True
DEFAULT_RANSAC_METHOD = "USAC_DEFAULT"
DEFAULT_RANSAC_REPROJ_THRESHOLD = 8
DEFAULT_RANSAC_CONFIDENCE = 0.999
DEFAULT_RANSAC_MAX_ITER = 10000
DEFAULT_MIN_NUM_MATCHES = 4
DEFAULT_MATCHING_THRESHOLD = 0.2
DEFAULT_SETTING_GEOMETRY = "Homography"

def intersection_over_union(mask_0, mask_1):
    intersection = np.logical_and(mask_0, mask_1)
    union = np.logical_or(mask_0, mask_1)
    iou = np.sum(intersection) / np.sum(union)
    return iou
def get_matcher(
    match_threshold,
    extract_max_keypoints,
    key
):
    model = matcher_zoo[key]
    match_conf = model["config"]
    # update match config
    match_conf["model"]["match_threshold"] = match_threshold
    match_conf["model"]["max_keypoints"] = extract_max_keypoints
    matcher = get_model(match_conf)
    return matcher, match_conf
def match(image0,
    image1,
    matcher,
    match_conf,
    ransac_method=DEFAULT_RANSAC_METHOD,
    ransac_reproj_threshold=DEFAULT_RANSAC_REPROJ_THRESHOLD,
    ransac_confidence=DEFAULT_RANSAC_CONFIDENCE,
    ransac_max_iter=DEFAULT_RANSAC_MAX_ITER,
):
    pred = match_dense.match_images(
        matcher, image0, image1, match_conf["preprocessing"], device=device
    )
    mkpts0 = pred["keypoints0_orig"]
    mkpts1 = pred["keypoints1_orig"]
    # Estimate a homography that maps image1 keypoints onto image0 (the same
    # robust fitting the online demo applies) using the configured RANSAC settings.
    H, _ = cv2.findHomography(
        mkpts1,
        mkpts0,
        method=ransac_zoo[ransac_method],
        ransacReprojThreshold=ransac_reproj_threshold,
        confidence=ransac_confidence,
        maxIters=ransac_max_iter,
    )
    new_height = pred["image1_orig"].shape[0]
    new_width = pred["image1_orig"].shape[1]
    pts = np.float32(
        [[0, 0], [0, new_height - 1], [new_width - 1, new_height - 1], [new_width - 1, 0]]).reshape(-1, 1, 2)
    dst = cv2.perspectiveTransform(pts, H)
    mask = np.zeros(pred["image0_orig"].shape[:2], dtype=np.uint8)
    cv2.fillPoly(mask, [np.int32(dst)], 1)
    mask = np.uint8(np.float32(mask) * 255)
    wrap = cv2.warpPerspective(
        pred["image1_orig"], H, (pred["image0_orig"].shape[1], pred["image0_orig"].shape[0])
    )
    return mask, wrap
def demo1():
    match_threshold = 0.1
    extract_max_keypoints = 1000
    key = "gim"
    matcher, match_conf = get_matcher(match_threshold, extract_max_keypoints, key)
    env = 'origin'
    name = 'img name'
    type = 'front'
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    img_path0 = f'path to your img'
    img_path1 = f'path to your img'
    image0 = cv2.imread(str(img_path0), cv2.IMREAD_COLOR)
    image1 = cv2.imread(str(img_path1), cv2.IMREAD_COLOR)
    image1 = cv2.resize(image1, (384, 208))
    plt.imshow(image0)
    plt.imshow(image1)
    mask, wrap = match(image0, image1, matcher, match_conf)
    # Make sure the output directory exists before writing the results.
    os.makedirs(f'logs/{name}', exist_ok=True)
    cv2.imwrite(f'logs/{name}/{type}_mask_2_resize.png', mask)
    cv2.imwrite(f'logs/{name}/{type}_wrap_2_resize.png', wrap)

if __name__ == '__main__':
    demo1()


I'll look into it. Thank you very much for sharing!

@zgdjcls @letmejoin @wintercat1994
I have now updated the code here so that it behaves the same as the HuggingFace online model. Please give it a try, and if you run into any problems, feel free to keep asking questions here.