seanys/2D-Irregular-Packing-Algorithm

Error computing NFP

Naimad1CZ opened this issue · 3 comments

Hi, I'm creating my own online irregular-packing algorithm for my Bachelor's thesis and want to use your implementations as benchmarks for comparison.

However, when trying to compute the NFP in nfp_test.py using poly1=json.loads(df['polygon'][11]) and poly2=json.loads(df['polygon'][14]) from my crystals.csv dataset, it prints "出现相交区域" ("an intersecting region appeared"). This happens only with that combination of the two polygons; any other combination runs without error.
This is probably also why I get "没有计算出可行向量" ("no feasible vector was computed") and "出现相交区域" when running any of the packing algorithms on my dataset.

crystals.txt
(The file is .txt because GitHub doesn't allow .csv attachments.)

EDIT: It looks like removing those 2 polygons, plus this one: 2,"[[0.0, 0.0], [10.667, 55.333], [-33.333, 65.333], [-51.333, 58.0], [-66.667, 12.666], [-48.667, -10.0], [-38.667, -5.334], [-28.667, -8.0], [-12.0, 2.0]]", makes your algorithms run without errors.

Sorry for missing this issue. It probably happens because your polygons are small. You can set the parameter 'scale' to 20 or 50 to solve this problem. See tools/data.py:
    def getData(index):
        name=["ga","albano","blaz1","blaz2","dighe1","dighe2","fu","han","jakobs1","jakobs2","mao","marques","shapes","shirts","swim","trousers"]
        print("开始处理",name[index],"数据集")  # "start processing <name> dataset"
        scale=[100,0.5,100,100,20,20,20,10,20,20,0.5,20,50]
        print("缩放",scale[index],"倍")  # "scaling by <factor> times"
        df = pd.read_csv("data/"+name[index]+".csv")
        polygons=[]
        for i in range(0,df.shape[0]):
            for j in range(0,df['num'][i]):
                poly=json.loads(df['polygon'][i])
                GeoFunc.normData(poly,scale[index])
                polygons.append(poly)
        return polygons
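For example, a hypothetical usage sketch (assuming tools/ is importable from the repository root):

    # Hypothetical usage, assuming tools/ is importable from the repo root.
    from tools.data import getData

    polygons = getData(6)  # loads data/fu.csv; each polygon is scaled by 20
    print(len(polygons), "polygons loaded")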
Since computing positions will inevitably cause some numerical error, we treat point A and point B as the same if their distance is less than a very small number. This threshold works when the sizes of most of the polygons are larger than 100.
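For illustration, here is a minimal sketch of that kind of fixed-tolerance comparison (the names and the 1e-5 value are hypothetical, not the repository's actual API):

    import math

    # Hypothetical sketch of a fixed absolute tolerance for point equality.
    # For polygons larger than ~100 units, 1e-5 is negligible next to any
    # edge length; for tiny polygons, the same tolerance can merge
    # genuinely distinct vertices and corrupt the NFP computation.
    EPS = 1e-5

    def points_equal(a, b, eps=EPS):
        """Return True if points a and b are within eps of each other."""
        return math.hypot(a[0] - b[0], a[1] - b[1]) < eps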

Well, maybe it is because the polygons are "small", but in this case the error is present even if I scale the polygons to a width of around 1000.
Try the following piece of code in nfp_test.py:

def tryNFP():
    df = pd.read_csv("data/crystals.csv")

    poly1 = json.loads(df['polygon'][14])
    poly2 = json.loads(df['polygon'][11])
    GeoFunc.normData(poly1, 15)
    GeoFunc.normData(poly2, 15)
    GeoFunc.slidePoly(poly1, 100, 800)

    nfp = NFP(poly1, poly2, show=True)
    print(nfp.nfp)

Also change the size of the pyplot figure to at least 2500 × 1500 (in show.py, line 35) to be able to see both pieces.
You can see that the NFP is wrong.

But when you change the scale argument of normData to e.g. 20 (anything 17 or above should work), the generated NFP is correct.
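As a stopgap, one could rescale both polygons before calling NFP so their sizes exceed the ~100-unit range mentioned above. A rough sketch continuing the snippet from tryNFP (bbox_min_extent and safe_scale are my hypothetical helpers; it assumes normData(poly, k) multiplies coordinates by k in place, as getData suggests):

    # Hypothetical workaround sketch: choose one scale factor large enough
    # that both polygons end up bigger than ~100 units, where the fixed
    # tolerance is reported to be safe.
    def bbox_min_extent(poly):
        xs = [p[0] for p in poly]
        ys = [p[1] for p in poly]
        return min(max(xs) - min(xs), max(ys) - min(ys))

    def safe_scale(polys, target=100.0):
        smallest = min(bbox_min_extent(p) for p in polys)
        return target / smallest if smallest > 0 else 1.0

    k = safe_scale([poly1, poly2])
    GeoFunc.normData(poly1, k)  # scale in place by k (assumption)
    GeoFunc.normData(poly2, k)
    nfp = NFP(poly1, poly2, show=True)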

Yeah, we believe the size of the polygon is still the main reason. Frankly speaking, we already found this problem a year ago when running the NFP on some randomly generated polygons. However, when we set the scaling factor above a certain number (derived from experiments), the NFP always worked, so we continued our experiments and haven't dealt with the problem since.
Sorry that we cannot fix this in the short term (we have since changed our research fields), and thanks for the detailed issue. Contributions to this repository are warmly welcome.