giantcroc/featuretoolsOnSpark

KeyError while creating the tableset from a PySpark DataFrame

Opened this issue · 3 comments

Hi, I am trying to use featuretoolsOnSpark on my server. In a trial run I am implementing the example shown in the repo, and it throws a KeyError on calling "ts.table_from_dataframe".
I imported the CSV as a Spark DataFrame and checked the schema;
the column names match, but it still throws a "KeyError".

[Screenshot of the KeyError traceback]

I guess this bug comes from the line column = column.encode("utf-8"): encoding a column name that is already a str turns it into a bytes object.
In Python 3, a str object is not equal to a bytes object even when they contain the same characters, so a subsequent lookup by the str name fails.
I don't see why the column name has to be encoded to bytes at all.
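The diagnosis above can be reproduced in plain Python without Spark. This is only a sketch: the dict below stands in for whatever mapping the library keys by column name, not its actual lookup code.

```python
# Minimal reproduction of the str-vs-bytes mismatch described above.
# In Python 3, str.encode() returns a bytes object, which is a
# distinct dict key even when the characters are identical.
schema = {"user_id": "int"}         # keys are str, as they come from the DataFrame

column = "user_id"
encoded = column.encode("utf-8")    # b'user_id' -- now bytes, not str

assert column in schema             # str key: found
assert encoded not in schema        # bytes key: lookup fails

try:
    schema[encoded]
except KeyError as err:
    print("KeyError:", err)         # the same failure mode as table_from_dataframe

# A possible local workaround (assuming the library only needs the
# name as text) is to drop the .encode() call, or decode it back:
fixed = encoded.decode("utf-8")
assert schema[fixed] == "int"
```

The `.encode("utf-8")` call looks like a Python 2 leftover, where `str` and encoded bytes were the same type; under Python 3 it silently changes the key type.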

I'm having the same issue as described above. Did you figure out how to work around this bug?
