samelamin/spark-bigquery

BQ Table Schema - Maintaining Original Case of Spark DF Schema

Closed this issue · 8 comments

Hi Sam,

Is it possible to retain the original case for the generated BQ table schema? Currently all fields come out lowercase (customeraccountid) when the original is camelCase (customerAccountId), which makes the schema quite hard for some of the business people to read. Is there any way to retain the original casing?

Cheers
K
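
For context, a minimal sketch reproducing what is being described, assuming a local Spark session; the commented-out connector call and the table name are illustrative, not taken from the issue:

```scala
import org.apache.spark.sql.SparkSession

object SchemaCaseRepro {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("schema-case-repro")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // DataFrame with camelCase column names, as in the issue.
    val df = Seq((1L, "Acme")).toDF("customerAccountId", "customerName")
    df.printSchema()
    // root
    //  |-- customerAccountId: long (nullable = false)
    //  |-- customerName: string (nullable = true)

    // Writing via the connector (table name is illustrative) was producing a BQ
    // schema with all-lowercase field names: customeraccountid, customername.
    // import com.samelamin.spark.bigquery._
    // df.saveAsBigQueryTable("my-project:my_dataset.customers")

    spark.stop()
  }
}
```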

Cool, will try - just looking at the code, nothing jumps out as the place this explicitly happens. I assume it happens implicitly somewhere I can't see (the json4s library?) - could you give me a pointer?

fieldToJson - would that appear to be where this happens?

Cheers

Hey Sam, any hints as to why the schema casing is all lower case? Nothing obvious jumps out, though I may be missing the obvious too :)

Hey @kurtmaile, I believe it's here: https://github.com/samelamin/spark-bigquery/blob/master/src/main/scala/com/samelamin/spark/bigquery/converters/BigQueryAdapter.scala#L17
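
For anyone reading this later, the linked adapter normalizes column names before the BQ schema is built. A rough sketch of that style of transformation, plus a case-preserving variant, might look like the following; the object and method names here are illustrative, not the library's actual code:

```scala
import org.apache.spark.sql.types.StructType

object SchemaNameAdapter {
  // Roughly the behaviour reported above: replace characters BigQuery rejects
  // with underscores, then force everything to lower case.
  def lowerCaseName(name: String): String =
    name.replaceAll("\\W", "_").toLowerCase

  // Case-preserving variant: only replace invalid characters, keep the casing.
  def preserveCaseName(name: String): String =
    name.replaceAll("\\W", "_")

  // Apply a renaming function to every top-level field of a Spark schema.
  def adaptSchema(schema: StructType, rename: String => String): StructType =
    StructType(schema.fields.map(f => f.copy(name = rename(f.name))))
}
```

Note that this sketch only touches top-level fields; nested StructTypes would need the same treatment applied recursively.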

Give it a test locally after you do the PR to ensure it still saves to BQ :)

Oh cool, thanks - what was the reason you initially made this lower case, so that I don't break anything? Is there a BQ constraint to adhere to? Thanks for the tip ;)

I think BQ initially didn't allow uppercase field names, but that has since been fixed. Send a PR over when you're free :)
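
For reference, BigQuery's column naming rules allow mixed case but restrict names to letters, digits and underscores, not starting with a digit. A small check along those lines, written as a sketch rather than anything from the library, could look like this:

```scala
object BigQueryColumnNames {
  // BigQuery column names: letters, digits and underscores only,
  // and they must not start with a digit. Mixed case itself is allowed.
  private val Valid = "^[A-Za-z_][A-Za-z0-9_]*$".r

  def isValid(name: String): Boolean =
    Valid.pattern.matcher(name).matches()
}

// Example: BigQueryColumnNames.isValid("customerAccountId") returns true,
// while BigQueryColumnNames.isValid("customer-account-id") returns false.
```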

This has now been released in v0.1.7.