markwk/qs_ledger

interpreting apple watch timestamps

francescolosterzo opened this issue · 5 comments

Hi!

I am using this awesome work to look at my health data, and I have a question about how to interpret the timestamps/datetimes.

If I open the csv file I am interested in, I see that the creationDate of the first row is 2017-01-28 10:20:52 +0200, while if I read the file with pandas.read_csv(...) using the parse_dates argument, the corresponding value comes out as 2017-01-28 08:20:52.

The blog post says that the data contain UTC timestamps, and that's fine; my problem is more about the timezone info. This summer I spent August in the US (I live in Europe), but I don't see this reflected in the data. Here is what the creationDate of a row on August 8th looks like: 2019-08-08 02:03:20 +0200.

Shall I assume that the timestamp (i.e. the pure date & time, discarding the timezone info) is still UTC, and that I then have to figure out the daily time zone by myself? Or is something else going on?

@francescolosterzo If you check the apple_health_data_processor.ipynb, you'll see this line:

convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('Asia/Shanghai'))

That is where we set the tz, and the subsequent lines convert everything to the correct timezone. Check that to see if it works for your needs.

To be honest, that code is one of the older parts, so I'm not sure if it's up to date with any changes Apple has made to their export files and timestamps.
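
If it helps, this is roughly how it gets applied, a minimal sketch (not the exact notebook code), assuming read_csv/parse_dates gives you the naive UTC datetimes you described, and swapping 'Europe/Zurich' in for the notebook's 'Asia/Shanghai':

import pandas as pd
import pytz

# same idea as the notebook: treat the parsed datetime as UTC, then convert to local time
convert_tz = lambda x: x.to_pydatetime().replace(tzinfo=pytz.utc).astimezone(pytz.timezone('Europe/Zurich'))

df = pd.read_csv('HeartRate.csv', skipinitialspace=True, parse_dates=['creationDate'])
df['creationDate_local'] = df['creationDate'].apply(convert_tz)  # tz-aware local datetimes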

@markwk I am continuing this since now I see other things I would like to ask you and they still fall under the title of this issue.

  1. I see that heart rate points are not equally spaced in time: at night they are less frequent than during the day. I guess Apple detects the activity level and adapts the sampling rate (e.g. at night a point every few minutes is fine). Or is there something else going on here?

  2. following your example, I am using the startDate as a reference, and I see that the data are sometimes not ordered in time, i.e. when I plot my heart rate vs time with matplotlib.pyplot.plot I see the line going back and forth every now and then. Do you have any idea why this is happening? Is it safe to sort the rows of the dataframe by startDate, or will that mess things up?

@francescolosterzo you didn't share your code, so I'm only guessing, but it sounds like you didn't convert the dates to datetime or process them like the other examples.

I went ahead and added a quick code example showing and plotting HR in that notebook. There is still a small issue with the x-axis labels or ticks, but you should get the idea now.
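
It's along these lines, roughly (a sketch, not the exact notebook code; it assumes the HeartRate.csv export with startDate and value columns):

import pandas as pd
import matplotlib.pyplot as plt

hr = pd.read_csv('HeartRate.csv', skipinitialspace=True, parse_dates=['startDate'])
hr = hr.sort_values('startDate')  # guard against out-of-order rows
plt.plot(hr['startDate'], hr['value'])
plt.xlabel('time')
plt.ylabel('heart rate (bpm)')
plt.show()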

Hi @markwk
you are right, here is what I do:

  1. load the data, parsing startDate as datetime (as noted above, parse_dates gives the naive UTC time)
import datetime
import pandas as pd

df = pd.read_csv('HeartRate.csv', skipinitialspace=True, parse_dates=['startDate'])
  2. add a timezone column based on when I went to the US
chunkList = [
    (datetime.datetime(2019,7,24), datetime.datetime(2019,8,3,12), 'Europe/Zurich'),
    (datetime.datetime(2019,8,3,12,1), datetime.datetime(2019,8,13,12), 'America/Detroit'),
    (datetime.datetime(2019,8,13,12,1), datetime.datetime(2019,9,26), 'Europe/Zurich'),
]

dfList = []
for entry in chunkList:
    startDate, stopDate, tz = entry

    this_df = df[ (df['startDate'] >= startDate) & (df['startDate'] <= stopDate) ]

    this_df = this_df.assign(timezone = tz)

    dfList.append(this_df)

df = pd.concat(dfList, ignore_index=True)
  3. assign the timezone and get a timezone-naive local timestamp (to make things smoother)
df['timestamp'] = df.groupby('timezone')['startDate'].apply(lambda x: x.dt.tz_localize('UTC').dt.tz_convert(x.name).dt.tz_localize(None))

This might cause problems on the days when I actually changed time zones, but that is not the issue right now.

The real problem I was referring to is that if I plot the HR vs timestamp for a single night, I get points flipped in time every now and then. Here is an example:

df[ (df.startDate >= datetime.datetime(2019,7,25,1)) & (df.startDate <= datetime.datetime(2019,7,25,2)) ][['startDate', 'timestamp', 'value']].values

array([ ...
       [Timestamp('2019-07-25 01:19:25'), 1564017565, 69.0],
       [Timestamp('2019-07-25 01:26:50'), 1564018010, 69.0],
       [Timestamp('2019-07-25 01:31:05'), 1564018265, 72.4528],
       [Timestamp('2019-07-25 01:30:15'), 1564018215, 66.0],
       [Timestamp('2019-07-25 01:34:12'), 1564018452, 68.0],
       [Timestamp('2019-07-25 01:43:51'), 1564019031, 68.0],
       ...], dtype=object)

As you can see above, the 3rd and the 4th entries are flipped in time. This happens a few times each night.

So I was wondering if you have any idea about why this happens?

@francescolosterzo Sorry, I haven't seen this error in my data, and I'm not sure why it's appearing for you. You should be able to do a new sort on the data to correct it without any issues.
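
Something like this should be enough (a quick sketch against the df from your snippet):

df = df.sort_values('startDate').reset_index(drop=True)  # the rows evidently aren't strictly ordered, so re-sort before plotting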