Sensor can publish dive data to firebase app
Closed this issue · 8 comments
See project-hermes/hermes-firebase#15
3 publish topics from the sensor, plus 3 functions on the sensor for Firebase responses:
- Sensor: publish
createDive(char format, int sensorDiveId, float latStart, float longStart, float latEnd, float longEnd, int sampleCount, timestamp epochStart, timestamp epochEnd)
- Firebase generates a new Dive document with ID firebaseDiveId (a hash of the properties, so a repeated createDive is tolerated? If not, a repeat gives us two partial dives and a retry)
- Firebase: poke sensor with
readyToReceive(string firebaseDiveId, int sensorDiveId)
- Sensor: loop publish
diveAppend(char format, int sensorDiveId, timestamp firstDataTime, up to 41 x 6-byte data points (depth, temp1, temp2))
- Firebase determines firebaseDiveId of "active" dive to which to append via one of three options:
- stores both sensorDiveId and firebaseDiveId as "lastDiveId_sensor" and "lastDiveId_firebase" on the Sensor object, which it Gets
- queries Dives that are children of this Sensor for most recent LastUpdatedAt or LastCreatedAt, uses that one (and still verifies sensorDiveId match); this avoids storing state on the Sensor
- change the message format above so the sensor sends the full firebaseDiveId in every packet (high overhead)
- Sensor: with tuned backoff, publish
diveDone(char format, int sensorDiveId, string firebaseDiveId)
- Firebase: count the Data collection under the active Dive; if it equals Dive.sampleCount, poke the sensor with
diveComplete(string firebaseDiveId, int sensorDiveId)
- (non-response means "not done")
- Firebase: if it ever errors, log the error info under an errorId and poke the sensor with stopSending(string firebaseDiveId, int sensorDiveId, string errorId)
- if the sensor receives enough of these, or otherwise has to resend 3 times:
- Sensor: emergency REST dump dive to raw-dive-dump endpoint for manual debugging and data recovery
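A sizing note on the loop above: each diveAppend carries at most 41 data points, so the number of append publishes for a dive is just a ceiling division over its sampleCount. A minimal Go sketch (the helper name is ours, not part of the protocol):

```go
package main

import "fmt"

// maxSamplesPerAppend is the 41-data-point limit of one diveAppend packet.
const maxSamplesPerAppend = 41

// appendPacketsNeeded returns how many diveAppend publishes a dive of
// sampleCount data points requires (ceiling division).
func appendPacketsNeeded(sampleCount int) int {
	return (sampleCount + maxSamplesPerAppend - 1) / maxSamplesPerAppend
}

func main() {
	fmt.Println(appendPacketsNeeded(30))  // the 30-sample example dive: 1 packet
	fmt.Println(appendPacketsNeeded(100)) // 3 packets
}
```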
format 3:
createDive:
- sprintf(buffer, "%d %d %f %f %f %f %d %d %d", {fields below})
- byte: 3
- byte: sensorDiveId
- float: latStart
- float: longStart
- float: latEnd
- float: longEnd
- int: sampleCount
- int32: timeStart
- int32: timeEnd
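Since createDive is plain text, the format-3 packing is a single formatted print; with the example dive's values it reproduces the sample createDive message shown further down. A Go sketch (the function name is ours):

```go
package main

import "fmt"

// encodeCreateDive renders a format-3 createDive message as the
// space-separated text described above (%f prints six decimal places).
func encodeCreateDive(sensorDiveId int, latStart, longStart, latEnd, longEnd float64,
	sampleCount int, epochStart, epochEnd int64) string {
	return fmt.Sprintf("%d %d %f %f %f %f %d %d %d",
		3, sensorDiveId, latStart, longStart, latEnd, longEnd,
		sampleCount, epochStart, epochEnd)
}

func main() {
	msg := encodeCreateDive(42, 37.939722, 22.396944, 17.316, -87.5351,
		30, 1446418800, 1446418830)
	fmt.Println(msg)
	// 3 42 37.939722 22.396944 17.316000 -87.535100 30 1446418800 1446418830
}
```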
append:
- byte[1] = 3
- byte[1] = (char)sensorDiveId
- byte[4] = timestamp (unix epoch, seconds)
- byte[1] = (char)sampleCount // can only fit 41; count is for this update, not whole dive
- byte[sampleCount][6] = dataPoint[sampleCount]
dataPoint:
- byte[2] = (uint16)depth // cm, good to 327m, negative values fail sanity check
- byte[2] = (uint16)floor((temp1+50)*200) // precision 0.005, range -50 to +277
- byte[2] = (uint16)floor((temp2+50)*200) // precision 0.005, range -50 to +277
(All multi-byte integers are unsigned and written in little-endian order)
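The binary append format above can be sketched in Go (type and function names are ours; the sensor firmware would do the equivalent in C):

```go
package main

import (
	"encoding/binary"
	"fmt"
	"math"
)

// Sample is one data point before packing: depth in cm, temperatures in °C.
type Sample struct {
	DepthCm      uint16
	Temp1, Temp2 float64
}

// encodeTemp packs a temperature as floor((t+50)*200), per the dataPoint
// spec: 0.005 resolution over roughly -50 to +277.
func encodeTemp(t float64) uint16 {
	return uint16(math.Floor((t + 50) * 200))
}

// encodeAppend builds a format-3 append packet: format byte, sensorDiveId,
// little-endian 32-bit epoch seconds, per-packet sample count, then six
// bytes per sample (all multi-byte integers unsigned little-endian).
func encodeAppend(sensorDiveId byte, epoch uint32, samples []Sample) []byte {
	buf := []byte{3, sensorDiveId}
	buf = binary.LittleEndian.AppendUint32(buf, epoch)
	buf = append(buf, byte(len(samples)))
	for _, s := range samples {
		buf = binary.LittleEndian.AppendUint16(buf, s.DepthCm)
		buf = binary.LittleEndian.AppendUint16(buf, encodeTemp(s.Temp1))
		buf = binary.LittleEndian.AppendUint16(buf, encodeTemp(s.Temp2))
	}
	return buf
}

func main() {
	// First sample of the example dive below: depth 10 cm, 0.0 °C, 1.5 °C.
	pkt := encodeAppend(42, 1446418800, []Sample{{10, 0.0, 1.5}})
	fmt.Println(pkt) // [3 42 112 153 54 86 1 10 0 16 39 60 40]
}
```

The first 13 bytes match the start of the example packet further down (header plus one data point).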
😮 this might take a second to look through
@brendanwalters How big should the hash be?
NVM
@brendanwalters Is that the index of the byte or the size of the array?
All the numbers are intended as array sizes; not sure which hash you mean, but I think everything with a format-specific size has been specified...
Example messages:
createDive:
3 42 37.939722 22.396944 17.316000 -87.535100 30 1446418800 1446418830
diveAppend (base64 encoded because of non-ASCII characters):
AypwmTZWHgoAECc8KB4AaCl0JzIAwCusJkYAGC7kJVoAcDAcJW4AyDJUJIIAIDWMI5YAeDfEIqoA0Dn8Ib4AKDw0IdIAgD5sIOYA2ECkH_oAMEPcHg4BiEUUHiIB4EdMHSIBOEqEHA4BkEy8G_oA6E70GuYAQFEsGtIAmFNkGb4A8FWcGKoASFjUF5YAoFoMF4IA-FxEFm4AUF98FVoAqGG0FEYAAGTsEzIAWGYkEx4AsGhcEgoACGuUEQ==
(byte values for the above):
[3 42 112 153 54 86 30 10 0 16 39 60 40 30 0 104 41 116 39 50 0 192 43 172 38 70 0 24 46 228 37 90 0 112 48 28 37 110 0 200 50 84 36 130 0 32 53 140 35 150 0 120 55 196 34 170 0 208 57 252 33 190 0 40 60 52 33 210 0 128 62 108 32 230 0 216 64 164 31 250 0 48 67 220 30 14 1 136 69 20 30 34 1 224 71 76 29 34 1 56 74 132 28 14 1 144 76 188 27 250 0 232 78 244 26 230 0 64 81 44 26 210 0 152 83 100 25 190 0 240 85 156 24 170 0 72 88 212 23 150 0 160 90 12 23 130 0 248 92 68 22 110 0 80 95 124 21 90 0 168 97 180 20 70 0 0 100 236 19 50 0 88 102 36 19 30 0 176 104 92 18 10 0 8 107 148 17]
^ Dive data:
- 30 samples,
- starts at 1446418800 (the createDive epochStart above)
- goes from depth 10 to 290 (2 samples at 290), then back to 10, moving 20cm at a time
- temp1 starts at 0 and goes up 3 degrees per sample
- temp2 starts at 1.5 and goes down 1 degree per sample
10 0.0 1.5 => [10 10000 10300]
30 3.0 0.5 => [30 10600 10100]
50 6.0 -0.5 => [50 11200 9900]
70 9.0 -1.5 => [70 11800 9700]
90 12.0 -2.5 => [90 12400 9500]
110 15.0 -3.5 => [110 13000 9300]
130 18.0 -4.5 => [130 13600 9100]
150 21.0 -5.5 => [150 14200 8900]
170 24.0 -6.5 => [170 14800 8700]
190 27.0 -7.5 => [190 15400 8500]
210 30.0 -8.5 => [210 16000 8300]
230 33.0 -9.5 => [230 16600 8100]
250 36.0 -10.5 => [250 17200 7900]
270 39.0 -11.5 => [270 17800 7700]
290 42.0 -12.5 => [290 18400 7500]
290 45.0 -13.5 => [290 19000 7300]
270 48.0 -14.5 => [270 19600 7100]
250 51.0 -15.5 => [250 20200 6900]
230 54.0 -16.5 => [230 20800 6700]
210 57.0 -17.5 => [210 21400 6500]
190 60.0 -18.5 => [190 22000 6300]
170 63.0 -19.5 => [170 22600 6100]
150 66.0 -20.5 => [150 23200 5900]
130 69.0 -21.5 => [130 23800 5700]
110 72.0 -22.5 => [110 24400 5500]
90 75.0 -23.5 => [90 25000 5300]
70 78.0 -24.5 => [70 25600 5100]
50 81.0 -25.5 => [50 26200 4900]
30 84.0 -26.5 => [30 26800 4700]
10 87.0 -27.5 => [10 27400 4500]
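The right-hand column of the table reverses cleanly: depth is the raw uint16 in cm, and each temperature is value/200 - 50. A small Go sketch of the decode side (function names are ours):

```go
package main

import (
	"encoding/binary"
	"fmt"
)

// decodeTemp reverses the floor((t+50)*200) packing; resolution is 0.005 °C.
func decodeTemp(v uint16) float64 {
	return float64(v)/200.0 - 50.0
}

// decodeDataPoint unpacks one 6-byte dataPoint (little-endian uint16s).
func decodeDataPoint(b []byte) (depthCm uint16, temp1, temp2 float64) {
	depthCm = binary.LittleEndian.Uint16(b[0:2])
	temp1 = decodeTemp(binary.LittleEndian.Uint16(b[2:4]))
	temp2 = decodeTemp(binary.LittleEndian.Uint16(b[4:6]))
	return
}

func main() {
	// First dataPoint of the example packet: [10 0 16 39 60 40].
	d, t1, t2 := decodeDataPoint([]byte{10, 0, 16, 39, 60, 40})
	fmt.Println(d, t1, t2) // 10 0 1.5
}
```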
A quick go script to generate the above test data (or edit to get other test data):
https://play.golang.org/p/NxojIMaSr4E
Correction: the data will also have to be base85 encoded, because contrary to some of the documentation / help threads, Particle.publish() does NOT support all char values; it fails on at least 0x00 and some high char values. Verified in tests.
See
https://community.particle.io/t/how-to-publish-binary-data/36142/2
https://community.particle.io/t/utf-8-ascii-encoding-decoding-solved/25815/5