Just some general questions
wiesener opened this issue · 23 comments
- You defined the radio channel with: {0xDE, 0xAD, 0xBE, 0xEF, 0x19}?
Is there a particular reason for it? How many devices can be synchronized?
- The example code is a central and a peripheral. Is it possible to synchronize two or more peripherals without a central? Sure, one of them needs to be the sync master.
Okay, I tested it just now, and it seems to work with just two peripherals. Very nicely done.
- Is there an API call to set the base timestamp to the current clock received from a smartphone?
Hello,
I'll try to answer your questions:
> You defined the radio channel with: {0xDE, 0xAD, 0xBE, 0xEF, 0x19}? Is there a particular reason for it? How many devices can be synchronized?
This value is the radio packet address. The radio hardware uses this packet address essentially as a filter: the transmitter and receiver need to use the same address value in order to exchange packets. This address value is often dynamically assigned via some handshake mechanism, to avoid interference from other devices of the same type operating in the same area. Note that this is not related to encryption or authentication in any way.
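As a rough illustration, a 5-byte address like the one above could map onto the nRF RADIO registers as a 4-byte base address plus a 1-byte prefix (a sketch only; the byte ordering and logical address selection below are assumptions, not necessarily the example's exact configuration):

```c
// Hypothetical sketch: map {0xDE, 0xAD, 0xBE, 0xEF, 0x19} onto logical address 0.
NRF_RADIO->BASE0   = 0xDEADBEEF;  // 4-byte base address
NRF_RADIO->PREFIX0 = 0x19;        // 1-byte address prefix (AP0)
NRF_RADIO->TXADDRESS   = 0;       // transmit using logical address 0
NRF_RADIO->RXADDRESSES = RADIO_RXADDRESSES_ADDR0_Enabled << RADIO_RXADDRESSES_ADDR0_Pos;
```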
> Is it possible to synchronize two or more peripherals without a central?
The timing radio packets are one-way only: a timing master transmits packets without knowing how many receivers there are. There is no limit to how many timing receivers/slaves there can be. The BLE behavior is independent of the timing packet transmission. The examples here include one BLE Central and one BLE Peripheral example. You can run two instances of the BLE Peripheral and have them synchronized, as long as one of the Peripherals is configured as a timing master and the other as a timing slave.
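For example, the role selection could look like this (a minimal sketch; ts_tx_start() and its sync frequency parameter are taken from the example's time_sync.h, while the helper and flag below are hypothetical):

```c
#include <stdbool.h>
#include "app_error.h"
#include "time_sync.h"

// Sketch: start sync packet transmission only on the node chosen as timing master.
void sync_role_apply(bool is_timing_master)  // hypothetical helper and flag
{
    if (is_timing_master)
    {
        APP_ERROR_CHECK(ts_tx_start(200));  // transmit sync packets at 200 Hz
    }
    // Timing slaves transmit nothing; they adjust their timers on received packets.
}
```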
> Is there an API call to set the base timestamp to the current clock received from a smartphone?
No, but this is a good idea. There is a Bluetooth "Current Time Service" that would be suited for this purpose, but I don't know if phone OSs typically implement this Service. The phone would be the GATT Server for this Service, and the nRF device the GATT Client. If this Service is not available, one could use the UART Service to send a string containing the current time of day. This would require a custom phone app though.
Hi,
this helped a lot.
I've just integrated the example into my code and it works pretty well.
But I am trying to get the current synchronized timestamp as UTC in milliseconds.
I've tried to use ts_timestamp_get_ticks_u64, but it gives something else, and I am not sure which PPI channel to pass in.
Furthermore, I saw that the example is somehow prepared to use the RTC instead of the 16 MHz timer. Could you briefly explain how to change the time sync example to use the RTC?
Regarding 3: We would use the Current Time Service if it did not mean that our device needs to be registered with the Bluetooth SIG. Hence we avoid using the official Services.
Thanks in advance,
C.W
Hello,
good to hear!
Regarding ts_timestamp_get_ticks_u64(), there are some corner-cases where this is not entirely accurate. However, if you are looking for millisecond resolution I think it should be ok. The parameter for this function is an available PPI channel number. For example, if you provide PPI channels 0-4 to ts_init(), you can use PPI channel #5 for ts_timestamp_get_ticks_u64().
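In other words, something like this (assuming PPI channels 0-4 were handed to ts_init() as described, and that the returned ticks come from the 16 MHz timer):

```c
// PPI channels 0-4 are owned by the library; channel 5 is free for timestamping.
uint64_t ticks   = ts_timestamp_get_ticks_u64(5);
uint64_t time_ms = ticks / 16000;  // 16 MHz timer -> 16000 ticks per millisecond
```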
Regarding the use of the RTC: the original plan was to use the RTC as an additional/alternative clock, specifically for use cases where the 16 MHz accuracy is not needed and one wants to use the lower-power RTC instead.
This was never implemented, but there are no technical barriers here.
Note that one cannot update the RTC hardware counter directly, so some additional software logic is required to maintain an offset between the local RTC and the peer RTC. E.g. the receiver of a timing packet needs to calculate the difference between the received RTC value and the local RTC counter value. A timestamp_get function would then apply this offset when calculating a timestamp value.
As the RTC timing resolution is a lot more forgiving than the 16 MHz one, such logic shouldn't need to be very complicated.
Note that the RTC counter overflows every 512 seconds (24 bits @ 32768 Hz). Both the transmitter and receiver need to keep track of the overflow count, and apply it accordingly when calculating timestamps.
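The overflow bookkeeping could be sketched like this (assuming RTC2 is dedicated to timekeeping, with its OVRFLW interrupt enabled via INTENSET and NVIC; the instance choice and function names are illustrative):

```c
#include "nrf.h"

static volatile uint32_t m_rtc_overflow_count;

void RTC2_IRQHandler(void)
{
    if (NRF_RTC2->EVENTS_OVRFLW)
    {
        NRF_RTC2->EVENTS_OVRFLW = 0;
        m_rtc_overflow_count++;  // counts 512-second wraparounds
    }
}

static uint64_t rtc_ticks_u64(void)
{
    uint32_t ovf, cnt;
    do
    {
        ovf = m_rtc_overflow_count;
        cnt = NRF_RTC2->COUNTER;
    } while (ovf != m_rtc_overflow_count);  // retry if an overflow raced the read
    return ((uint64_t)ovf << 24) | cnt;     // extend the 24-bit counter to 64 bits
}
```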
Hi,
okay, I've implemented the RTC part, and I am now exchanging the RTC value plus a small 16 MHz counter which gets reset on each RTC tick. Hence I can calculate the current nanoseconds since boot with up to 62.5 ns resolution.
Now I do not need to synchronize the timers anymore, but I do want to change what gets sent. I saw that the sending is done with these lines:
```c
ppi_radio_tx_configure();
update_radio_parameters(p_pkt);
```
But somehow, when I transmit the RTC counter, I get some spikes in the time between sending and receiving.
Sometimes it takes about 5 ms to transmit the packet, which does not make sense.
And is there a particular reason why the nrf_balloc module is used? If I only send the sync packets at a low frequency, there should not be an overrun, should there?
> ```c
> ppi_radio_tx_configure();
> update_radio_parameters(p_pkt);
> ```
> But somehow, when I transmit the RTC counter, I get some spikes in the time between sending and receiving. Sometimes it takes about 5 ms to transmit the packet, which does not make sense.
That's curious. There should not be any variable delay on the transmission. The logic is as follows:
Configure the radio in TX mode, and enable the "ready -> start" radio shortcut.
This means that the radio starts transmitting as soon as it is ready, i.e. after the 40 microsecond ramp-up (or whatever the exact time is). The radio "ready" event also triggers a capture of the 16 MHz timer. This captured value is copied into the radio packet by the CPU after the "wait for radio ready event" loop exits, which is essentially at the same time the transmission begins.
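In register terms, the described flow looks roughly like this (a sketch using standard nRF5 RADIO/TIMER/PPI registers; the PPI channel and capture register choices here are illustrative, not necessarily the example's exact ones):

```c
// Start transmitting as soon as the radio ramp-up completes.
NRF_RADIO->SHORTS = RADIO_SHORTS_READY_START_Msk;

// PPI: RADIO READY event -> TIMER0 capture, so CC[1] records when TX begins.
NRF_PPI->CH[0].EEP = (uint32_t)&NRF_RADIO->EVENTS_READY;
NRF_PPI->CH[0].TEP = (uint32_t)&NRF_TIMER0->TASKS_CAPTURE[1];
NRF_PPI->CHENSET   = PPI_CHENSET_CH0_Msk;

NRF_RADIO->TASKS_TXEN = 1;                           // begin ramp-up (~40 us)
while (NRF_RADIO->EVENTS_READY == 0) { /* wait */ }  // exits as transmission starts
// The CPU now copies the captured timer value into the outgoing packet.
```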
There's currently no logic to optimize radio use on the receiver side. The receiver is running the radio as much as possible (not very power efficient), which is the simplest way to ensure there is a high chance of receiving the transmitted packet.
Is it possible you are seeing a packet loss, followed by the reception of a subsequent radio packet?
> And is there a particular reason why the nrf_balloc module is used? If I only send the sync packets at a low frequency, there should not be an overrun, should there?
nrf_balloc is used out of convenience. A static variable or array could be used for buffers instead, but then one has to keep track of the index manually.
Ah okay, that seems to be the problem. I thought that these lines are used to put the data into the packet:
```c
p_pkt->timer_val   = m_params.high_freq_timer[0]->CC[1];
p_pkt->counter_val = m_params.high_freq_timer[1]->CC[1];
p_pkt->rtc_val     = m_params.rtc->COUNTER;
```
> Ah okay, that seems to be the problem. I thought that these lines are used to put the data into the packet:
> ```c
> p_pkt->timer_val   = m_params.high_freq_timer[0]->CC[1];
> p_pkt->counter_val = m_params.high_freq_timer[1]->CC[1];
> p_pkt->rtc_val     = m_params.rtc->COUNTER;
> ```
Yeah, you're correct. That's the CPU copying data into the packet. Only at this point (after the radio ready event has triggered) are the `m_params.high_freq_timer[x]->CC[1]` values up-to-date. The `m_params.rtc->COUNTER` value can be fetched directly, without any explicit capturing first.
How can I detect that I have packet loss?
> How can I detect that I have packet loss?
The easiest would be to add a counter in your transmitted packet. Increase the counter by 1 every time the transmitter transmits a packet, and look for gaps in the sequence on the receiver side.
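A sketch of this scheme (the packet struct and field names below are assumed for illustration; the example's actual sync packet struct would need a new field):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct
{
    uint32_t seq;  // incremented by the transmitter on every packet
    // ... existing timing fields ...
} sync_pkt_t;

// Receiver side: detect gaps in the sequence.
static uint32_t m_last_seq;
static bool     m_first_pkt = true;

void on_sync_packet(const sync_pkt_t * p_pkt)
{
    if (!m_first_pkt && (p_pkt->seq != (m_last_seq + 1)))
    {
        uint32_t lost = p_pkt->seq - m_last_seq - 1;  // packets missed since last reception
        // count or log 'lost' here
    }
    m_first_pkt = false;
    m_last_seq  = p_pkt->seq;
}
```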
Hi,
okay, now it is running, and with only the 16 MHz timer I can synchronize the absolute clock and estimate the drift slope. So thanks for your advice. Just one thing I am facing is the following: I developed the prototype with 3 boards, and now I have moved to the real hardware, which uses BMD340 modules. With the new modules the packet loss is dramatically higher, like a factor of 10-100. When the devices are close, the packet loss is okay, but when I move only 50 cm away, the loss is nearly 100%. Is it possible to increase the signal strength, or are there other parameters, like schedulers etc., which can influence the issue?
In the board example I was not using the scheduler.
Just to keep you updated: I've tested a lot with a higher transmission power, but it did not help at all.
But if I change the channel from 125 to one between 1-32 (BLE channels), it works much better. I guess it is because of the antenna of the module I am using. The development board seems to have a wide-range antenna, but the BMD340 only has an antenna for the BLE channels.
Furthermore, sometimes I am getting a SoftDevice assertion with a breakpoint pointing to 0x24a44. Is there any hint you can give what can cause this issue?
> Just to keep you updated: I've tested a lot with a higher transmission power, but it did not help at all.
> But if I change the channel from 125 to one between 1-32 (BLE channels), it works much better. I guess it is because of the antenna of the module I am using. The development board seems to have a wide-range antenna, but the BMD340 only has an antenna for the BLE channels.
I think channel 125 should generally not be used. I should update this in the example code.
Any channel in the 0 - 80 range (2.4 - 2.48 GHz) should be fine, but some of these are busier than others (e.g. Wi-Fi interference in your given area).
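For reference, the channel number maps directly onto the RADIO frequency register on nRF5 devices (a sketch; the value below is just an example of a mid-band choice):

```c
// FREQUENCY selects 2400 MHz + n, so n = 0..80 covers the 2.400-2.480 GHz range.
NRF_RADIO->FREQUENCY = 40;  // 2440 MHz; prefer a channel with little Wi-Fi traffic nearby
```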
> Furthermore, sometimes I am getting a SoftDevice assertion with a breakpoint pointing to 0x24a44. Is there any hint you can give what can cause this issue?
Yes, I can look this up. Specifically, which SoftDevice version are you using?
We are using the nRF52840 and the S140 SoftDevice, currently version 6.1.1 from the SDK 15.3 release.
And maybe it is important to note that in between the timeslots, the peripheral is sending a lot of data, which is triggered by an app_timer using the APP_SCHEDULER.
> We are using the nRF52840 and the S140 SoftDevice, currently version 6.1.1 from the SDK 15.3 release.
> And maybe it is important to note that in between the timeslots, the peripheral is sending a lot of data, which is triggered by an app_timer using the APP_SCHEDULER.
Hm, I can't find any assert information that matches this breakpoint address for S140 v6.1.1.
The address is within the SoftDevice flash memory range though.
Do you have any other information from the error handler?
Can you try to include hardfault_implementation.c and nrf52_handler\hardfault_handler_gcc.c, and set HARDFAULT_HANDLER_ENABLED 1 in sdk_config? This should produce some more helpful debug information.
The HardFault you see could have been a "lesser" error that was elevated to a HardFault.
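If it helps, a verbose application error handler can also be dropped in to print the fault details (this uses the standard nRF5 SDK app_error_fault_handler hook, which overrides the SDK's weak default; the logging body below is just a sketch):

```c
#include "app_error.h"
#include "nrf_log.h"

// Log the fault details instead of silently resetting.
void app_error_fault_handler(uint32_t id, uint32_t pc, uint32_t info)
{
    NRF_LOG_ERROR("Fault: id=0x%08x, pc=0x%08x, info=0x%08x", id, pc, info);
    NRF_LOG_FINAL_FLUSH();
    while (1)
    {
        // Halt here so the state can be inspected with a debugger.
    }
}
```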
Apparently the hardfault is gone. I've compared my version against the last stable one and found a memory leak. Now it is working fine without hardfaults.
But I am still facing the problem that the sync packets only rarely arrive, and only if the devices are close to each other. But I have to admit that I send up to 3-4 packets per connection interval, and the connection interval is around 12.5 ms with BLE.
Do you think this might be the issue?
Okay, maybe it can be improved by changing the RADIO_MODE_MODE_..... to 125 kbit. I thought that this mode is used for long range. But when I set this mode I cannot get it running. Is there a reason, or do I have to change some timings?
> Apparently the hardfault is gone. I've compared my version against the last stable one and found a memory leak. Now it is working fine without hardfaults.
> But I am still facing the problem that the sync packets only rarely arrive, and only if the devices are close to each other. But I have to admit that I send up to 3-4 packets per connection interval, and the connection interval is around 12.5 ms with BLE.
> Do you think this might be the issue?
It might be that the receiver is not active when the transmitter is transmitting.
This example code is lacking in this regard, as it does not synchronize the transmission and receiving in any way. The approach is quite simple: the receiver runs the radio in receive mode as much as possible in order to maximize the chance of being active when the transmitter is transmitting, without knowing exactly when a transmission occurs.
A better approach would be to run the transmitter at a fixed interval, and have the receiver line up with this. That is, do the following (sketched in code after the list):
- Receiver runs the radio in receive mode continuously (or as much as permitted) until a timing packet is received
- Once a packet is received, the next receive timeslot can be scheduled to match the transmission rate (which can be known beforehand, or indicated at run-time).
- If the receiver has attempted x timeslots without receiving a packet, it can either go back to step 1, or gradually increase the receive window (running the radio for a longer period).
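A rough sketch of the receiver's window logic (all names and numbers below are illustrative assumptions, not part of the example):

```c
#include <stdbool.h>
#include <stdint.h>

#define BASE_WINDOW_US  500u  // nominal receive window around the expected packet
#define MAX_MISSED         8  // misses before falling back to continuous receive

static uint32_t m_missed;

// Returns how long the next receive window should be, or UINT32_MAX to signal
// "go back to continuous receive mode" (step 1 above).
uint32_t next_rx_window_us(bool packet_received)
{
    if (packet_received)
    {
        m_missed = 0;
        return BASE_WINDOW_US;                 // aligned: keep the window tight
    }
    if (++m_missed >= MAX_MISSED)
    {
        m_missed = 0;
        return UINT32_MAX;                     // lost sync: search continuously again
    }
    return BASE_WINDOW_US * (1u << m_missed);  // gradually widen the window
}
```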
> Okay, maybe it can be improved by changing the RADIO_MODE_MODE_..... to 125 kbit. I thought that this mode is used for long range. But when I set this mode I cannot get it running. Is there a reason, or do I have to change some timings?
Using the 125 kbit/s rate is not recommended or tested. For example, the timing is very different, so all the timing parameters in this code would have to be updated.
The on-air packet layout is also different in these modes, but I'm not sure if this has any implications.
Thanks for the fast response. This makes it clear to me:
> A better approach would be to run the transmitter at a fixed interval, and have the receiver line up with this. That is, do the following:
> - Receiver runs the radio in receive mode continuously (or as much as permitted) until a timing packet is received
> - Once a packet is received, the next receive timeslot can be scheduled to match the transmission rate (which can be known beforehand, or indicated at run-time).
Okay, but how am I going to tell it when to receive the next packet? What do I have to change in update_radio_parameters? Right now I am requesting the earliest timeslot, right?
> - If the receiver has attempted x timeslots without receiving a packet, it can either go back to step 1, or gradually increase the receive window (running the radio for a longer period).
Hi again,
I wasn't successful in syncing the timeslots, but in the meantime I found a small bug.
In line 260 of time_sync you call timeslot_end_handler, which sets m_total_timeslot_length to 0, and then in line 265 you calculate the new timeslot time with
```c
m_timeslot_req_normal.params.normal.distance_us = m_total_timeslot_length + m_timeslot_distance;
```
but there it would be necessary to have the correct m_total_timeslot_length, or am I wrong?
Nonetheless, wouldn't it be possible to start the timestamp upon a COMPARE interrupt of a timer? Or does it have to be TIMER0?
Hi, sorry for the slow response!
> Okay, but how am I going to tell it when to receive the next packet? What do I have to change in update_radio_parameters? Right now I am requesting the earliest timeslot, right?
> In line 260 of time_sync you call timeslot_end_handler, which sets m_total_timeslot_length to 0, and then in line 265 you calculate the new timeslot time with
> ```c
> m_timeslot_req_normal.params.normal.distance_us = m_total_timeslot_length + m_timeslot_distance;
> ```
> but there it would be necessary to have the correct m_total_timeslot_length, or am I wrong?
> Nonetheless, wouldn't it be possible to start the timestamp upon a COMPARE interrupt of a timer? Or does it have to be TIMER0?
I think you are looking at the appropriate section of code here.
Also, if you haven't already, I recommend taking a look at the timeslot documentation, which has some useful figures showing the timing anchors for these requests: https://infocenter.nordicsemi.com/topic/sds_s132/SDS/s1xx/concurrent_multiprotocol_tsl_api/complete_session.html?cp=4_7_3_0_8_2_0
m_total_timeslot_length is used to keep track of how long a timing receiver has been active in a timeslot. There is a maximum timeslot duration, which you could reach if there is no Bluetooth activity causing an extension request to get rejected.
In the transmitter case, this variable is in effect always 0.
In your case, the receiver behavior should essentially be the same as the transmitter:
Instead of requesting timeslot extensions, the receiver should send a normal request with `nrf_radio_request_normal_t.distance_us` set such that the timeslot will open right before the next expected packet is to be received.
You would need to change this request: https://github.com/nordic-auko/nRF5-ble-timesync-demo/blob/master/nRF5_SDK_16.0.0_98a08e2/examples/common/time_sync.c#L270
And add some logic that determines the value of the distance_us parameter of the timeslot request (which is then of type normal instead of earliest).
Note that TIMER0 is used to keep track of time elapsed since the beginning of the timeslot. You can capture the value of TIMER0 when a packet is received in order to calculate when the next timeslot should be requested.
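Putting this together, a normal-type request could be sketched like this (based on the SoftDevice timeslot API in nrf_soc.h; the window math and the helper name are illustrative, and in practice the next request is typically issued from the timeslot signal handler):

```c
#include "app_error.h"
#include "nrf_soc.h"

static nrf_radio_request_t m_req;

// us_until_next_packet: computed from the captured TIMER0 value and the known
// transmission interval; window_us: how long to keep the radio on around it.
void rx_timeslot_request_next(uint32_t us_until_next_packet, uint32_t window_us)
{
    m_req.request_type              = NRF_RADIO_REQ_TYPE_NORMAL;
    m_req.params.normal.hfclk       = NRF_RADIO_HFCLK_CFG_XTAL_GUARANTEED;
    m_req.params.normal.priority    = NRF_RADIO_PRIORITY_NORMAL;
    // distance_us is measured from the start of the previous timeslot:
    m_req.params.normal.distance_us = us_until_next_packet - (window_us / 2);
    m_req.params.normal.length_us   = window_us;  // radio on only around the packet
    APP_ERROR_CHECK(sd_radio_request(&m_req));
}
```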
Another note: consider setting the packet transmission timing such that it doesn't collide with, for example, the Bluetooth connection interval.