EFeru/DbcParser

Issue with unpacking 64 bits integer signals

Adhara3 opened this issue · 15 comments

What good timing! I'm facing an issue with signed signals while trying to unpack messages using the Packer class.
Here's an extract of the .dbc I'm using: https://drive.google.com/file/d/1RJc9zE_VEt6jdlhrABVl-E-nyLOPOi5l/view?usp=share_link

The main signals to note are Latitude, Longitude and Altitude (all signed). No matter what I feed to the Packer.RxSignalUnpack function, the value returned is 0 for all three signals.

Maybe this is because the factor used for them is written in scientific notation instead of as a decimal? E.g. for Latitude the factor is 1e-16.
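For reference, a signal with such a factor would appear along these lines in a .dbc file (a hypothetical illustration of the DBC syntax, not the actual line from the linked file):

SG_ Latitude : 0|64@1- (1e-16,0) [-90|90] "deg" Receiver

Here 0|64 is the start bit and length, @1- marks a little-endian signed signal, and (1e-16,0) is the (factor, offset) pair.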

@Adhara3 Hopefully the .dbc file extract helps you in solving this issue.

Originally posted by @taylorjdlee in #3 (comment)

If the problem is the factor, then when debugging into the Signal object you should see the wrong factor value. Is that the case?

A


Nope, you're correct: the factor value is present, so it shouldn't be an issue with the factor.

EFeru commented

@taylorjdlee can you also give the code piece where you unpack the messages with the issue?

Latitude and longitude are 64-bit signed longs, but the gain is 1e-16.
If you can provide a couple of raw values (the ones you pass to the Unpacker), we can write some unit tests.

Cheers
A

using System;
using System.Linq;
using DbcParserLib; // namespace assumed for Dbc, Parser and Packer

Dbc dbc = Parser.ParseFromPath("packerExample.dbc");
var specIDSelection = dbc
    .Messages
    .Where(m => m.ID == 435684862)
    .ToArray();
foreach (var m in specIDSelection)
{
    byte[] dataBytesTest = { 0, 0, 0, 0, 0, 0, 0, 55 }; // In this case Latitude uses all the bytes
    m.Signals.ElementAt(3).StartBit = 0; // Change start bit from 56 to 0 to make sure all 8 bytes are used
    var receivedBytes = BitConverter.ToUInt64(dataBytesTest, 0); // little-endian on typical hardware, so 55 lands in the most significant byte
    var resolvedValue = Packer.RxSignalUnpack(receivedBytes, m.Signals.ElementAt(3));
    Console.WriteLine(resolvedValue); // Value is always 0
}

See above the current code snippet I'm using. Currently I have a byte array of length 8 and convert it to a UInt64 to be used with Packer.RxSignalUnpack to decode the latitude signal (located at element 3 in the signals list if you use the attached .dbc file).

The latitude signal is 64 bits in length, so it uses all 8 bytes fed to it. However, no matter what bytes are fed to it, the value is always 0. Maybe this is because Latitude is a long, as previously stated. I can confirm I've decoded signal values shorter than 64 bits correctly, which leads me to believe the 64-bit length is the root of the issue.

In this case receivedBytes = 3963167672086036480, so resolvedValue should be 3963167672086036480 * 1e-16 + 0 = 396.316767209, but as stated before resolvedValue equals 0.
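A quick sanity check of that arithmetic (an editorial sketch; only the raw bytes and the 1e-16 factor come from the report above):

ulong raw = BitConverter.ToUInt64(new byte[] { 0, 0, 0, 0, 0, 0, 0, 55 }, 0);
// 55 lands in the most significant byte: raw = 55 * 2^56 = 3963167672086036480
double resolved = (long)raw * 1e-16 + 0; // factor 1e-16, offset 0, signed interpretation
Console.WriteLine(resolved);             // ~396.3167672086036, definitely not 0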

Okay, so an update: I added the line

m.Signals.ElementAt(3).Length = 64 - 1;

And it got the value of 396.31676720860366 for resolvedValue (the dropped bit is the MSB, which is 0 here, so the value comes out intact). I did also try changing the value type from Signed to Unsigned, but at length 64 both still resulted in a value of zero.

TL;DR: It seems Packer.RxSignalUnpack doesn't like signal lengths of 64 (Signed or Unsigned).

@EFeru, I assigned this to you as I never touched the Pack/Unpack stuff.
My suggestion is to start by writing a test in the packer test class that replicates @taylorjdlee's issue (no need to load the full dbc, just create the Signal object locally).

The test should fail.
Then it would be a good idea to write some more tests on that class to increase coverage (with different combinations of signed, unsigned, length, etc.) and finally try to fix what is broken (the 64-bit part).
Adding tests before starting with the refactoring will let you double-check that your fix does not introduce any regression.

Cheers
A
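A minimal sketch of such a failing test (an editorial illustration assuming NUnit; the Signal member names StartBit, Length, IsSigned, Factor and Offset are assumptions and may differ in the library's actual model):

[Test]
public void Unpack_64BitSignedSignal_ReturnsNonZero()
{
    // Hypothetical Signal mirroring the Latitude signal discussed above
    var signal = new Signal
    {
        StartBit = 0,
        Length = 64,   // the failing case
        IsSigned = 1,  // assumed flag for a signed signal
        Factor = 1e-16,
        Offset = 0
    };

    ulong rxMsg = BitConverter.ToUInt64(new byte[] { 0, 0, 0, 0, 0, 0, 0, 55 }, 0);
    double result = Packer.RxSignalUnpack(rxMsg, signal);

    // 3963167672086036480 * 1e-16 = 396.3167672086036...
    Assert.AreEqual(396.3167672086036, result, 1e-9);
}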

Okay, I've found a solution to the issue. It seems to occur when the bitMask value is 0, i.e. when the signal length is 64. By not using the bitMask when it equals 0, we get the correct value for signals with length 64.

if (bitMask != 0)
{
    iVal = (int64_T)((RxMsg64 >> signal.StartBit) & bitMask);
}
else
{
    iVal = (int64_T)(RxMsg64 >> signal.StartBit);
}

I'll make a fork with this fix
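For context on why the mask comes out as 0: C# masks the shift count of a 64-bit operand to its low 6 bits, so if the mask was built as (1UL << signal.Length) - 1 (an assumption consistent with the zero mask described above), a length of 64 wraps the shift around to 0:

int length = 64;
ulong bitMask = (1UL << length) - 1; // 1UL << 64 wraps to 1UL << 0 == 1 in C#, so bitMask == 0
Console.WriteLine(bitMask);          // 0: ANDing the message with this zeroes every signal bit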


I would rather calculate the correct bitMask by reversing the shift

uint64_T bitMask = (ulong.MaxValue >> (64 - signal.Length));

instead of adding another if and having a wrong mask floating around.

A
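A quick check (editorial, not from the thread) that this expression yields the right mask across the valid range of lengths, including the previously broken 64:

foreach (int length in new[] { 1, 8, 32, 64 })
{
    ulong bitMask = ulong.MaxValue >> (64 - length); // shift count stays in 0..63, so no wrap-around
    Console.WriteLine($"{length}: 0x{bitMask:X}");   // 1: 0x1, 8: 0xFF, 32: 0xFFFFFFFF, 64: 0xFFFFFFFFFFFFFFFF
}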

Perfect, I know my solution was very hacked together, so your solution is far better. I've made the changes just now. I'll also add a 64-bit unit test for the packer class.

#25

So I've made a hotfix pull request with the changes discussed. I do agree we need more unit tests for the packer class so we can catch bugs like this, so it should be on the to-do list. For now, though, I'm happy merging this hotfix into master until we decide on what the packer class's unit tests should cover, i.e. aiming to cover as many edge cases as possible.

EFeru commented

Nice fix, guys (@taylorjdlee, @Adhara3). Really appreciate it, and sorry for not being able to support much (trapped in some other topics).

EFeru commented

@taylorjdlee just wondering, how were you able to make commits without being a Collaborator? Nothing against it, I just find it strange that it's possible.


He did a PR, I merged it.
Where do you see something weird?

A

EFeru commented

Oh sorry, I missed the PR. I thought it was some kind of security leak on GitHub. No worries, all good. Thanks!