analogdevicesinc/libiio

TX data generation on V1 invalid

tfcollins opened this issue · 4 comments

Configuration:

TX: Ubuntu (libiio v1, main branch) -> Target: ADRV9364 (libiio v0.25 and v0.24 tried) -> RX: Ubuntu (libiio v0.25, IIO-Scope)

Example to generate data on TX side:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#include "iio/iio.h"

#define N_TX_SAMPLES 512
#define BYTES_PER_SAMPLE 2

struct iio_context *ctx;
struct iio_device *phy, *rx, *tx;
const struct iio_attr *attr;
struct iio_channel *chn;
struct iio_channels_mask *txmask, *rxmask;
struct iio_buffer *txbuf, *rxbuf;
struct iio_block *txblock, *rxblock;

int main() {

  int err;

  const char *uri = getenv("URI_AD9361");
  if (uri == NULL)
    exit(0); // Can't find a URI; don't run the test
  ctx = iio_create_context(NULL, uri);
  err = iio_err(ctx); // v1 returns an error-encoded pointer on failure
  assert(err == 0);

  phy = iio_context_find_device(ctx, "ad9361-phy");
  assert(phy);
  rx = iio_context_find_device(ctx, "cf-ad9361-lpc");
  assert(rx);
  tx = iio_context_find_device(ctx, "cf-ad9361-dds-core-lpc");
  assert(tx);

  // Configure device into loopback mode
  attr = iio_device_find_debug_attr(phy, "loopback");
  assert(attr);
  iio_attr_write_string(attr, "1");

  // TX Side
  txmask = iio_create_channels_mask(iio_device_get_channels_count(tx));
  assert(txmask);

  chn = iio_device_find_channel(tx, "voltage0", true);
  assert(chn);
  iio_channel_enable(chn, txmask);
  chn = iio_device_find_channel(tx, "voltage1", true);
  assert(chn);
  iio_channel_enable(chn, txmask);

  txbuf = iio_device_create_buffer(tx, 0, txmask);
  err = iio_err(txbuf);
  assert(err == 0); // Unable to create TX buffer

  txblock = iio_buffer_create_block(txbuf, N_TX_SAMPLES*BYTES_PER_SAMPLE);
  err = iio_err(txblock);
  assert(err == 0); // Unable to create TX block

  // Generate ramp signal on both I and Q channels
  int16_t *p_dat, *p_end;
  ptrdiff_t p_inc;
  int16_t idx = 0;

  p_end = iio_block_end(txblock);
  p_inc = iio_device_get_sample_size(tx, txmask);
  chn = iio_device_find_channel(tx, "voltage0", true);

  for (p_dat = iio_block_first(txblock, chn); p_dat < p_end;
       p_dat += p_inc / sizeof(*p_dat)) {
    // Shift 4 bits up; during loopback the hardware will shift back down 4 bits
    p_dat[0] = idx << 4;
    p_dat[1] = idx << 4;
    idx++;
  }
  err = iio_block_enqueue(txblock, 0, true); // cyclic: hardware repeats the block
  assert(err == 0);
  err = iio_buffer_enable(txbuf);
  assert(err == 0);

  // Sleep for 40 seconds
  printf("Open up the time scope to see data. Should be a ramp from 0->%d\n", N_TX_SAMPLES);
  sleep(40);

  return 0;
}

Result

[screenshot: 2023-12-08-143610_1796x946_scrot]

I expected to see a ramp from 0-255, but something seems off in the data. I tried playing with the bit shift, but that didn't change anything.

@tfcollins The size passed to iio_buffer_create_block is in bytes, not in samples.
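
In other words, something like this (a sketch reusing the names from the example above; the pre-update call shown first is an assumption based on the follow-up comment):

/* Presumably the original call passed a sample count: */
txblock = iio_buffer_create_block(txbuf, N_TX_SAMPLES);

/* The second argument is a byte count, so scale by the per-sample size: */
txblock = iio_buffer_create_block(txbuf, N_TX_SAMPLES * BYTES_PER_SAMPLE);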

@pcercuei I updated the example to scale samples to bytes, but it's still an issue. In the original case it really just limited the maximum value passed, since I was looping based on the buffer size. It looks like there is a bias in the data on the RX side, or bits are wrapping.

Well, now your BYTES_PER_SAMPLE is wrong: you enable two 16-bit channels, so that's 4 bytes per sample.
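
One way to sidestep the hard-coded constant entirely (a sketch; iio_device_get_sample_size is already used in the example above and returns the byte count per sample for the enabled mask):

/* Two enabled 16-bit channels -> 4 bytes per sample for this mask */
ssize_t sample_size = iio_device_get_sample_size(tx, txmask);
txblock = iio_buffer_create_block(txbuf, N_TX_SAMPLES * sample_size);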

I debugged the issue: when talking to IIOD v0.25, Libiio v1.0 continuously streams blocks that were enqueued with cyclic=true. See PR #1099.
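
For anyone hitting this before the fix lands, a minimal workaround sketch (assuming the v1 block API used in the example above; this is not the fix from PR #1099, just a way to avoid cyclic=true):

/* Emulate cyclic TX by re-enqueueing the same block non-cyclically,
 * so the v0.25 IIOD sends it exactly once per enqueue. */
for (;;) {
  err = iio_block_enqueue(txblock, 0, false);
  if (err)
    break;
  err = iio_block_dequeue(txblock, false); /* wait for the transfer */
  if (err)
    break;
}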