billfarrow/pcimem

A way to control request lengths?

Closed this issue · 8 comments

Hey, this tool is very nice.

Curious if it can or could be enhanced to support control of the request length.

I painfully discovered that my x86 Ubuntu system was sending MemRd requests with 32-bit addressing but Length == 2 DW, i.e. requesting 64 bits of data.

My endpoint design (the Xilinx PCIe Example Design) only supports 32-bit requests.

So my endpoint just drops the request.

I think for now I am going to have to write my own PCIe User Logic.

Hey,
The pcimem tool accepts an optional data type on the command line that is applied to both read and write operations. Did you experiment with the byte, short, and word accesses and see what PCIe transaction hit your FPGA implementation?

I suspect that your problem is in the Linux kernel driver for your platform's PCIe controller chip, or maybe in the PCI config parameters.

Let me know if I can help dig into this more. If you find an enhancement to pcimem I will happily roll it in.

Hey,

I tried the half-word access, and it successfully returned 16 bits of read data.

I think this is a classic case of the platform interpreting the C data type however it wants. In this case it interprets unsigned long as 64 bits, so the kernel goes and fetches 2 DW.

I think it might be better if pcimem.c used uint32_t / uint64_t casts. I'll play around with that in the next couple of days.

So far I haven't had to touch pcimem.c; it has worked beautifully, so thanks for contributing it to the community.

Bryan



Bill,

I globally replaced any occurrence of "unsigned long" with "uint32_t".

I added #include <stdint.h>

My Xilinx endpoint design is now working with your pcimem.c utility.

I think you are spot on that the utility could be improved by using the
platform-agnostic uint*_t data types.

Based on the PCIe definitions, I would propose the following access types:

uint8_t = "b"
uint16_t = "w"
uint32_t = "dw"
uint64_t = "qw"

PCIe defines 4 bytes to be a double word (DW), and 8 bytes to be a quad word (QW).

Wondering if I can hit you up for some related wisdom.

I actually work at Xilinx, and I am working on an application note to
create an open-source tool that will generate a basic hardware design for a
board-controller FPGA.

Essentially it's an FPGA to control a bunch of resets/enables and monitor
board-level interrupt signals.

From talking to various customers who use FPGAs in their real Linux
systems, the feedback was that an FPGA driver is preferred to be in user
space, since it's very custom hardware.

I wanted to use uio_pci_generic from the Linux kernel as the lone kernel-mode
driver for handling interrupts and accessing memory.

From my web searches and failed attempts, it seems that uio_pci_generic
will not give me any way to access the memory-mapped PCI space.

(I tried using your pcimem.c utility and passing it /dev/uio0 instead of
/sys/bus/pci/..., and mmap would fail even though the driver was
successfully bound to my FPGA card.)

Does this mean that for a user-space PCI device driver, I must have one thread
using uio_pci_generic for legacy interrupt (INTx) monitoring, and another
utility that uses sysfs in the style of your pcimem.c for memory-space
access?

Also, your pcimem.c code only prints to the screen. Is it simple to modify
it so that it takes arguments from a calling function and returns
success/read data?
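
As a sketch of what that refactor could look like (the name pci_read32 is invented for illustration and is not part of pcimem.c):

```c
#include <stdint.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/mman.h>

/* Hypothetical library-style wrapper around the same open/mmap/read
 * sequence pcimem.c performs: instead of printing, it returns the
 * value through an out-parameter and reports success via the return
 * code (0 on success, -1 on failure). */
int pci_read32(const char *resource_path, off_t offset, uint32_t *value)
{
    int fd = open(resource_path, O_RDWR | O_SYNC);
    if (fd < 0)
        return -1;

    /* mmap requires a page-aligned file offset */
    long page = sysconf(_SC_PAGESIZE);
    off_t base = offset & ~(page - 1);

    void *map = mmap(NULL, page, PROT_READ | PROT_WRITE,
                     MAP_SHARED, fd, base);
    close(fd);
    if (map == MAP_FAILED)
        return -1;

    /* volatile so the compiler issues exactly one 32-bit load */
    *value = *(volatile uint32_t *)((char *)map + (offset - base));
    munmap(map, page);
    return 0;
}
```

A calling application could then test the return code and consume the read data directly, rather than parsing printed output.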

I'm trying to grasp how a user space application uses a user space driver.
This is my first time experimenting with this software magic.

I've also heard about VFIO as an alternative to UIO. Wondered if you have
any thoughts on that.

Thanks,
Bryan

On Mon, Nov 9, 2015 at 5:09 AM, Bill Farrow notifications@github.com
wrote:

You could be right about it being a C data type issue on 64 bit target
architecture. We should convert pcimem.c to use standard types from
stdint.h: uint8_t, uint16_t and uint32_t.



Hey Bryan,
Would you like to go through the process of sending me a pull request for
your changes? If you are not comfortable with how GitHub works I can make
the changes myself; I just didn't want to spoil the fun.

I have not written any userspace PCI drivers. I usually develop my drivers
as "out of tree" modules. It is generally not too hard to keep the drivers
compatible with multiple kernel versions.

I had a quick read of the UIO documentation, and I think you use the
uio_pci_generic driver to bind to the device and do interrupt handling, and
use sysfs to mmap the various BARs into your userspace program just like
pcimem.c does. What I don't understand is how you implement any kind of
security with UIO; everything needs to run as root.

https://www.kernel.org/doc/htmldocs/uio-howto/index.html

Bill



Hey Bill, I am going to close this open issue. I will try out the pull request.

Bill, This pcimem program is wonderful. Exactly what I needed for my Xilinx FPGA PCIe work. Bryan, the uint32_t patch comes just in time for me too. Many thanks to both of you.

Pull request has been merged. Awesome!