ut-parla/Parla.py

Allow Device-Independent Resources


Currently, resources are exclusively owned by a single device. This model is fine in many cases, but it does not cover cases where multiple devices share a memory space between them. This ended up being an awkward limitation for our CPU-cores architecture (see #39), and it will hit us again when we try to model memory locations that are accessible by both the CPU and GPU, as well as when managing memory use across different NUMA domains.
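For concreteness, here is a minimal sketch of what decoupling resources from devices might look like: resources hang off a memory space, and each device lists the spaces it can reach, so several devices can draw from the same pool. All of the names here (MemorySpace, Device, try_allocate) are hypothetical illustrations, not Parla's actual API.

```python
# Hypothetical sketch, not Parla's API: resources belong to a memory space
# that several devices can share, instead of being owned by one device.
from dataclasses import dataclass, field

@dataclass
class MemorySpace:
    name: str
    capacity: int       # bytes available in this space
    allocated: int = 0  # bytes currently reserved

    def try_allocate(self, nbytes: int) -> bool:
        # Reserve memory from the shared pool; fail if it would overflow.
        if self.allocated + nbytes > self.capacity:
            return False
        self.allocated += nbytes
        return True

@dataclass
class Device:
    name: str
    # A device may reach several memory spaces (e.g. a GPU sees its own
    # HBM plus host DRAM), and multiple devices may list the same space.
    memory_spaces: list[MemorySpace] = field(default_factory=list)

# Host DRAM shared by the CPU and a GPU, plus GPU-local memory.
host_dram = MemorySpace("host-dram", capacity=64 * 2**30)
gpu_hbm = MemorySpace("gpu0-hbm", capacity=16 * 2**30)

cpu = Device("cpu", memory_spaces=[host_dram])
gpu = Device("gpu0", memory_spaces=[gpu_hbm, host_dram])

# An allocation made on behalf of either device draws down the same shared
# pool, so a scheduler sees one consistent view of host memory.
host_dram.try_allocate(1 * 2**30)
```

The same structure would also cover the NUMA case: each NUMA domain becomes its own MemorySpace, with the CPU cores in that domain listing it (and possibly remote domains) among their reachable spaces.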

@arthurp mentioned in #39 (comment) that this will require changes to both the device model and the scheduler. I agree. @arthurp, if you have any additional thoughts beyond this, it'd be nice if you could write them up at some point for whoever ends up tackling this.