zdevito/pytorch

TensorRandom.cwrap has duplicates to handle removing Generator

zdevito opened this issue · 2 comments

TensorRandom.cwrap has some methods that are identical except that on CUDA the Generator argument is removed, because CUDA never takes generator arguments. This is a problem for the C++ wrapper because it causes duplicate declarations. We can fix this by unifying them into a single declaration shared across CPU/CUDA and having a cwrap plugin split it into CPU and CUDA versions, removing the THGenerator* argument from the CUDA version.

[[
  name: multinomial
  defined_if: defined(TH_REAL_IS_FLOAT) || defined(TH_REAL_IS_DOUBLE)
  types:
    - floating_point
  processors:
    - CPU
    - CUDA
  variants:
    - method
    - function
  return: argument 0
  arguments:
    - arg: THIndexTensor* result
      output: True
    - arg: THGenerator* generator
      default: THPDefaultGenerator->cdata
      kwarg_only: True
    - THTensor* self
    - long num_samples
    - arg: bool replacement
      default: "false"
]]

[[
  name: multinomial
  defined_if: CUDA_FLOAT || CUDA_DOUBLE || CUDA_HALF
  types:
    - floating_point
  processors:
    - CUDA
  variants:
    - method
    - function
  return: argument 0
  arguments:
    - arg: THIndexTensor* result
      output: True
    - THTensor* self
    - long num_samples
    - arg: bool replacement
      default: "false"
]]
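The split proposed above could be sketched as a small plugin pass over parsed declarations. This is a hypothetical illustration, not the actual cwrap plugin API: it assumes each declaration is a dict shaped like the YAML above, with `processors` and `arguments` keys, where arguments are either strings or dicts with an `arg` field.

```python
import copy

def split_declaration(decl):
    """Split a unified CPU/CUDA declaration into per-backend copies.

    Hypothetical sketch: 'decl' mirrors the cwrap YAML above as a dict.
    The CUDA copy drops any THGenerator* argument, since CUDA variants
    never take a generator.
    """
    out = []
    for backend in decl.get('processors', ['CPU']):
        d = copy.deepcopy(decl)
        d['processors'] = [backend]
        if backend == 'CUDA':
            # Filter out generator arguments, whether given as a plain
            # string ("THGenerator* generator") or as an arg dict.
            d['arguments'] = [
                a for a in d['arguments']
                if not (isinstance(a, dict)
                        and a.get('arg', '').startswith('THGenerator*'))
                and not (isinstance(a, str)
                         and a.startswith('THGenerator*'))
            ]
        out.append(d)
    return out
```

With this pass, the single unified `multinomial` declaration would expand to a CPU version that keeps the generator argument and a CUDA version without it, so the C++ wrapper sees no duplicate declarations.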

Looks like I made a mistake in the "processors" annotation for multinomial.

EDIT: actually no, I didn't. Why do you have CUDA for processors above?

I think we fixed this, closing...