EnzymeAD/Enzyme

Incorrect derivative result when nested void functions and recursive functions are used.

stebos100 opened this issue · 6 comments

I have recently been running an experiment testing some of the limitations of Enzyme and found the following interesting case. I was wondering whether anyone could help, or whether this should be filed as an issue. When running a function that uses nested void functions, as shown in the script below, an incorrect derivative result is produced compared to the finite-difference approximation.

1.) Can Enzyme accommodate nested functions containing void functions? It seems that it cannot.

void fnOne(double* rate) {

    rate[0] = rate[0]*12;
    rate[1] = rate[1]*24;
    rate[2] = rate[2]*36;
}

void fnTwo(double* rate) {
    rate[0] = rate[0] + 12;
    rate[1] = rate[1]/24;
    rate[2] = rate[2] + 0.058;
}

void fnThree(double* rate) {

    double vol = 0.023;
    double r = 0.04;
    double dt = 1.0/12.0;

    for (int i = 0; i < 3; i++) {
        rate[i] = rate[i] + (r - 0.5*vol*vol)*dt + vol*(0.5*dt)*0.05;
    }
}

double sumtil(double* vec, int size) {
  double ret = 0.0;
  for (int i = 0; i < size; i++) {
    ret += vec[i];
    if (ret > 15) break;
    ret += vec[i];
  }
  return ret;
}

double fn(double* rate, int size) {

    fnOne(rate);   // void
    fnTwo(rate);   // void
    fnThree(rate); // void

    double ret = sumtil(rate, size);

    return ret;
}

2.) The second limiting case I found is that when a function uses a recursive function, as shown in the script below, Enzyme produces the wrong derivative result compared to the finite-difference approximation. Is there a way around this?

More specifically, it seems that the [i-1] indexing may be the underlying issue here.

double gbmSim(double* rate, int index) {

    double vol = 0.023;
    double r = 0.04;
    double dt = 1.0/12.0;

    rate[index] = rate[index - 1] + (r - 0.5*vol*vol)*dt + vol*(0.5*dt)*0.05;

    return rate[index];
}

double sumtil(double* vec, int size) {
  double ret = 0.0;
  for (int i = 0; i < size; i++) {
    double intermediateVal = 0.0;
    if (i > 0) {
      intermediateVal = gbmSim(vec, i);
      ret += intermediateVal;
    }
    else {
      ret += vec[i];
    }

    if (ret > 2.25) break;
    ret += intermediateVal;
  }
  return ret;
}

double fn(double* rate, int size) {

    double ret = sumtil(rate, size);

    return ret;
}

Thanks again, everyone. Please let me know if you would like the accompanying Enzyme calls from main.

Hi @stebos100, yeah, a full file that we can run to reproduce would be helpful here (and ideally an enzyme.mit.edu/explorer link).

And yes, Enzyme can deal with intermediate functions that return void, as well as recursive functions -- so seeing the full case and expected results would be helpful.

It would also be helpful to know which versions of Enzyme and LLVM you are using.

Hi @wsmoses, thanks for following up. I have altered the script slightly to get the point across.

I am using LLVM v16, and Enzyme v0.0.103.

It seems that when I use a recursive function within a void function, Enzyme produces an incorrect derivative result. Also, running the same function body on CPU and on GPU produces different results. Please see the script below, and feel free to ask any questions.

#include <cuda_runtime_api.h>
#include <cuda_runtime.h>
#include <iostream>
#include <cstdio>
#include <cstdlib>
#include <cmath>
#include <cassert>   // for assert() in cudaCheck
#include <device_launch_parameters.h>
#include <random>

int __device__ enzyme_dup;
int __device__ enzyme_out;
int __device__ enzyme_const;
int __device__ enzyme_dupnoneed;

void __enzyme_autodiff(...);

template<typename RT, typename ... Args>
RT __enzyme_autodiff(void*, Args ...);

#define N 12

inline
cudaError_t cudaCheck(cudaError_t result) {

    if(result != cudaSuccess){
        fprintf(stderr, "CUDA runtime error: %s \n", cudaGetErrorString(result));
        assert(result == cudaSuccess);
    }

    return result;
}

inline
__host__ __device__ void gbmSimulation(double* short_rate, double* gbm , double* rand, int maturity) {

    int size = maturity;

    double dt = 1.0/12.0;

    double s = 1.15;

    double vol = 0.02;

    gbm[0] = s;

    for (int i = 1; i < size; i++){
        gbm[i] = gbm[i-1] * exp((short_rate[i] - 0.5*vol*vol)*dt + vol*sqrt(dt)*rand[i]);
    }

}

inline
__host__ __device__ double sumtil( double* vec, double* gbms, double* randomNumbers, int size) {

    double ret = 0.0;
    double volatility = 0.023;
    double dt  = 1.0/12.0;
    double s = 1.15;

    gbms[0] = s;

    // If the following call is used in the calculations below, it produces an
    // incorrect derivative result compared with the finite-difference approximation.

    gbmSimulation(vec,gbms,randomNumbers, size);

    for (int i = 0; i < size; i++) {

        ret += gbms[i];
    }

    // The commented section below highlights the second issue (comment out the
    // void-function call and the preceding loop first): the CPU and GPU
    // implementations produce different results, with the CPU implementation
    // matching the finite-difference approximation.
    //for (int i = 1; i < size; i++) {

            //correct CPU derivative result, the GPU implementation however does not !!

            //gbms[i] = gbms[i - 1] * exp((vec[i] - 0.5 * volatility * volatility) * dt + volatility * (0.5 * dt) * randomNumbers[i]);

            //ret += gbms[i];
    //}

    return ret;
}

__host__ __device__ double sumtilFda(double* vec, double* gbms, double* randomNumber, int size, int index) {

    double bump = 0.000000085;

    vec[index] += bump;

    double upVal = sumtil(vec, gbms, randomNumber, size);

    vec[index] -= 2*bump;

    double downVal = sumtil(vec, gbms,  randomNumber, size);

    double sensi = (upVal - downVal)/(2*bump);

    vec[index] += bump;

    return sensi;
}

typedef double (*f_ptr)(double*, double*, double*, int);

extern void __device__ __enzyme_autodiffCuda(f_ptr,
                                         int, double*, double*,
                                         int, double*,
                                         int, double*,
                                         int, int
);


__global__ void computeEnzymeGrad( double* d_vec, double* d_vec_res, double* d_gbms, double* d_randomNumbers, int maturity) {

    __enzyme_autodiffCuda(sumtil,enzyme_dup, d_vec, d_vec_res, enzyme_const, d_gbms, enzyme_const,d_randomNumbers, enzyme_const, maturity);

}


int main() {

    std::random_device rd{};

    std::mt19937 gen(42);

    std::normal_distribution<double> norm(5.15, 2.85);
    std::normal_distribution<double> normTwo(1.15, 0.65);
    std::normal_distribution<double> normThree(5.0, 3.0);

    size_t bytes = N*sizeof(double);

    double *vec = (double*)malloc(bytes);
    double *gbms = (double*)malloc(bytes);
    double *rands = (double*)malloc(bytes);
    double *results_x = (double*)malloc(bytes);

    double *device_vec, *device_gbms, *device_rands, *device_der_vec;

    cudaCheck(cudaMalloc(&device_vec, bytes));
    cudaCheck(cudaMalloc(&device_gbms, bytes));
    cudaCheck(cudaMalloc(&device_rands, bytes));
    cudaCheck(cudaMalloc(&device_der_vec, bytes));

    for (int i = 0; i < N; i++){
        vec[i] = norm(gen);
        gbms[i] = 0.0;
        rands[i] = normThree(gen);
        results_x[i] = 0.0;
    }

    int n = N;

    cudaCheck(cudaMemcpy(device_vec, vec, bytes, cudaMemcpyHostToDevice));
    cudaCheck(cudaMemcpy(device_gbms, gbms, bytes, cudaMemcpyHostToDevice));
    cudaCheck(cudaMemcpy(device_rands, rands, bytes, cudaMemcpyHostToDevice));
    cudaCheck(cudaMemcpy(device_der_vec, results_x, bytes, cudaMemcpyHostToDevice));

    computeEnzymeGrad<<<1,1>>>(device_vec, device_der_vec, device_gbms, device_rands,n);
    cudaCheck(cudaDeviceSynchronize());

    cudaCheck(cudaMemcpy(results_x, device_der_vec, bytes, cudaMemcpyDeviceToHost));

    //=================================================== FDA & HOST CHECK ====================================================

    double* fdaResults = (double*)malloc(bytes);
    double* EnzymeHost = (double*)malloc(bytes);

    for (int i = 0; i < N; i++) {
        fdaResults[i] = 0.0;
        EnzymeHost[i] = 0.0; // the gradient shadow must start zeroed
    }

    for (int i = 0; i < N; i++) {
        fdaResults[i] = sumtilFda(vec, gbms, rands, n, i);
    }

    __enzyme_autodiff((void*)sumtil,enzyme_dup, vec, EnzymeHost, enzyme_const, gbms, enzyme_const, rands, enzyme_const, n);

    for (int i = 0; i < N; i++) {
        printf("\nx[%d]='%f';\n", i, vec[i]);
        printf("FDA for grad_x[%d]='%.18f';\n", i, fdaResults[i]);
        printf("AAD CUDA Enzyme for grad_x[%d]='%.18f';\n", i, results_x[i]);
        printf("AAD Enzyme Host for grad_x[%d]='%.18f';\n", i, EnzymeHost[i]);
    }

    free(vec);
    free(gbms);
    free(rands);
    free(results_x);
    free(fdaResults);
    free(EnzymeHost);

    cudaFree(device_vec);
    cudaFree(device_gbms);
    cudaFree(device_rands);
    cudaFree(device_der_vec);
}

//=======================================================================================================================



Ah, since you're on GPU, my guess is that you may be running out of device memory for the caches (and CUDA may be throwing an error that isn't being caught).

To reduce unnecessary caching, can you add restrict on all pointer arguments of the function you're autodiffing (assuming that they point to different locations in memory)?

If that doesn't resolve it, I'll take a closer look and find a GPU machine to test on.

Hi @wsmoses, even after adding the restrict qualifier, it still produces the incorrect derivative result on the GPU when compared to the CPU example.

Both the CPU and GPU results, however, still fail on the first issue (calling the recursive void function), which I believe is also important to address. I have a GPU handy, so if need be I could run test scripts for you.

Thanks so much again !

Hi @wsmoses, I was just wondering whether you had managed to look at the above? It seems that in a simulation setting Enzyme does not produce the correct derivative results. I believe the issue is the recursive nature of the function: gbm[4] = gbm[3] * ..., gbm[3] depends on gbm[2], and gbm[2] on gbm[1]. Therefore gbm[3] should carry derivative contributions from gbm[2] and gbm[1], and gbm[4] from gbm[3], gbm[2], and gbm[1]; in general, each new gbm[N] carries derivative contributions from all of gbm[N-1] down to gbm[0].