mathLab/PINA

How to debug an equation

mathayay opened this issue · 4 comments

Hello guys,
I wanted to know if there is any way to debug the equation function. When I compute my function and specify it as a condition, no errors are thrown in the class declaration. No errors are thrown either when using problem.discretise_domain. However, I realize later that the function is buggy, because I get this error when using trainer.train():

RuntimeError: Input points in ['D'] training are None. Please sample points in your problem by calling discretise_domain function before train in the provided locations.

I suppose that no points were generated because my conditions and my equation are badly declared. But I don't know where the errors are, because for some reason they are not thrown anywhere. My first guess was to run MyProblem.MyEquation(_input, _output) to do some debugging, but I can't figure out what parameters to pass to the function. I also tried to replicate my equation function elsewhere (outside my class) to see what was wrong, and I did find some mistakes I had made, but I thought there might be a more efficient way to deal with this problem. What do you do in this case?

Thank you in advance!

Hello @mathayay 👋🏻! At the moment we do not provide any debugger, but it is a nice feature that we might consider adding.

Regarding your problem, I believe you forgot to sample some variables inside the domain D. This error is usually thrown when you construct the Trainer; strangely, you are getting it when calling the .train() method. As a first step, I would check that all variables are correctly sampled. You can do this easily, once you have discretised the domain, by checking problem.have_sampled_points. For example:

from pina import Condition
from pina.geometry import CartesianDomain
from pina.problem import TimeDependentProblem, SpatialProblem
from pina.equation import FixedValue

class FooProblem(TimeDependentProblem, SpatialProblem):

    # assign output, spatial and temporal variables
    output_variables = ['u']
    spatial_domain = CartesianDomain({'x': [-1, 1]})
    temporal_domain = CartesianDomain({'t': [0, 1]})

    # problem condition statement
    conditions = {
        't0': Condition(location=CartesianDomain({'x': [-1, 1], 't': 0}), equation=FixedValue(0.)),
        'D': Condition(location=CartesianDomain({'x': [-1, 1], 't': [0, 1]}), equation=FixedValue(0.)),
    }


problem = FooProblem()
problem.discretise_domain(n=200, mode='grid', variables='t', locations=['D'])
problem.discretise_domain(n=20, mode='grid', variables='x', locations=['D'])
problem.discretise_domain(n=150, mode='random', locations=['t0'])
print(f'Are all points sampled? {problem.have_sampled_points}')

# here I start sampling 'D' again: I sample 't', but I do not sample 'x'
problem.discretise_domain(n=20, mode='grid', variables='t', locations=['D'])
print(f'Are all points sampled? {problem.have_sampled_points}')
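
If everything is set up correctly, the first check should print True, while the second should print False: re-discretising 'D' only in 't' leaves 'x' unsampled there.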

If this does not solve the problem, please provide a minimal script so that I can look for the error, either in your script or in the PINA source code.
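
As for calling an equation by hand to debug it, a rough sketch could look like the following. The problem and equation names are just placeholders, and I assume the equation has the standard (input_, output_) signature; the fake output is built from the input so that pina.operators.grad has a computational graph to differentiate through:

import torch
from pina import LabelTensor

# build a small batch of fake input points carrying the problem's labels
pts = LabelTensor(torch.rand(10, 2, requires_grad=True), ['x', 't'])

# build a fake output that depends on the input, so autograd can differentiate it
x, t = pts.extract(['x']), pts.extract(['t'])
u = LabelTensor(x**2 + t, ['u'])

# call the equation function directly and inspect the residual
res = MyProblem.MyEquation(pts, u)  # MyProblem.MyEquation is hypothetical
print(res.shape, res)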

Hello @dario-coscia! Thank you for your help! Your suggestions helped me find a bug, but I still have a few. This is mainly because we want our neural network output to be a tensor.

As I don't think PINA supports that (I think?), we tried a workaround: we define our variables normally, we create a LabelTensor with gtensor = LabelTensor(torch.eye(4), ['t', 'x', 'y', 'z']), and then we do gtensor[0, 0] = output_.extract(['g00']), and so on for each entry of the tensor. I think it may create some problems, but we are slowly getting there.
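
For reference, a batched version of this assembly could look roughly like the sketch below; the component labels 'g00' ... 'g33' are an assumption about how the output variables are named:

import torch

# assumed output variables: one scalar per metric component, 'g00' ... 'g33'
labels = [f'g{i}{j}' for i in range(4) for j in range(4)]

def assemble_metric(output_):
    # extract each (N, 1) component and pack them into an (N, 4, 4) tensor,
    # i.e. one 4x4 matrix per collocation point
    comps = [output_.extract([lab]) for lab in labels]
    return torch.cat(comps, dim=-1).reshape(-1, 4, 4)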

Hi! If you want to train using a physics-informed loss, you need LabelTensors so that the variables to differentiate can be found. The situation changes when you train in a supervised setting: there you just need input and output data, so it could also work with a standard torch tensor. You can achieve this by creating a new solver that inherits from the one you want to use, and rewriting the forward call as

def forward(self, x):
    return self.neural_net.torchmodel(x)

By doing this you avoid any call to LabelTensors in the output. Be aware, though, that you lose automatic differentiation, because you no longer have labels.
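
A fuller sketch of that idea, assuming PINN is the solver being subclassed:

from pina.solvers import PINN

class PlainOutputPINN(PINN):
    # bypass the LabelTensor wrapping in the forward pass and return
    # the raw torch output of the underlying model
    def forward(self, x):
        return self.neural_net.torchmodel(x)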

Hello! We tried your solution, but we couldn't make it work. However, we found a (terrible) solution that basically consists of creating a separate equation for each component of the tensor. This way, we got something like

    @staticmethod
    def OurEquation(x1, x2, x3, input_, output_):
        # it does what we want
        ...

    @staticmethod
    def OurEquation000(input_, output_):
        return OurClass.OurEquation(0, 0, 0, input_, output_)

    @staticmethod
    def OurEquation001(input_, output_):
        return OurClass.OurEquation(0, 0, 1, input_, output_)

    # ... all the way until OurEquation333

Then we use it in the conditions:

'D': Condition(
    location=CartesianDomain({'x': [0, 5], 'y': [0, 5], 'z': [0, 5], 't': [0, 5]}),
    equation=SystemEquation([
        OurEquation000,
        OurEquation001,
        ...
        OurEquation333,
    ]),
)

It kind of simulates what a tensor should be, but it is not very good in terms of performance.
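
One way to at least cut down the boilerplate (though not the runtime cost) would be to generate the 64 component residuals programmatically, e.g. with functools.partial; this sketch assumes SystemEquation accepts any callables with the (input_, output_) signature:

from functools import partial
from pina import Condition
from pina.geometry import CartesianDomain
from pina.equation import SystemEquation

# generate the 64 wrappers instead of writing OurEquation000 ... OurEquation333
# by hand; OurClass.OurEquation is the five-argument function from above
component_equations = [
    partial(OurClass.OurEquation, i, j, k)
    for i in range(4) for j in range(4) for k in range(4)
]

conditions = {
    'D': Condition(
        location=CartesianDomain({'x': [0, 5], 'y': [0, 5], 'z': [0, 5], 't': [0, 5]}),
        equation=SystemEquation(component_equations),
    ),
}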