mathLab/PINA

Variables Sampling V0.1

dario-coscia opened this issue · 3 comments

Describe the bug
In v0.1, when the variables kwarg is passed to discretise_domain, it is ignored and all problem.input_variables are used for sampling instead.

To Reproduce

from pina.problem import SpatialProblem, TimeDependentProblem
from pina.geometry import CartesianDomain, EllipsoidDomain
from pina.equation.equation_factory import FixedValue
from pina import Condition


class FooProblem(SpatialProblem, TimeDependentProblem):
    output_variables = ['u1', 'u2']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    temporal_domain = CartesianDomain({'t': [0, 1]})
    conditions = {
        'D': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
            equation=FixedValue(0.))
    }


foo_problem = FooProblem()
foo_problem.discretise_domain(n=10, mode='grid', variables=['x', 'y'])

print(foo_problem.input_pts['D'].shape)
print(foo_problem.input_pts['D'].labels)

Expected behavior

torch.Size([100, 2])
['x', 'y']

Output

torch.Size([1000, 3])
['x', 'y', 't']
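For context on the reported output: in 'grid' mode, n samples are drawn per variable and combined, so sampling all three input variables with n=10 yields 10**3 = 1000 points with 3 labels. A rough plain-PyTorch sketch of that construction (an illustration of the idea, not PINA's actual implementation):

```python
import torch

# With n=10 samples per variable, a grid over three variables is the
# Cartesian product of the three 1D sample sets: 10**3 = 1000 points.
n = 10
x = torch.linspace(0, 1, n)
y = torch.linspace(0, 1, n)
t = torch.linspace(0, 1, n)
pts = torch.cartesian_prod(x, y, t)
print(pts.shape)  # torch.Size([1000, 3])
```

Restricting sampling to variables=['x', 'y'] should therefore produce only the 10**2 = 100 points of the expected behavior above.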

Merged in b3f7b16, closing the issue.

Hey @dario-coscia, I just wanted to say thanks a lot for all the hard work and the quick responses. I tried your fix, but I'm still getting another error. Do you think I might be sampling the time domain incorrectly?

from pina.problem import SpatialProblem, TimeDependentProblem
from pina.geometry import CartesianDomain
from pina.equation.equation_factory import FixedValue
from pina import Condition
from pina.solvers import PINN
from pina.trainer import Trainer
from pina.model import FeedForward
import torch


class FooProblem(SpatialProblem, TimeDependentProblem):
    output_variables = ['u1', 'u2']
    spatial_domain = CartesianDomain({'x': [0, 1], 'y': [0, 1]})
    temporal_domain = CartesianDomain({'t': [0, 1]})
    conditions = {
        'D': Condition(
            location=CartesianDomain({'x': [0, 1], 'y': [0, 1], 't': [0, 1]}),
            equation=FixedValue(0.))
    }


foo_problem = FooProblem()
foo_problem.discretise_domain(n=10, mode='grid', variables=['x', 'y'])
foo_problem.discretise_domain(n=2, mode='grid', variables=['t'])

model = FeedForward(len(foo_problem.input_variables), len(foo_problem.output_variables))
solver = PINN(problem=foo_problem, model=model, optimizer=torch.optim.LBFGS)

trainer = Trainer(solver=solver, max_epochs=2, accelerator='cpu')
trainer.train()

I get the error:

RuntimeError: Trying to backward through the graph a second time (or directly access saved tensors after they have already been freed). Saved intermediate values of the graph are freed when you call .backward() or autograd.grad(). Specify retain_graph=True if you need to backward through the graph a second time or if you need to access saved tensors after calling backward.
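For reference, this RuntimeError is not specific to PINA; it can be reproduced in plain PyTorch by backpropagating through the same graph a second time after its saved tensors have been freed:

```python
import torch

# Minimal plain-PyTorch sketch of the error above: the first backward()
# frees the graph's saved intermediate tensors, so a second backward()
# on the same graph raises the RuntimeError quoted in this report.
x = torch.tensor([1.0], requires_grad=True)
y = (x ** 2).sum()
y.backward()      # first backward succeeds and frees the graph
try:
    y.backward()  # second backward through the freed graph
except RuntimeError as e:
    print('raised:', type(e).__name__)
```

Passing retain_graph=True to the first backward() avoids the error, at the cost of keeping the intermediate tensors alive.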

I think the error comes from the 'discretise_domain' function of 'abstract_problem', which ends up calling the following twice (once per discretise_domain call):

self.input_pts[location].requires_grad_(True)
self.input_pts[location].retain_grad()

I think this shouldn't be in the function itself but should happen right before the training step. I could work around it by removing these lines from 'discretise_domain' and calling them manually instead:

foo_problem.discretise_domain(50, 'grid', locations=['D'], variables=['x', 'y'])
foo_problem.discretise_domain(10, 'grid', locations=['D'], variables=['t'])
foo_problem.input_pts['D'].requires_grad_(True)
foo_problem.input_pts['D'].retain_grad()
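The suggested placement can be sketched as a small helper that prepares all sample points for training in one place, after every discretise_domain call is done. The helper name and dict layout below are hypothetical, not PINA API:

```python
import torch


def prepare_for_training(input_pts):
    """Enable gradients on every condition's sample points just before
    training, so repeated discretise_domain calls never touch autograd.

    input_pts: dict mapping condition name -> tensor of sample points
    (hypothetical layout, mirroring problem.input_pts in the report).
    """
    for location, pts in input_pts.items():
        pts.requires_grad_(True)
        pts.retain_grad()
    return input_pts


# Usage with dummy data standing in for discretised sample points:
pts = {'D': torch.rand(100, 3)}
prepare_for_training(pts)
print(pts['D'].requires_grad)  # True
```

This keeps discretise_domain a pure sampling operation and confines all autograd bookkeeping to a single, well-defined point in the pipeline.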

Hey @LoveFrootLoops, thank you for spotting the error. It was indeed a bug, and your observation that requires_grad was being set twice is right. It is now fixed in commit ba2ccd8 :)

PS.
I really appreciate that you like the software and the bugs you report. If you find our software helpful, please support us with a star so we can grow faster. We are also working on many features to release v0.1 as soon as possible, and if you are interested it would be great to collaborate on adding some of them to PINA. Let us know 😄