ehsanhaghighat/sciann

How to run SciANN on GPU

Opened this issue · 8 comments

Dear community,
I have a trivial question: how can I run SciANN on a GPU?
I want to use the free GPU provided by Google Colab, since my laptop has no CUDA-capable GPU.
Say the following script is what I want to run on a GPU; how do I modify it so it works on Google Colab?

import numpy as np
import sciann as sn
from sciann.utils.math import diff

# material parameters (these were undefined in my first paste; values as in the
# 2D script later in this thread: lamda in W/mK, rho in kg/m^3, cp in J/kgK)
lamda = 0.6
rho = 1000.0
cp = 4184.0

t_ic = 50
t_left = 70
t_right = 40

x = sn.Variable('x')
t = sn.Variable('t')
T = sn.Functional('T', [t, x], 4*[20], 'tanh')

L1 = diff(T, t, order=1) - lamda/(rho*cp)*diff(T, x, order=2)
BC_left = (x==0.)*(T - t_left)    # fixed typo ("BC_letf") and applied t_left here
BC_right = (x==10.)*(T - t_right)
IC = (t==0.)*(T - t_ic)

m = sn.SciModel([x, t], [L1, BC_left, BC_right, IC])
x_data, t_data = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 60, 100))
h = m.train([x_data, t_data], 4*['zero'], learning_rate=0.002, epochs=500)
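For what it's worth, SciANN runs on a TensorFlow/Keras backend, and TensorFlow places computations on a visible GPU automatically, so no change to the SciANN script itself should be needed. In a Colab notebook with the runtime type set to GPU, a minimal (non-SciANN-specific) check that the device is actually detected looks like this:

```python
import tensorflow as tf

# TensorFlow uses the GPU automatically when one is visible; this only
# confirms that the Colab GPU runtime actually exposes a device.
gpus = tf.config.list_physical_devices('GPU')
print(gpus)
if not gpus:
    print('No GPU detected - check Runtime > Change runtime type in Colab')
```

On a correctly configured GPU runtime the list is non-empty (e.g. one `/physical_device:GPU:0` entry); an empty list means training will silently run on the CPU.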

Thanks in advance for any hints and help.
Cheers
Ali

@ehsanhaghighat , thanks a lot for your help.
I tried an example on both my laptop and the GPU provided by Google Colab.
My laptop was faster. Is that normal? I have seen your comments here.
This is the configuration of the model (my model is a simple 2D transient heat equation):

Total samples: 90000 
Batch size: 64 
Total batches: 1407 

It is super slow in Colab, and maybe I have made some silly mistake: I copied the script into a notebook in Colab and set the runtime type to GPU!
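For context, the batch count reported above is just ceil(samples / batch size), so with a batch size of 64 every epoch launches 1407 tiny GPU steps; batches that small usually cannot keep a GPU busy, which is a common reason a GPU run ends up slower than a CPU run:

```python
import math

total_samples = 90000
batch_size = 64

# number of optimizer steps per epoch with this batch size
batches = math.ceil(total_samples / batch_size)
print(batches)  # 1407 small steps per epoch
```

Increasing the batch size reduces the per-step launch overhead and gives the GPU larger chunks of work per step.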

@ehsanhaghighat , thanks for your quick response. I tried the following configuration, but still nothing happens in Colab and m.train() just keeps running:

Total samples: 90000000 
Batch size: 15000 
Total batches: 6000 

If I make it any bigger, Colab kicks me out of the session and says I am out of RAM :-).

@ehsanhaghighat, thanks for devoting time to my problem. It is a PINN, and I want to solve the second-order transient heat PDE in the x and y directions. This is the script I am running in Colab:

import numpy as np
import sciann as sn
from sciann.utils.math import diff
import tensorflow as tf

y = sn.Variable('y')
x = sn.Variable('x')
t = sn.Variable('t')
T = sn.Functional('T', [t, x, y], 10*[20], 'tanh')

# 2D transient heat equation: T_t = alpha*(T_xx + T_yy), alpha = lamda/(rho*cp)
L1 = sn.rename(diff(T, t, order=1) - (0.6*1e6)/(1000*4184)*(diff(T, x, order=2) + diff(T, y, order=2)), 'PDE')

t_sim = 600
nt = 600
nx = 500
ny = 300
x_leng = 50
y_leng = 30
t_left = 100
t_right = 10
t_bottom = 10
t_top = 80
t_ic = 0

BC_left = sn.rename((x==0.)*(T-t_left), 'BC_left')          # loss from the left BC
BC_right = sn.rename((x==x_leng)*(T-t_right), 'BC_right')   # loss from the right BC
BC_bottom = sn.rename((y==0.)*(T-t_bottom), 'BC_bottom')    # loss from the bottom BC
BC_top = sn.rename((y==y_leng)*(T-t_top), 'BC_top')         # loss from the top BC
IC = sn.rename((t==0.)*(T-t_ic), 'IC')                      # loss from the IC

m = sn.SciModel([x, y, t], [L1, BC_left, BC_right, BC_bottom, BC_top, IC])

x_ = np.linspace(0, x_leng, nx)
y_ = np.linspace(0, y_leng, ny)
t_ = np.linspace(0, t_sim, nt)
# separate names for the training grids, so the sn.Variable objects x, y, t are not shadowed
x_data, y_data, t_data = np.meshgrid(x_, y_, t_, indexing='ij')
assert np.all(x_data[:, 0, 0] == x_)
assert np.all(y_data[0, :, 0] == y_)
assert np.all(t_data[0, 0, :] == t_)

h = m.train([x_data, y_data, t_data], 6*['zero'], learning_rate=0.005, epochs=100, verbose=0, batch_size=150000)

It runs for about 10 minutes and then: "Your session crashed after using all available RAM" ...
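A quick back-of-the-envelope calculation suggests why the RAM limit is hit: the full 500 x 300 x 600 meshgrid is 90 million collocation points, and the three float64 input arrays alone take over 2 GB before the framework allocates target arrays and shuffling buffers on top (a rough estimate, assuming float64 inputs and the standard Colab RAM limit):

```python
nx, ny, nt = 500, 300, 600
points = nx * ny * nt                            # total collocation points in the grid
bytes_per_value = 8                              # float64
inputs_gb = points * 3 * bytes_per_value / 1e9   # three input arrays: x, y, t
print(points, round(inputs_gb, 2))               # 90000000 points, ~2.16 GB for inputs alone
```

With six target arrays of the same length plus internal copies, the full-grid approach can easily exhaust the session's RAM, independent of the GPU.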

Hi, do you still need help with this? I can comment if so.

Dear @sandhu-githhub , thanks for your help. I still need help because I could not figure it out. I would very much appreciate it if you could let me know how I can run the code on a GPU. As you can see, the problem is that when I run the script in Google Colab it simply does not work. The syntax is above. If I reduce the batch_size it takes a long time and nothing happens, and if I increase it I hit the RAM error.
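One common way out of the small-batch-vs-RAM dilemma is to train on a random subset of collocation points rather than the full meshgrid. The sketch below is plain NumPy sampling with the domain sizes from the script above, not a SciANN-specific API; float32 also halves the memory footprint:

```python
import numpy as np

rng = np.random.default_rng(0)
n_points = 200_000                       # far fewer than the 90M full-grid points
x_leng, y_leng, t_sim = 50, 30, 600      # domain sizes from the script above

# sample interior collocation points uniformly in the space-time domain
x_data = rng.uniform(0, x_leng, n_points).astype(np.float32)
y_data = rng.uniform(0, y_leng, n_points).astype(np.float32)
t_data = rng.uniform(0, t_sim, n_points).astype(np.float32)

# these can then replace the meshgrid arrays in m.train([x_data, y_data, t_data], ...)
print(x_data.shape, x_data.nbytes / 1e6)  # (200000,) 0.8 MB per array
```

One caveat: the boundary and initial constraints, written as (x==0.), (t==0.), etc., only fire on points lying exactly on those planes, so uniform interior sampling alone will not cover them; some fraction of points would have to be placed exactly on the boundaries and at t=0.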