SJ001/AI-Feynman

Error in Running Example

Closed this issue · 7 comments

yecyn commented

Hello all,

I was trying to run example 1 on Google Colab: https://colab.research.google.com/drive/1Dd289fKUinhtnG4OU8ghUwjidYWtc5PY?usp=sharing

But I ran into an error, and I was wondering if anyone has run into the same error or knows how to fix it. Thank you so much! I really appreciate it!


TypeError Traceback (most recent call last)
in
1 from aifeynman.S_run_aifeynman import run_aifeynman
2 # Run example 1 as the regression dataset
----> 3 run_aifeynman("/content/AI-Feynman/example_data/","example1.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400)

2 frames
/usr/local/lib/python3.7/dist-packages/torch/_tensor.py in __array__(self, dtype)
755 return handle_torch_function(Tensor.__array__, (self,), self, dtype=dtype)
756 if dtype is None:
--> 757 return self.numpy()
758 else:
759 return self.numpy().astype(dtype, copy=False)

TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

The code hit this error here. I just used .cpu() on every element and that resolved it.
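
For context, a minimal sketch of what triggers that TypeError (assuming a CUDA-enabled PyTorch install, as on a Colab GPU runtime): NumPy cannot read GPU memory, so a tensor on cuda:0 has to be copied to the host with .cpu() before .numpy() or np.array() will accept it.

import numpy as np
import torch

t = torch.tensor([1.0, 2.0], device="cuda")  # tensor living on the GPU
# np.array(t)  # raises: can't convert cuda:0 device type tensor to numpy
a = np.array(t.cpu())  # copy to host memory first, then convert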

yecyn commented

I changed it as follows:

else:
    idx_min = np.argmin(np.array([symmetry_plus_result[0].cpu(), symmetry_minus_result[0].cpu(),
                                  symmetry_multiply_result[0].cpu(), symmetry_divide_result[0].cpu(),
                                  separability_plus_result[0].cpu(), separability_multiply_result[0].cpu()]))

But I'm still getting the same error.

Can you paste the error output here?

yecyn commented

I tried it again this morning, and it worked! Thank you so much!!

yecyn commented

Hi, I was using the same code on a different dataset, but I ended up getting another error on the same line as before. Do you know how to fix it? Thank you so much!


AttributeError Traceback (most recent call last)
in
1 from aifeynman.S_run_aifeynman import run_aifeynman
2 # Run Bass Example as the regression dataset
----> 3 run_aifeynman("/content/","Diamonds_Clean.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400)

1 frames
/content/AI-Feynman/aifeynman/S_run_aifeynman.py in run_AI_all(pathdir, filename, BF_try_time, BF_ops_file_type, polyfit_deg, NN_epochs, PA)
94 idx_min = -1
95 else:
---> 96 idx_min = np.argmin(np.array([symmetry_plus_result[0].cpu(), symmetry_minus_result[0].cpu(), symmetry_multiply_result[0].cpu(), symmetry_divide_result[0].cpu(), separability_plus_result[0].cpu(), separability_multiply_result[0].cpu()]))
97 print("")
98 # Check if compositionality is better than the best so far

AttributeError: 'int' object has no attribute 'cpu'

I have also tried the code without adding .cpu(), and it still gives the same error:


AttributeError Traceback (most recent call last)
in
1 from aifeynman.S_run_aifeynman import run_aifeynman
2 # Run Bass Example as the regression dataset
----> 3 run_aifeynman("/content/","Diamonds_Clean.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400)

1 frames
/content/AI-Feynman/aifeynman/S_run_aifeynman.py in run_AI_all(pathdir, filename, BF_try_time, BF_ops_file_type, polyfit_deg, NN_epochs, PA)
94 idx_min = -1
95 else:
---> 96 idx_min = np.argmin(np.array([symmetry_plus_result[0], symmetry_minus_result[0], symmetry_multiply_result[0], symmetry_divide_result[0], separability_plus_result[0], separability_multiply_result[0]]))
97 print("")
98 # Check if compositionality is better than the best so far

AttributeError: 'int' object has no attribute 'cpu'

The symmetry and separability check functions either return a tensor or -1.

In this case you can remove .cpu(), since it's not returning a tensor.
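
A minimal sketch of one way to tolerate both return types (as_scalar here is a hypothetical helper, not part of the AI-Feynman code): convert each entry to a plain float, calling .item() only when it actually is a tensor, so np.argmin never sees a CUDA tensor and never calls .cpu() on an int.

import numpy as np
import torch

def as_scalar(x):
    # the check results are either a (possibly CUDA) tensor or a plain int such as -1
    return x.item() if torch.is_tensor(x) else float(x)

# stand-in values mimicking a mix of tensor and int results from run_AI_all
results = [torch.tensor(0.3), -1, torch.tensor(0.1), -1, torch.tensor(0.7), -1]
idx_min = np.argmin([as_scalar(r) for r in results])
print(idx_min)  # 1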

yecyn commented

I tried it again without .cpu(), and it worked. But I'm getting another error:

I double-checked my data, and it does not have any NaN or inf values.


LinAlgError Traceback (most recent call last)
in
1 from aifeynman.S_run_aifeynman import run_aifeynman
2 # Run Bass Example as the regression dataset
----> 3 run_aifeynman("/content/","Diamonds_Clean.txt",30,"14ops.txt", polyfit_deg=3, NN_epochs=400)

6 frames
/content/AI-Feynman/aifeynman/S_run_aifeynman.py in run_aifeynman(pathdir, filename, BF_try_time, BF_ops_file_type, polyfit_deg, NN_epochs, vars_name, test_percentage)
272 PA = ParetoSet()
273 # Run the code on the train data
--> 274 PA = run_AI_all(pathdir,filename+"_train",BF_try_time,BF_ops_file_type, polyfit_deg, NN_epochs, PA=PA)
275 PA_list = PA.get_pareto_points()
276

/content/AI-Feynman/aifeynman/S_run_aifeynman.py in run_AI_all(pathdir, filename, BF_try_time, BF_ops_file_type, polyfit_deg, NN_epochs, PA)
152 if len(data[0])>3:
153 # find the best separability indices
--> 154 decomp_idx = identify_decompositions(pathdir,filename, model_feynman)
155 brute_force_gen_sym("results/","gradients_gen_sym_%s" %filename,600,"14ops.txt")
156 bf_all_output = np.loadtxt("results_gen_sym.dat", dtype="str")

/content/AI-Feynman/aifeynman/S_gradient_decomposition.py in identify_decompositions(pathdir, filename, model, max_subset_size, visualize)
256 y = torch.Tensor(data[:, [-1]])
257 # Return best decomposition
--> 258 all_scores = filter_decompositions_relative_scoring(X, y, model, visualize=visualize)
259 assert(all_scores)
260 best_decomposition = all_scores[0][1]

/content/AI-Feynman/aifeynman/S_gradient_decomposition.py in filter_decompositions_relative_scoring(X, y, model, max_subset_size, visualize)
198 for i in range(random_indices.shape[0]):
199 samples = draw_samples(X, y, model, s, NUM_SAMPLES, point=random_indices[i])
--> 200 score, _ = score_consistency(evaluate_derivatives_andrew(model, s, samples))
201 hypot_scores.append(score)
202 for i in range(random_indices.shape[0]):

/content/AI-Feynman/aifeynman/S_gradient_decomposition.py in score_consistency(grads_tensor)
109 A = np.array(normalized_grads)
110 D = np.einsum('ij,ik', A, A)
--> 111 evals, evecs = np.linalg.eig(D)
112 nv = evals.shape[0]
113 assert(nv == A.shape[1])

<__array_function__ internals> in eig(*args, **kwargs)

/usr/local/lib/python3.7/dist-packages/numpy/linalg/linalg.py in eig(a)
1315 _assert_stacked_2d(a)
1316 _assert_stacked_square(a)
-> 1317 _assert_finite(a)
1318 t, result_t = _commonType(a)
1319

/usr/local/lib/python3.7/dist-packages/numpy/linalg/linalg.py in _assert_finite(*arrays)
206 for a in arrays:
207 if not isfinite(a).all():
--> 208 raise LinAlgError("Array must not contain infs or NaNs")
209
210 def _is_empty_2d(arr):

LinAlgError: Array must not contain infs or NaNs
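
Note: the matrix passed to np.linalg.eig here is built from normalized_grads (see the traceback above), i.e. from gradients of the trained network rather than from the raw data file, so NaNs or infs can appear at this point even when the dataset itself is finite, for example if a sampled gradient has zero norm and the normalization divides by zero. A minimal sketch of how to check both, assuming a whitespace-delimited numeric file as in the AI-Feynman examples (the path is just the one used in this thread, and the gradient array is a stand-in):

import numpy as np

# 1) confirm the input file really is finite
data = np.loadtxt("/content/Diamonds_Clean.txt")
print("data finite:", np.isfinite(data).all())

# 2) the same kind of check on a gradient matrix before normalizing it;
#    guarding against zero-norm rows avoids introducing NaNs via division
grads = np.random.randn(5, 3)  # stand-in for the sampled model gradients
norms = np.linalg.norm(grads, axis=1, keepdims=True)
normalized = grads / np.where(norms == 0, 1.0, norms)
print("gradients finite:", np.isfinite(normalized).all())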