WillKoehrsen/Machine-Learning-Projects

ValueError: number sections must be larger than 0

Carlos-Henreis opened this issue · 2 comments

Hi!
I was reading your post and found it very interesting.
But when I tried to run the code myself, I got this error:

```
/usr/bin/python3.6 /home/carlos/PycharmProjects/tfg/cross-validation/main.py
Fitting 3 folds for each of 50 candidates, totalling 150 fits
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
2018-04-14 16:08:10.580413: I tensorflow/core/platform/cpu_feature_guard.cc:137] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE4.1 SSE4.2 AVX
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.1s
[Parallel(n_jobs=1)]: Done 1 out of 1 | elapsed: 1.1s remaining: 0.0s
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.2s
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=90, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 1.1s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=1, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=32, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.5s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.4s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.3s
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.1, dropout_rate=None, batch_size=32, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 1.4s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.9s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.8s
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=120, n_hidden_layers=4, max_checks_without_progress=30, learning_rate=0.05, dropout_rate=None, batch_size=64, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>, total= 0.8s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 6.2s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 5.5s
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>
[CV] n_neurons=30, n_hidden_layers=7, max_checks_without_progress=30, learning_rate=0.1, dropout_rate=0.5, batch_size=16, batch_norm_momentum=0.9, activation=<function relu at 0x7f87410f0bf8>, total= 5.6s
[CV] n_neurons=50, n_hidden_layers=2, max_checks_without_progress=20, learning_rate=0.005, dropout_rate=0.5, batch_size=128, batch_norm_momentum=None, activation=<function relu at 0x7f87410f0bf8>
```
```
Traceback (most recent call last):
  File "/home/carlos/.local/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 463, in array_split
    Nsections = len(indices_or_sections) + 1
TypeError: object of type 'int' has no len()

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/carlos/PycharmProjects/tfg/cross-validation/main.py", line 54, in <module>
    random_search.fit(X_train, y_train)
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/model_selection/_search.py", line 639, in fit
    cv.split(X, y, groups)))
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 779, in __call__
    while self.dispatch_one_batch(iterator):
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 625, in dispatch_one_batch
    self._dispatch(tasks)
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 588, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 111, in apply_async
    result = ImmediateResult(func)
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/_parallel_backends.py", line 332, in __init__
    self.results = batch()
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 131, in __call__
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/externals/joblib/parallel.py", line 131, in <listcomp>
    return [func(*args, **kwargs) for func, args, kwargs in self.items]
  File "/home/carlos/.local/lib/python3.6/site-packages/sklearn/model_selection/_validation.py", line 458, in _fit_and_score
    estimator.fit(X_train, y_train, **fit_params)
  File "/home/carlos/PycharmProjects/tfg/cross-validation/dnn_classifier.py", line 194, in fit
    for rnd_indices in np.array_split(rnd_idx, num_instances // self.batch_size):
  File "/home/carlos/.local/lib/python3.6/site-packages/numpy/lib/shape_base.py", line 469, in array_split
    raise ValueError('number sections must be larger than 0.')
ValueError: number sections must be larger than 0.

Process finished with exit code 1
```
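Digging into the traceback, the crash comes from `np.array_split(rnd_idx, num_instances // self.batch_size)` in dnn_classifier.py. The candidate that failed used `batch_size=128`, but with 3-fold cross-validation on the 150-row iris dataset each training fold only has about 100 rows, so the floor division gives 0 and `array_split` refuses zero sections. A minimal reproduction (the fold size of 100 here is my estimate, not taken from the code):

```python
import numpy as np

num_instances = 100  # approximate training-fold size: 150 iris rows, 3-fold CV
batch_size = 128     # the candidate that crashed used batch_size=128

rnd_idx = np.random.permutation(num_instances)
n_batches = num_instances // batch_size  # 100 // 128 == 0
try:
    np.array_split(rnd_idx, n_batches)   # sections must be > 0
except ValueError as err:
    print(err)  # the same "number sections must be larger than 0." error as above
```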

The DNN_Classifier code is here: dnn_classifier.py
The main script where I run everything: main.py
And I'm using the iris dataset: iris.xls
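If the small-fold explanation is right, one possible workaround (my own sketch, not the author's code) is to clamp the section count so a fold smaller than `batch_size` still produces a single full batch instead of zero sections:

```python
import numpy as np

def split_into_batches(num_instances, batch_size):
    """Shuffle indices and split them into mini-batches, guarding against
    folds smaller than batch_size (where the floor division would be 0)."""
    rnd_idx = np.random.permutation(num_instances)
    n_batches = max(1, num_instances // batch_size)  # never ask for 0 sections
    return np.array_split(rnd_idx, n_batches)
```

With that guard, `batch_size=128` on a ~100-row fold falls back to one full-batch update instead of crashing. Alternatively, removing 128 from the `batch_size` search space (or capping it at the fold size) avoids the situation entirely.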