error in fully_decoded: ValueError: only one element tensors can be converted to Python scalars
This happens when enabling `fully_decoded` in `get_preds`. I don't know where this comes from; any ideas where to start looking?
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-62-6fffcf6bb38e> in <module>
----> 1 foo = learn.get_preds(fully_decoded=True)
~/.local/lib/python3.6/site-packages/fastinference/inference/inference.py in get_preds(x, ds_idx, dl, raw_outs, decoded_loss, fully_decoded, **kwargs)
70 else:
71 outs.insert(0, raw)
---> 72 if fully_decoded: outs = _fully_decode(x.dls, inps, outs, dec_out, is_multi)
73 if decoded_loss: outs = _decode_loss(x.dls.vocab, dec_out, outs)
74 return outs
~/.local/lib/python3.6/site-packages/fastinference/inference/inference.py in _fully_decode(dl, inps, outs, dec_out, is_multi)
14 inps[i] = torch.cat(inps[i], dim=0)
15 else:
---> 16 inps = tensor(*inps[0])
17 b = (*tuplify(inps), *tuplify(dec_out))
18 try:
/usr/local/lib/python3.6/dist-packages/fastai2/torch_core.py in tensor(x, *rest, **kwargs)
108 # if isinstance(x, (tuple,list)) and len(x)==0: return tensor(0)
109 res = (x if isinstance(x, Tensor)
--> 110 else torch.tensor(x, **kwargs) if isinstance(x, (tuple,list))
111 else _array2tensor(x) if isinstance(x, ndarray)
112 else as_tensor(x.values, **kwargs) if isinstance(x, (pd.Series, pd.DataFrame))
ValueError: only one element tensors can be converted to Python scalars
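For context (my own note, not from the thread): `torch.tensor` raises exactly this error when given a tuple or list of multi-element tensors, which is what `tensor(*inps[0])` ends up passing it when the gathered inputs are per-sample tensors rather than scalars. A minimal sketch:

```python
import torch

xs = [torch.randn(3), torch.randn(3)]  # per-sample, multi-element tensors

# torch.tensor tries to turn each element into a Python scalar and fails:
# torch.tensor(xs)  # ValueError: only one element tensors can be converted to Python scalars

torch.stack(xs)  # stacking works instead, producing a (2, 3) tensor
```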
What subject is it (Vision, NLP)? And is it classification or regression? If I can get a minimal idea of how to reproduce this, I can look into it.
Time Series Classification using this repo:
https://github.com/ai-fast-track/timeseries
No worries, I'll try running `get_preds` on the example in the README there, and if it fails, then it's clear that it must be something related to the transforms defined there for time series.
That would explain it. This library is designed to work within the main fastai library only, as it's quite specific in what it needs, so there will absolutely be bugs around other implementations, specifically with `fully_decoded` (I imagine). Do any of the other methods (i.e. `fully_decoded` set to False, etc.) work?
At the least, the path without decoding should translate over to other implementations; it just gets sticky when it comes to decodes. (If not, then I've done something wrong and I'll look into how to fix that.)
Yes, it works perfectly fine if I leave `fully_decoded` as False!
If you can give me a minimal working example, I can try to find a solution to our `fully_decoded` troubles, and if it generalizes, I'll include it. (I.e. are there troubles with any of the tutorial notebooks trying that?)
Yes, the easiest way to reproduce this issue is to run the index.ipynb of the repo in Colab, install fastinference, load it, and call `learn.get_preds(fully_decoded=True)`.
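For reference, the whole reproduction amounts to something like this (a sketch; the import path is assumed from the traceback, and `learn` is the learner built by the repo's index.ipynb):

```python
# In Colab, after running the training cells of index.ipynb from
# https://github.com/ai-fast-track/timeseries:
# !pip install fastinference

from fastinference.inference import *  # import path assumed from the traceback above

preds = learn.get_preds(fully_decoded=True)  # raises the ValueError shown above
```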
Awesome! I'll take a look at it when I can, thanks!
Interestingly, `learn.predict` does work, and it calls `get_preds(fully_decoded=True)` under the hood...
That means it's probably how decode stacks the tensor, I think.
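My reading of the two code paths (not verified against the source): `predict` runs a single sample through the pipeline, so decode only ever sees one item, while `get_preds(fully_decoded=True)` decodes inputs gathered across every batch:

```python
# Decodes one sample at a time, so shapes line up:
learn.predict(one_item)  # `one_item` is a placeholder for a raw input

# Gathers inputs from all batches and then decodes the stacked result,
# which is where the mismatch surfaced:
learn.get_preds(fully_decoded=True)
```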
Just letting you know @vrodriguezf I'll be looking at this today.
Alright, so that was an adventure. The newest version will fix the issue; however, I am not pushing an upgrade until certain fastai bits are in place. For now, build from GitHub with `!pip install git+https://github.com/muellerzr/fastinference.git`.
Wow, thank you @muellerzr , you are amazing!!!
Let me know if you still have the issue @vrodriguezf. I believe it should be fixed (I had to completely rewrite the input gathering, oopsie).
Thanks! I did `!pip install -q git+https://github.com/muellerzr/fastinference.git` in the Colab notebook of the timeseries repo, called `get_preds(fully_decoded=True)`, and now I get:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-47-6fffcf6bb38e> in <module>()
----> 1 foo = learn.get_preds(fully_decoded=True)
13 frames
/usr/local/lib/python3.6/dist-packages/fastai2/torch_core.py in _f(self, *args, **kwargs)
278 def _f(self, *args, **kwargs):
279 cls = self.__class__
--> 280 res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
281 return retain_type(res, self, copy_meta=True)
282 return _f
RuntimeError: The size of tensor a (64) must match the size of tensor b (8) at non-singleton dimension 0
However, on my own data using that repo, it works! It only returns the decoded xs, though. Is this expected to also return the ys (targets)?
No, it just returns the decoded x's, as doing both is a bit redundant (i.e. the y's are already decoded; this was a design change by me). Can you copy the full stack trace for me to read? That little tiny bit tells me nothing :)
Sure!
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-6fffcf6bb38e> in <module>()
----> 1 foo = learn.get_preds(fully_decoded=True)
13 frames
/usr/local/lib/python3.6/dist-packages/fastinference/inference/inference.py in get_preds(x, ds_idx, dl, raw_outs, decoded_loss, fully_decoded, **kwargs)
75 else:
76 outs.insert(0, raw)
---> 77 if fully_decoded: outs = _fully_decode(x.dls[0], inps, outs)
78 if decoded_loss: outs = _decode_loss(x.dls.vocab, dec_out, outs)
79 return outs
/usr/local/lib/python3.6/dist-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
97 if not f: return args[0]
98 if self.inst is not None: f = MethodType(f, self.inst)
---> 99 return f(*args, **kwargs)
100
101 def __get__(self, inst, owner):
/usr/local/lib/python3.6/dist-packages/fastinference/inference/inference.py in _fully_decode(dl, inps, outs)
11 def _fully_decode(dl:TfmdDL, inps, outs):
12 "Attempt to fully decode the `inp"
---> 13 inps = dl.decode(inps)
14 dec = []
15 for d in inps:
/usr/local/lib/python3.6/dist-packages/fastai2/data/core.py in decode(self, b)
75 if isinstance(f,Pipeline): f.split_idx=split_idx
76
---> 77 def decode(self, b): return self.before_batch.decode(to_cpu(self.after_batch.decode(self._retain_dl(b))))
78 def decode_batch(self, b, max_n=9, full=True): return self._decode_batch(self.decode(b), max_n, full)
79
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in decode(self, o, full)
195
196 def decode (self, o, full=True):
--> 197 if full: return compose_tfms(o, tfms=self.fs, is_enc=False, reverse=True, split_idx=self.split_idx)
198 #Not full means we decode up to the point the item knows how to show itself.
199 for f in reversed(self.fs):
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in compose_tfms(x, tfms, is_enc, reverse, **kwargs)
140 for f in tfms:
141 if not is_enc: f = f.decode
--> 142 x = f(x, **kwargs)
143 return x
144
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in decode(self, x, **kwargs)
71 def name(self): return getattr(self, '_name', _get_name(self))
72 def __call__(self, x, **kwargs): return self._call('encodes', x, **kwargs)
---> 73 def decode (self, x, **kwargs): return self._call('decodes', x, **kwargs)
74 def __repr__(self): return f'{self.name}:\n{self.encodes} {self.decodes}'
75
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in _call(self, fn, x, split_idx, **kwargs)
80 def _call(self, fn, x, split_idx=None, **kwargs):
81 if split_idx!=self.split_idx and self.split_idx is not None: return x
---> 82 return self._do_call(getattr(self, fn), x, **kwargs)
83
84 def _do_call(self, f, x, **kwargs):
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in _do_call(self, f, x, **kwargs)
87 ret = f.returns_none(x) if hasattr(f,'returns_none') else None
88 return retain_type(f(x, **kwargs), x, ret)
---> 89 res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
90 return retain_type(res, x)
91
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in <genexpr>(.0)
87 ret = f.returns_none(x) if hasattr(f,'returns_none') else None
88 return retain_type(f(x, **kwargs), x, ret)
---> 89 res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
90 return retain_type(res, x)
91
/usr/local/lib/python3.6/dist-packages/fastcore/transform.py in _do_call(self, f, x, **kwargs)
86 if f is None: return x
87 ret = f.returns_none(x) if hasattr(f,'returns_none') else None
---> 88 return retain_type(f(x, **kwargs), x, ret)
89 res = tuple(self._do_call(f, x_, **kwargs) for x_ in x)
90 return retain_type(res, x)
/usr/local/lib/python3.6/dist-packages/fastcore/dispatch.py in __call__(self, *args, **kwargs)
97 if not f: return args[0]
98 if self.inst is not None: f = MethodType(f, self.inst)
---> 99 return f(*args, **kwargs)
100
101 def __get__(self, inst, owner):
/usr/local/lib/python3.6/dist-packages/timeseries/core.py in decodes(self, x)
191 def decodes(self, x:TensorTS):
192 f = to_cpu if x.device.type=='cpu' else noop
--> 193 x_orig = ((x-self.scale_range[0])/(self.scale_range[1] - self.scale_range[0]))*(self.max - self.min) + self.min
194 return f(x_orig)
195
/usr/local/lib/python3.6/dist-packages/fastai2/torch_core.py in _f(self, *args, **kwargs)
278 def _f(self, *args, **kwargs):
279 cls = self.__class__
--> 280 res = getattr(super(TensorBase, self), fn)(*args, **kwargs)
281 return retain_type(res, self, copy_meta=True)
282 return _f
RuntimeError: The size of tensor a (64) must match the size of tensor b (8) at non-singleton dimension 0
Edit: This is an issue on their end, not ours; if you look at the second-to-last call, there's a problem in the decodes in timeseries.
Essentially, that means their decodes aren't compatible with it across multiple batches, I believe. I'd investigate more, but my time is limited trying to get fastinference_pytorch running. In the meantime, if you want to debug on your own: the way I debug `fully_decoded` is to output the inputs as the return value from `get_preds` and work from there. (I'll leave this open in case you do :) )
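To make that concrete (my illustration; the shapes below are assumed, only the 64 vs 8 comes from the error): the decode in `timeseries/core.py` broadcasts `self.min`/`self.max` against `x`, and that broadcast fails once `x` is the full set of inputs gathered across batches rather than a single batch:

```python
import torch

x    = torch.randn(64, 1, 140)  # inputs concatenated across batches by get_preds
mins = torch.randn(8, 1, 1)     # per-batch statistic kept by the transform (assumed shape)

x - mins  # RuntimeError: The size of tensor a (64) must match the size of
          # tensor b (8) at non-singleton dimension 0
```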
Mmm, I have to think about it. In my own problem it works, and I just extended the class `TensorTS`, but didn't change the decode:
class ToTensorMotion(ItemTransform):
    def encodes(self, x): return TensorMotion(x)
and the `tfms` pipeline:
[ItemGetter(0), ToTensorTS(), ToTensorMotion()]
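Put together, the extension presumably looks something like this (a sketch; the import locations for `ItemGetter`, `ItemTransform`, and the `timeseries` classes are assumed):

```python
from fastcore.transform import ItemTransform
from fastcore.basics import ItemGetter             # import location assumed
from timeseries.core import TensorTS, ToTensorTS   # from ai-fast-track/timeseries (assumed)

class TensorMotion(TensorTS):
    "Semantic subclass of `TensorTS`; encode/decode behaviour is inherited unchanged."
    pass

class ToTensorMotion(ItemTransform):
    def encodes(self, x): return TensorMotion(x)

tfms = [ItemGetter(0), ToTensorTS(), ToTensorMotion()]
```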
Don't worry about this any longer, I'm happy with it as it is now ;)
Got it, okay :) Glad we got it sorted out!