What do Teachable Machine models see when people are composited into different backgrounds?

greenscreen-machine

playing with training sets and models trained from a webcam in Teachable Machine

things:

  • find the person and swap the background. how does the model's prediction change?
  • does using background swapping as data augmentation (eg, inside/outside, a different wall in the room) improve generalization?
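The repo's actual compositing code isn't shown here, but the background-swap step can be sketched as follows, assuming flat RGBA pixel buffers (as in canvas `ImageData.data`) and a per-pixel person mask such as the one a segmentation model like BodyPix produces. The function and variable names are illustrative, not from the repo:

```javascript
// Sketch of the background-swap idea: wherever the mask marks a
// "person" pixel, copy the person's RGBA values over the background.
// person, background: flat RGBA buffers (4 bytes per pixel);
// mask: one entry per pixel, 1 = person, 0 = background.
function compositeOntoBackground(person, background, mask) {
  const out = Uint8ClampedArray.from(background);
  for (let i = 0; i < mask.length; i++) {
    if (mask[i]) {
      const p = i * 4;
      out[p] = person[p];         // R
      out[p + 1] = person[p + 1]; // G
      out[p + 2] = person[p + 2]; // B
      out[p + 3] = person[p + 3]; // A
    }
  }
  return out;
}

// tiny 2-pixel example: first pixel is person (red), second is not
const person = Uint8ClampedArray.from([255, 0, 0, 255, 0, 0, 0, 255]);
const background = Uint8ClampedArray.from([0, 255, 0, 255, 0, 0, 255, 255]);
const mask = [1, 0];
const result = compositeOntoBackground(person, background, mask);
// result[0] === 255 (person red kept), result[6] === 255 (background blue kept)
```

In the browser you would get these buffers from `ctx.getImageData(...)` on canvases holding the webcam frame and the replacement background.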

screenshots

how it works

[screenshot: intro]

greenscreen the person onto different backgrounds:

[screenshot: greenscreen]

take a model you trained with the webcam:

[screenshot: training]

add your model's predictions to the greenscreened images:

[screenshot: all-waving]

the model seems to see more 'waving' than it should...

[screenshot: mixed]
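Teachable Machine image models are typically loaded in the browser with the `@teachablemachine/image` library (`tmImage.load(modelURL, metadataURL)`), and `model.predict(imageOrCanvas)` resolves to an array of `{ className, probability }` entries. A small helper for comparing the top prediction across greenscreened backgrounds, assuming only that output shape (the helper itself is illustrative, not from the repo):

```javascript
// Pick the highest-probability class from a Teachable Machine-style
// prediction array, so results can be compared across backgrounds.
function topClass(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// example prediction array in the shape model.predict() returns
const onBeach = [
  { className: 'waving', probability: 0.93 },
  { className: 'not-waving', probability: 0.07 },
];
const top = topClass(onBeach);
// top.className === 'waving'
```

Logging `topClass(...)` for each background makes the "more 'waving' than it should" effect easy to count: tally how often the top class flips when only the background changes.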