The sketchfeat project aims to model the visual features of sketches produced by participants in our neurosketch study, conducted at Princeton (Winter 2016-2017).
We generally use feature representations learned by deep convolutional neural networks pre-trained to categorize object photographs from the ImageNet database.
Currently, we are using VGG-19 as our primary visual encoding model.
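For illustration, here is a minimal sketch of how VGG-19 features could be extracted for a single sketch image using PyTorch/torchvision. This is an assumption-laden example, not the project's actual pipeline: the layer index, preprocessing choices, and file path are placeholders.

```python
import torch
from PIL import Image
from torchvision import models, transforms

# Hypothetical example: VGG-19 pre-trained on ImageNet via torchvision.
vgg19 = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
vgg19.eval()

# Standard ImageNet preprocessing; sketches are converted to RGB first.
preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def extract_features(image_path, last_layer=36):
    """Return flattened activations from vgg19.features[:last_layer+1]
    for one sketch image (index 36 is the final pooling layer in
    torchvision's VGG-19); the chosen layer is an assumption."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)                  # (1, 3, 224, 224)
    with torch.no_grad():
        feats = vgg19.features[:last_layer + 1](x)
    return feats.flatten(start_dim=1)                 # (1, n_features)

# Usage (hypothetical path):
# features = extract_features("sketches/participant01_trial03.png")
```

In practice, features from different convolutional layers capture progressively more abstract properties of the sketch, so the layer used should be chosen to match the analysis at hand.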