The sketchfeat
project analyzes sketches produced by participants in the neurosketch
study conducted at Princeton (Winter 2016-2017).
Specifically, we measure the object-diagnostic information in each sketch using feature representations learned by deep convolutional neural networks pre-trained to categorize objects in photographs from the ImageNet database. Currently, we use VGG-19 as our primary visual encoding model.