Style transfer in the pixel domain has been explored with networks that map the style of one image onto the content of another. This work extends style transfer to 3D meshes and point clouds. First, we handle 3D-3D style transfer using 3DSNet. Next, we synthesize local geometric texture using two techniques: a mesh encoder-decoder network based on MeshCNN and a hierarchical generator-discriminator model. Lastly, we colorize the 3D mesh with a style image, generating a novel shape with the user's choice of color and texture style. We present quantitative and qualitative experimental results on ShapeNet data.
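As a rough orientation, the sketch below shows how the three stages described above compose into one pipeline. It is a minimal, hypothetical outline: every function name is a placeholder introduced here for illustration and does not correspond to the actual 3DSNet or MeshCNN APIs.

```python
# Hypothetical sketch of the three-stage pipeline; none of these
# function names are real 3DSNet or MeshCNN APIs.

def transfer_3d_style(content_shape, style_shape):
    """Stage 1: 3D-3D style transfer (3DSNet-style network), producing
    a coarse shape combining one input's content with the other's style."""
    ...

def synthesize_geometric_texture(coarse_shape):
    """Stage 2: synthesize local geometric texture, e.g. via a
    MeshCNN-based encoder-decoder or a hierarchical
    generator-discriminator model."""
    ...

def colorize_with_style(textured_mesh, style_image):
    """Stage 3: colorize the mesh using a 2D style image."""
    ...

def stylize(content_mesh, style_mesh, style_image):
    """Compose the three stages: coarse style transfer, then local
    geometric texture, then image-driven colorization."""
    coarse = transfer_3d_style(content_mesh, style_mesh)
    textured = synthesize_geometric_texture(coarse)
    return colorize_with_style(textured, style_image)
```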