Weekend project: reading through this interesting paper. The gist of the paper is that it is possible to extract the content of one image and the style from another, and produce a third with the content and style mixed. This is enabled by deep convolutional neural networks. Convolutional networks learn features in a hierarchy: lower layers learn features such as line segments, and each layer above learns higher-level abstractions, such as a nose, a face, or an entire scene. The example above is my own school picture combined with Picasso artwork, producing a fascinating painting of me.
The paper is available here: A Neural Algorithm of Artistic Style
The source code for it is available here: GitHub Source
The fastest way to get started on a Mac or PC is to use a Docker image with precompiled dependencies. It took about 10 hours to generate the masterpiece on the CPU; on a GPU-based machine it would take 15 or so minutes.
Get Docker and start the Docker terminal. Set the Docker VirtualBox VM memory to a higher number such as 8 GB: stop the virtual machine, then change its system settings in VirtualBox.
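If you are using Docker Toolbox, the memory change above can also be sketched from the command line instead of the VirtualBox UI. This assumes the usual docker-machine VM name `default`; adjust it to match your setup:

```shell
# Stop the VM, raise its memory to 8 GB, and restart it.
# "default" is the typical docker-machine VM name; yours may differ.
docker-machine stop default
VBoxManage modifyvm default --memory 8192
docker-machine start default
eval "$(docker-machine env default)"   # point the docker CLI at the restarted VM
```

The VM must be stopped before `VBoxManage modifyvm` will accept the memory change.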
docker pull kchentw/neural-style
docker run -i -t kchentw/neural-style /bin/bash
# To exit the shell without terminating the container, press CTRL-P then CTRL-Q
docker ps   # will show the container id
docker cp picaso.jpg <container id>:/tmp
docker cp kid.jpg <container id>:/tmp
docker attach <container id>
cd ~/neural-style
th neural_style.lua -gpu -1 -style_image /tmp/picaso.jpg -content_image /tmp/kid.jpg
Wait for about 10 or so hours and it will produce out.png, the Picasso masterpiece!
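The output file lives inside the container, so copy it back to the host with `docker cp`. A minimal sketch, assuming the run was started as root from ~/neural-style so that out.png lands in /root/neural-style (verify the actual path inside your container):

```shell
# <container id> is the id reported earlier by `docker ps`
docker cp <container id>:/root/neural-style/out.png .
open out.png   # view the result on macOS; use your image viewer elsewhere
```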