Bad Horse


Beneath the microscope, you contain galaxies.

More "computer art" · 9:33pm Mar 16th, 2016

"neural-style" is a program that takes two paintings, and repaints the first using the "style" of the second. I don't (yet) know how. It's explained in this paper.

See imgur gallery. You can make your own here (and post them here!). Source code here.
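
If you want to try it yourself, the invocation is roughly this (the flag names are the ones in the neural-style README; the image paths are just placeholders):

# repaint content.jpg using the style of style.jpg
th neural_style.lua -content_image content.jpg -style_image style.jpg -output_image out.png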

I began with this nice picture of Coco Pommel!

Unfortunately for all of you, fimfiction policy prevents me from posting the result. :moustache:

Now, to work on my Google Glasses app, "Undresser". All I need is some color photographs of nudes that are in the public domain.

#art #AI
Comments (32)

This is amazing!

That gallery was wonderful and terrifying.

I'm not seeing the first two pictures. I'm seeing that tiny icon of a hill and a cloud for each of them.

3811582 It comes and goes for me. They're both in the link "imgur gallery", so just look there.

I'm honestly afraid to ask what your second image was. :twilightoops: Still, this is really darn cool.

My understanding is that NNs are weighted to expect certain inputs via a training set, and the objective of Deep Dream was to take a sampling of noise data and have the NN fill in the rest. The image being edited is likely only contributing its most easily parsed surface features, while the NN does the heavy lifting with a probability estimate.

3811582
3811587
imgur has blacklisted the fimfiction site, so imgur links will not display here.

It's explained in a bit more detail here.

If you can't post the result, I'm going to assume you also can't post the second image you combined with Miss Pommel. Which has me very curious.

How the hell did they thread the vermicelli through the Vienna sausages?

I'm fairly fond of this mashup.

3811612
Protip - cook the pasta afterward. :twilightsmile:

3811582 3811607 Ah! So the imgur images appear in my browser because I had them cached. You should see them now.

I'm not sure how the website works. I've uploaded two photographs... what am I supposed to do next?

3812328
The install script (https://raw.githubusercontent.com/torch/distro/master/install-deps) is pretty straightforward. I think the only potentially tricky part would be installing OpenBLAS. You can find those instructions here: https://github.com/torch/dok/blob/master/docinstall/blas.md.

They have Docker cutorch images if you want to try something quickly: https://github.com/torch/torch7/wiki/Cheatsheet.
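
For what it's worth, building OpenBLAS from source is roughly the following (repo URL and make flags are from memory of that BLAS doc, so double-check against it):

# build and install OpenBLAS system-wide (installs under /opt/OpenBLAS by default)
git clone https://github.com/xianyi/OpenBLAS.git
cd OpenBLAS
make NO_AFFINITY=1 USE_OPENMP=1
sudo make install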

3812328 There are no install docs in the torch distribution. (I don't consider 3812396 "install docs". At all. Critical fail, torch.)
Their website does have install instructions.
They are wrong, however. Where it says

# in a terminal, run the commands
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch; bash install-deps;
./install.sh

it should say

# in a terminal, run the commands
git clone https://github.com/torch/distro.git ~/torch --recursive
cd ~/torch; bash install-deps;
./update.sh
./install.sh

Don't do that; see my next comment.

There are no docs on how to install as admin for everyone on the machine. There is no Cygwin install, no Makefile, no general Unix or Linux install. If your OS is not one of the 6 or so specific ones recognized (Debian, Ubuntu, I forget what else), it will not install and you'll have to hack it.

Ubuntu 15 has torch in Synaptic, but it will install torch3 instead of torch7, so who knows if that would work.
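
You can at least check which version the repo would hand you before installing (assuming the package really is named torch):

# shows the installed and candidate versions from the Ubuntu repos
apt-cache policy torch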

For kicks, a link on Torch vs. Theano vs. Caffe.

3812328 3812396 There are instructions for installation here and here. I used the second one. DON'T use the instructions on the torch website; they did not install the torch executables on my path, nor add its manifest to luarocks.
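
Whichever instructions you follow, a quick sanity check that the install actually took (assuming it's supposed to put the th launcher on your PATH and register its rocks with luarocks):

# th should resolve, and the nn rock should appear in luarocks' manifest
which th
luarocks list | grep nn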

3812396 3812328 3812193 The code is VERY CPU-intensive, so I wouldn't be surprised if you get your results back... never. You can install Ubuntu on something (if you use Windows, install VirtualBox and then install Ubuntu 15.10 on that), then follow these instructions to install neural-style on Ubuntu. (Do not create the directory DeepStyle where it tells you to; it will create its own directory.)

MAKE SURE your virtual machine has at least 6G of RAM. I'm running neural-style and it's taking over 4G, and swapping, making my VM run at maybe 1/50th its normal speed. It's been working on one picture for 10 hours and is about 1/4 done. Here's the result so far of applying the style of a nude Jessica Alba to Coco Pommel:

s25.postimg.org/t2fv5co7z/Coco_Alba_300.png

Basically it made her more photo-real and three-dimensional.
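
If you're stuck on CPU inside a VM like I am, shrinking the job is the main lever. Per the neural-style README, -gpu -1 is CPU mode; the sizes here are just examples:

# half the default output size and iterations means far less RAM and time
th neural_style.lua -gpu -1 -image_size 256 -num_iterations 500 -content_image coco.png -style_image style.jpg -output_image coco_styled.png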

3814285
3814203
3812193
style(X) * content(Y) = Z
s14.postimg.org/l3eqjv6wt/out.jpg

style(X) * content(Y) = Z
s15.postimg.org/vx2jq6wbb/out.jpg

This brings an unexpected amount of joy.

3815618
I'm not sure which ones you're talking about. I tried viewing them from a different computer and they all seem to work. The X's and Y's were intended to be links, not inline images. Only the right-hand sides were intended to be inline images.

3813979

For now I'm doing this as a user without root privileges, because why the hell should I do something in my home folder with root privileges?

I use Ubuntu, and Ubuntu is so security-conscious that most operations fail unless you sudo them. That's why internet instructions for Ubuntu start with "sudo" on every line. Adding your account to groups that are supposed to have permission to do things never works (e.g., you can't install packages except as root). The end result is that I have to sudo every script I run, and then every script installs everything with owner=root and no read permissions for anyone else, so whatever that script installed works only for root.

Eventually I end up logging in as root for everything, to avoid the hassle.
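
When a sudo'd script does leave a root-owned mess in my home directory, the cleanup is at least a one-liner (~/torch here is just an example path):

# hand the root-owned install back to your own account
sudo chown -R $USER:$USER ~/torch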


3815419 The images don't show for me on fimfiction, though the URLs are good. I see them if I type them in.

3815898

Re: Bad Horse's reason for logging in as root:

In the last 9 years, the only times I've had permission issues on Ubuntu (2 years Mint) were when X randomly chowned ~/.Xauthority to root:root. What are you doing with your system? :trixieshiftright:

As far as I know, most Linux distributions are built to be single-user, so most people don't consider a local install to be worth supporting. Android is the only Linux-based distribution I know of to support it, in some severely limited form. I'm still waiting for a git-based package manager...

I updated the images again. They seem to work from a different IP address and a different computer, so if it still doesn't work then I'm not sure what's wrong. I included the links to the images as well.

3816150 Ubuntu doesn't have an admin group. It has 'adm', but it isn't used for anything AFAIK. You have to type 'sudo' all the time. I shouldn't have to keep typing my password to install software when I'm using an "admin account".

And sometimes you just have to log in as root. Like, if you use VirtualBox and have a partition auto-mounted on the VM, it's mounted rwxrwx--- root,root,users, so only the root can even list its contents. You can't even get in with "sudo cd /media/foo".
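
Supposedly the textbook fix, if the auto-mounted share is group vboxsf (which is what the Guest Additions normally use), is to add yourself to that group and log back in:

# membership takes effect on the next login
sudo adduser $USER vboxsf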

3816372
The package manager thing is stupid. You're right, users shouldn't have to keep entering sudo credentials to install things. I think Android has a more sane model in this regard, though Android is only able to do this because it uses the concept of "Linux users" more intelligently and in a way that's probably incompatible with most Linux software.

VirtualBox is more understandable, though they really should introduce a group that can interact with auto-mounted partitions.

cd is a shell builtin, not a program, so running it with sudo should have no effect. I'm not entirely sure how sudo handles it, but at best that command would just spawn a new process that changes to that directory and then returns immediately, leaving you in the directory you started in.
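
If you actually need to poke around in there as root, the workaround is to run a root shell, or wrap the cd and the command together, instead of sudo-ing cd by itself:

# either get an interactive root shell...
sudo -i
# ...or run the cd plus the command you want as one root process
sudo sh -c 'cd /media/foo && ls'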

FYI

GPU memory usage from nvidia-smi:
| GPU  PID    Type  Process name           GPU Memory |
| 1    10339  C     /usr/local/bin/luajit  3331MiB    | # nn
| 2    10601  C     /usr/local/bin/luajit  1612MiB    | # cudnn
| 3    10229  C     /usr/local/bin/luajit  3293MiB    | # clnn

total run time
# nn (gpu 1)
real 3m24.724s
user 2m53.011s
sys 0m34.867s

# cudnn (gpu 2)
real 3m42.164s
user 3m6.551s
sys 0m38.797s

# clnn (gpu 3)
real 9m38.615s
user 6m17.890s
sys 3m13.846s

I reran it with nn and cudnn swapping the gpus and the results were roughly the same. The output images on all runs were pretty much the same. All of these were run with the default arguments.
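
Concretely, each run was just the default invocation timed under a different backend and GPU, roughly like this (image paths are placeholders; the flags are the standard neural-style ones):

# one terminal per backend/GPU, everything else left at defaults
time th neural_style.lua -backend nn -gpu 1 -content_image content.jpg -style_image style.jpg
time th neural_style.lua -backend cudnn -gpu 2 -content_image content.jpg -style_image style.jpg
time th neural_style.lua -backend clnn -gpu 3 -content_image content.jpg -style_image style.jpg
# plus one more cudnn run with -cudnn_autotune added, for the comparison below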

-cudnn_autotune is supposed to speed things up by having cuDNN benchmark and pick the fastest convolution algorithms (e.g., FFT-based ones). Runtime for cudnn with -cudnn_autotune is very slightly higher than without it, and memory usage increases to 2188MiB (about +25%), meaning the flag is worse than useless here. Output images are pretty much the same with and without the flag. It's not clear if this is because the cost of -cudnn_autotune outweighs the benefit or if cudnn just doesn't use FFT-based convolutions.

clnn is by far the slowest. nn and cudnn have comparable speeds, with nn being slightly faster. cudnn uses a lot less memory. Don't use -cudnn_autotune.

3817232 I can't use my GPU, because I'm running in a virtual machine. It takes about 10 hours to do 1000 iterations (the default) at 512x512 output size.

What do you mean by gpu 1, gpu 2, gpu 3? You have 3 GPUs? I thought one generally had either 1 GPU, or hundreds of them. And you can configure each of them separately? I'm unfamiliar with whatever you're doing.

Also, I don't know what nn, cudnn, and clnn are. I suppose cudnn uses CUDA and clnn uses OpenCL, but then I don't understand what nn by itself would imply. Are you in each case using just 1 processor? In which case I don't understand what CUDA or OpenCL would do, as their point is to distribute load among many GPUs.

3817276
I have 4 logical GPUs (2x Titan Zs, which show up to the host OS as 2 GPUs each). I'm not rich enough to summon a GPU cluster of hundreds for this :pinkiesad2:. Someday...

Most neural net applications (neural_style included) are built to run on a single GPU. Usually the cost of writing for multiple GPUs (and especially for more than a single motherboard can handle) far outweighs the cost of just waiting a little longer for results. Some tools (like DL4J and whatever Microsoft's one is) natively support multiple GPUs and machines, but I don't think they're that popular. The big companies that actually need multiple GPUs for a single application (like Google) I think just write their own logic to handle that on top of libraries like Torch and Caffe.

The configuration is all done on the side of the host OS by neural_style. neural_style can only run on one logical GPU at a time, so I just opened up three terminals and ran neural_style three times with each one pointing to a different logical GPU (-gpu 1, -gpu 2, -gpu 3). Each logical GPU has a slightly different maximum clock speed, but it wasn't enough to make a huge difference in these tests.

nn, cudnn, and clnn are the different backends that neural-style can run on. They all correspond to Torch-compatible libraries. I believe nn is entirely built off of Torch, which uses either CUDA or the CPU. cudnn uses NVIDIA's CUDA Deep Neural Network library. clnn uses OpenCL. All three were running on a GPU for the above tests.

In the GPU cases, Torch has a single processor that drives the GPUs. In each of my above cases, it's using 1 GPU plus 1 logical CPU processor to drive the GPU. For every test, the corresponding logical GPU was running at or near 100% cycle utilization.
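
The utilization numbers are just from watching nvidia-smi while the jobs ran, e.g.:

# snapshot of per-GPU utilization, memory, and processes; wrap in watch to refresh
nvidia-smi
watch -n 1 nvidia-smi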

3817514 I was interpreting "GPU" as "core". A CUDA graphics board (NVIDIA) has hundreds of cores on it.
