

Torch-rnn is built on Torch, a set of scientific computing tools for the Lua programming language, which lets us take advantage of the GPU, using CUDA or OpenCL to accelerate the training process. Training can take a very long time, especially with large data sets, so GPU acceleration is a big plus. The details of how all this works are complex and quite technical, but in short: we train our neural network character by character, instead of with words like a Markov chain might. It learns which letters are most likely to come after others, and text is generated the same way. One might expect this to output random character soup, but the results are startlingly coherent, even more so than traditional Markov output.
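To build intuition for character-by-character generation, here is a minimal sketch of a first-order character model: it only counts which letter follows which, where a real RNN like Torch-rnn learns much longer-range patterns. The function names and corpus are my own illustration, not part of Torch-rnn.

```python
import random
from collections import defaultdict

def train_char_model(text):
    """Count how often each character follows each other character.

    This is the letter-to-letter intuition behind character-level
    models; an RNN learns far richer context than these pair counts.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, seed, length, rng=random):
    """Generate text one character at a time, sampling each next
    character in proportion to how often it followed the current one."""
    out = [seed]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no character ever followed this one in training
        chars = list(followers)
        weights = [followers[c] for c in chars]
        out.append(rng.choices(chars, weights=weights)[0])
    return "".join(out)

model = train_char_model("hello world, hello there")
print(generate(model, "h", 20))
```

Running this on a tiny corpus already produces word-like fragments; trained character by character on megabytes of text, an RNN produces output far more coherent than these simple pair statistics.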

There have been many recent examples of neural networks producing interesting content after the algorithm has been fed input data and "learned" from it. Many of these, Google's Deep Dream being the most well-covered, use and generate images, but what about text? This tutorial will show you how to install Torch-rnn, a set of recurrent neural network tools for character-based (i.e., single-letter) learning and output. It's written by Justin Johnson, who deserves a huge "thanks!" for this tool.

Update! For El Capitan and newer versions of OS X, you may run into issues installing Torch or Lua packages. Update two! Zach in the comments offers a really helpful fix if you're on Sierra. Update three! A lot has changed since 2016, so I'll be posting a new version of this tutorial soon. In the meantime, please see the comments for common sticking points and troubleshooting.
