This project is a GPU implementation of a neural network with two hidden layers, also known as a multilayer perceptron (MLP).
The implementation leverages cuBLAS for the matrix multiplications that dominate the network's computation. The program implements a neural network that can be trained with the backpropagation algorithm. The goal is to perform both the forward pass and backpropagation on the GPU, exploiting parallelism where possible to speed up both training and classification.
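To illustrate how cuBLAS handles the matrix-multiplication step, the sketch below computes one layer's pre-activations Z = W * X with a single SGEMM call. This is a minimal, hypothetical example rather than the repository's actual code; the function name `forwardLayer`, the pointer names, and the column-major storage convention are all assumptions.

```cpp
#include <cublas_v2.h>

// Hypothetical sketch: one layer's forward pass Z = W * X on the GPU.
//   d_W: out x in    weight matrix (device memory, column-major)
//   d_X: in  x batch input activations
//   d_Z: out x batch pre-activations (output)
// Bias addition and the activation function would follow, e.g. in a
// small custom kernel.
void forwardLayer(cublasHandle_t handle, const float* d_W, const float* d_X,
                  float* d_Z, int out, int in, int batch) {
    const float alpha = 1.0f, beta = 0.0f;
    // cuBLAS is column-major: C(m x n) = A(m x k) * B(k x n)
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                out, batch, in,      // m, n, k
                &alpha,
                d_W, out,            // A = W, lda = out
                d_X, in,             // B = X, ldb = in
                &beta,
                d_Z, out);           // C = Z, ldc = out
}
```

A `cublasHandle_t` is created once with `cublasCreate` and reused for every call; the backward pass can reuse the same SGEMM routine with transposed operands (`CUBLAS_OP_T`) to form the weight and delta products.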
The code is also available at https://github.com/MaxRobinson/CudaNN.
Usage:

```sh
./network.exe --archFile <archFile> --weights <optionalWeightsFile> --training <trainingDataFile> --groundTruth <gtFile> --evaluation <dataFileForEval> --output <networkWeightSaveFile> --alpha <.1> --epochs <200>
```
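For example, a training run might look like the following (the file names here are placeholders, not files shipped with the repository):

```sh
./network.exe --archFile arch.txt --training train.txt --groundTruth gt.txt \
              --output weights.txt --alpha .1 --epochs 200
```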
To quickly see how the program works, three convenience scripts are supplied:

- `run.sh` runs the network with the supplied arch file, loads weights from `weightsTest.txt`, loads the training data, trains, and then writes the weights back to `weightsTest.txt`.
- `runWithoutWeights.sh` runs the program without a specified weights file.
- `eval.sh` runs the program on only the evaluation data set, loading weights from the `weightsTest.txt` file.
Run `make` in the main directory.

NOTE: Ensure that the `nvcc` compiler is on your `PATH`. If it is not, run something like the following before running `make` (this assumes CUDA is installed, here under `/usr/local/cuda-8.0`):

```sh
export PATH=$PATH:/usr/local/cuda-8.0/bin
```