
CudaNN - Class Project for Intro to GPU Programming, JHU

This project is a GPU implementation of a neural network with two hidden layers, also known as a multilayer perceptron (MLP).

The implementation leverages cuBLAS for the matrix multiplications in the network. The completed program implements a neural network that can be trained with the backpropagation algorithm. The goal is to run both the forward pass and backpropagation on the GPU, exploiting parallelism where possible to speed up training and classification.
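To make the per-layer computation concrete, here is a plain C++ CPU sketch of what one MLP layer computes: y = sigmoid(W x + b). On the GPU, the inner loops below are what a cuBLAS gemm call replaces. This is an illustrative sketch, not code from the repository; the helper name `layerForward`, the row-major weight layout, and the sigmoid activation are assumptions.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Sigmoid activation, applied element-wise after each affine transform.
static double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One layer: y = sigmoid(W x + b), with W stored row-major as (out x in).
// Hypothetical helper for illustration; the GPU version replaces the
// nested loops (a matrix-vector product) with a single cuBLAS gemm call.
static std::vector<double> layerForward(const std::vector<double>& W,
                                        const std::vector<double>& b,
                                        const std::vector<double>& x,
                                        std::size_t out, std::size_t in) {
    std::vector<double> y(out);
    for (std::size_t i = 0; i < out; ++i) {
        double acc = b[i];
        for (std::size_t j = 0; j < in; ++j) {
            acc += W[i * in + j] * x[j];
        }
        y[i] = sigmoid(acc);
    }
    return y;
}
```

A full forward pass through the two-hidden-layer network is just this function chained three times: input -> hidden1 -> hidden2 -> output, feeding each layer's `y` in as the next layer's `x`.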

Code also at https://github.com/MaxRobinson/CudaNN

Usage

Usage is: ./network.exe --archFile <> --weights <optional> --training <trainingDataFile> --groundTruth <gtFile> --evaluation <dataFileForEval> --output <networkWeightSaveFile> --alpha <.1> --epochs <200>

To quickly see how the program works, three convenience scripts are supplied.

  • run.sh runs the network with the supplied arch file, loads weights from weightsTest.txt, loads the training data, trains, and then writes the updated weights back to weightsTest.txt.
  • runWithoutWeights.sh runs the program without any specified weights file.
  • eval.sh runs the program with only the evaluation set of data and loads weights from the weightsTest.txt file.

Compilation

Run make in the main directory. NOTE: Ensure that the nvcc compiler is on your PATH.
If it is not (and assuming CUDA is installed), run something like the following before running make: export PATH=$PATH:/usr/local/cuda-8.0/bin
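For reference, a build rule of roughly this shape is all that make needs to invoke. This is a hypothetical sketch, not the repository's actual Makefile: the assumption that all sources are .cu files, the target name network.exe (taken from the usage string above), and the -lcublas link flag (implied by the cuBLAS dependency) may differ from the real build.

```makefile
# Hypothetical minimal Makefile; the repository's actual Makefile may differ.
NVCC ?= nvcc

network.exe: $(wildcard *.cu)
	$(NVCC) -o $@ $^ -lcublas
```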
