prophet 0.4.1
=============

A neural network implementation with a focus on cache-efficiency and sequential performance.
TODO-LIST
=========

This is a list of things planned for the upcoming updates:

 - Write a blog post about the type-state pattern (see the type-state sketch after this list) and publish a new version.
 - Implement unit tests that exercise the internals of layers.
 - Optimize `FullyConnectedLayer` so that all associated layers have equal sizes.
 - Split `FullyConnectedLayer` into `DenseLayer` and `ActivationLayer` (see the layer-split sketch after this list).
 - Add `InputLayer` as a new topology layer for initial topology building.
 - Make topology layers mirror the true layer types.
 - Improve the layer architecture to support convolutional layers.
 - Improve error handling: fewer panics, more `Result`s! (See the error-handling sketch after this list.)
 - Add parallelization based on `Producer`s from rayon, interfacing via ndarray.
 - Add generic float type support, e.g. `f32`, `f64`, and future `f16`/`f128` (see the generic-float sketch after this list).
 - Implement GPU-based parallelization.
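
The type-state item above can be illustrated with a small, self-contained sketch. The `TopologyBuilder` below and its `Empty`/`HasInput` marker states are hypothetical names invented for this example, not the crate's actual API; the point is only that a type parameter lets the compiler reject topologies built in the wrong order.

```rust
use std::marker::PhantomData;

// Hypothetical marker types tracking the builder's construction state.
struct Empty;
struct HasInput;

// Hypothetical topology builder; its type parameter records how far
// construction has progressed.
struct TopologyBuilder<State> {
    layer_sizes: Vec<usize>,
    _state: PhantomData<State>,
}

impl TopologyBuilder<Empty> {
    fn new() -> Self {
        TopologyBuilder { layer_sizes: Vec::new(), _state: PhantomData }
    }

    // Adding the input layer moves the builder into the `HasInput` state.
    fn input(mut self, size: usize) -> TopologyBuilder<HasInput> {
        self.layer_sizes.push(size);
        TopologyBuilder { layer_sizes: self.layer_sizes, _state: PhantomData }
    }
}

impl TopologyBuilder<HasInput> {
    // Further layers may only be added once an input layer exists.
    fn layer(mut self, size: usize) -> Self {
        self.layer_sizes.push(size);
        self
    }
}

fn main() {
    let topology = TopologyBuilder::new().input(2).layer(3).layer(1);
    // `TopologyBuilder::new().layer(3)` would not even compile, because the
    // `Empty` state has no `layer` method.
    println!("{:?}", topology.layer_sizes);
}
```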
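
One possible shape for the `DenseLayer`/`ActivationLayer` split is sketched below. The `Layer` trait and both struct definitions are assumptions made for illustration: the dense layer owns only the affine part (weights and biases), while the activation layer applies an element-wise function and carries no weights.

```rust
// Common interface both layer kinds could implement (illustrative only).
trait Layer {
    fn forward(&self, input: &[f32]) -> Vec<f32>;
}

// Affine transformation only: weights and biases, no activation.
struct DenseLayer {
    weights: Vec<Vec<f32>>, // one row of weights per output neuron
    biases: Vec<f32>,
}

// Element-wise activation only: no weights of its own.
struct ActivationLayer {
    activation: fn(f32) -> f32,
}

impl Layer for DenseLayer {
    fn forward(&self, input: &[f32]) -> Vec<f32> {
        self.weights
            .iter()
            .zip(&self.biases)
            .map(|(row, bias)| {
                row.iter().zip(input).map(|(w, x)| w * x).sum::<f32>() + bias
            })
            .collect()
    }
}

impl Layer for ActivationLayer {
    fn forward(&self, input: &[f32]) -> Vec<f32> {
        input.iter().cloned().map(self.activation).collect()
    }
}

fn main() {
    let dense = DenseLayer {
        weights: vec![vec![0.5, -0.5], vec![1.0, 1.0]],
        biases: vec![0.0, 1.0],
    };
    let relu = ActivationLayer { activation: |x: f32| if x > 0.0 { x } else { 0.0 } };
    let hidden = dense.forward(&[1.0, 2.0]);
    println!("{:?}", relu.forward(&hidden));
}
```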
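
For the error-handling item, one possible direction is a dedicated error type returned through `Result` instead of panicking on malformed input. Everything below (`ErrorKind`, `predict`) is a hypothetical sketch, not the crate's current API.

```rust
use std::fmt;

// Hypothetical error type describing ways a prediction can fail.
#[derive(Debug)]
enum ErrorKind {
    // The sample length does not match the network's input layer size.
    UnmatchedInputSize { expected: usize, actual: usize },
}

impl fmt::Display for ErrorKind {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        match *self {
            ErrorKind::UnmatchedInputSize { expected, actual } => {
                write!(f, "expected an input of length {}, got {}", expected, actual)
            }
        }
    }
}

// A fallible forward pass: returns an error value instead of panicking.
fn predict(input_size: usize, input: &[f32]) -> Result<Vec<f32>, ErrorKind> {
    if input.len() != input_size {
        return Err(ErrorKind::UnmatchedInputSize {
            expected: input_size,
            actual: input.len(),
        });
    }
    Ok(input.to_vec()) // placeholder for the actual forward pass
}

fn main() {
    match predict(3, &[1.0, 2.0]) {
        Ok(output) => println!("prediction: {:?}", output),
        Err(error) => println!("error: {}", error),
    }
}
```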
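
Generic float support could lean on a trait bound such as `Float` from the `num-traits` crate, as in the sketch below; both the choice of `num-traits` and this generic variant of the dense layer from the split sketch are assumptions for illustration only.

```rust
// Assumes a dependency on the `num-traits` crate for the `Float` trait.
use num_traits::Float;

// Illustrative layer that is generic over its floating point type.
struct DenseLayer<F: Float> {
    weights: Vec<F>,
    biases: Vec<F>,
}

impl<F: Float> DenseLayer<F> {
    fn zeros(inputs: usize, outputs: usize) -> Self {
        DenseLayer {
            // `Float` implies `Copy`, so `vec![..; n]` is available.
            weights: vec![F::zero(); inputs * outputs],
            biases: vec![F::zero(); outputs],
        }
    }
}

fn main() {
    // The same code path serves `f32` and `f64`; `f16`/`f128` would only need
    // a `Float` implementation once such primitive types exist.
    let layer_f32: DenseLayer<f32> = DenseLayer::zeros(3, 2);
    let layer_f64: DenseLayer<f64> = DenseLayer::zeros(3, 2);
    println!("{} {}", layer_f32.weights.len(), layer_f64.biases.len());
}
```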