A neural network library that uses no heap allocations and no standard library at all; it does all of its work on the stack.
This has some advantages:
Embedded devices without an operating system can run at least simple neural networks.
Since the whole network layout must be known at compile time, the dimensions of inputs and outputs are checked at compile time as well.
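To illustrate the idea, here is a hedged sketch (not this library's actual API) of how Rust's const generics let the compiler reject mismatched layer dimensions; the `Dense` type and its fields are assumptions for illustration:

```rust
// Hypothetical dense layer whose input/output sizes are type parameters,
// so a dimension mismatch is a compile error rather than a runtime panic.
struct Dense<const IN: usize, const OUT: usize> {
    weights: [[f32; IN]; OUT],
    biases: [f32; OUT],
}

impl<const IN: usize, const OUT: usize> Dense<IN, OUT> {
    // Forward pass: out = W * x + b, computed entirely on the stack.
    fn forward(&self, input: &[f32; IN]) -> [f32; OUT] {
        let mut out = self.biases;
        for (o, row) in out.iter_mut().zip(self.weights.iter()) {
            for (w, x) in row.iter().zip(input.iter()) {
                *o += w * x;
            }
        }
        out
    }
}

fn main() {
    let layer = Dense::<3, 2> { weights: [[0.5; 3]; 2], biases: [0.0; 2] };
    let y = layer.forward(&[1.0, 1.0, 1.0]);
    println!("{:?}", y);
    // layer.forward(&[1.0, 1.0]); // would not compile: expected `[f32; 3]`
}
```

Because the sizes are part of the type, feeding a 2-element input into a 3-input layer is caught by the compiler, which is the kind of guarantee the library's compile-time layout provides.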
No need for OpenCL or CUDA; it just runs on your CPU, or basically any other CPU for that matter.
Because the compiler sees the whole network, it may be able to apply some optimizations. That said, the library is currently not very well optimized.
Also it was a fun challenge and actually worked out :)
Check the examples directory for some simple networks to get started.
The library also provides activations for use in layers, for example a Layer implementing a SoftMax activation function.
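As a rough sketch of what such a layer computes (not this library's API; it uses `std`'s `f32::exp` for brevity, where a `no_std` build would need something like the `libm` crate), here is a numerically stable softmax over a fixed-size array:

```rust
// Numerically stable softmax: subtract the maximum before exponentiating
// so large inputs do not overflow, then normalize so the outputs sum to 1.
fn softmax<const N: usize>(x: &[f32; N]) -> [f32; N] {
    let max = x.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    let mut out = [0.0f32; N];
    let mut sum = 0.0f32;
    for (o, v) in out.iter_mut().zip(x.iter()) {
        *o = (v - max).exp();
        sum += *o;
    }
    for o in out.iter_mut() {
        *o /= sum;
    }
    out
}

fn main() {
    let probs = softmax(&[1.0, 2.0, 3.0]);
    // The outputs form a probability distribution that preserves the
    // ordering of the inputs.
    println!("{:?}", probs);
}
```

Note that everything here lives in fixed-size arrays on the stack, in keeping with the library's no-allocation design.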