collenchyma 0.0.3

fast, parallel, backend-agnostic computation on any hardware

Collenchyma

Collenchyma provides a common Rust interface for running operations on machines with or without Cuda or OpenCL support, making the deployment of high-performance code as easy and platform-agnostic as that of ordinary code.

Collenchyma abstracts over the different computation languages (Native, OpenCL, Cuda) and lets you run high-performance code, thanks to easy parallelization, on servers, desktops, or mobile devices without needing to adapt your code to the machine you deploy to. Collenchyma does not require OpenCL or Cuda on the machine and automatically falls back to the native host CPU, making your application highly flexible and fast to build.

Collenchyma was started at Autumn to support the Machine Intelligence Framework Leaf with backend-agnostic, state-of-the-art performance.

  • __Parallelizing Performance__ Collenchyma makes it easy to parallelize computations on your machine, putting all the available cores of your CPUs/GPUs to use. Collenchyma also provides optimized implementations of the most popular operations, such as BLAS routines, that you can use right away to speed up your application. Highly-optimized computation libraries like OpenBLAS and cuDNN can be dropped in.

  • __Easily Extensible__ Writing custom operations for GPU execution becomes easier with Collenchyma, as it already takes care of Framework peculiarities, memory management, and other overhead. Extending the Backend therefore becomes a straightforward process of defining the kernels and mounting them on the Backend.

  • __Butter-smooth Builds__ As Collenchyma does not require the installation of various frameworks and libraries, it will not add significantly to the build time of your application. Collenchyma checks at run-time if these frameworks can be used and gracefully falls back to the standard, native host CPU if they are not. No long and painful build procedures for you or your users.
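The run-time detection and graceful fallback described above can be sketched in plain Rust. This is an illustrative pattern only, not Collenchyma's actual API; the names `Framework`, `select_framework`, and the probing functions are hypothetical stand-ins.

```rust
// Illustrative sketch of run-time framework detection with graceful
// fallback to the native host CPU. All names are hypothetical, not
// Collenchyma's real API.

#[derive(Debug, PartialEq)]
enum Framework {
    Cuda,
    OpenCL,
    Native,
}

// Stand-ins for real probing logic (e.g. trying to load libcuda/libOpenCL).
fn cuda_available() -> bool { false }
fn opencl_available() -> bool { false }

/// Pick the best framework available at run time, falling back to Native.
fn select_framework() -> Framework {
    if cuda_available() {
        Framework::Cuda
    } else if opencl_available() {
        Framework::OpenCL
    } else {
        Framework::Native
    }
}

fn main() {
    let fw = select_framework();
    println!("selected framework: {:?}", fw);
}
```

Because the probing happens at run time rather than at build time, the application compiles and links without any GPU SDK present.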

For more information, see the documentation.

Disclaimer: Collenchyma is currently in an early and heavy stage of development. If you experience any bugs that are not due to not-yet-implemented features, feel free to create an issue.

Getting Started

If you're using Cargo, just add Collenchyma to your Cargo.toml:

```toml
[dependencies]
collenchyma = "0.0.3"
```

If you're using Cargo Edit, you can call:

```sh
$ cargo add collenchyma
```

Examples

Backend with a custom-defined Framework and Device:

```rust
extern crate collenchyma as co;
use co::framework::IFramework;
use co::backend::{Backend, BackendConfig};
use co::frameworks::Native;

fn main() {
    let framework = Native::new(); // Initialize the Framework.
    // Obtain a list of available hardware for that Framework.
    let hardwares = framework.hardwares();
    // Create the custom Backend by providing a Framework and one or many Hardwares.
    let backend_config = BackendConfig::new(framework, hardwares);
    let backend = Backend::new(backend_config);
    // You can now execute all the operations available, e.g.
    // backend.dot(x, y);
}
```
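The Framework → Hardware → Backend shape used above can be illustrated with a small self-contained sketch. All names here (`IFramework`, `Native`, `Backend`) mirror the example but are simplified stand-ins, not the crate's real types.

```rust
// Self-contained sketch of the Framework -> Hardware -> Backend shape;
// simplified stand-ins, not Collenchyma's actual types.

trait IFramework {
    fn name(&self) -> &'static str;
    // In the sketch, hardware is just a list of device names.
    fn hardwares(&self) -> Vec<String>;
}

struct Native;

impl IFramework for Native {
    fn name(&self) -> &'static str { "NATIVE" }
    fn hardwares(&self) -> Vec<String> {
        vec!["Host CPU".to_string()]
    }
}

// A Backend binds a Framework together with the hardware it will use.
struct Backend<F: IFramework> {
    framework: F,
    hardwares: Vec<String>,
}

impl<F: IFramework> Backend<F> {
    fn new(framework: F) -> Self {
        let hardwares = framework.hardwares();
        Backend { framework, hardwares }
    }
}

fn main() {
    let backend = Backend::new(Native);
    println!("{} on {:?}", backend.framework.name(), backend.hardwares);
}
```

The point of the split is that operations are written once against the Backend, while the Framework decides where and how they actually run.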

Machine-agnostic Backend:

```rust
extern crate collenchyma as co;
use co::framework::IFramework;
use co::backend::{Backend, BackendConfig};
use co::frameworks::Native;

fn main() {
    // Not yet implemented.
    // No need to provide a Backend Configuration.
    let backend = Backend::new(None);
    // You can now execute all the operations available, e.g.
    // backend.dot(x, y);
}
```
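On the native fallback, an operation like the commented-out `backend.dot(x, y)` ultimately dispatches to a BLAS level-1 routine such as `sdot`/`ddot`. A plain-Rust sketch of that routine (not the crate's API) looks like this:

```rust
/// Plain-Rust dot product: the kind of BLAS level-1 routine a
/// native backend would dispatch to for `backend.dot(x, y)`.
fn dot(x: &[f64], y: &[f64]) -> f64 {
    assert_eq!(x.len(), y.len(), "vectors must have equal length");
    x.iter().zip(y.iter()).map(|(a, b)| a * b).sum()
}

fn main() {
    let x = [1.0, 2.0, 3.0];
    let y = [4.0, 5.0, 6.0];
    // 1*4 + 2*5 + 3*6 = 32
    println!("dot = {}", dot(&x, &y));
}
```

On a GPU backend the same call would instead be routed to cuBLAS or an OpenCL kernel; the caller's code does not change.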

Contributing

Want to contribute? Awesome! We have instructions to help you get started contributing code or documentation. We also have high-priority issues that we could use your help with:

  • Finish the OpenCL implementation. #2
  • Finish the Cuda implementation. #4
  • Make the Backend machine-agnostic #5
  • Finish BLAS library for Native, OpenCL, Cuda #6

Our collaboration culture is mostly real-time and happens here on GitHub and in the Collenchyma Gitter channel. You can also reach out to the maintainers {@MJ, @hobofan}.

License

Collenchyma is released under the MIT License.