collenchyma 0.0.1

Fast and parallel CPU/GPU computation through a unified API

Collenchyma

Collenchyma is a framework for fast, parallel and hardware-agnostic computation, similar to Arrayfire.

Collenchyma was started at Autumn to support fast and parallel computation for the Machine Intelligence Framework Leaf on various backends such as OpenCL, CUDA, or native CPU. Collenchyma is written in Rust, which allows for a modular and easily extensible architecture, and it has no hard dependency on any drivers or libraries, which makes it easy to use and avoids long, painful build processes.

Collenchyma comes with a super simple API, which allows you to write code once and execute it on any device (CPU, GPU) without having to care about the specific computation language (OpenCL, CUDA, native CPU) or backend.
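To give a feel for this, here is a minimal sketch of setting up a backend. The module paths and type names (`IFramework`, `Backend`, `BackendConfig`, `Native`) are taken from later Collenchyma releases and may not match this early 0.0.1 API exactly, so treat it as an illustration of the idea rather than a verbatim example:

```rust
extern crate collenchyma as co;

use co::framework::IFramework;
use co::backend::{Backend, BackendConfig};
use co::frameworks::Native; // swap in a CUDA or OpenCL framework to target a GPU

fn main() {
    // Pick a computation framework; the calling code stays the same
    // whether it ends up running on the CPU or on a GPU.
    // (Names and signatures here are assumptions based on later releases.)
    let framework = Native::new();

    // Ask the framework which hardware it can see and build a backend from it.
    let hardwares = framework.hardwares();
    let backend_config = BackendConfig::new(framework, hardwares);
    let backend = Backend::new(backend_config);

    // `backend` can now be handed to operations, which execute on the chosen device.
}
```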

For more information, see the documentation.

Disclaimer: Collenchyma is currently in a very early and heavy stage of development. If you are experiencing bugs that are not due to features that have not yet been implemented, feel free to create an issue.

Getting Started

If you're using Cargo, just add Collenchyma to your Cargo.toml:

[dependencies]
collenchyma = "0.0.1"

If you're using Cargo Edit, you can call:

$ cargo add collenchyma
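With the dependency declared, the crate is linked into your crate root with an `extern crate` item (Rust 2015 syntax, which was current when this release was published); the `as co` alias used in the project's examples is optional:

```rust
// In src/main.rs or src/lib.rs: make the collenchyma crate available.
// The `co` alias is a common shorthand, not a requirement.
extern crate collenchyma as co;
```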

Contributing

Want to contribute? Awesome! We have instructions to help you get started contributing code or documentation.

We have a mostly real-time collaboration culture, which happens here on GitHub and in the Collenchyma Gitter channel. You can also reach out to the maintainers {@MJ, @hobofan}.

License

Collenchyma is released under the MIT License.