Rust SocketCAN
This library implements Controller Area Network (CAN) communications on Linux using the SocketCAN interfaces. This provides a network socket interface to the CAN bus.
Please see the documentation for details about the Rust API provided by this library.
Latest News
Version 3.x adds integrated async/await and more!
Version 3.0 adds integrated support for async/await with the most popular runtimes: tokio, async-std, and smol. To get started, we merged the tokio-socketcan crate into this one and implemented `async-io` support.
Unfortunately, this required a minor breaking change to the existing API, so we bumped the version to 3.0.
The async support is optional and can be enabled with a feature for the target runtime: `tokio`, `async-std`, or `smol`.
Additional netlink control of the CAN interface was added in v3.1, allowing an application to do things like set the bitrate on the interface, set control modes, restart the interface, etc.
What's New in Version 3.1
- Added netlink functionality:
  - Set the bitrate (PR #50) and the FD data bitrate
  - Set the control modes (Loopback, Listen-Only, etc.)
  - Set the automatic restart delay time
  - Manual restart
- PR #45 Dump handles extended IDs
- PR #44 Fix clippy warnings
- PR #43 Implement AsPtr for CanAnyFrame
What's New in Version 3.0
- Support for Rust async/await
- All of tokio-socketcan has been merged into this crate and will be available with an `async-tokio` build feature.
- #41 Added initial support for `async-io` for use with `async-std` and `smol`
- Split the `SocketOptions` trait out of the `Socket` trait for use with async (breaking)
- Added cargo build features for `tokio` or `async-io`.
- Also created specific build features for `async-std` and `smol`, which just bring in the `async-io` module, alias the module name to `async-std` or `smol`, respectively, and build examples for each.
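For instance, enabling the tokio support in a downstream crate's manifest might look like this (the version number shown is illustrative):

```toml
[dependencies]
socketcan = { version = "3.1", features = ["tokio"] }
```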
Next Steps
A number of items still did not make it into a release. They will be added in upcoming v3.x releases.
- Issue #22 Timestamps, including optional hardware timestamps
- Issue #32 A number of important netlink commands were added in v3.1, particularly the ability to set bitrates, reset the interface, and set control modes, but the implementation is still incomplete.
- Better documentation. This README will be expanded with basic usage information, along with better doc comments, and perhaps the creation of a wiki.
Minimum Supported Rust Version (MSRV)
The current version of the crate targets Rust Edition 2021 with an MSRV of Rust v1.65.0.
Note that, at this time, the MSRV is mostly driven by use of the `clap` v4.0 crate for managing command-line parameters in the utilities and example applications. The core library could likely be built with an earlier version of the compiler if required.
Async Support
Tokio
The tokio-socketcan crate was merged into this one to provide async support for CANbus using tokio.
This is enabled with the optional feature, `tokio`.
Example bridge with tokio
This is a simple example of sending data frames from one CAN interface to another. It is included in the example applications as `tokio_bridge.rs`.

```rust
use futures_util::StreamExt;
use socketcan::{tokio::CanSocket, Error};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Read frames from one interface and forward them to another.
    let mut sock_rx = CanSocket::open("vcan0")?;
    let sock_tx = CanSocket::open("vcan1")?;

    while let Some(Ok(frame)) = sock_rx.next().await {
        sock_tx.write_frame(frame)?.await?;
    }
    Ok(())
}
```
async-io (async-std & smol)
New support was added for async-io, which supports the async-std and smol runtimes.
This is enabled with the optional feature, `async-io`. It can also be enabled with either feature, `async-std` or `smol`. Either of those specific runtime flags will simply build the `async-io` support, but then also alias the `async-io` submodule with the specific feature/runtime name. This is simply for convenience.
Additionally, when building examples, the specific examples for the runtime will be built if specifying the `async-std` or `smol` feature(s).
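For example, building the crate's examples for a specific runtime might look like the following (commands assume a checkout of this repository):

```shell
# Build with the async-io support aliased as async-std, including its examples.
cargo build --features async-std --examples

# Likewise for smol.
cargo build --features smol --examples
```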
Example bridge with async-std
This is a simple example of sending data frames from one CAN interface to another. It is included in the example applications as `async_std_bridge.rs`.

```rust
use socketcan::{async_std::CanSocket, Error};

#[async_std::main]
async fn main() -> Result<(), Error> {
    // Read frames from one interface and forward them to another.
    let sock_rx = CanSocket::open("vcan0")?;
    let sock_tx = CanSocket::open("vcan1")?;

    loop {
        let frame = sock_rx.read_frame().await?;
        sock_tx.write_frame(&frame).await?;
    }
}
```
Testing
Integrating the full suite of tests into a CI system is non-trivial, as it relies on a `vcan0` virtual CAN device existing. Adding one to most Linux systems is pretty easy with root access, but attaching a vcan device to a container for CI seems difficult to implement.
Therefore, tests requiring `vcan0` were placed behind an optional feature, `vcan_tests`.
The steps to install and add a virtual interface to Linux are in the `scripts/vcan.sh` script. Run it with root privileges, then run the tests:
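Under those assumptions, the sequence would be:

```shell
# Create the vcan0 virtual interface (requires root).
sudo ./scripts/vcan.sh

# Run the full test suite, including the vcan-dependent tests.
cargo test --features vcan_tests
```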