(L5.1) User space vs kernel space
Writing code in user space and in kernel space, whether in C or in Rust, is very different, mainly because neither the C standard library nor the Rust standard library can be used in kernel space. While the Linux kernel provides its own implementations of many commonly used C library functions, the equivalent coverage for Rust standard library functionality is far more limited.
In user space, Rust code can make use of the standard library and the ecosystem of libraries built on top of it. This allows developers to avoid "reinventing the wheel" by relying on reliable, well-tested libraries. This is particularly advantageous when a project requires unsafe Rust constructs: many libraries in the Rust ecosystem implement these constructs internally and expose fully safe APIs, enabling developers to focus on designing their code without worrying about memory safety.
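As a minimal illustration of this point, the user-space sketch below shares a counter between threads using only the standard library's Arc and Mutex. All of the unsafe code required to make this sound lives inside the standard library, not in the application code.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Shared counter: `Arc` handles shared ownership, `Mutex` prevents data races.
    let counter = Arc::new(Mutex::new(0u64));

    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                for _ in 0..1_000 {
                    // The unsafe synchronization internals are fully encapsulated by `Mutex`.
                    *counter.lock().unwrap() += 1;
                }
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }

    println!("total = {}", counter.lock().unwrap());
}
```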
Kernel space is, however, a very different environment. While the goal there is also to let developers write only safe code, achieving this is much harder. In kernel space, only Rust's core library is available, and most of the kernel API is still only accessible from C. The Rust for Linux project aims to create a safe Rust kernel API so that driver developers, who are often less experienced or less well known to the Linux kernel community, can work entirely in safe Rust. Currently, any developer wanting to create a Rust driver must still contend with unsafe Rust and follow the general good practices for unsafe Rust development (see report L2.2).
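To give an impression of what kernel-space Rust looks like, the sketch below is loosely based on the kernel's rust_minimal sample. Treat it as a sketch only: the exact module! fields, trait names, and signatures have changed between kernel versions and may differ from the tree being targeted.

```rust
// SPDX-License-Identifier: GPL-2.0
//! Minimal Rust kernel module, loosely based on samples/rust/rust_minimal.rs.
//! Note: no `std` here; only the `kernel` crate and Rust's `core` library are available.

use kernel::prelude::*;

module! {
    type: RustMinimal,
    name: "rust_minimal",
    author: "Rust for Linux Contributors",
    description: "Minimal Rust kernel module",
    license: "GPL",
}

struct RustMinimal;

impl kernel::Module for RustMinimal {
    fn init(_module: &'static ThisModule) -> Result<Self> {
        // `pr_info!` maps onto the kernel's printk infrastructure.
        pr_info!("Rust minimal module loaded\n");
        Ok(RustMinimal)
    }
}

impl Drop for RustMinimal {
    fn drop(&mut self) {
        pr_info!("Rust minimal module unloaded\n");
    }
}
```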
Not only is there far less support for writing safe Rust in kernel space, the kernel-space API has also been in flux for the last couple of years, with many major changes. There is now a direct path for upstreaming code from the Rust for Linux kernel to the mainline Linux kernel, which suggests that while Rust support in the kernel will continue to grow, the major design shifts may be behind us for the foreseeable future. The user group presentation of June 5th goes over these hardships and changes.
(L5.2) Best practices for reliable drivers
Due to the limited Rust Linux kernel API, any new driver will need to rely heavily on C kernel APIs. General best practices for integrating Rust and C have already been covered in the workshop of 12 December 2023; the course materials can be found here, and the slides that introduce the workshop can be found here. The workshop emphasizes that unsafe code must be encapsulated behind a safe wrapper, so that as little unsafe code as possible is present in the core logic. Building on this idea, the Rust for Linux kernel adopts the rule that every foreign function interface (FFI) from C to Rust must be wrapped in a safe Rust wrapper where possible or, if necessary, in an unsafe Rust wrapper. These wrappers must all be added to the kernel core, not to the drivers. The goal is that, over time, most interfaces will have a Rust counterpart so that driver developers no longer need to touch unsafe code, and therefore no longer need to worry about memory safety or data races. This matters because one-time and unknown contributors are mostly found in driver code, and driver code is also the least reviewed and has the most memory safety issues.
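The sketch below illustrates this wrapper pattern. The C function foo_enable and the type foo_device are hypothetical stand-ins for a real binding generated into the kernel's bindings crate, and the error-conversion helper shown is an assumption (its name has varied between kernel versions). The point is that the unsafe block and its safety argument live in the kernel core, while drivers only ever see the safe method.

```rust
// Hypothetical C binding, as would be generated by bindgen into the `bindings` crate:
//     int foo_enable(struct foo_device *dev);
//
// Drivers never call `bindings::foo_enable` directly; they use the safe wrapper below,
// which lives in the kernel core rather than in driver code.

use kernel::prelude::*;

/// Owns a pointer to a C `struct foo_device` (hypothetical).
///
/// # Invariants
///
/// `ptr` is valid and non-null for the lifetime of this struct.
pub struct FooDevice {
    ptr: *mut bindings::foo_device,
}

impl FooDevice {
    /// Enables the device. Safe to call because the type invariant guarantees `ptr` is valid.
    pub fn enable(&mut self) -> Result {
        // SAFETY: by the type invariant, `self.ptr` points to a live `foo_device`.
        let ret = unsafe { bindings::foo_enable(self.ptr) };
        if ret < 0 {
            // Convert the C errno-style return value into a kernel `Error`
            // (the exact helper name differs between kernel versions).
            Err(Error::from_errno(ret))
        } else {
            Ok(())
        }
    }
}
```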
(L5.3) Microbenchmarks for kernel APIs
As presented on June 5th, the Rust for Linux kernel is constantly in flux and even went through a massive overhaul in the last year. This big change, which was necessary for more streamlined upstreaming to the mainline Linux kernel, resulted in even less support for Rust drivers than before. Therefore, any microbenchmarks of the Rust kernel API would be outdated very quickly. Currently, the more interesting microbenchmarks are those of Linux drivers written in Rust and backed by larger companies and open-source projects.
The companies and open-source projects working on Rust drivers usually do so in their own branch of the Linux kernel. The more stable of these larger projects provide performance benchmarks against the old C implementation. Microbenchmarks are available for four different Rust drivers; they show that, on average, the Rust drivers are slightly less performant than the original C drivers.
Microbenchmarks NVMe Driver
These are the latest official results for the NVMe driver. The evolution of results since January 2023 can be found on the Rust for Linux website. While the performance is very similar, the C driver slightly outperforms the Rust driver in most microbenchmarks.
Setup
- AMD Ryzen 5 7600
- 32 GB 4800 MT/s DDR5 on one channel
- 1x Samsung 990 Pro 1TB (PCIe 4.0 x4, 16 GT/s)
- NixOS 24.05
Results
- 40 samples
- Difference of means modeled with a t-distribution
- P95 confidence intervals
https://rust-for-linux.com/nvme-driver
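For reference, the comparison behind these plots can be sketched as follows, assuming independent samples per driver and a Welch-style interval (the exact procedure used by the Rust for Linux benchmarks may differ):

$$
\bar{d} = \bar{x}_{\mathrm{Rust}} - \bar{x}_{\mathrm{C}}, \qquad
\mathrm{SE} = \sqrt{\frac{s_{\mathrm{Rust}}^{2}}{n_{\mathrm{Rust}}} + \frac{s_{\mathrm{C}}^{2}}{n_{\mathrm{C}}}}, \qquad
\mathrm{CI}_{95\%} = \bar{d} \pm t_{0.975,\,\nu}\,\mathrm{SE}
$$

where $\bar{x}$, $s^{2}$, and $n$ (40 here) are the per-driver sample mean, sample variance, and sample size, and $\nu$ is the t-distribution's degrees of freedom (via the Welch–Satterthwaite approximation).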
Microbenchmarks Null Block Driver
The null block driver has made it into the mainline Linux kernel and, as can be seen below, has comparable but overall slightly worse performance than the C driver. These microbenchmarks are sourced from the Rust for Linux website.
Setup
- AMD Ryzen 5 7600
- 32 GB 4800 MT/s DDR5 on one channel
- 1x Samsung 990 Pro 1TB (PCIe 4.0 x4, 16 GT/s)
- NixOS 24.05
Results
- Plot shows the relative IOPS difference (mean_iops_r - mean_iops_c) / mean_iops_c (written out below)
- 40 samples for each configuration
- Difference of means modeled with a t-distribution
- P95 confidence intervals
https://rust-for-linux.com/null-block-driver
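To make the plotted quantity explicit: with mean_iops_r denoting the mean IOPS of the Rust driver and mean_iops_c that of the C driver, the relative difference is

$$
\Delta_{\mathrm{rel}} = \frac{\overline{\mathrm{IOPS}}_{\mathrm{Rust}} - \overline{\mathrm{IOPS}}_{\mathrm{C}}}{\overline{\mathrm{IOPS}}_{\mathrm{C}}}
$$

so a negative value means the Rust driver achieved fewer IOPS than the C driver for that configuration; the P95 interval around the difference of means is computed as in the NVMe section above.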
Microbenchmarks Binder and e1000
The latency microbenchmarks provided by Hongyu Li et al. (2024) for e1000 and Binder show that the Rust version has a higher latency in both cases. The Rust e1000 driver's latency is 11 times higher than that of the C driver; however, this is attributed to the Rust version lacking features and optimizations that are present in the C driver. It is therefore not an indication that the Rust driver is inherently slower due to the difference in language.
The Binder drivers show more comparable latencies, with the Rust version about 10% higher than the C version.
https://www.usenix.org/conference/atc24/presentation/li-hongyu