How to make a plugin system with Rust and WebAssembly

Why WASM anyway?

First of all I want to provide some context:

While working on the asset database for my game engine prototype, I found that I want to import assets ahead of the game start and convert them from tool-specific formats into the engine's native format. This reduces the amount of code in the engine itself and improves performance, as no content conversion is required at runtime. I could try to provide importers for all asset types recognized by the engine, but there are many possible formats from which, say, 3D models can be imported. 3D modeling tools each have their own formats and may support various open standards. Same story for images, sound assets and virtually any other kind of content. Additionally, each game may need new asset types.

The importing library must be extensible, and specific importers must be packaged into plugins and loaded at runtime. This can be implemented in different ways. Among them:

  • Shared libraries (.so, .dll, .dylib)
  • Separate binaries and some kind of inter-process communication
  • Importers as services

Originally I implemented plugins with shared libraries, and it worked fine. Until it exploded. Any change in the plugin interface must be carefully mirrored in both the library and the plugins. And common Rust types are not FFI-safe, including trait objects, so no &dyn Plugin can be used in the plugin interface. If a plugin was compiled with a slightly different interface, UB is imminent. As a precaution I included a hash of the source code of the whole interface module in the shared library's exports. This in turn caused plugins to be rejected when documentation or formatting changed in that interface module.

Still, it was problematic. And dangerous. Plugins were searched for in directories, and shared libraries were loaded first to see if they were compatible plugins. And crashes still occurred from time to time when a plugin implemented an unsafe shim incorrectly. It was not a big deal for the CLI tool to crash on a new asset import (annoying nonetheless), but a plugin could crash the whole game on asset loading when a source file is modified and re-import is attempted.

A safer alternative would be to implement plugins as standalone executables run as child processes by the CLI tool or asset loader. However, importers must be able to store sub-assets referenced by the asset being imported, which would require more complex communication. Another problem would be distinguishing plugin binaries from other executables; each plugin would have to be registered manually.

Registering plugins as services running in the background looks like over-engineering.

WASM enters the scene

WebAssembly modules are in many ways similar to shared libraries. Both require writing FFI-safe shims, can be loaded at runtime with their exported functions easily enumerated, and both can perform simple callbacks. But running WebAssembly modules is totally safe. Even the glue code on the host requires zero unsafe blocks. WASM modules run in complete isolation from the host process and the OS, so a crashing or misbehaving plugin causes no problem for the CLI tool or the game process.

So it looked very interesting to implement plugins as WASM modules, even though I had only brief experience with running WASM in the browser. And I wasn't disappointed.

All the WASM embedding tutorials I could find covered only basic operations, and it was not obvious how to perform anything one bit more complex than calling a function that operates on integers, so I went the hard way of trial and error.

I learned quite a few things, and it inspired me to write this post in the hope that it will help the next WebAssembly embedding newcomer overcome the initial problems and get it going.

Cookbook for absolute beginners

WebAssembly has a small set of types that can be sent between host and guest (the running WASM module instance). There are integers of various sizes, floating-point numbers, functions and opaque external references.

There is no way to make a function accept an array, string or object with a vtable. For example, &str, &[T] and &dyn Trait in Rust are so-called "fat pointers" and occupy memory as a pair of usizes, where one is a pointer to the first byte and the other is the length. Which is which? I can't remember, and maybe for the better, as the representation of fat pointers in Rust is unspecified and can change between compiler releases. It can also differ between compilation targets.
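To illustrate, a &str can be split into its raw parts without relying on the unspecified fat-pointer layout, by using the safe accessor methods. A small sketch:

```rust
fn main() {
    let s: &str = "hello";

    // The FFI-safe representation: split the fat pointer manually
    // via safe accessors instead of peeking at its layout.
    let ptr: *const u8 = s.as_ptr();
    let len: usize = s.len();

    // Reassemble it. Unsafe: we must guarantee that `ptr` points to
    // `len` bytes of valid UTF-8, which it does here.
    let back = unsafe {
        std::str::from_utf8_unchecked(std::slice::from_raw_parts(ptr, len))
    };
    assert_eq!(back, "hello");
    println!("{} bytes at {:p}", len, ptr);
}
```

This (ptr, len) pair is exactly what crosses the FFI or WASM boundary in the recipes below.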

How to pass an array, slice or string

To accept a slice, a function must take two arguments, one of which is a pointer to the first byte and the other the length, i.e. just convert the fat pointer into an FFI-safe representation.

Same as with any FFI, you'd say, but there's a twist - the guest has no access to host memory! That is, a function in a wasm module cannot take a pointer to host memory and read from it. Fortunately the host has full access to guest memory, which is linear in the same sense as the host's memory space is linear - each byte can be accessed by offset, and relatively fast.

Given the above, here's the first recipe:

To pass a string to a function in a wasm module, the host copies the string into a range of the module's memory, then passes the offset of the first byte and the length into the function.


In wasm module:

/// Function accepting a string.
/// `#[no_mangle]` and `extern "C"` make it exportable under a stable name.
#[no_mangle]
pub unsafe extern "C" fn foo(ptr: *const u8, len: usize) {
    let slice = std::slice::from_raw_parts(ptr, len);
    let string = std::str::from_utf8_unchecked(slice);
    // Alternatively use `std::str::from_utf8(slice).unwrap()`.

    // do stuff with `string`
}

On host:

The following examples assume that the wasmer crate is used to run wasm modules. With wasmtime it may be a bit different.

fn copy_string(memory: &wasmer::Memory, string: &str, ptr: wasmer::WasmPtr<u8, wasmer::Array>, len: u32) {
    debug_assert_eq!(string.len(), len as usize);
    let slice = ptr.deref(memory, 0, len).unwrap();
    slice.iter().zip(string.bytes()).for_each(|(cell, byte)| cell.set(byte));
}

A careful reader may already wonder, "which range?" And rightfully so. The memory of the module belongs to the wasm module; it may use it as it sees fit. For wasm modules compiled from languages like Rust or C++ there is no way for the host to know which part of memory is currently unused. Or is there?

Well, maybe there is some widely used method to find a memory chunk of the required size which is currently unused? Why, this is allocation! The host needs to allocate a memory range in guest memory, and the simplest way to do so is to export a malloc/free-like pair of functions from the module.

And this gives us our second recipe:

Modules should export functions for allocating and freeing memory whenever another exported function's signature suggests that the host would need to allocate or free the guest's memory. Which is the case for most nontrivial uses.


In wasm module:

/// Export this function from the WASM module.
/// It allows the host to allocate guest memory.
/// # Safety
/// This function is an FFI-safe wrapper for the standard function `alloc::alloc::alloc`.
/// The same safety principles apply.
#[no_mangle]
pub unsafe extern "C" fn malloc(size: usize, align: usize) -> *mut u8 {
    let layout = std::alloc::Layout::from_size_align(size, align).unwrap();
    std::alloc::alloc(layout)
}

/// Export this function from the WASM module.
/// It allows the host to deallocate guest memory.
/// # Safety
/// This function is an FFI-safe wrapper for the standard function `alloc::alloc::dealloc`.
/// The same safety principles apply.
#[no_mangle]
pub unsafe extern "C" fn free(ptr: *mut u8, size: usize, align: usize) {
    let layout = std::alloc::Layout::from_size_align(size, align).unwrap();
    std::alloc::dealloc(ptr, layout);
}
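To see how the first two recipes fit together, here is a self-contained sketch of the host's view of the flow - plain Rust with no WASM runtime, where a byte buffer stands in for guest linear memory and a trivial bump allocator stands in for the exported `malloc`. All names here are illustrative, not part of any real API:

```rust
/// Simulated guest linear memory with a bump allocator
/// standing in for the module's exported `malloc`.
struct GuestMemory {
    bytes: Vec<u8>,
    next_free: usize,
}

impl GuestMemory {
    fn new(size: usize) -> Self {
        GuestMemory { bytes: vec![0; size], next_free: 0 }
    }

    /// The host calls this like it would call the exported `malloc`.
    fn malloc(&mut self, size: usize) -> usize {
        let offset = self.next_free;
        self.next_free += size;
        offset
    }
}

/// Host-side helper: copy a string into "guest memory" and
/// return the (offset, len) pair to pass to the guest function.
fn pass_string(mem: &mut GuestMemory, s: &str) -> (usize, usize) {
    let offset = mem.malloc(s.len());
    mem.bytes[offset..offset + s.len()].copy_from_slice(s.as_bytes());
    (offset, s.len())
}

/// Stand-in for a guest function taking (ptr, len): it reads the
/// string back out of linear memory by offset.
fn guest_foo(mem: &GuestMemory, offset: usize, len: usize) -> String {
    String::from_utf8(mem.bytes[offset..offset + len].to_vec()).unwrap()
}

fn main() {
    let mut mem = GuestMemory::new(1024);
    let (offset, len) = pass_string(&mut mem, "hello plugin");
    assert_eq!(guest_foo(&mem, offset, len), "hello plugin");
}
```

With a real runtime the only differences are that `malloc` is called through the instance's exports and the copy goes through the runtime's memory view, as in the wasmer snippet above.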

What are function pointers?

Many languages that can be compiled into WebAssembly have first-class functions. In Rust, functions are generally passed as generic types with trait bounds, but at times function pointers are necessary, and they are supported by Rust. Specifically, any function can be coerced to a function pointer with a matching signature, as can closures that capture no values. Rust guarantees that function pointers are FFI-safe and match C's function pointers.

Function pointers are crucial for a plugin system, as a plugin exports an array of "vtables" which are made of function pointers.

Yet in WebAssembly a function pointer cannot be represented the same way as a data pointer - a 32-bit offset into linear memory - since functions do not reside in that memory. The Rust compiler solves this problem by simply storing every function whose pointer is created and used in a global table, and then an index into that table is passed around as the function pointer. On call, the index is used to fetch the function from the table. This sounds simple enough, but using this index from the host can be tricky, as the table does not get exported, so the host can't access it. But a function in the wasm module can!
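The table mechanism can be pictured with plain Rust - a sketch of the idea, not the compiler's actual machinery:

```rust
fn double(x: u32) -> u32 { x * 2 }
fn square(x: u32) -> u32 { x * x }

fn main() {
    // Rough picture of the wasm function table: the "function pointer"
    // that crosses the boundary is just an index into this table.
    let table: Vec<fn(u32) -> u32> = vec![double, square];

    // What the guest hands to the host: an opaque index.
    let fn_index: usize = 0;

    // What a call through the "pointer" does: fetch from the table, invoke.
    let result = table[fn_index](21);
    assert_eq!(result, 42);
}
```

The host holds only `fn_index`; it can never fetch from the table itself, which is why the shim in the recipe below must live on the guest side.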

Here's the third recipe:

When an exported function returns a pointer to another function from the module, and that function is expected to be invoked from the host, the module should also export a shim which takes the same argument list plus the function pointer itself, and invokes the function pointer. At least one exported shim function per signature is required.

This allows treating the index value as an opaque function pointer and never trying to make sense of it on the host.


In wasm module:

pub fn foo() -> fn(u32) -> u32 {
    |value: u32| value * 2
}

/// The function name encodes the signature.
/// This is not required, but can be a useful convention.
pub fn shim_1_u32_u32(f: fn(u32) -> u32, value: u32) -> u32 {
    f(value)
}

On host:

let instance: wasmer::Instance = todo!();

// The actual type parameter to `WasmPtr` doesn't matter here.
// But using a different one for each function signature helps
// to not mix them up.
type MyWasmFnPtr = wasmer::WasmPtr<fn(u32) -> u32>;

let foo = instance.exports
    .get_native_function::<(), MyWasmFnPtr>("foo")
    .unwrap();

let shim_1_u32_u32 = instance.exports
    .get_native_function::<(MyWasmFnPtr, u32), u32>("shim_1_u32_u32")
    .unwrap();

let fun_ptr =;
assert_eq!(84,, 42).unwrap());

Now it is relatively easy to manually define vtables as #[repr(C)] structs of function pointers and construct them for trait implementations. All that is left is to export a function which fills an array of vtables (and data pointers if necessary).
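As a sketch of that last step - plain Rust, with the importer idea and all names purely illustrative:

```rust
/// An FFI-safe "vtable": a #[repr(C)] struct of plain function pointers,
/// the kind a plugin could fill in and hand to the host.
#[repr(C)]
struct ImporterVTable {
    /// Returns a magic number identifying the supported format.
    format_id: fn() -> u32,
    /// "Imports" one byte; stands in for real conversion logic.
    import_byte: fn(u8) -> u8,
}

// A concrete "importer": free functions coerce to fn pointers.
fn png_format_id() -> u32 { 0x504E47 } // "PNG"
fn png_import_byte(b: u8) -> u8 { b.wrapping_add(1) }

/// Construct the vtable for one concrete importer,
/// the way a plugin would before exporting it.
const PNG_IMPORTER: ImporterVTable = ImporterVTable {
    format_id: png_format_id,
    import_byte: png_import_byte,
};

fn main() {
    // In the real system the host receives table indices from the module
    // and calls through the exported shims; here we call directly.
    assert_eq!((PNG_IMPORTER.format_id)(), 0x504E47);
    assert_eq!((PNG_IMPORTER.import_byte)(41), 42);
}
```

Across the WASM boundary each `fn` field becomes a table index, invoked via the per-signature shims from the third recipe.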


It turned out that embedding WebAssembly modules is not much harder than using shared libraries through FFI. A few additional tricks are required, but overall it was not so bad.

And the result is even better than I expected. There is no noticeable performance penalty - asset importing is costly enough on its own to be kept far away from hot paths anyway. And now the whole importing pipeline is totally safe!
