Implement support for dynamic memories in the pooling allocator
This is a continuation of the thrust in bytecodealliance#5207 for reducing page faults
and lock contention when using the pooling allocator. To that end this
commit implements support for efficient memory management in the pooling
allocator when using wasm that is instrumented with bounds checks.

The `MemoryImageSlot` type now avoids unconditionally shrinking memory
back to its initial size during the `clear_and_remain_ready` operation,
instead deferring optional resizing of memory to the subsequent call to
`instantiate` when the slot is reused. The instantiation portion then
takes the "memory style" as an argument which dictates whether the
accessible memory must be precisely fit or whether it's allowed to
exceed the maximum. This in effect enables skipping a call to `mprotect`
to shrink the heap when dynamic memory checks are enabled.
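
To make the mechanism concrete, here is a minimal sketch in plain Rust. The
method names echo the commit (`clear_and_remain_ready`, `instantiate`, and a
"memory style" argument), but the `Slot` struct, its fields, and the call
counting below are invented for illustration and are not Wasmtime's actual
`MemoryImageSlot` implementation:

```rust
/// Hypothetical stand-in for Wasmtime's "memory style"; not the real type.
#[derive(Clone, Copy)]
enum MemoryStyle {
    /// Bounds checks are emitted in the compiled wasm, so the accessible
    /// region is allowed to be larger than the instance currently needs.
    Dynamic,
    /// Out-of-bounds accesses are caught by guard pages, so the accessible
    /// region must fit the requested size exactly.
    Static,
}

/// Toy model of a pooling-allocator slot: it only tracks how many bytes are
/// accessible and how many mprotect-style resizes have been issued.
struct Slot {
    accessible: usize,
    mprotect_calls: usize,
}

impl Slot {
    /// Previously this would shrink the slot back to its initial size; now it
    /// conceptually just zeroes the accessible region (a `memset`) and leaves
    /// any resizing to the next `instantiate`.
    fn clear_and_remain_ready(&mut self) {}

    /// Reuse the slot for a new instance needing `initial` accessible bytes.
    fn instantiate(&mut self, initial: usize, style: MemoryStyle) {
        match style {
            // Dynamic memories tolerate a larger-than-needed accessible
            // region left over from a prior instance: skip the resize.
            MemoryStyle::Dynamic if self.accessible >= initial => {}
            // Already exactly the right size: nothing to do either.
            _ if self.accessible == initial => {}
            // Otherwise grow (or, for static memories, shrink) to fit.
            _ => {
                self.accessible = initial;
                self.mprotect_calls += 1;
            }
        }
    }
}

fn main() {
    let mut slot = Slot { accessible: 0, mprotect_calls: 0 };

    slot.instantiate(2 << 16, MemoryStyle::Dynamic); // grows: one resize
    slot.clear_and_remain_ready();                   // no shrink on reuse
    slot.instantiate(1 << 16, MemoryStyle::Dynamic); // smaller request: no resize
    assert_eq!(slot.mprotect_calls, 1);

    // A static-style memory must fit exactly, so reusing the slot for one
    // forces a shrink back down.
    slot.clear_and_remain_ready();
    slot.instantiate(1 << 16, MemoryStyle::Static);
    assert_eq!(slot.mprotect_calls, 2);
}
```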

In terms of page faults and lock contention this should improve the situation
by:

* Fewer calls to `mprotect` since once a heap grows it stays grown and
  it never shrinks. This means that a write lock is taken within the
  kernel much more rarely than before (only asymptotically now, not
  N-times-per-instance).

* Memory accessed after a heap growth operation will not fault if it was
  previously paged in by a prior instance and set to zero with `memset`
  (see the sketch after this list). Unlike bytecodealliance#5207, which
  requires a 6.0 kernel to see this optimization, this commit enables the
  optimization for any kernel.
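
As a rough illustration of the second point, and assuming the reset path
conceptually amounts to a `memset` over the previously-accessible region
rather than an unmap, the backing pages stay resident across reuse, so the
contents are wiped without introducing new faults:

```rust
// Stand-in for the accessible portion of a slot's mapping; the backing pages
// were faulted in by a prior instance and remain mapped across reuse.
fn clear_for_reuse(heap: &mut [u8]) {
    // Conceptually `memset(ptr, 0, len)`: contents are wiped, but the pages
    // stay resident, so the next instance's accesses into this range do not
    // take fresh page faults.
    heap.fill(0);
}

fn main() {
    let mut heap = vec![0xAAu8; 64 * 1024]; // dirtied by a prior instance
    clear_for_reuse(&mut heap);
    assert!(heap.iter().all(|&b| b == 0));
}
```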

The major cost of choosing this strategy is naturally the performance
hit to the wasm itself from the explicit bounds checks. Work is underway in
PRs such as bytecodealliance#5190 to improve Wasmtime's story here.

This commit does not implement any new configuration options for
Wasmtime but instead reinterprets existing configuration options. The
pooling allocator no longer unconditionally sets
`static_memory_bound_is_maximum` and instead implements the support
necessary for this memory type. The other change in this commit is that
the `Tunables::static_memory_bound` configuration option no longer gates
the creation of a `MemoryPool`; the pool will now size itself to
`instance_limits.memory_pages` if the `static_memory_bound` is too
small. This is done to accommodate fuzzing more easily, where the
`static_memory_bound` can become small during fuzzing and the
configuration would otherwise be rejected and require manual handling.
The spirit of the `MemoryPool` is one of large virtual address space
reservations anyway, so it seemed reasonable to interpret the
configuration this way.
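
A small sketch of that reinterpretation, using hypothetical names
(`static_memory_bound` measured in wasm pages and `instance_memory_pages`
from the instance limits); the real `MemoryPool` sizing logic is more
involved than this:

```rust
/// Wasm's fixed page size in bytes.
const WASM_PAGE_SIZE: u64 = 0x10000;

/// Hypothetical sizing helper: the per-slot reservation covers whichever of
/// the two limits is larger instead of rejecting a too-small bound.
fn bytes_reserved_per_slot(static_memory_bound: u64, instance_memory_pages: u64) -> u64 {
    static_memory_bound.max(instance_memory_pages) * WASM_PAGE_SIZE
}

fn main() {
    // A fuzz-generated bound of 10 pages is rounded up to cover a 900-page
    // instance limit rather than causing the configuration to be rejected.
    assert_eq!(bytes_reserved_per_slot(10, 900), 900 * WASM_PAGE_SIZE);
}
```
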
alexcrichton committed Nov 4, 2022
1 parent fba2287 commit 0edebb7
Showing 6 changed files with 475 additions and 260 deletions.
43 changes: 9 additions & 34 deletions crates/fuzzing/src/generators/config.rs
@@ -1,8 +1,7 @@
//! Generate a configuration for both Wasmtime and the Wasm module to execute.

use super::{
CodegenSettings, InstanceAllocationStrategy, MemoryConfig, ModuleConfig, NormalMemoryConfig,
UnalignedMemoryCreator,
CodegenSettings, InstanceAllocationStrategy, MemoryConfig, ModuleConfig, UnalignedMemoryCreator,
};
use crate::oracles::{StoreLimits, Timeout};
use anyhow::Result;
@@ -81,14 +80,6 @@ impl Config {
pooling.instance_table_elements = 1_000;

pooling.instance_size = 1_000_000;

match &mut self.wasmtime.memory_config {
MemoryConfig::Normal(config) => {
config.static_memory_maximum_size =
Some(pooling.instance_memory_pages * 0x10000);
}
MemoryConfig::CustomUnaligned => unreachable!(), // Arbitrary impl for `Config` should have prevented this
}
}
}

@@ -130,14 +121,6 @@ impl Config {
pooling.instance_memory_pages = pooling.instance_memory_pages.max(900);
pooling.instance_count = pooling.instance_count.max(500);
pooling.instance_size = pooling.instance_size.max(64 * 1024);

match &mut self.wasmtime.memory_config {
MemoryConfig::Normal(config) => {
config.static_memory_maximum_size =
Some(pooling.instance_memory_pages * 0x10000);
}
MemoryConfig::CustomUnaligned => unreachable!(), // Arbitrary impl for `Config` should have prevented this
}
}
}

@@ -319,27 +302,19 @@ impl<'a> Arbitrary<'a> for Config {
// https://github.com/bytecodealliance/wasmtime/issues/4244.
cfg.threads_enabled = false;

// Force the use of a normal memory config when using the pooling allocator and
// limit the static memory maximum to be the same as the pooling allocator's memory
// page limit.
// Ensure the pooling allocator can support the maximal size of
// memory, picking the smaller of the two to win.
if cfg.max_memory_pages < pooling.instance_memory_pages {
pooling.instance_memory_pages = cfg.max_memory_pages;
} else {
cfg.max_memory_pages = pooling.instance_memory_pages;
}
config.wasmtime.memory_config = match config.wasmtime.memory_config {
MemoryConfig::Normal(mut config) => {
config.static_memory_maximum_size =
Some(pooling.instance_memory_pages * 0x10000);
MemoryConfig::Normal(config)
}
MemoryConfig::CustomUnaligned => {
let mut config: NormalMemoryConfig = u.arbitrary()?;
config.static_memory_maximum_size =
Some(pooling.instance_memory_pages * 0x10000);
MemoryConfig::Normal(config)
}
};

// Forcibly don't use the `CustomUnaligned` memory configuration
// with the pooling allocator active.
if let MemoryConfig::CustomUnaligned = config.wasmtime.memory_config {
config.wasmtime.memory_config = MemoryConfig::Normal(u.arbitrary()?);
}

// Don't allow too many linear memories per instance since massive
// virtual mappings can fail to get allocated.
