Note: This design later evolved into the cell mechanism, which is how the language automatically and temporarily freezes regions for read-only operations. This post remains up to show the history and what led to the newer design.

Vale's hybrid-generational memory is a new memory model that aims to combine all the best parts of existing memory strategies: as easy as garbage collection, as deterministic as reference counting, and as fast as borrow checking. 0

Note that hybrid-generational memory is not implemented yet; it's still just a design.

There are three ingredients to make hybrid-generational memory work:

  • Generational references!
  • Static analysis (sometimes referred to as the "automatic borrow checker") that can eliminate generation checks when it knows an object is alive.
  • Scope tethering, to keep an object from getting freed when a local has a reference into it.

Start with Generational References

Hybrid-generational memory is built upon generational references. Recall:

  • Every heap allocation has a u48 generation number before the object. 1 2
  • Non-owning references contain a raw pointer and a u48 "target generation" number. 3
  • Before dereferencing an object, assert that the target generation number matches the allocation's generation number.
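
To make that concrete, here is a minimal sketch in C of what the layout and the check could look like. The Header and GenRef structs and the check_alive helper are illustrative assumptions, not Vale's actual runtime code.

```c
#include <assert.h>
#include <stdint.h>

// Hypothetical allocation header: the 48-bit generation (and, later on, the
// tethered bit) lives in the 8 bytes just before the object's fields.
typedef struct {
    uint64_t generation : 48;
    uint64_t tethered   : 1;   // used by scope tethering, described below
} Header;

// Hypothetical non-owning reference: a raw pointer, the generation we expect
// the allocation to have, and a u16 offset back to the header (see note 3).
typedef struct {
    void*    object;
    uint64_t target_generation;  // only 48 bits are used
    uint16_t header_offset;      // bytes from `object` back to its Header
} GenRef;

// The liveness check performed before dereferencing: the reference is only
// valid if the allocation still has the generation we remembered.
static void check_alive(GenRef ref) {
    Header* header = (Header*)((char*)ref.object - ref.header_offset);
    assert(header->generation == ref.target_generation &&
           "tried to dereference a freed object");
}
```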
Side Notes (interesting tangential thoughts)
0

Vale has three release modes:

  • Resilient mode, which is fast and memory safe; it will halt the program when we try to dereference a freed object.
  • Assist mode, for development, to detect potential problems even earlier.
  • Unsafe mode, which turns off all safety.

Resilient mode uses hybrid-generational memory.

1

u48 means a 48-bit unsigned integer.

2

We chose 48 bits, but we could push it as high as 60 bits if we adjusted the below inlining mechanisms. 48 bits is more than enough though.

3

And a u16 offset to know where the generation is relative to the object (see generational references).

Static Analysis: Eliminate Most Liveness Checks

Use static analysis to reduce the number of liveness checks as much as possible. For example:

  • For each dereference, figure out if an in-scope local indirectly owns it. If so, skip the liveness check. For more on this, see HGM Static Analysis, Part 1.
  • Automatically track this information through intermediate stores/loads from struct members, where possible.
  • Automatically track this information through function calls, like an automatic borrow checker, where possible.

This static analysis only works when a nearby local holds the owning reference. The scope tethering explained further below will make it work with non-owning locals too.
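
As a rough illustration (in C rather than Vale, with made-up types and functions), here is the kind of dereference where the check can be dropped: the target is indirectly owned by a local that is still in scope, so the allocation is provably alive.

```c
#include <stdlib.h>

typedef struct { int fuel; } Engine;
typedef struct { Engine engine; } Ship;

int remaining_fuel(void) {
    Ship* ship = calloc(1, sizeof(Ship));  // owning local, freed below
    Engine* engine = &ship->engine;        // non-owning view into `ship`

    // A generation check would normally guard this dereference, but static
    // analysis elides it: `ship` indirectly owns the Engine and stays in
    // scope, so the allocation must still be alive.
    int fuel = engine->fuel;

    free(ship);
    return fuel;
}
```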

Add Scope Tethering

The above static analysis only works when a nearby local holds the owning reference. Now we'll make it work when a nearby local holds a non-owning reference too.

We'll add a u1 "tethered" bit to every allocation, next to the u48 generation number. A local with a non-owning reference can set this bit to 1 to keep its allocation alive. 4 Inside the scope of the local, we can skip all generation checks.

  • When the object is allocated, the tethered bit will be 0.
  • When a local wants to delay the object's destruction, it will:
    • Do a generation check, to see if the object is still alive. If live, load the pointer to the object, otherwise load null. 5
    • Save the old value of the tethered bit. 6
    • Write a 1 to the tethered bit.
  • When the local goes out of scope, it will:
    • Write the old value back to the tethered bit.
  • When the object is deallocated, if the tethered bit is 1, we'll add it to a queue to check later. 7

Not every non-owning local will tether. Static analysis will make a non-owning local tether when it's dereferenced several times; otherwise, it will just allow the generation checks to happen.
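
Here is a rough C sketch of those steps, continuing the hypothetical Header layout from the earlier sketch. The Tether struct, the reuse queue, and the exact point where the generation is bumped are assumptions for illustration, not the finalized design.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t generation : 48;
    uint64_t tethered   : 1;
} Header;

typedef struct Obj {
    Header      header;
    struct Obj* queue_next;   // used only while waiting on the reuse queue
    int         payload;      // stand-in for the object's real fields
} Obj;

// Freed-but-still-tethered objects wait here to be reused later (note 7).
static Obj* reuse_queue = NULL;

// A local that wants to delay the object's destruction: one generation
// check up front, then remember the old tethered bit and set it to 1.
typedef struct {
    Obj* obj;           // NULL if the object was already dead (note 5)
    bool old_tethered;  // might already be 1 if another local tethered it
} Tether;

static Tether tether(Obj* obj, uint64_t target_generation) {
    Tether t = { NULL, false };
    if (obj->header.generation != target_generation) {
        return t;                            // object is gone; hold "null"
    }
    t.obj = obj;
    t.old_tethered = obj->header.tethered;   // note 6
    obj->header.tethered = 1;
    return t;
}

// When the tethering local goes out of scope, restore the old bit.
static void untether(Tether t) {
    if (t.obj) {
        t.obj->header.tethered = t.old_tethered;
    }
}

// When the owning reference is released: the destructor runs regardless of
// the tether (note 4), then we either free the memory or park it on the
// queue if some local is still tethering it.
static void release(Obj* obj) {
    /* ...destructor logic for obj's members would run here... */
    obj->header.generation += 1;  // invalidate outstanding generational refs
    if (obj->header.tethered) {
        obj->queue_next = reuse_queue;       // defer the actual free
        reuse_queue = obj;
    } else {
        free(obj);
    }
}
```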

4

Someone letting go of the object's owning reference will still call its destructor, regardless of the tethered bit. If the tethered bit is 1, the destructor will not free the object. Instead, the last tethering local will free the object.

5

Loading from null is a memory safe operation: it's guaranteed to correctly seg-fault if we load from it.

6

The old tethered bit will usually be 0, but if another local is tethering the object, it could be 1 already.

7

Specifically, every time we allocate, we check the front of the queue to see if something's tether has expired, and if so, reuse that object. If not, move it to the back of the queue and ask generational malloc instead. Similar to a free-list!
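
A self-contained sketch of that allocation path, with the same hypothetical layout as the earlier sketches and plain calloc standing in for generational malloc:

```c
#include <stdint.h>
#include <stdlib.h>

typedef struct {
    uint64_t generation : 48;
    uint64_t tethered   : 1;
} Header;

typedef struct Obj {
    Header      header;
    struct Obj* queue_next;
    int         payload;
} Obj;

// Queue of freed-but-still-tethered objects, with a tail pointer so that
// "move it to the back" is cheap.
static Obj* queue_head = NULL;
static Obj* queue_tail = NULL;

static void queue_push_back(Obj* obj) {
    obj->queue_next = NULL;
    if (queue_tail) { queue_tail->queue_next = obj; } else { queue_head = obj; }
    queue_tail = obj;
}

// Allocation path from note 7: peek at the front of the queue; if its tether
// has expired, reuse that object, otherwise send it to the back and ask the
// regular (generational) allocator for fresh memory instead.
static Obj* allocate_obj(void) {
    Obj* candidate = queue_head;
    if (candidate != NULL) {
        queue_head = candidate->queue_next;
        if (queue_head == NULL) { queue_tail = NULL; }
        if (candidate->header.tethered == 0) {
            return candidate;            // tether expired: reuse this slot
        }
        queue_push_back(candidate);      // still tethered: check again later
    }
    return (Obj*)calloc(1, sizeof(Obj)); // stand-in for generational malloc
}
```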

That's basically it! There are some more things we could do to speed it up even more, using virtual memory, regions, or more static analysis, but we'll stop the explanation here.

Minor Extra Details

To address some frequently asked questions:

  • When we move something across thread boundaries, we must recurse through it 8 and do the following (sketched after this list):
    • Assert each tethered bit is zero; assert that there are no locals pointing at the object.
    • Increment each generation number, effectively cutting off access to the rest of this thread.
  • When a generation number hits the maximum, don't use that generation number anymore.
    • genFree could slice up the allocation into smaller ones that don't include the initial 8 bytes.
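
Below is a tiny sketch of the per-allocation step when sending something to another thread; the recursive traversal that visits each moved allocation (similar to Pony's scan, note 8) is assumed, and prepare_for_send is a hypothetical name.

```c
#include <assert.h>
#include <stdint.h>

// Same hypothetical header as the earlier sketches.
typedef struct {
    uint64_t generation : 48;
    uint64_t tethered   : 1;
} Header;

// Called on each allocation in the object graph being moved.
static void prepare_for_send(Header* header) {
    // No local in the sending thread may still be tethering the object.
    assert(header->tethered == 0 && "cannot send a tethered object");

    // Bumping the generation invalidates every generational reference the
    // sending thread still holds, cutting off its access to the object.
    header->generation += 1;
}
```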

8

Similar to how Pony scans all incoming and outgoing objects.

Potential Weaknesses

Some potential weaknesses to explore:

  • Storing the generation number at the top of a <=64-byte allocation means a liveness check won't incur an extra cache miss, since we're about to dereference the object anyway and the entire object is on one cache line. For larger objects, however, it does incur an extra cache miss. Most objects are small, but programs with an unusually large proportion of medium-sized objects not in an array could suffer a small performance hit.
  • Adding the offset to every reference could interfere with optimizations. If so, we'll have to write our own LLVM pass. 9
  • In environments without virtual memory 10, memory fragmentation could be worse, because we can't give pages back to the OS. This is mitigated by regions, where region-calling can guarantee no references pointing into a certain region. 11
9

Presumably, we would make every generational reference have a pointer to the object, a target generation number, and a pointer to the current generation. The LLVM pass would eliminate the latter.

10

Every mainstream OS has virtual memory, but WASM does not.

11

One day, we could write a compactor for Vale which could also help with this, though it's probably unnecessary.

We're looking for sponsors!

With your help, we can launch a language with speed, safety, flexibility, and ease of use.

We’re a very small team of passionate individuals, working on this on our own and not backed by any corporation.

If you want to support our work, please consider sponsoring us on GitHub!

Those who sponsor us also get extra benefits, including:

  • Early access to all of our articles!
  • A sneak peek at some of our more ambitious designs, such as memory-safe allocators based on algebraic effects, an async/await/goroutine hybrid that works without data coloring or function coloring, and more.
  • Your name on the vale.dev home page!

With enough sponsorship, we can:

  • Start a 501(c)(3) non-profit organization to hold ownership of Vale. 12
  • Buy the necessary computers to support more architectures.
  • Work on this full-time.
  • Make Vale into a production-ready language, and push it into the mainstream!

We have a strong track record, and during this quest we've discovered and implemented a lot of completely new techniques:

  • The Linear-Aliasing Model that lets us use linear types where we need speed, and generational references where we need the flexibility of shared mutability.
  • Region Borrowing, which makes it easier to write efficient code by composing shared mutability with the ability to temporarily freeze data.
  • Higher RAII, where the language adds logic safety by enforcing that we eventually perform a specific future operation.
  • Perfect Replayability, which makes debugging race conditions obsolete by recording all inputs and replaying execution exactly.

These have been successfully prototyped. With your sponsorship we can polish them, integrate them, and bring these techniques into the mainstream. 13

Our next steps are focused on making Vale more user-friendly by:

  1. Finalizing the compiler's error messages and improving compile speeds.
  2. Polishing interop with other languages.
  3. Growing the standard library and ecosystem!

We aim to combine and add to the benefits of our favorite languages.

We need your help to make this happen!

If you're impressed by our track record and believe in the direction we're heading, please consider sponsoring us.

If you have any questions, always feel free to reach out via email, twitter, discord, or the subreddit. Cheers!

12

Tentatively named the Vale Software Foundation.

13

Generational references, the linear-aliasing model, and higher RAII are all complete, and region borrowing, fearless FFI, and perfect replayability have been successfully prototyped. Be sure to check out the experimental version of the compiler!