
App development is one of the fastest-growing areas of software engineering today. There are 3.5 billion smartphone users in the world, using 2.2 million App Store apps and 2.8 million Play Store apps. Many of those have counterparts on the web, too.

Most app developers write separate code for Android, iOS, and web, resulting in triple the amount of code.

Empires have risen and fallen trying to solve this problem, but they all have drawbacks. The world is looking for a way to share fast code without the usual language barriers.

Every good language has one thing that it can do better than any other. This post will show how Vale's unique blend of single ownership and aliasing makes it the perfect language for cross-platform code, and the only language that can bring truly native speed to all three platforms.

These are planned features. Now that Vale has reached version 0.1, we can start exploring this combination of seamless cross-compilation and the native shared core.

Shared Code

There are two main strategies for sharing code today: use a framework or make a shared core.

Frameworks

The most common approach to sharing code between platforms is to use some sort of garbage collected language in a framework that abstracts away all the details. Ionic, React Native, Flutter, Xamarin, and Unity all try to do this.

If your application doesn't need anything super special, these can work well. Unfortunately, they use more battery life and CPU, lag behind the latest features offered by their OS, and the quirks in the underlying platforms leak through their abstractions and cause bugs. It's amazing they've accomplished what they have, given the challenge they face. As the great nwallin aptly put it, "Cross platform UI is probably the hardest problem in software engineering."

Shared Core

The other approach is to use a shared core. In this approach, we have a thin platform-specific UI layer which calls into a shared "business logic" common library.

Transpiling a GC'd Shared Core

JVM languages are making some strides here. Kotlin Native and Scala Native both compile to native machine code that still uses garbage collection. They have a bit of a performance ceiling, but they do their job well! 0

One can also transpile Java straight to Objective-C. j2objc is the tool that cross-compiles the Java code to make the iOS apps for Gmail, Chat, Calendar, Docs, and others. Instead of using a garbage collector, the generated Objective-C code uses reference counting, which is a bit slower, and the transpiled Java code can leak memory if it makes any reference cycles. 1

These solutions have some great benefits, and will still be the best approach for some cases. However, there's a big aspect where we can do even better: performance!

0

JVM languages rely on Just-in-Time (JIT) compilation for speed, but Apple doesn't allow JIT on iOS. A cross-compiled JVM language will unfortunately not be as fast on iOS as it is on Android, because of the lack of JIT.

1

Once one identifies the memory leak, they can break the reference cycle by annotating their code with @Weak.
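For illustration, here's a minimal sketch of that annotation in use, assuming the j2objc annotations jar is on the classpath:

java
import com.google.j2objc.annotations.Weak;

// Parent and Child reference each other. Under j2objc's reference counting this
// cycle would leak; marking the back-reference @Weak breaks it. On the JVM the
// annotation is a no-op, since the garbage collector collects cycles anyway.
class Parent {
  Child child = new Child(this);
}

class Child {
  @Weak private final Parent parent; // not retained in the generated Objective-C

  Child(Parent parent) {
    this.parent = parent;
  }
}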

Using a Native Shared Core

Experienced app developers will tell you: performance is important. Code can take some time to do something, and past a certain threshold, the user notices. Users, especially on iOS, have very high standards, and that slight bit of lag can turn their delight into discontent.

Performance helps the user, and it also helps battery life. Users notice when an app uses a lot of battery, and they don't appreciate having it drained needlessly.

And sometimes, we want performance because we have a lot of calculations to do! Mobile gaming isn't the only performance-hungry kind of app; apps that manipulate sizeable amounts of data (maps, spreadsheets, images) or that have a lot of complex application state will need every bit of performance they can get.

Historically, performance comes with difficulty. In today's most widely used languages, one often has to choose between easy but slow (e.g. Python, Javascript) and fast but difficult (e.g. C++, Rust), and there's not much in between. However, with the advent of languages like Lobster, Cone, and Vale, we'll soon have languages that are incredibly fast and also very easy.

Many companies have turned to C++ (Slack, Earth, Dropbox), which is much faster. We can say from our own experiences that it's a headache to communicate back and forth between C++ and JVM languages, and it's very difficult for people to learn C++.

This is where Vale can shine: native speed, and easy interop with the host language.

Vale's Approach

Vale's unique blend of single ownership, regions, and high-level design makes it able to both cross-compile to JVM and iOS, and seamlessly drop down into native code for speed.

Cross-Compilation with Optimizability

Most high-level languages, such as Java/Javascript/Python, completely abstract away the details of how objects are represented in memory. That gives us no way to optimize, but it also lets the language cross-compile (for example, from Java to JS). On the other end, there are languages like C/C++/Rust/Zig/Nim, which expose raw memory and give us the ability to optimize.

Vale is the best of both worlds: it lets us write faster code with keywords like inl, 2 which are ignored in environments that don't support that optimization, without changing the semantics of the program.

Another example: specifying the allocation strategy (heap, bump, pool, etc.) 3 is similarly ignored in environments that don't support it, and the program will still behave correctly.

Vale is high-level enough to work on all environments, yet gives us tools to write incredibly efficient code.

2

The compiler is intelligent and will put objects on the stack whenever possible, but the user can use the inl keyword to force it. The inl keyword would be obeyed on native, but ignored on JVM or JS.

3

See Zero-Cost Borrowing with Vale Regions for more about regions and how they can drastically speed up a program.

Features Compile According to Environment

Some features of Vale are chosen to be more optimal depending on the environment they're in. For example:

  • weakable objects, which allow weak references to point to them:
    • In native environments, they have a pointer to a "weak ref count" integer or a "generation" number, depending on the region.
    • In JVM or JS, they have a reference to a simple object with a back-pointer pointing back at it (see the sketch after this list).
  • Interface references:
    • In native environments, these are represented as a "fat pointer" struct containing a pointer to the object and a pointer to a vtable.
    • In JVM, these are plain references, and the object itself has the vtable pointer.
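Here's a minimal Java sketch of the weak-reference representation described above; the names (ObjectWeakBox, Engine) are illustrative, not the actual generated code:

java
// The object owns a small box whose back-pointer points at it; destroying the
// object nulls that back-pointer, which invalidates every outstanding weak
// reference at once.
class ObjectWeakBox {
  Object target;
  ObjectWeakBox(Object target) { this.target = target; }
}

class Engine {
  final ObjectWeakBox weakBox = new ObjectWeakBox(this);

  void destroy() {
    weakBox.target = null; // all weak references now observe that the object is gone
  }
}

class EngineWeakRef {
  private final ObjectWeakBox box;
  EngineWeakRef(Engine e) { this.box = e.weakBox; }
  Engine lock() { return (Engine) box.target; } // null if the target was destroyed
}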

Universal Semantics

Vale is a language based on single ownership. Single ownership traditionally means that one reference is the "owning" reference, and when it goes away, the object is deallocated. Vale's single ownership is more general: it tracks the responsibility to eventually call a certain method, 4 more akin to linear types.

Native, garbage collected, and reference-counted environments all benefit from single ownership. 5 In native environments, destroying an owning reference also frees the object; in non-native environments it doesn't have to, and single ownership can still be used for its other purposes.

4

See The Next Steps for Single Ownership and RAII for more about how single ownership is about much more than just freeing memory.

5

For example, adding single ownership to Java or JS would guarantee that you never forget to call .dispose(), .unregister(), .close(), or .resolve(x) again. See The Next Steps for Single Ownership and RAII for more on this.
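To illustrate the kind of bug that would rule out, here's plain Java (not Vale output) showing a cleanup call that nothing currently enforces:

java
import java.io.FileWriter;
import java.io.IOException;

class Report {
  static void write(String path) throws IOException {
    FileWriter writer = new FileWriter(path);
    writer.write("hello");
    // Oops: we never call writer.close(). Java compiles this just fine, and the
    // handle lingers until the GC happens to finalize it. With single ownership,
    // dropping the owning reference to the writer without explicitly consuming
    // (closing) it would be a compile error.
  }
}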

Seamless Communication

Today's languages don't let us have references between native code and the host environment (JVM, iOS, JS), and we often have to construct entire layers of infrastructure to route information to where it needs to go.

In Vale, we can use regions to express to the compiler which objects are in the native environment, and which objects are in the host environment, and a function can have references to both at the same time.

This makes it incredibly easy to, for example, have a Javascript button call into a Vale presenter when it's clicked. The Javascript button will call into the Vale-transpiled code, which knows how to communicate across the boundary.

How does it work?

Depending on the kind of region the native code uses, this can work in different ways. We use certain tables in thread-local storage to serve as our references into native memory.

In native memory, there will be:

  • Native Constraint Ref Table: 6 Map<u64, [ &Any, int ]>. The &Any is a regular borrow reference to an object, and the int is how many times we've "locked" this object to prevent its deletion.
  • Native Weak Ref Table: Map<u64, &&Any>. Dead entries are eventually cleaned up by incrementing a rabbit pointer, in constant time. 7
  • An owning reference pointing into native memory will also use the Native Constraint Ref Table.

In JVM memory, there will be:

  • Java Constraint Ref Table: Map<u64, [ Object, int ]> 8
  • Java Weak Ref Table: Map<u64, ObjectWeakBox>
  • Java Owning Ref Table: Map<u64, Object>

6

If these are in a shared buffer (JVM "native memory"), then both sides can reach into it. That will be useful for incrementing/decrementing that count.

8

We might have this int be in a shared buffer instead, so we can increment/decrement it quickly from both sides.
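To make the JVM-side tables concrete, here's a rough Java sketch; the class and method names are assumptions, not the actual generated code (and the real tables would live in thread-local storage, as described above):

java
import java.util.HashMap;
import java.util.Map;

class HostRefTables {
  // A constraint ref entry: the object plus a count of how many times the native
  // side has "locked" it to prevent deletion.
  static class ConstraintEntry {
    final Object target;
    int lockCount;
    ConstraintEntry(Object target) { this.target = target; }
  }

  static final Map<Long, ConstraintEntry> constraintRefs = new HashMap<>();
  static final Map<Long, ObjectWeakBox> weakRefs = new HashMap<>(); // ObjectWeakBox as in the earlier sketch
  static final Map<Long, Object> owningRefs = new HashMap<>();

  private static long nextId = 1;

  // Hand out a stable id that the native side can hold instead of a real reference.
  static long addConstraintRef(Object target) {
    long id = nextId++;
    constraintRefs.put(id, new ConstraintEntry(target));
    return id;
  }
}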

Let's see it in action!

Now we'll show what this could look like. Keep in mind, this is still very theoretical, and the syntax will likely be improved.

Here we're using 'core to refer to the native side and 'host to refer to the JVM side; these are defined by the user elsewhere.

This snippet has a function that will run on the JVM.

  • spaceship is compiled to a long, which is an index into the global Native Constraint Ref Table.
  • e is a regular Java click event.
  • callback is a regular Java Func1<Bool, Void>.

Calling flyTo here will:

  • Serialize e.location into a buffer.
  • Give callback a spot in the Java Constraint Ref Table, and remember the index.
  • Make a JNI call into _flyto_wrapper, which takes:
    • u64 spaceshipID
    • char[] loc
    • u64 callbackID

_flyto_wrapper will then:

  • Get a constraint ref for spaceshipID from the Native Constraint Ref Table.
  • Deserialize loc.
  • Call flyTo.

vale
func launch(
  spaceship &core'Spaceship,
  e ClickEvent,
  callback ICallback) // change to fn(bool)void
'host {
  dist = spaceship.loc.distance(e.location);
  println("Flying " + dist + " parsecs!");
  spaceship.flyTo(e.location, callback);
}
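For illustration, here's a rough Java sketch of the glue the compiler might emit on the JVM side for that flyTo call; _flyto_wrapper, serializeLoc, Func1, and HostRefTables (from the earlier table sketch) are assumed names, and Loc stands for the JVM-compiled version of the struct shown further below:

java
interface Func1<A, R> { R call(A a); } // stand-in for the "regular Java Func1<Bool, Void>" above

class SpaceshipGlue {
  // Implemented on the native side; this is the single JNI crossing per call.
  static native void _flyto_wrapper(long spaceshipID, byte[] loc, long callbackID);

  static void flyTo(long spaceshipID, Loc location, Func1<Boolean, Void> callback) {
    byte[] locBuffer = serializeLoc(location);                   // serialize e.location into a buffer
    long callbackID = HostRefTables.addConstraintRef(callback);  // give callback a table slot, remember the index
    _flyto_wrapper(spaceshipID, locBuffer, callbackID);          // JNI call into native code
  }

  static byte[] serializeLoc(Loc loc) { /* elided */ return new byte[0]; }
}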

This snippet is compiled to native assembly.

  • spaceship is a normal native constraint ref.
  • loc is a normal native Loc.
  • callback is compiled to a u64, an index into the Java Constraint Ref Table.

vale
func flyTo(
  ship &Spaceship,
  loc Loc,
  callback platform'ICallback) // change to fn(bool)void
'core {
  if (ship.fuel < ship.loc.distance(loc)) {
    callback(false);
  } else {
    set ship.target = loc;
    set ship.fuel = ship.fuel - 10; // change to mut ship.fuel -= 10;
    set ship.flyingCallback = callback;
  }
}

This struct is compiled in both worlds.

The distance function here is also compiled to both worlds.

Since this is a normal struct without any region markers, it can be compiled to both sides. There will be a native version of distance which works on native Locs, and a JVM version of distance that works on JVM Locs.

vale
struct Loc imm {
  x Int;
  y Int;

  func distance(this Loc, that Loc) {
    sqrt(sq(this.x - that.x) + sq(this.y - that.y))
  }
}
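For illustration, the JVM-compiled version of Loc might look roughly like this (a sketch under assumed naming, not actual transpiler output):

java
final class Loc {
  final int x;
  final int y;
  Loc(int x, int y) { this.x = x; this.y = y; }

  static double distance(Loc a, Loc b) {
    int dx = a.x - b.x;
    int dy = a.y - b.y;
    return Math.sqrt(dx * dx + dy * dy);
  }
}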

This struct is compiled in native only.

JVM can still have a reference to it (perhaps through the Tables), but there's no JVM Spaceship class.

vale
struct Spaceship 'core {
  fuel Int;
  loc! Loc;
  target! Loc;
  callback platform'ICallback; // change to fn(bool)void;
}

Vale can call seamlessly into native code, with only a few annotations.

The crown jewel here is that we can have some functions, like distance above, compiled to both JVM and native. This minimizes the number of times we cross the JNI boundary, and avoids a lot of unnecessary JNI calls for tiny one-off functions like getters.

Vale's combination of high-level and fast gives it the unique ability to live in both worlds. Vale is a language that doesn't fear the boundary, but thrives on it. Code can fluidly change between native and host, enabling amazing performance for our apps.