b_e_n_t_o_n a day ago

45% slower seems pretty decent considering they use a wasm kernel they developed to mimic the Unix kernel, so they can run unmodified Unix programs inside the browser. It's actually pretty impressive that they did this, and even more impressive that it works and, as another commenter said, is not even an order of magnitude slower.

I'm more interested in 1) uses of wasm in the browser that don't involve running unmodified Unix programs and 2) wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees. Could it be the future for writing native applications?

Languages like Kotlin, C#, and Rust, as well as C/C++ etc., support wasm quite well. Could we see it become a legitimate target for applications in the future, if the performance gap were closer to 10%-ish? I would personally prefer running wasm binaries with guaranteed (as much as possible ofc) sandboxing over raw binaries.

edit: it's from 2019, there have been significant improvements made to wasm since then.

  • apitman a day ago

    > wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees

    I've been using it this way for DecentAuth[0]. It's awesome. I compile a single native codebase to wasm, and I can use my library from JS, Go, or Rust. New host languages only require about 1000 lines of glue. I don't have to worry at all about building for different architectures.

    [0]: https://github.com/lastlogin-net/DecentAuth
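
    One way the host side of that can look from Rust, e.g. via the Wasmtime crate (a minimal sketch; the "library.wasm" file and the exported "add" function are made-up placeholders, not DecentAuth's actual API):

        // Cargo.toml deps (assumed): wasmtime = "*", anyhow = "1"
        use wasmtime::{Engine, Instance, Module, Store};

        fn main() -> anyhow::Result<()> {
            let engine = Engine::default();
            // One .wasm artifact built from the shared native codebase.
            let module = Module::from_file(&engine, "library.wasm")?;
            let mut store = Store::new(&engine, ());
            let instance = Instance::new(&mut store, &module, &[])?;

            // Call a hypothetical exported function through a typed handle.
            let add = instance.get_typed_func::<(i32, i32), i32>(&mut store, "add")?;
            println!("2 + 3 = {}", add.call(&mut store, (2, 3))?);
            Ok(())
        }

    The per-language glue is mostly this kind of loading and marshalling code, which is presumably why it stays in the ~1000-line range.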

  • wmf a day ago

    > wasm outside the browser for compile-once-run-anywhere use cases with sandboxing / security guarantees

    Please just use Docker in a microVM or whatever. It's 0% slower and 100% more mature.

    • unoti a day ago

      > Please just use Docker in a microVM or whatever. It's 0% slower and 100% more mature.

      Wasm has different characteristics than Docker containers and as a result can target different use cases and situations. For example, imagine needing plugins for game mods or an actor system, where you need hundreds or thousands of them, with low-latency startup, low memory footprints, and low overhead. This is something you can do sanely with wasm but not with containers. So containers are great for lots of things, but not every conceivable thing; there's still a place for wasm.
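
      A rough sketch of that "compile once, spin up many cheap sandboxes" shape, using Wasmtime from Rust (the plugin file and its "on_tick" export are assumptions, not any real mod API):

          use wasmtime::{Engine, Instance, Module, Store};

          fn main() -> anyhow::Result<()> {
              let engine = Engine::default();
              // Compile the plugin module once up front...
              let module = Module::from_file(&engine, "plugin.wasm")?;

              // ...then give each plugin/actor its own isolated instance and memory.
              for id in 0..1_000 {
                  let mut store = Store::new(&engine, ());
                  let instance = Instance::new(&mut store, &module, &[])?;
                  let on_tick = instance.get_typed_func::<i32, ()>(&mut store, "on_tick")?;
                  on_tick.call(&mut store, id)?;
              }
              Ok(())
          }

      Whether instantiation is cheap enough for a given game or actor system is something to measure, but nothing here involves booting a kernel or pulling a container image.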

      • Groxx a day ago

        Yeah, I mostly see it competing with Lua and small function execution in a safe sandbox (e.g. similar scope to eBPF), and maybe for locking down problematic stuff that isn't ultra performance-sensitive, like many drivers.

        So agreed: plugins, in games or in the kernel.

    • RussianCow a day ago

      But way more difficult, and with a much larger attack surface.

      And also, it's not necessarily apples to apples. It would be nice to be able to drop a compiled WASM module into your codebase and use it from just about any language on the backend. You could reuse a lot of code that way across different services without the overhead of spinning up yet another container. And you could potentially even run untrusted code in a sandboxed way.

    • saghm a day ago

      Please just use a custom FPGA hand-coded to the exact specifications of the program. It's even less than 0% slower than Docker in a microVM, and unlike Docker, it at least provides one of the two benefits that you quoted from the parent comment. Good thing we already changed the parameters of what they said they're looking for!

    • b_e_n_t_o_n a day ago

      Getting an end user to set up and run Docker to run an app is a non-starter for most things.

    • jcelerier a day ago

      Does that allow me to do GPU and real-time audio work on Windows and macOS?

    • eviks a day ago

      Even for small plugins in your app?

    • rowanG077 a day ago

      Setting up Docker and a microVM is orders and orders of magnitude harder and less ergonomic than using your browser. These are not at all interchangeable.

      • wmf a day ago

        > wasm outside the browser

rlili a day ago

That it’s not even an order of magnitude slower sounds actually pretty good!

icsa a day ago

45% slower to run everywhere from a single binary...

I'll take that deal any day!

  • gishh a day ago

    That which is old is new again. The wheel keeps turning…

    “Wait we can use Java to run anywhere? It’s slow but that’s ok! Let’s ride!”

    • bloppe a day ago

      There's a reason Java applets got deprecated in every browser. The runtime was inherently insecure. It just doesn't work for the web.

      Also, targeting the JVM forces you to accept garbage collection, class-based OO and lots of pointer chasing. It's not a good target for most languages.

      Java's pretty good, but wasm is actually a game changer.

      • hashmash a day ago

        The Java runtime isn't any more inherently insecure than the JavaScript runtime, and JavaScript seems to work just fine for the web.

        The key reason applet security failed was that it gave you the entire JDK by default, so every method in the JDK needed explicit security-checking code in place to restrict access. The model was backwards: full control by default with selective disabling meant that every new feature in the JDK was a potential new vulnerability.

        • bloppe 11 hours ago

          Just look up "Java applet sandbox escape". There were tons of ways to do it. Here are some [0]. Then there are the coarse-grained permissions that were essentially useless to begin with.

          [0]: https://phrack.org/issues/70/7

          • hashmash 7 hours ago

            Yes, I'm familiar with these. Many of the earliest problems were due to bugs in the verifier, and there were several different vendors, each with their own set of bugs. The bulk of these problems were identified and resolved over 25 years ago.

            Most of the later problems were due to the fact that the API attack surface was too large, because of the backwards SecurityManager design. And because it existed, it seems there was little incentive to do something better.

            The instrumentation API introduced in Java 5 made it easier to write agents that could limit access to APIs using an "allow" approach, rather than the awful rules imposed by the SecurityManager. Java 9 introduced modules, further hardening the boundaries between trusted and untrusted code. It was at this point that the SecurityManager should have been officially deprecated, instead of waiting four more years.

            Going back to the earlier comment, the problem isn't due to the runtime being somehow inherently insecure, but instead due to the defective design of the SecurityManager. It hasn't been necessary for providing security for many years.

      • qingcharles a day ago

        How does .Net stack up?

        • bloppe 12 hours ago

          I'm not too sure, but the main reason MS developed it was that they wanted Java without licensing it from Sun, so I imagine they made a lot of similar design decisions.

          Anyway, it's great if you compile it to Wasm.

      • gishh a day ago

        I am a huge, huge fan of wasm. The first time I was able to compile a Qt app to Linux, Windows, Mac, and wasm targets, I was so tickled pink it was embarrassing. Felt like I was truly standing on the shoulders of giants, and I really appreciated the entirety of the "stack", if you will.

        Running code in a browser isn't novel. It's very circular. I actually met someone the other day who thought JavaScript was a subset of Java. The same person was also fluent in PHP.

        Wasm is really neat, I really love it. My cynical take on it is that, at the end of the day, it’ll just somehow help ad revenue to find another margin.

        • bloppe a day ago

          Fair. Running in the browser isn't novel, but JS/TS are some of the most popular languages in history and that almost certainly never would have happened without monopolizing the browser.

          Expanding margins are fine by me. Anticompetitive markets are not. My hope is that wasm helps to break a couple strangleholds over platforms (cough cough iOS cough Android)

          • binary132 a day ago

            I really don’t think Apple is going to let anyone get away with too much browser appifying of iOS.

            • bloppe a day ago

              It's not a question of Apple letting anyone do anything. It's just a question of governments forcing it to do so.

  • andyferris a day ago

    45% slower to run everywhere from a single binary... with fewer security holes, without undefined behavior, and trivial to completely sandbox.

    It's definitely a good deal!

    • pron a day ago

      > without undefined behavior

      Undefined behaviour is defined with respect to the source language, not the execution engine. It means that the language specification does not assign meaning to certain source programs. Machine code (generally) doesn't have undefined behaviour, while a C program could, regardless of what it runs on.

    • ben-schaaf a day ago

      Native code generally doesn't have undefined behaviour. C has undefined behaviour and that's a problem regardless of whether you're compiling to native or wasm.

  • ori_b a day ago

    Is compiling so hard?

PantaloonFlames a day ago

45% slower means..?

Suppose native code takes 2 units of time to execute.

“45% slower” is???

Would it be 45% _more time?_

What would “45% _faster_” mean?

  • gjm11 a day ago

    What looks like the relevant table has a summary line saying "geometric mean: 1.45x" so I think that in this case "45% slower" means "times are 1.45x as long".

    (I think I would generally use "x% slower" to mean "slower by a factor of 1+x/100", and "x% faster" to mean "faster by a factor of 1+x/100", so "x% slower" and "x% faster" are not inverses, you can perfectly well be 300% faster or 300% slower, etc. I less confidently think that this is how most people use such language.)

    • degamad a day ago

      What would 300% faster mean?

      If the original process took 30 minutes to process 10 items, how long would the 300% faster method take?

      • fainpul 18 hours ago

        300% faster = 400% original speed = 4 times as fast = 1/4 the time

        • degamad 6 hours ago

          Of course, my mind glossed over the point that the factor is being applied to the speed, so 300% faster than 20 items per hour is 80 items per hour. That makes sense. It's also analogous to "300% more than 20 is 80".

          But then it's hard to make sense of the idea that 300% slower is 5 items per hour (if I'm understanding correctly), since it works differently from "75% less than 20 is 5".
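
          A worked version of that asymmetry, using the convention gjm11 describes above (the factor multiplies speed for "faster" but time for "slower"):

              x% faster:  t_new = t_old / (1 + x/100)   i.e. speed × (1 + x/100)
              x% slower:  t_new = t_old × (1 + x/100)   i.e. speed ÷ (1 + x/100)

          With x = 300 and 20 items/hour: "300% faster" gives 80 items/hour, while "300% slower" gives 5 items/hour, which is a 75% reduction in speed rather than a 300% one.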

  • oersted a day ago

    It’s a fair point, that way of expressing it is always a bit confusing. Is it the original time plus 45%? Is it 45% of the original speed?

    I think it is easier to understand in terms of throughput.

    So 45% less work per unit of time, so 55% of the work.

  • azakai a day ago

    0% slower means "the same speed": the same number of seconds.

    10% slower means "takes 10% longer." 10% more seconds.

    So 45% slower than 2 seconds is 1.45 * 2 = 2.9 seconds.

  • tharakam a day ago

    I guess it is clearer if expressed as "the native application took only x% of the time of the WASM equivalent".

azakai a day ago

The data here is interesting, but bear in mind it is from 2019, and a lot has improved since.

baudaux a day ago

I have built a Fibonacci wasm/WASI executable in Rust. When I execute it in https://exaequos.com (with the wex runtime, which is under development), it is faster than the native app on my MacBook.
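
A minimal sketch of the kind of benchmark being described (the naive recursive variant and the exact build target are assumptions; the comment doesn't say which was used):

    // Build for WASI with e.g. `cargo build --release --target wasm32-wasi`
    // (newer toolchains name the target wasm32-wasip1), then run under a WASI
    // runtime or compile natively for comparison.
    fn fib(n: u64) -> u64 {
        if n < 2 { n } else { fib(n - 1) + fib(n - 2) }
    }

    fn main() {
        println!("fib(40) = {}", fib(40));
    }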

vlovich123 a day ago

This is pretty good actually, considering the low-hanging optimizer improvements still left, and that the alternative is JS, which generally performs 2-10x slower.

I think vectorization support will narrow the aggregate difference here, as a lot of SPEC benefits from auto-vectorization, if I recall correctly.
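
For what it's worth, wasm already has a 128-bit SIMD extension that compilers can auto-vectorize for; a hedged sketch of the kind of loop that benefits (the rustc flag is the standard one for enabling it, but how much a given engine gains is workload-dependent):

    // Built with RUSTFLAGS="-C target-feature=+simd128" for a wasm32 target,
    // this loop is a candidate for auto-vectorization into v128 operations.
    pub fn saxpy(a: f32, xs: &[f32], ys: &mut [f32]) {
        for (y, x) in ys.iter_mut().zip(xs) {
            *y += a * *x;
        }
    }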

turbolent a day ago

... in browsers, which at best JIT-compile. There are several WASM runtimes that AOT-compile and have significantly better performance (e.g. ~5-10% slower).

The title is highly misleading.

  • astafrig a day ago

    It’s not misleading to measure the performance of WebAssembly in a web browser.

    • bjconlan a day ago

      Yeah, but it's specifically testing things that are implemented against a POSIX API (because generally that's what "native" APIs do, omitting libc and other OS-specific foundation libraries that are pulled in at runtime or otherwise). I would suspect that if the applications were linked against some WASI-like runtime it might be a better metric (native WASI as a lib vs. a wasm runtime that also links it). Mind you, that still wouldn't help the browser runtime... but it would be a better metric for wasm-to-native performance comparison.

      But as already mentioned, we have gone through all this before. Maybe we'll see wasm bytecode pushed into silicon like we did with the JVM... although perhaps this time it might stick, or move up into server hardware (which might have happened, but I only recall embedded devices supporting hardware-level JVM bytecode).

      In short, the web-browser bit is omitted from the title.

    • wffurr a day ago

      WebAssembly is neither web nor assembly. It's a low-level bytecode format most similar to LLVM IR.

  • pyrolistical a day ago

    Just means the browsers can catch up.

    Initially slower but then faster after full compilation

  • padenot a day ago

    Browsers have been doing (sometimes tiered) AOT compilation since wasm's inception.

  • chalcolithic a day ago

    Could you please name them?

    • wffurr a day ago

      WAMR (WebAssembly Micro Runtime), wasm2c in WABT (WebAssembly Binary Toolkit), Wasmtime.
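
      As a rough sketch of the AOT flow with Wasmtime (file names are made up; `deserialize_file` is unsafe because it trusts the artifact, so only load files you produced yourself):

          use wasmtime::{Engine, Module};

          fn main() -> anyhow::Result<()> {
              let engine = Engine::default();
              // Ahead-of-time compile once and cache the result on disk...
              let precompiled = engine.precompile_module(&std::fs::read("app.wasm")?)?;
              std::fs::write("app.cwasm", &precompiled)?;
              // ...then later load it without any JIT warm-up.
              let _module = unsafe { Module::deserialize_file(&engine, "app.cwasm")? };
              Ok(())
          }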

ModernMech a day ago

Yeah, I've seen this when testing Rust code compiled to native and wasm. I don't know about 45% though; I haven't measured it.