Recently I went to BostonJS at Bocoup, where Naveed Ihsanullah from Mozilla shared some of the upcoming concurrency features that are expected to land in roughly a year (the talk took place on April 30th, 2015).

Until now, the closest thing we had to concurrency was Web Workers, which solved some of the problems: they let us offload heavy computations from our single-threaded JavaScript. Workers play well with the single-threaded execution model, but they add overhead for communication between the main thread and the workers, and they restrict what kind of code can run inside them.
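
To illustrate the current model, here's a minimal sketch of offloading work to a worker today; the file names and the trivial sum task are just for illustration:

```js
// main.js: offload a computation to a worker. Every postMessage copies
// (or transfers) its payload; the two threads share no memory.
const worker = new Worker('sum-worker.js');

worker.onmessage = (event) => {
  console.log('sum computed off the main thread:', event.data);
};

// The array is structured-cloned into the worker: fine for small payloads,
// noticeable overhead for large ones.
worker.postMessage([1, 2, 3, 4, 5]);
```

```js
// sum-worker.js: receive the copied array, do the work, post the result back.
self.onmessage = (event) => {
  const sum = event.data.reduce((acc, n) => acc + n, 0);
  self.postMessage(sum);
};
```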

It looks like things are going to change. There's a draft spec, “Spec: JavaScript Shared Memory, Atomics, and Locks”, and this gist that talks about a SharedArrayBuffer primitive that would be concurrently accessible from multiple threads. You can read about the motivation and roadmap in a February entry on the Mozilla blog.
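
Here's my rough sketch of what that could look like, based on my reading of the draft; the API was still in flux at the time, so take the details with a grain of salt:

```js
// main.js: allocate a SharedArrayBuffer and hand it to a worker.
// Unlike a regular ArrayBuffer, it is shared with the worker, not copied.
const shared = new SharedArrayBuffer(Int32Array.BYTES_PER_ELEMENT * 1024);
const view = new Int32Array(shared);

const worker = new Worker('worker.js');
worker.postMessage(shared);

// Writes the worker makes are visible here through `view`, because both
// sides reference the same block of memory.
```

```js
// worker.js: build a view over the same memory and write into it.
self.onmessage = (event) => {
  const view = new Int32Array(event.data);
  view[0] = 42; // the main thread can observe this write
};
```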

Naveed joked about bringing deadlocks & race conditions to previously safe JS (which is a great joke and will help make future interviews ‘funnier’: ‘- Do you want to create static pages for us & validate some forms? - Yeah! - Great, tell us about concurrency & deadlocks in JavaScript’). I might have distorted that in the previous paragraph, so here’s Naveed’s actual comment:

Deadlocks are actually possible now in JavaScript. Shared memory and the associated locks, however, would potentially allow data races and new deadlocks because of synchronization. A bit different. What I meant is shared memory can be complex and JavaScript was spared that complexity in the past. Depending on how we finally choose to surface this functionality that may not be the case in the future.
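
To make the data-race part concrete, here's a sketch of the classic lost-update problem on a shared counter, assuming an Atomics.add operation along the lines of the draft:

```js
// worker.js: several workers receive the same SharedArrayBuffer
// and bump a shared counter.
self.onmessage = (event) => {
  const counter = new Int32Array(event.data);

  for (let i = 0; i < 100000; i++) {
    // Unsynchronized read-modify-write: two workers can read the same value
    // and overwrite each other's increment, so the final count comes up short.
    counter[0] = counter[0] + 1;

    // An atomic increment along the lines of the draft would make the
    // read-modify-write indivisible:
    // Atomics.add(counter, 0, 1);
  }

  self.postMessage('done');
};
```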

It seems the API will go through some dramatic changes over the coming months. Naveed mentioned that on the day of the talk there was an interesting idea to look at the way C++ handles threads and modify the API slightly. For more details, you might want to take a look at this draft.

The talk closed with a demo rendering the Mandelbrot fractal on all 8 cores. Notably, it also used SIMD, which made the code run several times faster still. The fractal demo itself was about 6x faster than normal JS (using 8 cores instead of 1) thanks to shared memory & threading, and SIMD brought another 2.5-3.5x performance boost on top of that.
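
SIMD was a separate in-flight proposal at the time; very roughly, it exposes fixed-width vector types so one instruction operates on several lanes at once. Here's a rough sketch of the idea, assuming the Float32x4 type with add/extractLane operations (the exact method names shifted between drafts of SIMD.js):

```js
// A rough sketch of the SIMD.js idea: operate on four floats per instruction.
const a = SIMD.Float32x4(1.0, 2.0, 3.0, 4.0);
const b = SIMD.Float32x4(5.0, 6.0, 7.0, 8.0);

// One vector add instead of four scalar adds, which is the kind of win
// that gave the Mandelbrot demo its extra boost on top of threading.
const sum = SIMD.Float32x4.add(a, b);

console.log(SIMD.Float32x4.extractLane(sum, 0)); // 6
```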

Such an API could enable some great things on the web, and also give us another way to shoot ourselves in the foot.

Naveed did mention that library authors will need to get involved and wrap these powerful low-level primitives into handy tools for end users, especially for VR, image processing and other computation-heavy work that will only get more widespread as we go forward. Oh, and Naveed mentioned there was a public indication that the V8 team intends to implement this, so let’s just hope we will have it generally available in around a year.
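
To get a feel for the kind of wrapping he meant, here's a minimal mutex sketch over one cell of a shared Int32Array; it assumes compare-exchange and wait/notify style operations along the lines the Atomics proposal was heading (the draft used futex-flavoured names at the time):

```js
// A minimal mutex over one cell of a shared Int32Array: the kind of
// higher-level tool a library could offer on top of the raw primitives.
class Mutex {
  constructor(sharedInt32Array, index = 0) {
    this.cell = sharedInt32Array;
    this.index = index;
  }

  lock() {
    // Try to flip the cell from 0 (unlocked) to 1 (locked); if another
    // agent holds it, sleep until notified, then try again.
    while (Atomics.compareExchange(this.cell, this.index, 0, 1) !== 0) {
      Atomics.wait(this.cell, this.index, 1);
    }
  }

  unlock() {
    Atomics.store(this.cell, this.index, 0);
    Atomics.notify(this.cell, this.index, 1);
  }
}
```

As far as I understand, a blocking wait like this would only be allowed inside workers, not on the main thread, which is one more reason the ergonomics will need library help.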

All in all, this looks like an interesting space to follow, and I’m definitely looking forward to Naveed’s talk at JSConf US 2015.

I want to thank Naveed for taking the time to review this post & provide additional details I didn’t get right the first time 😼