JavaScript & beyond

The History

Mocha / Livescript / JavaScript

In May 1995, Brendan Eich designed and wrote what we know today as JavaScript, later standardized as ECMAScript (more on that later). He was working for Netscape at the time, and they needed a lightweight language to put into their new Netscape 2.0 beta browser. Over ten days, Eich wrote the prototype of JavaScript. Under development it was called Mocha, then LiveScript; it first shipped publicly in the Netscape 2.0 beta in September 1995, and by the end of the year its name had changed yet again, to JavaScript [1].

Netscape hoped to ride the popularity of Java at the time. Although the two languages have very little in common, the name alone was enough to get people to look at Netscape's new language.

The mid-'90s sat in the middle of the web boom, and competition was extraordinarily fierce. Netscape needed to compete with Sun's Java and Microsoft's Visual Basic. Sun was gaining popularity on the web with its portable Java virtual machine, which could be used to write web applications. However, Java, with its object orientation and type system, was meant for the experienced programmer. Netscape saw the importance of an easily approachable dynamic language for the web: there needed to be a middle ground between Java applets and static HTML+CSS websites. So Brendan Eich was put on the task of making a language for the web intended to be used by 'amateurs'.

The language needed to be interpreted, familiar, and memory-managed. Eich thought that websites would only live in memory for a few seconds or minutes, so user-level memory management was not important [2]. Programmers at the time knew C/C++, so the syntax was made to look familiar. Similarity was key: forcing programmers to learn a new paradigm would only slow adoption and perhaps kill JavaScript before it could even take off.


At the height of the browser wars, Microsoft's Internet Explorer implemented a clone of Netscape's JavaScript. JScript was close to, but not quite the same as, JavaScript. This divide forced developers to choose whether their dynamic website would work in Netscape or in Internet Explorer.

In 1996, Netscape submitted their version of JavaScript to the ECMA governing body (the European Computer Manufacturers Association) for standardization. Without this standardization, JavaScript might have remained split in two forms forever, neither able to gain enough market share to squash the other; the language might have died from fragmentation. It was standardized in 1997 as ECMAScript, reconciling JavaScript and JScript.

The Good

Overall, a simple scripting language to dynamically change a website's appearance and behavior massively increases the usability of the web. An amateur doesn't have to settle for static documents or learn Java to make a website interactive. Moreover, JavaScript has access to the Document Object Model (DOM), which means it can modify the tree structure that HTML is parsed into. The web became more than a document storage medium. The web became a user experience, fully interactive and dynamic to the user's inputs.

To start, the language was easy to learn, and it quickly became one of the most popular programming languages in the world. An easily approachable language attracts more users, which means a larger community and a larger base of support. Simply put, its ease of adoption brought in more programmers to teach others, further increasing its popularity.

Unlike its main competitor Java, the ECMAScript interpreter is part of the browser, not a plugin. Additionally, ECMAScript is permitted to interact with the DOM, while a Java applet is relegated to a hole within the DOM. It's easier for a person surfing the web since there is no extra software to install. While its deep integration with a website's layout was at first used to frustrate users with bouncing ads, ECMAScript opened a path for people to get more creative with websites. Websites could react to users' inputs; buttons could change a page without a full reload!

Between about 1999 and 2004, a new practice became common in JavaScript code. An XMLHttpRequest allows JavaScript to send a request back home to get new information. For the first time, getting new information into the page did not require a page reload [3]. Not only did this reduce the bandwidth an interactive website required, it also sped websites up: the browser did not have to reparse a new HTML document, reflow it, and style it. Instead, new information could be passed straight to JavaScript to act on. It simplified the logic of web applications, which let more people write them!
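The pattern looked roughly like this. A minimal sketch of the classic XMLHttpRequest idiom; the URL and the names getJSON and onDone are illustrative, and the code is browser-only, since XMLHttpRequest is a browser API:

```javascript
// Fetch JSON from the server without reloading the page.
// Browser-only sketch: XMLHttpRequest does not exist outside the browser.
// The URL '/latest.json' and the callback name are placeholders.
function getJSON(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url, true);               // true = asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(JSON.parse(xhr.responseText)); // hand the parsed data to the page
    }
  };
  xhr.send();
}
```

The page's script can then update only the part of the DOM that changed, rather than asking the server for a whole new document.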

There have been more additions to JavaScript over time, such as WebSockets (a persistent two-way connection to a server), polyfills (emulating new JavaScript features in old browsers), and ongoing revisions to the ECMAScript standard itself.

The Bad

As the Rule of Least Power goes, "powerful languages inhibit information reuse" [4]. To this effect, it's easy to let JavaScript take care of more and more of the website, taking more and more information away from the HTML. This makes websites harder to crawl: they must be evaluated in an interpreter to understand the information displayed.

As more features are added to JavaScript, programmers find more ways to use it. Some practices are for the better: they simplify previously complex tasks (such as the request library for AJAX requests). But more often than not, the language's flexibility leads to unreadable and brittle code.

JavaScript was created in 10 days. There simply wasn't time to consider the ramifications of all of these choices.

Number type

There is only one kind of number in JavaScript: IEEE 754 Double Precision Floating Point values. This simplification of numbers went too far. For example:

99999999999999999 == 100000000000000000 => true

This choice may have made the language easier to program. One type of number certainly makes the implementation simpler. However, the tradeoff came at the cost of speed and of ease of reasoning. Firstly, nearly all math operations are floating-point operations, which are more expensive than integer operations; this adds continual overhead to evaluation within the interpreter. Secondly, and more importantly, conflating numerical values makes reasoning about a program more difficult: floating-point errors must always be taken into account [5].
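A few consequences of the double-only representation, runnable in Node or any browser console:

```javascript
// Every JavaScript number is an IEEE 754 double, so integers are only
// exact up to 2^53 - 1 (exposed in newer engines as Number.MAX_SAFE_INTEGER).
var same = (99999999999999999 === 100000000000000000);
console.log(same);                // true: both literals round to the same double

// The classic decimal-fraction surprise:
console.log(0.1 + 0.2 === 0.3);   // false
console.log(0.1 + 0.2);           // 0.30000000000000004
```

Languages with distinct integer types avoid the first surprise entirely; in JavaScript the programmer must carry these rounding rules in their head.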

Type System

JavaScript lets variables change types. It lets values be coerced between types. It has a powerful type coercion system. For the amateur programmer it's really wonderful. The following code works:

var a = [1, 2, 3];
while (a) {
    a.pop();
    if (a.length === 0) { break; }
}

But wait. What's going on here? while is coercing an array of numbers to a boolean! Every object, including an array, is truthy in a boolean context (even the empty array, which is why the loop above needs an explicit break), and yet under loose equality [] == false is true while [1, 2, 3] == true is false. Everything can be converted to a boolean. Everything can be the predicate for a loop. That is amazingly simple. That is amazingly bad.

A program becomes significantly harder to reason about. There are at least a dozen rules that specify how two non-boolean values compare. This multiplicity of rules makes following the logical flow of a program more complex.

However, these rules allow JavaScript to gracefully handle errors. The browser does not crash when a website's programmer uses an array as a boolean. A correct JavaScript interpreter liberally accepts programs, but we as a community of programmers should not be testing the limits of that; we need to be more conservative in what we send.


Speed

JavaScript has a speed problem. It's slow. In the mid-2000s there was a push for browsers to speed up their JavaScript engines. Projects such as Chrome's V8 and Firefox's SpiderMonkey pushed speeds far, but the language itself cannot go much further. There are two inherent problems. The first is that JavaScript is single-threaded: a program cannot use more than one thread. The second is that the weak type system and duck typing prevent many optimizations, adding significant computational overhead to what should be only a couple of machine-code instructions [6].

The Fixes

There have been three primary movements to improve the state of the most common programming language used on the web.

1. Fix JavaScript

This first class of approaches is straightforward. The general idea is to extend JavaScript to make it faster, safer, or otherwise easier to use.

Web Workers

Web Workers are a way to have multithreading in JavaScript. They are an abstraction that lets the browser run and encapsulate code in another thread. First introduced in 2010 and becoming a living standard in 2012, Web Workers are almost as powerful as the main JavaScript thread, but they are not permitted to access the DOM. This tradeoff prevents bad concurrency practice and helps enforce the separation of models and views [7].
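A minimal sketch of the API. This is browser-only code, and the file name worker.js and the message shape are illustrative, not part of any real site:

```javascript
// Browser-only sketch: Worker is a browser API, and 'worker.js' is a
// hypothetical script that would run on the separate thread.
function startWorker() {
  var worker = new Worker('worker.js');   // spawn a thread running worker.js
  worker.postMessage({ n: 1000000 });     // data is copied between threads
  worker.onmessage = function (e) {       // results come back as messages
    console.log('worker finished:', e.data);
  };
  return worker;
}
// Inside worker.js there is no window and no document: no DOM access.
```

Because the two threads share nothing and only exchange message copies, whole classes of data races are impossible by construction.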


ECMAScript 6

Every few years the ECMA governing body proposes an update to ECMAScript. ES6 "Harmony" is a draft as of this document's writing. It features significant improvements to the language. Some allow more readable code, such as destructuring for pattern matching: var [a, b] = [1, 2];. Others make the language more stable and mature, such as tail call optimization (which prevents a naive recursive Fibonacci implementation from overflowing the stack) [8].
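Destructuring, for instance, already runs in engines that implement the draft:

```javascript
// ES6 destructuring: bind several variables from one value at once.
var [a, b] = [1, 2];
console.log(a, b);   // 1 2

// Swap two variables without a temporary:
[a, b] = [b, a];
console.log(a, b);   // 2 1
```

The same form works on function return values, so a function can effectively return multiple values without an ad hoc wrapper object.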

These do not go far enough. The language needs breaking changes to remove features like eval or improvements to the number representation. The language needs these kinds of changes to permit better reasoning and for speed improvements.


Libraries

JavaScript has a rich selection of community libraries. Some are of higher quality than others. High-quality libraries gain popularity and help clean up code: they provide useful abstractions that make code more readable and uniform across different browsers. For example, the underscore library provides many higher-order functions for iterating over structures.
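For illustration, here is a hand-rolled stand-in for the kind of helper underscore ships as _.map; the real version is more general, but this simplified one lets the snippet stand alone:

```javascript
// A simplified version of the higher-order iteration helper that
// libraries like underscore provide as _.map.
function map(list, fn) {
  var out = [];
  for (var i = 0; i < list.length; i++) {
    out.push(fn(list[i], i));   // apply fn to each element, in order
  }
  return out;
}

console.log(map([1, 2, 3], function (n) { return n * 2; })); // [ 2, 4, 6 ]
```

Centralizing the loop in one well-tested helper is exactly the kind of uniformity these libraries bring: callers state what to do with each element, not how to walk the structure.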

2. Move away from JavaScript

Native Client

In 2010, Google released what they called Native Client (NaCl) for Chrome. It allows programmers to write C/C++ to be used on the web: a compiler produces architecture-independent bytecode which can be distributed and run in a sandbox. Google wanted to reach speeds faster than what JavaScript could do [9].

Native code is faster: it runs at 85-95% of the speed of actual machine code. Google intended it for 3D games and interactive simulations. It does not force programmers to learn a new language; instead it extends a language many programmers already know. Google hoped to leverage the large network of C/C++ programmers for instant popularity, and to let software developers with an existing native application easily port it to the web.

However, Mozilla declared they would not permit Native Client code in Firefox. Native Client does not have the same interaction with HTML and the DOM that JavaScript does. Like a Java applet or a Flash application, Native Client runs in a black box within the web page; it requires help (from Google's Pepper API) to interact with JavaScript [10].

Ultimately it did not take off. Developers lost interest in Native Client.

3. Compile to JavaScript

Libraries don't fix a systemic problem of bad JavaScript, and neither do new features; they only make it easier to write better JavaScript. Moving away from JavaScript entirely gives the programmer more power, but also lets them fail in a less forgiving way: a segfault in Native Client is nowhere near as graceful as a bug in a JavaScript program. In recent years there has been a move to cross-compile. The programmer writes no JavaScript, but writes something that compiles to JavaScript.

These classes of languages give the user no more expressive power than JavaScript already has; in many cases they are less expressive than vanilla JavaScript. However, this choice trades flexibility for correctness and comprehensibility.


asm.js

There is a subset of JavaScript that can run efficiently in an interpreter; it looks like a sort of assembly language for the web. asm.js is a project to standardize that subset as a low-level compilation target that can be used portably on the web.

This does not introduce a new language. A programmer can write in any language and cross-compile to asm.js (although it is intended chiefly as a target for C code). It has strict types, which allow the interpreter to make guarantees and apply specific optimizations to improve performance.
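A hand-written sketch of the idiom (normally a compiler emits this; the module and function names here are illustrative):

```javascript
// asm.js idiom: the "use asm" directive plus annotations like |0 tell a
// supporting engine that every value in this module is a 32-bit integer.
// In any other engine, this is still plain, valid JavaScript.
function AsmAdder() {
  "use asm";
  function add(a, b) {
    a = a | 0;             // annotate parameter a as int32
    b = b | 0;             // annotate parameter b as int32
    return (a + b) | 0;    // the result is also int32
  }
  return { add: add };
}

console.log(AsmAdder().add(2, 3));            // 5
console.log(AsmAdder().add(2147483647, 1));   // -2147483648: 32-bit wraparound
```

The |0 trick is what gives the engine its guarantee: whatever flows through, the value observable afterward is always a 32-bit integer, so the compiled code can skip double arithmetic entirely.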

However, it has a drawback. It is not meant to overshadow vanilla JavaScript; it is meant as a compilation target for languages with manual memory management. Quake could be ported to asm.js in four days, but asm.js is not meant to facilitate easy interaction with the DOM. In a sense, asm.js is just like Native Client, Flash, and Java: it runs in a box, although this box doesn't require a plugin [11].


Elm

This one is a new programming language that compiles to a subset of JavaScript. Elm is a statically typed, pure functional reactive language with a growing community. It compiles to nothing other than JavaScript, so it has deep ties to the DOM and to performing common web-programming tasks.

The virtue of being functional comes through in Elm code's readability. Values have clear types that explain what they do. Runtime type errors simply cannot happen, because the program is type-checked before being compiled to JavaScript. Moreover, since there is no mutable state, a program can safely be re-executed, or even step back through the history of its own evaluation [12].

Again, this approach to JavaScript's mediocrity comes at a cost: a programmer has to learn another language.


Haxe

There is a third class of compile-to-JavaScript approaches. Haxe is a new language with many compilation targets: it can compile to JavaScript among many others, including Python, C++, and Java. It offers a high-level, multi-paradigm approach to programming and has many libraries, some of which give power to web applications [13].

It offers logical improvements over JavaScript: its static type system forces the programmer to think more carefully about the program, and it provides better code abstractions through algebraic data types and their generalized siblings (GADTs).

However, just as with Elm, the programmer must learn Haxe to use it.

There is no solution

JavaScript beat out Java applets, Flash, and just about every other contender fighting to become a core part of a user's experience on the Web. Its standardization as ECMAScript was key to its survival as a unified language. It was simple for its time, so more people could use it, and more people using it made it more popular. However, its popularity also brought out the worst in the language: it can be slow and hard to follow in some cases, though in others it can be sharpened to run at a reasonable speed.

During the mid-2000s, browsers saw a huge boost in JavaScript interpreter performance. Many of the easy optimizations have already been made, and many of our JavaScript-interpreting devices are now fast enough for a good user experience.

The next big thing is to make JavaScript better to program with. Its broken number construct, type system, and other quirks stand in the way of beautiful code. Perhaps some compile-to-JavaScript language will become mainstream. Perhaps that language will become so popular it could have its own runtime. But in any case it looks unlikely that JavaScript will disappear or overcome all of its ugliness in the next five years.

  1. Netscape Public Relations. Mountain View, CA. 1995.

  2. Severance, Charles. JavaScript: Designing a Language in 10 Days. IEEE Computer Society. February 2012.

  3. Dutta, Sunava. Native XMLHTTPRequest object. Microsoft Corporation, IEBlog. January 2006.

  4. Berners-Lee, T. and Mendelsohn, N. The Rule of Least Power, W3C TAG Finding 23 February 2006.

  5. R.B. Why Does JavaScript Suck?. August 2014.

  6. Bernhardt, Gary. The Birth & Death of JavaScript. PyCon 2014.

  7. Hickson, Ian. Web Workers. W3C, May 2014.

  8. ECMA International. ECMA-262 6th Edition Final Draft. April 2015.

  9. Stefansen, Christian. Games, apps and runtimes come to Native Client. Google Code Blog, December 2011.

  10. Metz, Cade. Mozilla: Our browser will not run native code. The Register, Jun 2010.

  11. Herman, D; Wagner, L.; Zakai, A. asm.js Working Draft. August 2014.

  12. Czaplicki, Evan. Asynchronous Functional Reactive Programming for GUIs. PLDI 2013.