1. blog.izs.me: On ES 6 Modules →

    Isaac Schlueter published an interesting blog post about a possible future module system for JavaScript (ES6). I totally support his omission of the module { … } syntax because it requires pretty severe changes to the current code.

    izs:

    A few things have rubbed me the wrong way about the current Modules and Module Loader specification. I regret that I have not been very clear about what exactly my objections are, and worse still, I have not been very clear about what I think a better direction would be.

  2. Dependency Injection in JavaScript

    There are several ways to build a stable and testable application. Probably the most common approach is dependency injection (DI). The core DI concept is that components don’t look up their dependencies somewhere in a global context or using a service locator but simply declare what they need and their creator is the one responsible for delivering the dependencies.

    In a dependency injection-based system, there are three required parts:

    • a dependent component,
    • a declaration of the component’s dependencies and
    • an injector component that provides the dependent component with its dependencies.

    One of the main advantages of DI is that components built this way are easy to test. Because the component under test relies on its creator to provide its dependencies, we can pass it mock components and test it in isolation.

    Now, let’s move to the code. We will have a user page controller and a user repository—the controller being the dependent component and the repository being the controller’s dependency.

    function UserPageController(user_repo) {
      this.user_repo = user_repo;
    }
    
    function UserRepository() { … }
    

    It’s as simple as it can get. The controller simply expects a user repository component to be passed to its constructor.
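
    This is also where the testability comes from: a test can simply hand the controller a fake repository. A minimal sketch (the fake object and its getUser method are illustrative only, not part of the post):

    var fake_repo = {
      getUser: function (username) {
        return { 'username': username };
      }
    };
    
    var controller = new UserPageController(fake_repo);
    // The controller now talks to the fake repository instead of a real one.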

    We can now have an injector component which will instantiate the controller. The question is how to tell the injector what the actual dependencies are. The most minimal way is to simply hardcode the declaration in the injector like this:

    function createUserPageController() {
      var user_repo = new UserRepository();
      return new UserPageController(user_repo);
    }
    

    This is not very flexible as it requires us to modify the injector whenever the dependencies change. There are two possible ways of declaring the dependencies.

    The first way is to parse the actual argument names of the constructor (from UserPageController.toString). This is fine in many cases but, if you know me, you are aware of my need to have all the code compilable with the Google Closure Compiler. The problem you run into is that arguments get renamed during compilation, so the injector ends up with a wrong declaration of dependencies.
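
    For completeness, here is a rough sketch of what that toString-based parsing could look like (this is only an illustration of the rejected approach, not code from the post):

    function getArgumentNames(fn) {
      // "function UserPageController(user_repo) {…}" → "user_repo"
      var args = fn.toString().match(/^function[^(]*\(([^)]*)\)/)[1];
      return args ? args.split(',').map(function (arg) {
        return arg.trim();
      }) : [];
    }
    
    getArgumentNames(UserPageController); // [ 'user_repo' ]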

    What we need is an array listing all the dependencies stored on the constructor (or more precisely—for the compiler not to classify it as dead code—on its prototype).

    UserPageController.prototype.$deps = [ 'user_repo' ];
    

    The injector has to know which component to match to each of the keys in the declaration. The odds are that most of the dependencies are going to be services (singleton-lifetime components). We are going to assume no “request-based” dependencies are needed in constructors (we can inject them later via setter injection).

    Your injector will probably be defined somewhat like this:

    function Injector() {
      this.factories = {};
      this.services = {};
    }
    
    Injector.prototype.addService = function (key, factory) {
      this.factories[key] = factory;
    };
    
    Injector.prototype.getService = function (key) {
      var service = this.services[key];
      if (!service) {
        var factory = this.factories[key];
        service = factory();
        this.services[key] = service;
      }
      return service;
    };
    
    Injector.prototype.create = function (Constructor) {
      var Dependant = function () {};
      Dependant.prototype = Constructor.prototype;
    
      var instance = new Dependant();
      this.inject(Constructor, instance);
    
      return instance;
    };
    
    Injector.prototype.inject = function (Constructor, instance) {
      var keys = Constructor.prototype.$deps || [];
      var deps = keys.map(this.getService, this);
    
      Constructor.apply(instance, deps);
    };
    

    Somewhere in the initialization code of the app, we need to instantiate the Injector and provide it with the service component factories.

    var injector = new Injector();
    injector.addService('user_repo', function () {
      return new UserRepository();
    });
    

    Now we can simply ask the injector to create a controller instance and the injector will take care of the dependency injection:

    var controller = injector.create(UserPageController);
    

    Subclassing

    A whole new problem arises when we want to subclass (inherit from) a constructor/prototype without breaking the DI.

    The usual thing you do when extending is to call the super constructor in the context of the new (extended) instance. There is a strong possibility that the new constructor will have different dependencies than the original one. This would mean that the new constructor would have to declare the original constructor’s dependencies as its own, which does not really make much sense (at least in my opinion).

    var Original = function (dep1) {
      this.dep1 = dep1;
    };
    
    var Extended = function (dep1) {
      Original.call(this, dep1);
    };
    

    The Extended constructor does not use the `dep1` service nor does it really need to know about it. The Original constructor should be called through the injector which would provide it with its dependencies.

    The only issue here is how to provide the new constructor with the injector. The best way I came up with is to simply inject it in the constructor along with its other dependencies.

    var Extended = function (injector) {
      injector.inject(Original, this);
    };
    Extended.prototype.$deps = [ '$injector' ];
    

    The Injector would feature itself under the key `$injector`:

    function Injector() {
      this.factories = {};
      this.services = {
        '$injector': this
      };
    }
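
    Putting the pieces together, the usage could look something like this (a sketch which assumes that Original declares its own $deps and that a 'dep1' service factory has been registered):

    Original.prototype.$deps = [ 'dep1' ];
    
    injector.addService('dep1', function () {
      return { /* … whatever the 'dep1' service is */ };
    });
    
    var extended = injector.create(Extended);
    // The injector passes itself to Extended, Extended asks it to inject
    // Original's dependencies, and extended.dep1 ends up set.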
    

    I uploaded the complete Injector as a Gist. Feel free to clone it, fork it, use it and let me know what you think. Thanks!

    by Jan Kuča

  3. Constructor Arguments from an Array

    There are times at which you need to instantiate something and you have the arguments you want to pass in an array.

    The only way is to utilize the apply method of the constructor function. The problem is that apply cannot be combined with the new keyword.

    The trick lies in the way the JavaScript object model works—prototypal inheritance. When two constructors share one prototype object, an instance of one of them is also an instance of the other.

    function A() {}
    function B() {}
    B.prototype = A.prototype;
    
    var a = new A();
    a instanceof A === true
    a instanceof B === true
    

    To solve our problem, we can create a temporary constructor function and use it for the instantiation. Then, we apply the arguments to the original constructor in the context of the created instance.

    var Original = function (a, b) {
      this.a = a;
      this.b = b;
    };
    
    var Temp = function () {};
    Temp.prototype = Original.prototype;
    
    var args = [ 2, 3 ];
    var instance = new Temp();
    Original.apply(instance, args);
    
    instance.a === 2
    instance instanceof Original === true
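
    If you need this in more than one place, the trick wraps up nicely into a helper function; here is a rough sketch (the construct name is mine, not from the post):

    function construct(Constructor, args) {
      var Temp = function () {};
      Temp.prototype = Constructor.prototype;
    
      var instance = new Temp();
      var result = Constructor.apply(instance, args);
      // Mirror the behavior of "new": if the constructor explicitly returns
      // an object, use that object instead of the new instance.
      return (result && typeof result === 'object') ? result : instance;
    }
    
    var other = construct(Original, [ 2, 3 ]);
    // other.a === 2, other instanceof Original === true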
    

    Discovering this solution made my day.

    by Jan Kuča

  4. The Higher-Level Maturita Exam in Mathematics (2012)

    This post was originally written in Czech because it has nothing to do with international problems, web development or any other stuff I usually post about. It is about this year’s graduation tests in the Czech Republic.

    A week ago, the state maturita exams took place. These exams are now being heavily questioned and CERMAT (the institution that puts the tests together) is being cast in a bad light.

    Yesterday, the director of CERMAT, Pavel Zelený, was interviewed on the Czech Television programme Hyde Park. He went into the show knowing he was walking to his own execution, and even so he kept his composure for the whole programme and answered the questions put to him fairly precisely.

    The most criticized part of the maturita is the higher-level mathematics test. Students complain that the exam was too hard and that they could not have expected that. I took this exam myself and I have to say I do not sympathize with these students very much. There was not a single problem in the test that did not fall within the required topic areas defined by the Ministry of Education (MŠMT) in the requirement catalogues. Yesterday, Mr. Zelený admitted two mistakes that CERMAT made when compiling the test.

    One of them was the unsuitable choice of the so-called “motivational problems”, which are supposed to give the student confidence. I assume these included the first two problems, each worth one point. Apparently, most of the students who reached a higher overall score had more trouble with these problems than the weaker students did. I personally struggled with them too. The first problem, where the task was to find the lowest even number (…), I simply had to guess. The second problem, the sum of complex numbers, did not sit well with me either.

    The second admitted shortcoming was that the time allotted for the test was too short. I can personally agree with this, but only to a certain extent. I was about 5 minutes short, and only because I misread the assignment of one of the problems, failed to finish its last part and had to come back to it. I therefore had no time left to finish the first problem and had to guess.

    From my point of view, the test contained only one defective problem, namely problem 8, whose assignment went roughly like this: “Two circles (k, l) with centres (S1, S2) are given. They have one common point of tangency (external or internal), which lies on one of the axes of the coordinate system. Determine the equation of the circle so that the radius is as small as possible.” The assignment did not say precisely what was expected of us. Which of the circles are we supposed to write the equation of? Here I really had to guess how to interpret the assignment. I am very curious whether I calculated it the way CERMAT intended.

    To sum it up, I do not think the test was any harder than the illustrative tests. On the contrary, the illustrative ones seemed harder to me (which was CERMAT’s original intention). Had the assignment of problem 8 been formulated better, the test would have been perfectly fine. Perhaps different one-point motivational problems would have been a better fit, but that is just an excuse, because even those problems fell within the required topic areas.

    If someone chose the higher-level mathematics exam and is now surprised that it was somewhat hard, that is not entirely CERMAT’s fault. If I believe that my knowledge of mathematics is good enough to solve any problem from the required topic areas (which have been known for two years before the exam), I should not have trouble working through the higher-level variant of the test. If I choose the higher-level variant for reasons like “I go to a gymnázium, I am better than those losers from ordinary secondary schools” or “I have had straight A’s in mathematics for all four years, the higher-level variant cannot be a problem”, I am not acting out of objective conviction (report-card grades unfortunately often do not correspond to actual knowledge) and I cannot assume I will not have trouble at the maturita. I personally study at a secondary technical school and I have a good feeling about the test—a feeling that I got at least 70% right (which is the threshold for guaranteed admission to ČVUT FIT).

    I am curious how CERMAT will resolve the situation with problem 8, which it must find defective during post-validation.

    by Jan Kuča

  5. Dot vs. Array-Access Property Notation

    If you’re just learning JavaScript, you have probably been told that the dot property notation is always preferred and that the array-access notation should be used only when the dot notation is not possible.

    This is definitely true if you simply include your source JavaScript files in the page. Your production code should, however, be compiled or minified to save the user some time and CPU cycles needed to download and parse the code.

    The most advanced JavaScript compiler out there is probably the Google Closure Compiler. Its true power becomes noticeable when you set the compilation mode to advanced optimizations.

    What it does is that it…

    1. flattens nested properties
      object.property.property > object$property$property > a
    2. renames variables and properties unless they are recognized as standard
      object.property > a.b
    3. removes dead code which would never get executed by your app

    This is genuinely awesome stuff, but it can break your application if you’re not careful. One thing to be aware of is that array-access notation cannot be flattened.

    Most of the time, it is better to use the dot notation because the intelligent static analysis performed by the compiler makes sure that connections are not broken. The situation where you want to be careful and use array-access notation is when you handle externally received data. If you work with an external API (meaning one not included in the compilation), you need to use array-access notation because the keys in the response would otherwise differ from what the compiled application expects. Here is an example:

    Consider this the response to the HTTP request made by the script below:

    { "username": "jankuca" }

    The following code works as it should when it is not compiled:

    var xhr = new XMLHttpRequest();
    xhr.open('GET', '…', true);
    xhr.responseType = 'json';
    xhr.onload = function () {
      var res = xhr.response;
      alert(res.username);
    };
    xhr.send(null);

    But when compiled, the username property gets renamed to save space:

    var a = new XMLHttpRequest;
    a.open("GET", "…", !0);
    a.responseType = "json";
    a.onload = function() {
      alert(a.response.a)
    };
    a.send(null);

    This would obviously not work. To fix the code, just use the array-access notation (res['username']) and it’s going to work as expected.
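
    The relevant part of the handler would then look like this and survive the compilation untouched:

    xhr.onload = function () {
      var res = xhr.response;
      // The quoted key is never renamed by the compiler:
      alert(res['username']);
    };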

    A good thing is that the compiler warns us about the possible bug:

    JSC_INEXISTENT_PROPERTY:
    Property username never defined on res at line 5 character 6
    alert(res.username);

    To sum up, the golden rule is: Never use dot notation if you work with 3rd party code or data. This usually applies to external APIs and 3rd party libraries that are not included in the compilation. For instance, database table column names (or document field names in the case of document-oriented databases) should always be quoted.

    Follow this rule in all your code, not only when you want it to compile. Using array-access notation should mean that you (or the application) are not the one defining the property.

    There is one special situation in which you do not have to follow this rule and that is when you declare the external API as externs.
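
    For example, an externs file along these lines (the file and type names here are made up for illustration) declares the username property for the compiler so that it is never renamed and the dot notation becomes safe again:

    // externs/api.js — a hypothetical externs file
    /** @constructor */
    function ApiUserResponse() {}
    
    /** @type {string} */
    ApiUserResponse.prototype.username;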

    by Jan Kuča

  6. Google Closure Development Environment

    I am a big fan of the Google Closure Tools, mainly the Google Closure Compiler. They help me a lot: my production code is minified and obfuscated and my source files are consistent in code style. BUT the main advantage of using the Google Closure Tools is that they help me find many subtle errors that would otherwise cause my code to behave incorrectly.

    Compiler

    The core component of the environment is the Google Closure Compiler. It has three compilation modes—white-space-only, simple optimizations and advanced optimizations. The mode I almost always go for is advanced optimizations. In combination with the compiler’s verbose warning level, it can catch (to some extent) anything you can think of.

    The advanced optimization pass flattens nested properties (namespaces) and then renames all of them to make the code smaller. These two processes are very powerful, but there are some caveats one has to be aware of to write code that is compilable in this mode.

    One of the most important things to be aware of is that the compiler never touches the contents of string literals. That means writing a.b is very different from writing a['b'], because the former can be flattened and renamed whereas the latter cannot.

    HTML

    The same applies to declarative templates where there are references to JavaScript identifiers in the HTML. One example of this is Angular.js templates, which are simply namespaced attributes in the HTML code. If you write ng:controller="SidebarController" and have the corresponding constructor in your JavaScript, the HTML will remain the same while the constructor name is changed.

    I thought a bit about this problem and came up with a simple solution. The goog.exportSymbol method lets me keep the original name, and I know all the names because they are written in my HTML. (This, of course, isn’t true in the case of generated HTML.) I wrote a little script that goes through every .html file, looks for values of the given attributes and outputs a JavaScript file with all the goog.exportSymbol invocations required for my application to work. I then feed the compiler this file along with the rest of my app and get a fully functional compilation.
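
    For the ng:controller="SidebarController" example above, the generated file would contain something along these lines:

    // Generated by compile-html.js (illustrative output):
    goog.exportSymbol('SidebarController', SidebarController);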

    Source Maps

    Let’s move on to debugging the compiled application code. If you’ve ever tried to find a bug in compiled code, you most definitely had a hard time figuring out which part of the source code corresponds to the place in the compiled code where an exception was thrown.

    Believe it or not, there is a really sweet solution to this issue: source maps (which the compiler can optionally create). These are files containing mappings between the compiled and the original code. The sweetest thing about this is that these mappings are recognized by some browsers, which then let you debug the compiled code through the original. As of right now, only Firebug and the Chrome Dev Tools support source maps and the implementations are nowhere near perfect. However, it is definitely a very useful thing to have.
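
    (For reference, the compiler produces these mapping files when it is given its --create_source_map flag with an output path.)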

    Summary

    To sum up, here is a list of the components of my environment:

    • Google Closure Linter – Checks my JavaScript syntax and JSDoc comments.
    • Google Closure Compiler – Compiles my JavaScript code.
    • Google Closure Library – A set of high-quality, well-tested, cross-browser-compatible code; I, however, usually use only the base.js file, which defines methods such as goog.require, goog.provide and the aforementioned goog.exportSymbol.
    • compile-html.js – Extracts JavaScript references from my HTML.
    • fix-source-maps.js – Fixes file paths in my source maps.
    • node.js – Runs some of the build scripts.

    Boilerplate

    I’ve put this boilerplate together and published it on GitHub. For installation, follow the instructions there.

    The boilerplate includes two bash scripts:

    • build/lint.sh – Runs the Google Closure Linter
    • build/compile.sh – Runs the compile.js script, the Google Closure Compiler and the fix-source-maps.js script.

    Be sure to configure them according to your directory structure.

    I am a fan of Sublime Text, which is why there is also a .sublime-project file prepared for you. It includes the two scripts, each as a build system. You can switch between them in the Tools > Build Systems menu and run them with Cmd+B.

    by Jan Kuča

  7. Wildcard Fallback AppCache Entries Are Needed

    The Application Cache (or AppCache), a W3C standard developed to bring offline support to the web, has been around for some time now. It has been two years or so since browser support became good enough for the standard to be taken seriously. Many of us have experimented with this technology and some fundamental issues have surfaced.

    To quickly describe the standard, we could say that it is a caching layer that allows websites (or more specifically parts of them) to be accessible without an Internet connection. The caching rules of such a website are declared in a manifest file consisting of three sections—cache, network and fallback.

    The cache section is simply a list of files that should be cached and accessible without an Internet connection. On the other hand, the network section is a list of files that should not be cached.

    The fallback section is a bit more complicated; it is a list of file pairs consisting of a real file path and an offline fallback file path. For instance, if we had a file /a.html and wanted the user to fall back to a file /b.html when their Internet connection is not available, we would write

    FALLBACK:
    /a.html /b.html

    Now, let’s move to a real-world situation. Consider a simple application where registered users would be allowed to post status updates and access other users’ statuses (think Twitter). We would have two routes—/user/:username for user profiles with all their statuses listed and /statuses/:id for each of their statuses.

    If we approached this application as an API with a JavaScript front-end, we would simply load a basic static skeleton and the JS code, load data from our API and show the appropriate view.

    The application should be fast and user-friendly which is why AJAX is a must. Simply put, when a user requests another view, an API request is issued and the requested view is filled with data from the response and rendered in place of the original view.

    For a seamless user experience, there has to be a history entry for each loaded view. We could use hash-based paths, but the modern approach is to make use of the History API. The way it works is that we call the pushState method of the window.history object with the path of the target view; this path then appears in the address bar of the browser and a new history entry is created (which basically means that the back button works).
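
    A minimal sketch of that call (the second argument, a title, is ignored by most browsers):

    // Create a history entry for the status view without a full page load:
    history.pushState(null, '', '/statuses/2');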

    OK, let’s move on to the actually interesting stuff: the offline capabilities of our app. Consider the following scenario:

    1. The user is online and requests /user/jankuca.
    2. The static skeleton is loaded, the view content is fetched from the API and rendered in the appropriate place in the skeleton.
    3. The user disconnects from the Internet.
    4. The user clicks a link and requests /statuses/2.
    5. A history entry is created, the address changes to /statuses/2 and an API request is issued.
    6. The request fails due to the user being offline. An error view is shown to tell the user they need to connect to the Internet.
    7. The user reloads the page (think Cmd+R).
    8. The user is presented with a regular 404 error because the requested page is not cached (only the original page is).

    This is because only the original page (or any directly loaded page) is treated as the one pointing at the cache manifest. Thus, the subsequently loaded pages are not included in the cache.

    If this were a simple page-based presentation with several static pages such as /, /portfolio and /contact, this would not be an issue. All of these pages would be listed in the master (CACHE) section of the manifest and they would all be cached. Unfortunately, our routes are dynamic (parametric) and as such cannot be listed in the CACHE section of the manifest, because the browser wouldn’t have knowledge of all the pages to cache.

    This is where the FALLBACK section should be of use. We want the user to fallback to a general skeleton and ask a data store for the content. I’m intentionally saying a “store” instead of an “API” because in an offline-capable application, the data are stored in a client-side database (such as IndexedDB) and synced with the server when an Internet connection is available.

    Unfortunately, this is not possible; we cannot include wildcard (or parametric) routes in the FALLBACK section and allow the application to be loaded without a server.

    The wildcard routes I’m proposing would be as simple as this:

    FALLBACK:
    /user/* /offline.html
    /statuses/* /offline.html
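
    With entries like these, /offline.html could boot the same skeleton and resolve the requested view from the client-side store. A rough sketch, where loadStatusFromLocalStore and renderStatusView stand in for hypothetical application functions:

    // Inside the /offline.html skeleton: work out which view was requested
    // from the URL and render it from the local store instead of the API.
    var match = location.pathname.match(/^\/statuses\/(\d+)$/);
    if (match) {
      loadStatusFromLocalStore(match[1], function (status) {
        renderStatusView(status);
      });
    }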

    I filed a bug report at the HTML Working Group issue tracker as I believe I’m not the only one who would find this solution useful.

    by Jan Kuča

  8. archiemcphee:

    Timo Arnall found this awesome street keyboard in Brussels, Belgium.

    [via My Modern Metropolis]

  9. Templating without the with statement

    One of the limitations that ECMAScript 5 strict mode brings is the removal of the with statement. This is great news in general as the statement just makes code less readable.

    var x = 6;
    with (obj) {
      // You cannot be certain that the "x" you use in this block
      // is the one you want since obj.x can be present.
      alert(x);
    }
    

    There is, however, one type of library that is built upon this statement. Can you guess what libraries I’m talking about? (Yeah, I know—the title is a big hint.)

    It’s templating libraries! Almost every JavaScript templating library I have seen makes use of the with statement. And what is the issue that makes all of these libraries use the statement, you ask? Let’s say you have an object that defines all the variables specific to a template instance and you have the template string (HTML with variables, for instance). In such a case, the with statement is the simplest way to map the key–value storage (the object) to multiple local variables (one for each key). To demonstrate this situation, I’m going to borrow the EJS syntax.

    // Here I have the object that holds template variables...
    var model = {
      'title': 'Lorem ipsum'
    };
    // ...and here is a simple EJS template...
    var tpl = '<h1><%= title %></h1>';
    // ...from which I get the following JS code:
    var js = "'<h1>' + title + '</h1>'";
    
    with (model) {
      // I can access "model.title" as "title".
      var html = eval(js); // '<h1>Lorem ipsum</h1>'
    }
    

    There is one exception I found and it is the ECO (Embedded CoffeeScript) library:

    // We have the same template object as we had above...
    var model = {
      'title': 'Lorem ipsum'
    };
    // ...but the template looks slightly different:
    var tpl = '<h1><%= @title %></h1>';
    // Following the CoffeeScript syntax, this could be expressed as
    var js = "'<h1>' + this.title + '</h1>'";
    
    // By using "this" instead of a variable number of local variables,
    // the template JS code can be executed without the with statement:
    (function () { return eval(js); }).call(model);
    
    // We could also create a detached anonymous function
    // to execute the code in a limited scope:
    var execute = new Function('return ' + js);
    var html = execute.call(model); // '<h1>Lorem ipsum</h1>'
    

    I consider this a nice workaround. However, if we want to use the EJS syntax (or any syntax other than ECO with the @ prefixes), the with statement is still very useful.

    So what is the solution if we need a future-proof library in which we cannot use the with statement? We just need to realize that the Function constructor can take more than one argument—any number of argument names followed by the function body.

    var fn = new Function('a', 'b', 'return a + b');
    fn(2, 3); // 5
    

    It is also possible to call the call and apply methods of the constructor:

    var fn = Function.apply(null, [ 'a', 'b', 'return a + b' ]);
    fn(2, 3); // 5
    

    With this knowledge, I’m going to fix the code above.

    var model = {
      'title': 'Lorem ipsum'
    };
    var tpl = '<h1><%= title %></h1>';
    var js = "'<h1>' + title + '</h1>'";
    
    // First, I get the model's keys and values:
    var keys = [];
    var values = [];
    for (var key in model) {
      if (model.hasOwnProperty(key)) {
        keys.push(key);
        values.push(model[key]);
      }
    }
    
    // Then, I push the template JS code (function body) to the keys:
    keys.push('return ' + js);
    
    var execute = Function.apply(null, keys);
    var html = execute.apply(null, values); // '<h1>Lorem ipsum</h1>'
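
    The whole thing wraps up nicely into a small helper; here is a rough sketch of my own (the renderTemplate name is arbitrary):

    function renderTemplate(js, model) {
      var keys = Object.keys(model);
      var values = keys.map(function (key) {
        return model[key];
      });
      var execute = Function.apply(null, keys.concat('return ' + js));
      return execute.apply(null, values);
    }
    
    renderTemplate("'<h1>' + title + '</h1>'", { 'title': 'Lorem ipsum' });
    // → '<h1>Lorem ipsum</h1>'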
    

    What do you think about this approach? If you know about a better way to achieve the same result, let me know! Thanks.

    by Jan Kuča

  10. Client-side Rendering Engine, Take 1

    For the past two days, I have been recreating the Google Feedback Tool. It is a pretty big challenge. One of the key features is taking screenshots of the page on which feedback is being given.

    When Elliott Sprehn, the tech lead behind the Google Feedback Tool, commented on my previous post saying that Google developed their own client-side rendering engine which does not require Flash or ActiveX components, my jaw dropped all the way to the floor out of respect. I thought nobody would be crazy enough to write one of those. However, Google did, and it’s amazingly accurate. (Good job!)

    Not that I want to prove I’m as good as Google (which I’m not), but I’m trying to develop one of those rendering engines on my own. It has been a great deal of pain so far and I haven’t had the balls to check the solution in browsers other than Chrome. Here is how it snapshots the Last.fm profile of the band The Cinematics.

    Even though there are several horrible bugs and it ignores a lot of CSS properties, considering that it took me only a few hours, I find the result rather impressive :-)

    by Jan Kuča