On this page


  1. There's nothing great or particularly amazing about Angular and its build processes
  2. The broken promise of Web Components
  3. Opensource criticism cheat-sheet
  4. Why I'm not using Google's closure compiler
  5. Javascript Tools: A Story in Disgrace
  6. …in which I complain about Elixir syntax
  7. Erlang is dead. Long live E…?
  8. Javascript: Fatigue vs. Stockholm syndrome
  9. Designing for <anything> with Erlang
  10. Putin: 2×2

Originally published as a gist

There's nothing great or particularly amazing about Angular and its build processes.

Everything Angular does is fighting against its own architectural decisions. On a high level it's like this:

So, nothing really changed (only the build chain is becoming increasingly complex and impossible to reason about), and yet "OMG Angular is so great: Google makes it possible to use it with Bazel and Closurescript instead of the circus that is Webpack and UglifyJS" 😂

Fanboys are fanboys.

Disclaimer: I talk a lot about React here, but you can substitute your favorite library: Inferno, Preact, Vue, snabbdom, virtual-dom (or non-js libraries and frameworks like Elm, Om, etc.). Similarly, replace Polymer with Vaadin, or X-Tag, or…

Also, read Rob Dodson’s excellent response to this article: https://robdodson.me/regarding-the-broken-promise-of-web-components/

Brief, incomplete, and mostly incorrect history of Web Components

Ancient times

By 2011 the Internet had grown. It had Facebooks and Gmails, and Twitters, and Asanas, and Google Docs, and countless other online things that could no longer be called sites, or Single Page Applications. They were, for all intents and purposes, applications. As simple as that.

And woe was to the devs who were developing them.

The state of the art for Web GUIs at the time was probably a templating language glued together by some server-side logic and/or a client-side library. Backbone if you were lucky. jQuery UI or Sencha/ExtJs if you were enterprise enough.

This was cumbersome. This was limiting. You could not prototype easily and quickly. You could not easily escape the limitations of UI libraries. etc. etc. etc.

And you were limited to the same set of HTML elements as ever: divs, ps, forms…

In 2011 Alex Russell proposed a vision of the future (emphasis mine):

I think we’re stuck today in a little bit of a rut of extensibility. We wind up leaning on JavaScript to get things, because it is the Turing complete language in our environment. It is the only thing that can give us an answer when CSS and HTML fail us. So we wind up piling ourselves into the JavaScript boat. We keep piling into the JavaScript boat.

Bruce yesterday brought up the great example of an empty body tag, and sort of this pathological case of piling yourself into the JavaScript boat, where you wind up then having to go recreate all of the stuff that the browser was going to do more or less for you if you’d sent markup down the wire in order to get back to the same value that was going to be provided to you if you’d done it in markup. But you did it for a good reason. Gmail has an empty body tag, not because it’s stupid. Gmail does that because that’s how you can actually deliver the functionality of Gmail in a way that’s both meaningful and reliable and maintainable. You wind up putting all of your bets, all of your eggs, into the JavaScript basket.

It takes browsers time to implement stuff. We have to get it out to users, now users have to start using it, and then we have to look at our deployed population and go, “OK, now I can use this feature, I can target this feature.”

So as developers, this is the sort of things we thrive on. The faster that this crank turns, the more chances we get to see new features enter the market that we can start to target. “OK, well, this is good.”

So we wind up with this unspoken tension between deep pragmatism and the Platonic ideal of where we would like to be. But we don’t have a really good model for thinking about it.

The more of this behavior [behavior agreed-upon, known to devs, and implemented in the browser] that we take into ourselves, the more we do in JavaScript, the more we get away from the idea of shared ambiguity, which is what makes the world work.

We want to feed this process of progress. I think we get stuck in a place where we consider HTML5, we’re done.

And then he goes on to introduce and demo Web Components, which at the time were three different things: Scoped CSS, Shadow DOM and Web Components. (I also highly recommend his talk in general)

But then W3C happened. In true w3c fashion it went ahead and spent another 5 years defining the Platonic ideal and never feeding the progress as a result.

Cue in Facebook

The Facebook application is complex. It might not look like it, but it is. Those little boxes everywhere on the page have surprisingly complex layouts which have to be repeated, and/or customized, and/or adjusted in various contexts. A developer would naturally want to do the following: take this box element, and put it here, and apply these random styles to it without disturbing anything on the page.

And it has to be reasonably fast. Because, well, DOM updates are notoriously slow, and there are countless articles detailing how you absolutely must reduce the number of times you access the DOM.

It’s so bad that innerHTML, a sign of horrible smelly code, is on par with or faster than DOM manipulation.
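To make the comparison concrete, here is a minimal sketch (my own illustration, not a benchmark) of the two styles being weighed against each other:

const names = ['Alice', 'Bob', 'Carol'];
const list = document.createElement('ul');

// many individual DOM calls...
names.forEach((name) => {
  const li = document.createElement('li');
  li.textContent = name;
  list.appendChild(li);
});

// ...versus one big string assignment
list.innerHTML = names.map((name) => `<li>${name}</li>`).join('');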

So, what did Facebook do? Oh, simple, they basically wrote their own implementation of Web Components entirely in Javascript. With an XML-based DSL to boot. They called it React and unleashed it unto the world in 2013.

React provides you with the following:

No wonder React took the world by storm. Those who weren’t writing it were surely talking about it. It spawned several competitors that are aiming for the same feature set (Inferno, Preact) or various subsets, most notably Virtual DOM (Snabbdom, virtual-dom etc.).

2017

In 2017 React fulfills all the promises of Web Components: it lets you write performant reusable self-contained components. It can run on almost any Javascript-enabled browser (React doesn’t support IE < 9).

As the ecosystem blossomed, it went far beyond the scope of Web Components. If I’m not mistaken, the CSS Modules proposal appeared because it was first implemented in, and for, React.

In 2017 Web Components are still in development, despite having already spawned two versions each of two of the underlying standards.

At the time of this writing the situation for Web Components looks like this:

So what’s this broken promise I hear about?

Well, the main failure is obvious: they are nowhere to be seen. The promise of “feeding the process of progress” is unfulfilled. By their 6th year they had spawned a total of 6 standards. Two of them are already deprecated. Only one major browser is committed to supporting them (sorry, Opera, you’re no longer a major browser, and you run on top of Chrome these days).

The other broken promise is the one bandied about in the Internets these days: interoperable custom components without vendor lock-ins.

And this is the one that got me writing this overly long piece of thinking out loud.

Because there’s Polymer.

Polymer is Google’s attempt at creating a Web Components-compliant implementation:

Unlock the Power of Web Components. Polymer is a JavaScript library that helps you create custom reusable HTML elements, and use them to build performant, maintainable apps.

Polymer shows the main problem with Web Components: they are DOM.

The DOM

Consider the following code in React:

<MyComponent style={{border: '1px solid gray'}}>
  {
     ['Hello', 'world'].map((text) => <p>{text}</p>)
  }
</MyComponent>

This is a custom component. Its style is defined by a JavaScript object. Its children are defined by mapping over an array of values and producing another component. In this case it’s a <p>, but it could be anything. That component’s child is the current array value.

This XML-like DSL is directly translated into JavaScript:

React.createElement(
  MyComponent,
  { style: { border: '1px solid gray' } },
  ['Hello', 'world'].map(text => React.createElement(
    'p',
    null,
    text
  ))
);

A similar feat in Web Components would be … well…

If you go for HTML as the counterpart to React’s DSL, it’s impossible:

<my-component style="only strings are allowed in attributes">
  ... nothing here ... 
  only other components or text allowed here
</my-component>

What about JS APIs? Remember I told you Web Components is DOM?

const MyComponent = document.createElement('my-component');
MyComponent.style.border = '1px solid gray';

['Hello', 'world'].forEach((text) => {
	const p = document.createElement('p');
	p.textContent = text;

	MyComponent.appendChild(p);
});

This gets progressively worse as your components grow in complexity. Imagine adding a span around the text inside each p.

React? Easy-peasy. Just add it:

['Hello', 'world'].map((text) => {
   return <p><span>{text}</span></p>
})

Web Components? Well, it’s DOM:

['Hello', 'world'].forEach((text) => {
	const p = document.createElement('p');
	const span = document.createElement('span');
	span.textContent = text;
	p.appendChild(span);

	MyComponent.appendChild(p);
});

Ad infinitum.

Let’s break compatibility

I assume the limitations described above were the immediate problem that Polymer faced. How do you work around this? Well, you invent your own not-really-JavaScript-but-kinda-Javascript kinda-templating-kinda-scripting kinda-language. That can only exist in strings.

Work your way through Polymer’s data system for a full overview. Below are just some examples.

Note: none of these [[]], {{}}, $= etc. are in the Web Component spec

<template>
  <div>[[name.first]] [[name.last]]</div>
</template>

<my-input value="{{name}}"></my-input>

static get properties() {
  return {
    active: {
      type: Boolean,
      observer: 'userListChanged(users.*, filter)'
    }
  }
}

<div>[[_formatName(first, last, title)]]</div>

<a href$="{{hostProperty}}">

etc. etc. etc.

And my favorite one:

Commas in literal strings: Any comma occurring in a string literal must be escaped using a backslash (\).

 <dom-module id="x-custom">
   <template>
     <span>{{translate('Hello\, nice to meet you', first, last)}}</span>
   </template>
 </dom-module>

I mean, wat.

Seriously, y’all

In all seriousness though. Web Components ended up delivering hardly anything from their original promises (or have hardly answered any of the originally raised questions):

These are just a few warts I could come up with off the top of my head. I haven’t seen them really truly discussed anywhere.

The React team went as far as to say:

We’re not going to use it at all at Facebook. We’re not going to build React on it because there’s a strong model difference – imperative in Web Components to declarative in React. Web Components doesn’t have an idiomatic way to define things like where events go. How do you pass data when everything is a string? We see it more as an interop layer that lets various frameworks talk to each other.

Nowadays React lets you have them as the leaf nodes in the component tree, because React assumes any component name that starts with a lowercase letter to be a DOM element.
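For illustration, a hypothetical sketch (the component and tag names are made up): because the tag name is lowercase, React treats <my-widget> as a plain DOM element and simply renders it, passing the attribute through as a string.

function Panel({ title }) {
  return (
    <section>
      <h2>{title}</h2>
      {/* leaf node: React renders the tag as-is */}
      <my-widget label={title}></my-widget>
    </section>
  );
}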

To quote Pete Hunt:

There’s a lot of stuff in Web Components spec that is nice. Being able to customize how select work for example (like dropdowns). So there’s a lot of great stuff in Web Components, but it’s not gonna be the way you structure your applications. and that’s what React tries to solve: how do you structure your application, manipulate the DOM.

Web Components are more of the same regular DOM API. What I like to think of it is: it standardized the worst practices.

There are very very few discussions about these issues except for comments on Twitter or on various articles. The consensus seems to be “Web Components is the glorious interoperable fast performant future”.

Is it?

Rob Dodson posted an excellent response to this article: https://robdodson.me/regarding-the-broken-promise-of-web-components/. I highly recommend it.


Originally published as a gist

The question

tl;dr: Closure compiler makes little to no sense outside of Google's ecosystem.

Nolan Lawson did some very nice research on how various JavaScript tools bundle/compile JavaScript. I highly recommend it: The cost of small modules.

This article has made several rounds on Twitter and many people have asked: Why aren't more people using Closure? There are many reasons for that.

Considering the dire state of Javascript tools today, Google's Closure compiler is yet another cryptic, badly configured, half-baked tool to throw into the ever-growing pile of twigs and sticks called Javascript infrastructure.

Documentation

There's basically none.

The Quickstart shows how to create a page that contains basically exactly one line of Javascript code. Instead of showing you how to do more, the next step is Advanced compilation which is basically Javascript one-liners interspersed with Python code to invoke the compiler service.

All these are unanswered.

Let's download the app then. There could be more info.

To run this you need a library you don't need

The README for the compiler has this nugget:

If you're using globs or many files, you may start to run into problems with managing dependencies between scripts. In this case, you should use the Closure Library. It contains functions for enforcing dependencies between scripts, and Closure Compiler will re-order the inputs automatically.

If you follow the link, this is the page you land on:

The Closure Library is a broad, well-tested, modular, and cross-browser JavaScript library. You can pull just what you need from a large set of reusable UI widgets and controls, and from lower-level utilities for DOM manipulation, server communication, animation, data structures, unit testing, rich-text editing, and more.

WAT?

I want to handle my dependencies, not have a UI library. Maybe they refer to ClosureBuilder? I have no idea.

Let's maybe run it?

I have some code in my app that consists of multiple modules, relies on some third-party code (node_modules) etc. The entry point is index.js

How do I run it so that it generates code I need to run my app in the browser?

However you run it, it only produces some minified code that:

At this point I'm ready to give up because, well, I already have my sweet set up that handles everything, transpiles, and compiles, and minifies, and bundles all code.

Switching to ADVANCED compilation mode may pollute your console output with multiple warnings or errors that are also not helpful in the least.
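A sketch of the kind of thing that bites you in ADVANCED mode (my own example, not from the Closure docs): the compiler renames unquoted property names, so any mix of quoted and unquoted access, or any code the compiler cannot see, silently breaks unless you declare externs.

var config = { retryCount: 3 };     // may be compiled to { a: 3 }
console.log(config.retryCount);     // renamed consistently to config.a, still 3
console.log(config['retryCount']);  // string access is not renamed: undefined after compilation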

In (small) conclusion

Google's Closure compiler makes no sense outside Google infrastructure. When you have set up everything the Google way and there are people to help you along as you run into problems, you're ok.

When you're alone (as most devs are), you will either spend an unknown number of hours going through error messages, disassembling the setups of other projects (ClojureScript, Angular 2)...

Or just use the tools which you kinda sorta can set up without going insane.

Why am I even writing about this?

Building JavaScript apps is overly complex right now

Among other things The State of JS 2016 asked developers if they disagreed (1), were neutral (3), or agreed (5) with the following statement: Building JavaScript apps is overly complex right now.

Results: 1: 5%, 2: 11%, 3: 27%, 4: 29%, 5: 30%

A full 59% of developers agree that Building JavaScript apps is overly complex right now. Only 16% disagree.

Javascript fatigue is real, no matter how hard some people try to shrug it off (see this for an example).

We’re told: move fast and break things. For once, just for once, I would like to stop breaking things and stick to something that works. Half of the issues here is something I’ve encountered in the past two weeks alone, when trying to create a project from scratch.

Developer Experience

Developer Experience is not a lost art. It’s an art that has never been discovered. It was briefly discussed a few years ago and then forgotten. These days if you ever hear it mentioned, it’s usually in the context of APIs.

However (emphasis mine):

As software consumes the world around us, good design in our tools is a growing competitive advantage

Our tools shape our work, and great tools change the shape of our industry.

We talk a lot about the importance of a strong engineering team, but not enough about the design of our tools and the impact it has on the quality of the products we build. We should be talking about DX more, and it’s not enough to talk about UX alone.

Steve Boak

Javascript tools are quite literally death by a thousand cuts. The whole experience of working with and building for Javascript is, at the very least, an exercise in frustration. The landscape is utterly hostile to developers. With experience you learn to navigate it, somewhat safely. Is it an experience you need, though?

Falsehoods programmers believe about Javascript tools

  1. Tools work when you run them
  2. Tools can be configured
  3. In JSON
  4. In Javascript
  5. Can be pointed to different config file
  6. Don’t use hidden/special files like .something.rc
  7. Tools fail if their config is incorrect
  8. Tools warn you about invalid values in configs
  9. Tools ignore invalid values in config
  10. Tools use defaults instead of invalid values in config
  11. Tools don’t ignore valid values in config
  12. Well, at least tools report non-config errors
  13. At least fatal errors?
  14. Tools propagate errors from their plugins or sub-tools
  15. At least fatal errors? I asked that one already, didn’t I?
  16. You can tell if an error originated in the tool, in a plugin or in a sub-tool
  17. At least, errors are clearly stated on screen/in logs
  18. At least, with reasonable and relevant information
  19. Tools can be invoked from command line
  20. Tools can be run on a list of files
  21. With glob patterns
  22. Minor versions of tools, or their plugins, or their sub-parts keep backwards compatibility
  23. Tools fail if none of their requirements are satisfied
  24. Tools fail if some of their requirements are not satisfied
  25. Errors or warnings if this happens
  26. There is only one way to do things
  27. There is more than one way to do things
  28. These many ways lead to the same result
  29. Your tool will be relevant 5 years from now
  30. Ok, a year from now

So, let’s talk tools.

npm

npm is the ubiquitous javascript tool. You may still run into bower occasionally, but that battle seems to have been lost.

npm is:

It probably does other tasks, but these are the most important ones. It does its job quite well, and I cannot recommend this post highly enough. Still, there are gotchas.

Run whatever I think you want, not what you want

This is a relatively minor WTF, but it’s there nonetheless:

If the specified configuration param resolves unambiguously to a known configuration parameter, then it is expanded to that configuration parameter. For example:

npm ls --par
# same as:
npm ls --parseable

In the example above --par is an invalid parameter. npm will not silently ignore it (which would be bad). npm will silently expand it to whatever parameter or combination of parameters it fuzzy matched.

npm has been notoriously bad at detecting actual errors. For example, until version 3.x (!) it would not fail if its configuration file was invalid.

Depend on whatever I think you want, not what you want

Let’s consider npm install --save. This installs a dependency and adds it to the dependency list in your project’s package.json. And by saving I mean “take the list of dependencies, sort it alphabetically, and write it back to disk”.

This would not be a problem, save for this:

The npm client installs dependencies into the node_modules directory non-deterministically. This means that based on the order dependencies are installed, the structure of a node_modules directory could be different from one person to another. These differences can cause “works on my machine” bugs that take a long time to hunt down.

Yarn: A new package manager for JavaScript

Brilliant, isn’t it?

Remove all the things

Well, you know/remember this one: Rage-quit: Coder unpublished 17 lines of JavaScript and “broke the Internet”

Many were quick to attribute the problem to the horrible programmer culture of JavaScript, where people have forgotten how to program. That’s not the case. The JavaScript community has wholeheartedly adopted Unix’s philosophy of “one package has to do one thing, and do it well”, but may have taken it to extremes.

The problem, or rather problems (plural) are all laid out here: https://github.com/npm/npm/issues/12045, so I’m not going to re-iterate.

npm and npm’s registry are essential to developers and to developer experience. The way some/many of the arising issues are handled by npm’s organisation is clearly detrimental to developer experience.

Your package name doesn’t matter. Until it matters

First, this story.

So after a search for various of keywords I found out that the module name npmjs was still available in the registry. In the four years that the registry existed nobody took the effort in registering it. This module was and is a perfect fit for my module.

On the 22th I received an email from Isaac, the CEO of npm Inc (which recently raised more than 2M in funding for his company) and creator of npm with a question:

Can you please choose another name for this module? It’s extremely confusing. Thanks.

It didn’t even matter what how right or wrong I was for using npmjs as a module name Isaac had clearly already decided to destroy the module as he stated there wasn’t any negotiation and that it would be deleted no matter what.

Arnout Kazemier

Sounds bad, doesn’t it? Ok, what about this story then (emphasis mine)?

For a few minutes today the package “fs” was unpublished from the registry in response to a user report that it was spam. It has been restored.

More detail: the “fs” package is a non-functional package. It simply logs the word “I am fs” and exits. There is no reason it should be included in any modules. However, something like 1000 packages do mistakenly depend on “fs”, probably because they were trying to use a built-in node module called “fs”.

Incident Report for npm, Inc.
  1. Should you even allow publishing a module that has the same name as an internal one?
  2. If npmjs is confusing, how is fs not confusing?
  3. If thousands of packages depend on it, how can you remove it considering the SNAFU you had just several months prior?

npm devs are clueless? (added 2016-12-27)

Even though npm is a rather nice package manager, its devs often behave like they haven’t got a clue as to what’s happening, or how development should happen.

Performance drop by 40% or more

At one point npm implemented a new progress bar which slowed down installation speeds by 40% and more. See related issue. Worse still, it was now enabled by default (disabled by default in the previous version).

You’ve got to love some comments from the npm dev team:

I’ve been aware of this as an issue for a while and the fix was literally 10 minutes or so of effort, but it hadn’t bubbled up in priority as I hadn’t realized how big an impact it was having.

Or, in a related issue (this appeared after the release):

Profiling would be grand… Put together a minimal benchmark to work against… Ideally I’d like this benchmark to be 2.x and 3.x compatible so we can directly compare different parts.

We break your stable branch, we’re not going to fix it

Look at this issue. Here’s the problem:

I’ll leave this without comment.

Semantic versioning all the way! Not!

NPM swears by semantic versioning. The CLI will even warn you if your packages do not conform to SemVer.

Meanwhile, npm project itself couldn’t care less about semantic versioning.

You may wish to check any of the changelogs for any release, including patch releases (for example, 4.0.2).

And this really leads to the following question:

Can we even trust our core tools?

This all leads to rather horrible questions:

Also, see the rationale behind Yarn, Facebook’s alternative package manager.

Build tools

The number of build tools (subsets and supersets of Makefiles and npm, or combination of all of the above) for Javascript is simply staggering.

Grunt, Gulp, Broccoli, Brunch, Jake, Mimosa, Webpack, Bower…

They are all broken in more ways than one. Let me just quote from my own experience (read the whole article for more than just this one snippet):

Let’s step back for a second, and consider:

  • Grunt does not transform code, it’s a “task runner”. The task it runs is called browserify
  • Well, the problem with the browserify task is that the task runner cannot run it. It needs a plugin called grunt-browserify to do that
  • Oh, and browserify has to run babel on the source code before it can do anything with it.
  • And the problem is that browserify cannot run babel directly. It needs babelify to work

All in all the whole toolchain to produce a Javascript file is grunt -> grunt-browserify -> browserify -> babelify -> babel. And someone in that chain decides that babel missing all of its specified plugins is not a reason to throw an error, stop and report.

Javascript: Fatigue vs. Stockholm syndrome

Are these problems fixed now? I don’t know. I no longer even care if they are fixed or not. I got myself new shiny better toys to play with. Or did I?

webpack

Webpack is almost the latest and greatest in a web-developer’s life (there’s an even more latest-and-greatest now, called Rollup).

On the surface it does the following: take all the modules that your app has, figure out the dependencies between them, and bundle them up into a single file that you can serve on your website.

All webpack understands is ES5 and some common module structures: CommonJS, AMD etc. It can invoke so-called loaders whose job is to take a file and produce an ES5 output which Webpack will then take and bundle.

As a result, you can somewhat easily depend on anything: modules written in ES6, or in TypeScript, or in Coffeescript, or… You can even depend on CSS files or PNGs. If there’s a loader for that type of file, Webpack can bundle it.
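For reference, a minimal sketch of what a webpack 1.x config of that era looks like, assuming babel-loader is installed (paths and names here are made up): every matching file is handed to the loader, and whatever ES5 comes back gets bundled.

// webpack.config.js
module.exports = {
  entry: './js/app/index.js',
  output: {
    path: __dirname + '/dist',
    filename: 'app.js'
  },
  module: {
    loaders: [
      { test: /\.js$/, exclude: /node_modules/, loader: 'babel-loader' }
    ]
  }
};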

Boilerplates

Also as a result, Webpack’s configuration is inane. And I don’t say it lightly.

Search github for “boilerplate”, and you will come away with easily hundreds of “this is the configuration you need to get started” repos, because it is very nearly impossible to configure webpack.

Webpack is not solely to blame for this. It’s also the fractured tools, the fractured libraries etc.

loaders:[{
   test: /(\.css)/,
   loader: "css-loader?module&localIdentName=[path][name]--[local]--[hash:base64:5]"
}]

Yes, the above is configuration for css-loader. Yes, it is a string that contains URL-like structure where parameters you pass in are, well, URL query parameters. Because reasons.

There’s a reason for that, obviously. There always is.

First, you can pipe loaders for a file type:

loaders:[{
   test: /(\.css)/,
   loader: "style-loader!css-loader?importLoaders=1!autoprefixer-loader"
}]

This configuration pipes a .css file through style-loader then css-loader with options then autoprefixer-loader.

Second, you can do the same thing from within your code because why the fuck would you not want to do that?

require("style-loader!css-loader?importLoaders=1!autoprefixer-loader!...")

Did you know you could exclude things from a loader, if, well, there’s such an option?

loaders:[{
   test: /(\.css)/,
   loader: ExtractTextPlugin.extract('style', 'css?-autoprefixer!postcss')
}]

This is a plugin. Acting as a loader. It has a pre-loader, style. And a loader. Which is a combination of two loaders, css and postcss. Oh, and we exclude autoprefixer (plugin? feature?) from the css loader.

"You were so preoccupied with whether you could, you didn’t stop to think if you should."

Yes, if there’s just one loader, you can provide its parameters as a regular JSON structure (feast your eyes on the query parameters section). However, this is yet another impedance mismatch you have to deal with when trying to figure out what, where and how to configure all the moving parts.
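For comparison, here is roughly what the same options look like in object form (the option names are taken from the string above; this is a sketch, not a verified config):

loaders: [{
   test: /(\.css)/,
   loader: 'css-loader',
   query: {
     module: true,
     localIdentName: '[path][name]--[local]--[hash:base64:5]'
   }
}]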

You go and look for docs on a *-loader, and you run into anything: config as strings, config as objects, a mix of it all. And if something goes wrong, you are left alone; there’s no way to know what failed.

But honestly. How much time do you have to spend to come up with this: ExtractTextPlugin.extract('style', 'css?-autoprefixer!postcss')?

Learn to love the stacktrace

Let’s pretend you are a C++ developer. You use a Makefile to invoke the gcc compiler on your source code. If there is an error in your source code, you will see the relevant error.

For some reason you will never see the stacktraces from either the make tool or from gcc.

Not so in Javascript. Because internal stacktraces from the build tools are the bread and butter of everyday JS development.

Enjoy.

> webpack --config webpack.config.js --progress --colors -w -d

Hash: 6f816ab5f143490174a0
Version: webpack 1.13.2
Time: 3176ms
     Asset     Size  Chunks             Chunk Names
    app.js  1.11 MB       0  [emitted]  main
app.js.map  1.25 MB       0  [emitted]  main
   [0] multi main 28 bytes {0} [built]
    + 226 hidden modules

ERROR in ./js/app/fsm/payment-flow-fsm.ts
Module parse failed: /Users/dmitriid/Projects/project/node_modules/awesome-typescript-loader/dist/entry.js!/Users/dmitriid/Projects/project/js/app/fsm/payment-flow-fsm.ts Unexpected token (3:18)
You may need an appropriate loader to handle this file type.
SyntaxError: Unexpected token (3:18)
    at Parser.pp$4.raise (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:2221:15)
    at Parser.pp.unexpected (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:603:10)
    at Parser.pp$3.parseExprAtom (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:1822:12)
    at Parser.pp$3.parseExprSubscripts (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:1715:21)
    at Parser.pp$3.parseMaybeUnary (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:1692:19)
	<...skip 20 or so lines...>
    at Parser.parse (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:516:17)
    at Object.parse (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/acorn/dist/acorn.js:3098:39)
	<...skip another 20 or so lines...>
    at nextLoader (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/webpack-core/lib/NormalModuleMixin.js:290:3)
    at /Users/dmitriid/Projects/project/node_modules/webpack/node_modules/webpack-core/lib/NormalModuleMixin.js:259:5
    at Storage.finished (/Users/dmitriid/Projects/project/node_modules/webpack/node_modules/enhanced-resolve/lib/CachedInputFileSystem.js:38:16)
    at /Users/dmitriid/Projects/project/node_modules/webpack/node_modules/enhanced-resolve/node_modules/graceful-fs/graceful-fs.js:78:16
    at FSReqWrap.readFileAfterClose [as oncomplete] (fs.js:380:3)
 @ ./js/app/index.tsx 9:25-58

ERROR in [default] /Users/dmitriid/Projects/project/js/app/fsm/payment-flow-fsm.ts:3:15
Expression expected.

The relevant ticket for this is webpack/webpack#1245. Note no one even asks the most obvious question: “why in the seven hells would I need internal stacktraces if I’m not a webpack/plugin developer?” Well, not until yours truly came along.

Can you understand what actually failed?

So, I’m running this:

> webpack --config webpack.config.js --progress --colors "-w" "-d"

Hash: 578f6adad579fede3e98
Version: webpack 1.9.6
Time: 3224ms
     Asset     Size  Chunks             Chunk Names
    app.js  1.12 MB       0  [emitted]  main
app.js.map  1.27 MB       0  [emitted]  main
   [0] multi main 28 bytes {0} [built]
    + 227 hidden modules

ERROR in ./js/app/fsm/state-manager.ts
Module not found: Error: Cannot resolve module 'machina' in /Users/dmitriid/Projects/js/app/fsm
 @ ./js/app/fsm/state-manager.ts 2:14-32

Can you immediately tell me if it’s webpack or typescript failing?

Ok, how about this:

> webpack --config webpack.config.js --progress --colors "-w" "-d"

Hash: ab8d7ccc3d611479ca81
Version: webpack 1.9.6
Time: 4464ms
     Asset     Size  Chunks             Chunk Names
    app.js   2.4 MB       0  [emitted]  main
app.js.map  2.77 MB       0  [emitted]  main
   [0] multi main 28 bytes {0} [built]
    + 341 hidden modules

ERROR in [default] /Users/dmitriid/Projects/js/app/fsm/state-manager.ts:1:25
Cannot find module 'machina'.

As you can clearly see, the first one is a webpack error. The second one is a TypeScript error. Relevant issue: webpack/webpack#2878 (there are probably others).

All your options are belong to us

Specifically, the watch option. Could be others. I don’t know and don’t care at this point.

So, if you provide watch: true in Webpack configuration, Webpack will:

Enter watch mode, which rebuilds on file change.

Webpack documentation

This is, unsurprisingly, not entirely correct. See, if you ever decide to create your own build script and invoke webpack through its node.js API, you will see that:

Except

Because, I guess, reasons. And, surprisingly, the “short” version with webpack(config, callback) works as expected. Who’d a thunk it.
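For what it’s worth, here is how I’d sketch the difference via the Node API (hedged: this reflects my reading of webpack 1.x behaviour, not its official documentation):

var webpack = require('webpack');
var config = require('./webpack.config.js'); // contains watch: true

// "Long" form: you just get a compiler object back; the watch flag in the
// config is not acted upon for you, you still have to call run() or watch().
var compiler = webpack(config);
compiler.run(function (err, stats) {
  console.log(stats.toString());
});

// "Short" form: with a callback, webpack itself checks config.watch and
// enters watch mode, as the documentation promises.
webpack(config, function (err, stats) {
  console.log(stats.toString());
});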

Forget continuous builds (added 2016-12-27)

Just read through this issue. Tl;dr: if webpack fails, it exits with a status code of 0. Because reasons.

And yes, despite this being a major bug in the main version currently used, it has not been fixed in the two years since it was reported. Because reasons.

Nothing works out of the box anymore

Install and run? Sane defaults? These things are becoming a rare beast in the Javascript world. It seems that nothing works out of the box anymore.

In JS world, sadly, going through hoops and withholding crucial information from the developer is now the accepted norm.

Babel

The only job that Babel does is compiling the next version of JavaScript to the current version of JavaScript.

I mean, this is what it says on its site:

Babel is a JavaScript compiler.

Use next generation JavaScript, today.

Babel transforms your JavaScript

You put JavaScript in

[1,2,3].map(n => n + 1);

And get JavaScript out

[1,2,3].map(function(n) {
   return n + 1;
});
https://babeljs.io

THIS IS A LIE. A lie so blatant that the next line on the site says it’s a lie:

Start by installing the Babel CLI and a preset

npm install --save-dev babel-cli babel-preset-latest

Create a .babelrc file in your project (or use your package.json)

{
   "presets": ["latest"]
}
https://babeljs.io

Understand this: out of the box the javascript compiler does not do a single thing. You have to install a number of plugins/presets before it even does anything.

Moreover, if there are no presets and no plugins installed, babel will not even complain about it. It will just … do nothing.

Given this index.js file:

[1, 2, 3].map(n => n + 1);

Running freshly installed babel will not even warn you, and will do nothing:

> node_modules/.bin/babel index.js
[1, 2, 3].map(n => n + 1);

Only if you install a preset and specify it do you get back what you need.
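Roughly what that looks like once a preset is actually installed and specified (output reproduced from memory, so treat it as a sketch):

> npm install --save-dev babel-preset-latest
> echo '{ "presets": ["latest"] }' > .babelrc
> node_modules/.bin/babel index.js
"use strict";

[1, 2, 3].map(function (n) {
  return n + 1;
});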

Babel has the “latest” preset which is:

This is a special preset that will contain all yearly presets so user’s won’t need to specify each one individually.

The answer is yes in any world other than Javascript.

Typescript

I’ve decided to talk about Typescript, because why not. More often than not Javascript is only used as a target language. Multiple other languages exist that compile/transpile into Javascript while promising nicer features, syntax, tools and so on and so forth.

Typescript is a superset of Javascript developed by Microsoft. It introduces type checks and various niceties into the language.

It includes a nice fast compiler which removes the need for Babel, but, obviously, it introduces a whole host of other problems.

Loaders

Not a typescript issue per se, but still.

When invoking it from Webpack, you will deal with either ts-loader or awesome-typescript-loader.

The former is recommended in the TypeScript docs. The latter is used in 99% of Webpack boilerplates. Good luck figuring out how they are different, what features they support or don’t support etc.

I haven’t had much experience with ts-loader (yet?), but I’ve already run into the following with awesome-typescript-loader.

Who cares about your configs. Part I

Project from scratch. Forgot to create tsconfig.json. No errors whatsoever, obviously. Webpack fails because it cannot parse the non-processed .ts file.

Why did .ts compilation fail? Was it due to a missing tsconfig.json? Wouldn’t it be just so nice if the tools involved could report this?

Who cares about your configs. Part II

When trying to clean up configs I decided to move tsconfig.json to a config directory. Provided the path to this file via the tsconfig option, as per the README.
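For reference, the relevant bit of webpack config looked something like this (the tsconfig option name is the one from the loader’s README; everything else is a sketch):

loaders: [{
   test: /\.tsx?$/,
   loader: 'awesome-typescript-loader',
   query: {
     tsconfig: './config/tsconfig.json'
   }
}]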

Provided the following invalid option to the compiler in the tsconfig:

{
  "compilerOptions": {
    "target": "absdefg"
  }
}

The config above was accepted, and silently ignored. Everything got compiled. Was the config file even picked up? Well, webpack didn’t fail, so probably it was (see Part I). Who ignored the error? The TS compiler? awesome-typescript-loader? No one knows, and it is impossible to find out.

Third-party modules, do you speak it? (upd. 2016-11-16)

Sooner or later you will have to import modules written in or transpiled to Javascript. Unless you develop a library with no external dependencies (quite possible) or something that only depends on other libraries written in Typescript (highly unlikely).

So, you will find yourself typing something like this into your Typescript code:

import machina from 'machina';

Which will fail:

[default] ...app/src/fsm/state-manager.ts:1:25
Cannot find module 'machina'.

See, Typescript needs type definitions describing the stuff you import. And here’s a type definition that will work just fine:

declare module 'machina';

You will ask however, “Is that it?”. Yes, that is it.

Edit (Nov 16): This will finally be fixed in TypeScript 2.1.3

Because reasons. Go ahead and try to make sense of the ambient modules section in the docs.

Types, types everywhere

Libraries are developed in whatever language authors prefer. To provide proper static type checking Typescript needs more than the stub module definitions. It needs actual type definitions for libraries.

Thanks to countless contributors to DefinitelyTyped there are quite a few definitions Typescript can use.

Well… for some definition of “can”.

> npm install @types/superagent

[default] /Users/dmitriid/Projects/keyflow/keyflow-website/app/node_modules/@types/superagent/index.d.ts:83:30
Cannot find name 'Promise'.

See, despite the fact that this is not an isolated problem, there are no solutions to it.

Well, except one: maybe try a newer version of npm, namely npm 3.x.

I personally have a problem with this. node.js has this thing called Node LTS, Long Term Support. And at the time of this writing it was:

> node -v
v4.6.0
> npm -v
2.15.9

I know, I know. I probably shouldn’t run cutting edge stuff on non-cutting-edge platforms, yada yada.

The problem is there, the problem exists. And, obviously, it exists for some modules and not for others (@types/react works, @types/superagent doesn’t, ad nauseam). Because reasons.

Wrapping up

I’m not sure this warrants a conclusion. I’ve stopped detailing my experience as I was approaching the 4000 word mark. However, there are so many more things that break, run amok, break in unpredictable ways, etc. etc. etc.

I’ll leave you with these quotes:

…never have I worked in an ecosystem where the knowledge attained while becoming a master of the craft goes out of date so rapidly, or where solutions are quite so brittle.

The JS world’s obsession with small tools mean that they combine in endless permutations, causing endless issues to debug.

When the tower of abstractions falls down in a steaming pile and you need to figure out what’s gone wrong and fix it, you then end up sinking hours and hours into it (all the more because you don’t really understand what’s going on, because you didn’t set it up from scratch).

Or you waste a month figuring out how to plug all the tools together. If I have a complex project I know full well I’m going to want code coverage, a proper module system, minification, etc. etc. The initial time investment to investigate the options here and get it all working is faintly ridiculous compared to ecosystems like Java or .NET (or even C++, for that matter).

I’m not even going to talk about the cavalier attitude various popular parts of the ecosystem (e.g. react-router) have towards API stability.

It’s deeply irksome, at best.

Alastair Maw

…the JavaScript community suffers from a very serious case of NIH syndrome, compounded by a neglect for long term sustainability of software projects.

Don’t get me wrong, every single language in the 20 or so years of web development has gone through the framework phase, where people would experiment with solutions for every one of those problems.

The difference is that every one of those languages very quickly converged into good solutions and then made those good solutions into effective tools to get things done.

The JavaScript community, otoh, doesn’t seem to get to the converging part…

Daniel Ruoso

It’s been said that “Javascript fatigue” appears because developers are lazy.

It’s been said that “Javascript fatigue” is because developers don’t want to learn anything new.

These arguments are null and void…

  • there’s nothing lazy in trying to make your build tool work
  • there are exactly zero useful things to learn from that experience

The time I spent trying to figure out the exact motions of all the moving parts I could spend on learning something genuinely new.

Instead, I now have a build toolchain that I have exactly zero confidence in (because it will break unexpectedly at the very next update of any of the fourteen hundred moving parts in it).

Yours truly

Originally posted on Medium

I kinda like Elixir. I even think that Elixir is a huge leap towards the future and all the right things that should be happening in the Erlang world (see related post here). I still get irritated by it :)

What follows is a highly opinionated … erm … opinion. I am right and you are wrong ;)

It’s not the syntax that’s important, it’s the consistency

I will be hung out to dry for this, then brought in and hung out to dry again. But here goes.

Let’s start with Erlang. Erlang has one of the most concise and consistent syntaxes in programming languages.

Erlang syntax primer

1: A sequence of expressions is separated by commas:

Var = fun_call(),
Var2 = fun_call2(), Var3 = fun_call3(Param1, Param2)

2: A pattern match after which we return a value is followed by an arrow. From a programmer’s point of view, there’s no difference if this is a pattern match introduced by the function declaration or by a case, receive or if

function_declaration(Param1, Param2) -> 
  Var = fun_call(),
  Var2 = fun_call2(), Var3 = fun_call3(Param1, Param2),
 
  receive
     {some, pattern} ->
         do_one(),
         do_three(),
         do_four() %% the value of this call will be returned
  end.

3: Choices are separated by semicolons. Once again, there’s no real difference (at least to the programmer) between choices in function heads (we choose between different patterns) and choices in case, if or receive:

function1({a, b}) ->
  do_a_b();
function1({c, D}) ->
  case D of
    {d, e} -> do_c_d_e();
    {f, g} -> do_c_f_g()
  end.

4: There are also blocks, and a block is terminated by an end (there’s a bit of inconsistency in blocks themselves: begin...end, case of...end, try...catch...end, if...end). E.g.:

function() ->
  begin
    case X of
      b -> 
         if true -> ok end
    end
  end.

What of Elixir?

And here’s my problem with Elixir: you always have to be aware of things, and of what goes where and when.

do..end vs ->

def f(a, b) do
  case a do
    {:c, :d} ->
      do_a_b_c_d()
    {:e, :f} ->
      receive do
        {:x} -> do_smth()
        {:y} -> something_else()
      end
  end
end

What? Why is there suddenly an arrow? In a language where (almost) everything is a do...end the arrow is jarringly out of place. For the longest time ever I honestly thought that the only thing separating pattern matches in a case statement is indentation. Which would be weird in a language where whitespace is insignificant.

There’s an argument that these are individual patterns within a do...end block separated by newlines and denoted by arrows (see here). cond, case and receive all use that. The argument is moot, however.

Here’s how you define an anonymous function in Elixir:

a_fun = fn x -> x end

This is an even weirder construct. It’s not a do...end. But it has an arrow and an end. Moreover, regular functions are also individual patterns within a do...end block:

defmodule M do
  def x(:x), do: :x
  def x(:y), do: :y
end

And here’s how you define similar constructs with arrows:

a_fun = fn :x -> :x
           :y -> :y
        end

result = case some_var do
           :x -> :x
           :z -> :z
         end

If the arrow is just syntactic sugar for do...end, why can’t it be used in other places? If it’s used for sequences of patterns, why can’t it be used for function declarations?

There’s definitely an argument for readability. Compare:

## arrows
case some_var do
  :x -> :x
  :z -> 
    some
    multiline
    statement
    :z
end

## do..end as everywhere
case some_var do
  :x, do: :x
  :z do
    some
    multiline
    statement
    :z
  end
end

But then… Why not use the more readable arrows in other places as well? ;)

## regular do..end
defmodule M do
  def x(:x), do: :x
  def x(:y) do
    some
    multiline
    statement
    :z
  end
end

## arrows
defmodule M do
  def x(:x) -> :x
  def x(:y) ->
    some
    multiline
    statement
    :z
  end
end

So, in my opinion, arrows are a very weird, inconsistent, hardcoded piece of syntactic sugar. And they irritate me :)

Moving on

Parentheses are optional. Except they aren’t

One of the arguably useful features of Elixir is that function calls do not require parentheses:

> String.upcase("a")
"A"
> String.upcase "a"
"A"

Except that if you want to keep your sanity instead of juggling things in your mind, you’re better off using the parentheses anyway.

First, consider Elixir’s amazing pipe operator:

## instead of writing this
Enum.map(List.flatten([1, [2], 3]), fn x -> x * 2 end)

## you can write this
[1, [2], 3] |> List.flatten |> Enum.map(fn x -> x * 2 end)

Whatever is on the left side of the pipe will be passed as the first argument to whatever’s on the right. Then the result of that will be passed as the first argument to the next thing. And so on.

However. Notice the example above, taken straight from the docs. List.flatten, a function, is written without parentheses. Enum.map, also a function, requires parentheses. And here’s why:

> [1, [2], 3] |> List.flatten |> Enum.map fn x -> x * 2 end
iex:14: warning: you are piping into a function call without parentheses, which may be ambiguous. Please wrap the function you are piping into in parentheses. For example:

    foo 1 |> bar 2 |> baz 3

Should be written as:

    foo(1) |> bar(2) |> baz(3)

This is just a warning, and things can get ambiguous. That’s why you’d better always use parentheses with pipes. It gets worse though.

Remember anonymous functions we talked about earlier? Well. Let me show an example.

Here’s how a function named flatten declared in module List behaves:

> List.flatten([1, [2], 3])
[1, 2, 3]
> List.flatten [1, [2], 3]
[1, 2, 3]
> [1, [2], 3] |> List.flatten
[1, 2, 3]

Here’s how an anonymous function behaves:

> a_fun = fn x -> List.flatten x end
#Function<6.54118792/1 in :erl_eval.expr/5>
> a_fun([1, [2], 3])
** (CompileError) iex:17: undefined function a_fun/1
> a_fun [1, [2], 3]
** (CompileError) iex:17: undefined function a_fun/1
> [1, [2], 3] |> a_fun
** (CompileError) iex:17: undefined function a_fun/1
    (elixir) expanding macro: Kernel.|>/2
    iex:17: (file)

Wait. What?!

Oh. Right. Anonymous functions, what a nice joke. Let’s all have a good laugh. An anonymous function must always be called with .()

> a_fun.([1, [2], 3])
[1, 2, 3]
> [1, [2], 3] |> a_fun.()
[1, 2, 3]

Because parentheses are optional, right. And functions are first class citizens, and there’s no difference if it’s a function or a variable holding a function, right? (In the example below &List.flatten/1 is equivalent to Erlang’s fun lists:flatten/1. It’s called a capture operator.)

> a_fun = List.flatten
** (UndefinedFunctionError) undefined function List.flatten/0
    (elixir) List.flatten()
> a_fun = &List.flatten
** (CompileError) iex:19: invalid args for &, expected an expression in the format of &Mod.fun/arity, &local/arity or a capture containing at least one argument as &1, got: List.flatten()
> a_fun = &List.flatten/1
&List.flatten/1
> a_fun([1, [2], 3])
** (CompileError) iex:21: undefined function a_fun/1
> a_fun.([1, [2], 3])
[1, 2, 3]

Yup. Because parentheses are optional.

If you think it’s not a big problem, and that anonymous or “captured” functions are not that common… They may not be common, but they are still used. An empty project using Phoenix:

Every time you use a “captured” or an anonymous function, you need to keep in mind one utterly, entirely useless piece of information: they are invoked differently from the rest of the functions, they require parentheses, etc. etc.

> 1 |> &(&1 * 2).()
** (ArgumentError) cannot pipe 1 into &(&1 * 2.()), can only pipe into local calls foo(), remote calls Foo.bar() or anonymous functions calls foo.()
> 1 |> (&(&1 * 2)).()
2

Local. Remote. Anonymous. Riiiight

Compare with Erlang:

1> lists:flatten([1, [2], 3]).
[1,2,3]
2> F = fun lists:flatten/1.
#Fun<lists.flatten.1>
3> F([1, [2], 3]).
[1,2,3]

Before we move on: calls without parentheses lead to the unfortunate situation where you have no idea what you’re doing: assigning the value of another variable, or the result of a function call.

some_var = x  ## is it x or x(). Let's hope the IDE tells you

And yes, this also happens.

do..end vs ,do:

This has been explained to me here. I will still mention it, because it’s annoying the hell out of me and is definitely confusing to beginners. Basically, it’s one more thing to keep track of:

def oneliner(params), do: result

def multiliner(params) do
  one
  two
  result
end

Yup. A comma and a colon in one case. No comma and no colon in the second case. Because syntactic sugar and reasons.

Originally posted on Medium

Disclaimer: I love Erlang. From 2005 to 2015 I ran erlanger.ru (mostly alone, as editor-in-chief, content creator, developer — you name it). It is my favourite language, and this isn’t going to change any time soon. Exactly because I love it, I see problems with it.

Problem statement


Other languages are getting better at being Erlang than Erlang is getting better at being other languages.


Ericsson’s Baby

Erlang is first and foremost Ericsson’s baby. Thus, enterprise and telecom are the two things that influence Erlang the most. It is Erlang’s greatest strength and its greatest weakness.

The enterprise side of things made sure that Erlang has got to be one of the most stable languages out there:

The telecom side of things made sure that Erlang is also probably the language that got more things right from the start (or nearly from the start) than any other language:

The problem, however, is that this no longer matters.

A Brief, Incomplete, and Mostly Wrong History of Erlang

Java is the new Erlang

This may be hard to swallow, but Java is the new Erlang. And other languages either piggyback Java, or are fast developing to provide competition in many areas.

Let’s look at the most common tools that everyone uses or knows about in areas where Erlang is supposed to shine: Hadoop, Kafka, Mesos, ChaosMonkey, ElasticSearch, CloudStack, HBase, <insert your project>.

For every single one of your distributed needs you will take Java. Not Erlang.

In other rapidly developing fields such as IoT, there’s Java again, C/C++, Python. Rarely, if ever, Erlang.

And the trend will only continue. Server management and monitoring tools, messaging and distribution, parallel computing—you name it—see an explosion of tools, frameworks and libraries in most languages. All of them are encroaching on space originally thought to be reserved for Erlang.

Erlang Is the New … Nothing, Really

While others are encroaching on its territory, Erlang has really nothing to fight back with.

Even today Erlang is a very bare-bones language. An assembly, if you will, for writing distributed services. Even if you include OTP and third-party libraries. And, as a developer, you have to write those services from scratch yourself. Not that it’s a bad thing, oh no. But in a “move fast and break things” scenario having to write lots of things from scratch may break you, and not others.

Erlang is a language for mad scientists to write crazy stuff for other mad scientists to use or to be amazed at (see my lightning talk from 2012)

Anything else? Well,

This leaves Erlang… well, in limbo.

Is There Hope Yet?

I really really don’t know. I really hope though.

Due to its heritage Erlang will evolve slowly. It’s a given, and it’s no fault of the brilliant people who develop Erlang.

Other languages for the Erlang VM? Maybe. As we have seen with Elixir, in three years it’s got more development (both in the language itself and in the available libraries) than Erlang got in 10 years. We can only hope the drive will not subside. Will there be a “Hadoop.ex” to rival the Java Hadoop? I don’t know.

Incidentally, a lot of the praise that now comes towards Elixir mentions Erlang only in passing. As in “compiles to Erlang” or “runs on the Erlang VM”. And maybe this is exactly what Erlang needs: to become for (Elixir|LFE|Efene|…) what the JVM became for (Scala|Clojure|Groovy|…)?

Erlang is far from dead. It’s alive and kicking (some butt, if need be). Will it do that for long? In a “let’s choose between languages A, B, and Erlang for scenario X” world, will there remain a scenario X where Erlang is still relevant and/or a valid choice?

I don’t know. I sure as hell hope.


I’m trying to do some good myself. I’m not bright enough to create something like the libraries mentioned here, but there’s neo4j-erlang and pegjs for what it’s worth.

Originally posted on Medium

Programmers are the worst. They will vehemently defend the tool of their choice even though it’s the worst tool in the universe. See any vs in the programming community. Tabs vs. spaces. Emacs vs. vim. C++ vs. PHP. You name it.

tl;dr: Javascript fatigue is real. If you deny it, you have a case of Stockholm syndrome. The problem isn’t new, see these two excellent blog posts: http://blog.keithcirkel.co.uk/why-we-should-stop-using-grunt/ and http://blog.keithcirkel.co.uk/how-to-use-npm-as-a-build-tool/ for a calmer discussion.

A little true story

All joking aside, I think that Javascript programmers are even worse than the worst. Here’s a little true story that happened to me three weeks ago.

We have a project which started way before Webpack was in any usable shape or form. So, it uses Grunt. It’s all been fine, and Grunt has been chugging along quite happily (chugging, because, well, you need to concat all/some/some magic number of files before it can figure out what to do with them. Yes, concat. Sigh). Until we imported a small three-file component which was written in — wait for it — ES6.

ES6 is the next version of Javascript (erm, ECMAScript) which is supported in exactly zero browsers, and exactly zero versions of any of the Javascript virtual machines. Through the magic of transpilers it can be converted to a supported version of Javascript. Therefore half of the internet publishes their code in ES6 now.

Oh my. I know what we need! We need Babel! The best tool to convert pesky ES6 into shiny ESwhatever-the-version (I’m told it’s 5).

Install Babel. Run.

>> ParseError: 'import' and 'export' may appear only with 'sourceType: module'
Warning: Error running grunt-browserify. Use --force to continue.

Ok. Bear with me. Babel, whose job 99% of the time consists of transforming ES6 code into ES5 code no longer does this out of the box. I have no idea if it does anything out of the box anymore.

But it comes with nice presets! One of them does the job! It’s even called, erm, es-2015. Whatever. Specify it in the options, run grunt, be happy.

>> ParseError: 'import' and 'export' may appear only with 'sourceType: module'
Warning: Error running grunt-browserify. Use --force to continue.

Ok. Bear with me. A preset is an umbrella name that specifies a list of plugins Babel will apply to code. If these plugins are not present, Babel will fail silently.

Oh, it doesn’t fail in your particular setup? Oh, how so very nice to be you. The problem is: there is exactly zero info on which of the moving parts of the entire system silently swallows the error.

Let’s step back for a second, and consider:

All in all, the whole toolchain to produce a Javascript file is grunt -> grunt-browserify -> browserify -> babelify -> babel. And someone in that chain decides that Babel missing all of its specified plugins is not a reason to throw an error, stop, and report.

Unwind. Relax. Breathe. Solve differential equations in your head. Install whatever’s needed for babel. Run grunt.

>> ParseError: 'import' and 'export' may appear only with 'sourceType: module'
Warning: Error running grunt-browserify. Use --force to continue.

Oh, Jesus Christ on a pony! What now? Ok. I’m a smart developer. I can figure this out. I mean, if I cannot debug my own toolchain, what good am I?

So, grunt has a -d option which means debug. Awesome.

Ok. Bear with me. The debug option passed to grunt does not propagate through the mess of twigs and sticks called a toolchain. Grunt does not pass the debug option to grunt-browserify, which does not pass the debug option to browserify, which does not pass the debug option to babelify, which does not pass the debug option to babel.

You have to provide separate debug options to every piece of the toolchain and pray to god that the piece in question provides such an option and does not ignore it.

Let’s add debug: true to babelify.
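For the record, that amounted to one more key in the same transform entry (again a sketch; whether anything downstream actually reads it is, as you are about to see, an open question):

// Same babelify entry as before, now with debug: true bolted on.
// It is anyone's guess which part of the chain, if any, honors this flag.
transform: [
  ['babelify', { presets: ['es2015'], debug: true }]
]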

>> ParseError: 'import' and 'export' may appear only with 'sourceType: module'
Warning: Error running grunt-browserify. Use --force to continue.

Exactly. Was the debug: true option ignored? Or is this all the debug info I can get? I have no idea.

Ok. Bear with me. Grunt has a specific list of files and components it needs to process. I can only assume that the broken ladder of crap called the toolchain gets this list from Grunt with instructions: “These are the files. Process them.”

Despite all that, Babel by default does not process files from node_modules. Even when invoked from grunt-whatever-the-plugin-is-I-don’t-care, it will not process them; it will silently skip them. You have to explicitly provide a separate global: true option in all the places in the grunt -> grunt-browserify -> babelify config where you think code from node_modules may be imported/invoked/whatever.
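Concretely, that meant yet another flag on the same babelify entry, since a transform marked as global gets applied by browserify to files under node_modules as well. A sketch, with the usual caveat that the real config had more in it:

// Same entry once more, now marked as a global transform so that
// browserify runs babelify over code pulled in from node_modules too.
transform: [
  ['babelify', { presets: ['es2015'], global: true }]
]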

No, it’s Stockholm syndrome

I’ve been told that this article is a valid argument against “Javascript fatigue”.

It’s not. It’s Stockholm syndrome.

Nothing can excuse the terrible horrible mess that the state of Javascript development is in right now. Step back, and look at your tools. Really look at them. There is a reason we have a million “webpack starter packs”. Because nothing works unless you invoke a number of semi-arcane incantations with increasingly inane combinations and versions of options.

loaders: [{
   test: /(\.css)/,
   loader: "css-loader?module&localIdentName=[path][name]---[local]---[hash:base64:5]"
}]

Really?!

I will not even go into how half of these tools don’t support recursing into directories or globs. Or how the other half don’t support monitoring the file system and recompiling stuff on the fly (what? recompiling on the fly with dependency tracking? wat?).

Why are you supporting this?

I’m not lazy. I don’t not want to learn

It’s been said that “Javascript fatigue” appears because developers are lazy.

It’s been said that “Javascript fatigue” is because developers don’t want to learn anything new.

These arguments are null and void. If you read the first part of the story, you’ve seen that:

The time I spent trying to figure out the exact motions of all the moving parts I could spend on learning something genuinely new.

Instead, I now have a build toolchain that I have exactly zero confidence in (because it will break unexpectedly at the very next update of any of the fourteen hundred moving parts in it).

And yes, I will be removing some of those moving parts. It doesn’t mean that I will enjoy it or learn anything remotely useful from it.

(imgur style): send me your **-starter-packs :)

Originally posted on Medium

No, this is not really a post about the upcoming Designing for Scalability with Erlang/OTP. Erlang is nearly unique among programming languages in that almost all of the books on it fall in the good-to-great range. This book is going to be no exception.

No, this is not about Erlang per se, as other languages have the same problem. But Erlang is a poster child for “scalable distributed 99.999999% cloud developer-to-user ratio <insert the next current rave here>”.

Recently I tweeted:

Every single book on #Erlang spends 99% of text on reiterating the same basic principles. Most of them are worse than LYSE

Where LYSE is, of course, the excellent Learn You Some Erlang for Great Good.

This is a bit harsh. Maybe not 99%. And not necessarily worse. So, what do I complain about? Most books mostly concern themselves with “draw some circles”. Reality is quite often “draw the rest of the owl”. There is almost no info on the intermediate steps in between.

Scalability is the capability of a system, network, or process to handle a growing amount of work, or its potential to be enlarged in order to accommodate that growth.

To scale horizontally (or scale out) means to add more nodes to a system, such as adding a new computer to a distributed software application.

To scale vertically (or scale up) means to add resources to a single node in a system, typically involving the addition of CPUs or memory to a single computer.

So, in the case of Erlang, which is described as, praised for being, and marketed as being scalable, distributed, durable, resilient, etc., it would be really nice to read up on at least some of these things:

Funnily enough, in an out-of-the-box Erlang:

Unfortunately, most existing books emphasize beyond measure only one single aspect of Erlang: OTP. Even though OTP is no doubt essential to creating robust, scalable, distributed applications, people looking for the answers to the questions above already grok OTP :) They already know how to draw the circles.

If you’re looking for answers, though, the landscape in Erlang books is quite bleak.

Among the commercially available books, it looks like only Mastering Erlang: Writing Real World Applications doesn’t fall into the trap of spending all of its chapters on OTP, with only one or two chapters dedicated to something else. (Update: I’m not even sure this book exists. It looks like it’s been “Coming soon” since 2010. Except perhaps here.)

The other notable exception is the excellent Erlang in Anger which answers quite a lot of the questions above.

We’re quite ready to move beyond drawing circles. Some of us are not yet ready to draw the entire owl ;) Hjälp!

Originally published as a gist

Translation of a Russian joke about Putin's answers in his yearly press conferences.

Q: Vladimir Vladimirovich, what is two times two?

A: I'll be brief. You know, just the other day I was at the Russian Academy of Sciences and had a discussion with many scientists there, including young scientists, all of them very bright, by the way. As it happens, we touched upon the problem at hand, discussed the current state of the country's economy; they also described their plans for the future. Of course, the number one priority for them is the problem of relevancy; also, just as important is the question of housing loans, but I can assure you that all these problems can be solved and we will direct all our efforts to their resolution in the very near future. Among other things this also applies to the subject you raised in your question.