Chris Bateman


A No-Nonsense Guide to Web Components, Part 2: Practical Use

This is Part 2 of a 2-part series.


In Part 1 we learned how to code pure Web Components. But of course, adopting new web technologies is rarely painless, and Web Components are especially complicated. In this post, we’ll look at the current state of browsers, polyfills, performance, accessibility, progressive enhancement, and options for practical implementations.

Browser Support

For the latest status, check out Are We Componentized Yet and caniuse. But here’s the overall situation as of December 2014:

All browsers (except for IE) have implemented HTML Templates. For the remaining specs:

  • Chrome completed and shipped all specs (enabled by default) by Chrome 36. Opera too.

  • Firefox has implemented Custom Elements and Shadow DOM to some degree, but they are disabled by default behind the dom.webcomponents.enabled flag.

    Firefox will be shipping Custom Elements and Shadow DOM when they’re satisfied and confident in the specs. They will not be shipping HTML Imports, as they want to see what the landscape looks like after ES6 modules are shipped and utilized (12/15/2014).

  • Safari removed Google’s Web Components code (in 2/2014) leftover from before Blink forked, and has yet to begin any new implementations. While not opposed to implementing something in the future, they’re clearly not satisfied with the current specs.

    “There is no way I’m implementing the version [of Custom Elements] that’s incompatible with ES6 classes” (11/1/2014)

  • Internet Explorer’s status for all specs has been “under consideration” for about 6 months (as of 12/2014). They’ve been pulling out some surprises for IE 12, but so far it seems safe to say that Web Components won’t be there.

What’s going on here?

When Chrome shipped Web Components – without a flag – they kind of ticked off all the other browser makers, who felt that the specs weren’t done baking yet and that their feedback hadn’t been fully considered. They’re fairly diplomatic about the Chrome situation – routinely saying things like, “mail me privately for my feelings on the matter,” because their feelings probably involve a number of expletives.

So Firefox and Safari aren’t shipping until they see the adjustments to the spec they want. Will those adjustments be possible when Chrome has already shipped? Will everyone get on the same page anytime soon? How’s it all going to end? I wish I had answers to these questions. Web standards are complicated things. Tune in next year (or the next) to find out!


Polyfills

In Part 1, we looked at an example of a pure, native Web Component. If you were hoping that you could write code like that, plug in a polyfill, and be good to go – you better sit down, because I’ve got some bad news for you:

It ain’t gonna happen. Some parts of Web Components simply aren’t possible/reasonable to polyfill.

But let’s examine each spec individually.

webcomponents.js (which was previously part of Polymer as platform.js) is definitely the biggest game in town when it comes to polyfills, so that’s mainly what we’ll be talking about.

Custom Elements (native support)

Custom Elements is relatively easy to polyfill – down to IE 9 and Android 2.2, if you use document-register-element for the polyfill (and it’s only 3KB gzipped too!). The webcomponents.js polyfill works down to Android 4.1.

Caveat: No support for the :unresolved pseudo class.

HTML Templates (native support)

The webcomponents.js polyfill works down to IE 9 and Android 2 (and isn’t needed in other modern browsers).

Caveat: Polyfilled templates aren’t truly inert – resources like images will still download.

HTML Imports (native support)

The webcomponents.js polyfill works down to IE 9 and Android 4.4 (some things like CSS references work down to Android 4.1, but there’s other bugginess), and in other modern browsers.

Caveat #1: Polyfilled imports load asynchronously, even if you didn’t add the async attribute.

Caveat #2: They load via XHR – which isn’t great for performance.

Caveat #3: document.currentScript, which is needed inside an import to access its templates (or other elements), can’t be polyfilled. It is, however, shimmed as _currentScript. So, to write code that works in both native and polyfilled browsers, you must do this:

document._currentScript || document.currentScript;
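For example, a small helper (the function name here is made up) keeps that check in one place:

```javascript
// Works with both native Imports and the polyfill, which shims
// document.currentScript as document._currentScript:
function getCurrentScript(doc) {
  return doc._currentScript || doc.currentScript;
}

// Inside an import, you could then reach the import document and its templates:
// var importDoc = getCurrentScript(document).ownerDocument;
// var template = importDoc.querySelector('#my-template');
```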

Shadow DOM (native support)

It is not reasonable/possible to polyfill Shadow DOM, thanks to its fancy encapsulation features. You just can’t fake the behavior of a shadow root.

The webcomponents.js code attempts to shim some of the encapsulation features by way of rewriting your CSS. It polyfills Shadow DOM’s JavaScript API, querySelector and other DOM APIs to behave (hopefully) as they should, and it moves elements selected with the <content> tag to where the shadow root would be.


  • CSS rules in the page will still apply to elements in the (fake) shadow root. It’s like everything gets a /deep/.
  • To use ::shadow and /deep/ CSS rules in the page, you must add the shim-shadowdom attribute to the <style> or <link>.
    • Even then, ::shadow rules will behave like /deep/ rules anyhow.
  • To include CSS in a shadow root, you have to add some JS to the component to check whether the Shadow DOM shim is in effect, and if so, grab the CSS text, run it through a shimming function, add the resulting CSS text to the document, and delete the original CSS.
    • You don’t have to do all this if you’re using Polymer and its wrapper/syntax. But native syntax doesn’t cut it.
  • ::content rules in the shadow root will apply to everything in the shadow root – not just children of <content> elements.
  • When using <content>, DOM hierarchy will be different in polyfilled browsers. Normally, elements selected by <content> would be children of the root element, but in polyfilled browsers they will be children of the <content>’s parent. You can certainly work around this – but it’s an additional thing to keep in mind as you write your component’s JS.
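For that CSS step, the guard might look something like this – a sketch that assumes the `WebComponents.ShadowCSS` object exposed by webcomponents.js:

```javascript
// Run a component's CSS through the polyfill's shimming function - but only
// when the Shadow DOM shim is actually in effect (native browsers skip this).
function shimComponentStyles(shadowRoot, elementName) {
  var wc = typeof window !== 'undefined' && window.WebComponents;
  if (wc && wc.ShadowCSS) {
    wc.ShadowCSS.shimStyling(shadowRoot, elementName);
  }
}
```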

Polyfill Sizes

webcomponents.js offers a “lite” file which includes Custom Elements, Templates, and Imports at 9KB, gzipped.

Adding Shadow DOM brings the polyfill total up to 30KB, gzipped (it’s 103KB minified, without gzipping, by the way. TJ VanToll has written about why it’s so large).

Performance & HTML Imports

An HTML Import for a component contains individual links to all of the component’s dependencies. This really isn’t consistent with today’s practice of concatenating JS and CSS files, to keep the number of HTTP requests down.

Imports were designed with HTTP/2 in mind (which basically makes it fine to skip concatenation by way of multiplexing). Unfortunately, not every hosting provider, CDN, or server supports it yet. Browser support for SPDY (the predecessor to HTTP/2’s multiplexing) isn’t too shabby, but there are still some issues (mostly older IE, and IE 11 on Windows 7 and earlier). If you’re ready to go all HTTP/2 – you’re in good shape to use HTML Imports.

If not – Polymer does have a tool called Vulcanize that concatenates Import files – but there are some gotchas:

  • Since all the Imports’ contents will get lumped together, you must ensure there are no duplicated element IDs (mainly used for templates) between Imports.
  • document.currentScript.ownerDocument will point to the importing page’s document, rather than the (original) import document.
  • Anything in the import other than templates, CSS, and JS will be removed. Which is probably fine if you’re just using Imports for Web Components.

Bottom line: The general point of Imports is to give you an easy way to get a component’s dependencies on a page. If you already have a solution for that – then you can just stick with it. You won’t have a good place to store HTML Templates, but that’s okay because ES6 template strings will make templates in JS much less painful.
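To illustrate – a multi-line ES6 template string (the component markup here is just an example) reads almost as cleanly as an HTML Template:

```javascript
// An ES6 template string standing in for an HTML Template,
// with interpolation built in:
function renderBadge(label) {
  return `
    <span class="badge">
      ${label}
    </span>
  `;
}

var html = renderBadge('New');
// html now contains the badge markup with "New" interpolated
```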


Accessibility

Accessibility with Web Components really isn’t any different than accessibility with any other kind of UI component.

Yes, Web Components can make it convenient to reimplement native elements, but you certainly don’t have to, and people already do that without “Web Components” when the native element doesn’t provide the flexibility they want. We’ve seen custom dropdowns, range sliders, radios, checkboxes, buttons, and more. Some accessible, and some not so much.

The same principle applies, whether it’s a “Web Component” or not: You must add the appropriate accessibility features to anything you build (whether you’re reimplementing a native element or not). And that often means more work than just adding an ARIA role. Here’s a handy checklist, courtesy of Steve Faulkner.

And don’t forget – while Custom Elements lets you create new elements:

    <crazy-li role="listitem">First</crazy-li>

It also allows you to extend native elements – saving you the trouble of reimplementing built-in accessibility features:

    <li is="crazy-li">First</li>

Bottom line: if you can write an accessible regular component, then you can write an accessible Web Component. Whatever you’re doing – make it accessible!

Progressive Enhancement

If you haven’t noticed yet – Web Components rely on JavaScript. Substantially. Imports are the only part that really works without JS. There are differing opinions on progressive enhancement, but let’s just examine a couple of approaches from a high level, so that you can make the best decision for your situation:

Component renders its own internal HTML.

This is kind of the assumed default with Web Components.


Pros:

  • Page markup is clean, understandable, and simple
  • Components’ internal HTML can be easily updated on all pages


Cons:

  • No JS = empty component
  • A synchronous-loading component (in the <head>) will slow down the page’s initial render time
  • An asynchronous-loading component will pop into existence after the initial render (Flash of Loaded Component – FOLC? Or FOCL?)

Server includes the component’s internal markup.

This still lets you take advantage of Custom Elements’ lifecycle callbacks and element functions, while maintaining progressive enhancement.


Pros:

  • Faster initial render
  • No FOLC
  • All markup is there if JS fails


Cons:

  • Page markup is messy again
  • Updating a component’s HTML means updating every page’s markup (unless you build something server-side to automate it)
  • Shadow DOM will probably need to sit this one out
  • The CSS needs to load in the <head>, of course. So if you’re using an Import, it’ll need to be synchronous (and maybe load its JS asynchronously, for performance).
    • But remember the Imports polyfill doesn’t do synchronous loading.
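One way to get both modes from a single component is a createdCallback that renders only when the server didn’t – a sketch (the element and its markup are hypothetical):

```javascript
// If the server already included the internal markup, leave it alone;
// otherwise, render it client-side. Behavior gets wired up either way.
var widgetProto = {
  createdCallback: function () {
    if (this.children.length === 0) {
      this.innerHTML = '<button class="widget-toggle">Open</button>';
    }
    // add event listeners, etc.
  }
};

// In a browser, you'd build this on HTMLElement.prototype and call
// document.registerElement('my-widget', {prototype: widgetProto});
```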

What to Do?

First off – the lack of consensus between the browser makers is concerning, all-around. Firefox and Safari (and probably IE) want to see changes made before they ship – but we don’t know what those changes will look like.

Having said that, here are my overall conclusions:

  • Custom Elements are helpful, and fairly easy to polyfill.
  • HTML Imports have too many caveats right now (particularly around performance), polyfill browser support isn’t ideal, and Firefox isn’t going to do it (I kind of doubt Safari will either). They may be right that it’s too soon to try to finalize this solution. In the meanwhile – our current solutions for including resources will have to do.
  • Polyfilled Shadow DOM has way too many caveats, and the polyfill is big (and especially slow on mobile devices). Shadow DOM will be useful someday when broad browser support is available.

So I’m left with Custom Elements (this was TJ’s conclusion as well). They’re the only spec that’s polyfillable on all the older platforms I’d like to support (IE 9, Android 4.3 and below, etc.), and I love the lifecycle callbacks and the “semantic” and clean way you use them on a page.

My suggestion is to try it out – build a component with Custom Elements, and see how it works in your environment. When Shadow DOM is ready, you can add it – so just keep that potential future state in mind as you build components.

A note on frameworks/libraries

Libraries like Polymer were developed to solve a number of common tasks related to Custom Elements (and the other specs). Things like easy attribute binding, smarter templating, and events.

However, they almost seem to violate one of the objectives of Web Components, which is reusability. If you want to build a component that can be reused in a variety of environments, keeping your dependencies to a minimum is usually a good thing. I’m not sure I’m comfortable with forcing another largeish (Polymer is ~37KB, gzipped) dependency on everyone who might want to use my component.

But if you want to develop components to be used in environments that you control, I’d feel much better about a library like Polymer – and it’d probably be fairly helpful. X-Tag is another alternative which provides a neat wrapper for creating Custom Elements (and they don’t even bother with Shadow DOM, which is fine by me). And if you really want to start writing CSS for Shadow DOM, you might want to take a look at Bosonic, which transpiles your CSS on the server (rather than in the browser, as Polymer does).

Meta: If I’ve gotten anything flat-out wrong, missed something, or if you have any suggestions, please leave a comment or let me know on Twitter!

Update 2/2015: Browser makers are having a hard time coming to an agreement on how Custom Elements should work under the hood. Particularly around the extension of native elements (using the is= attribute). It’s possible that they may decide to remove that feature so that they can ship a spec they all agree on. Anne van Kesteren warns that the updates “will likely be incompatible with what is out there today.” See here for more details. There’s also been a lot of Shadow DOM discussion and not a ton of agreement – here’s the current status.

A No-Nonsense Guide to Web Components, Part 1: The Specs

This is Part 1 of a 2-part series.


This is a crash course for getting familiar with Web Components. It strives to be concise, rather than exhaustive. There are a lot of other great resources available on the topic: check out HTML5 Rocks’ tutorials and this massive list of resources.

Why Web components?

First — they’re easy to manage – they can instantiate themselves and clean up after themselves.

Second — Simple, declarative usage means they’re easy to include and configure:

<link rel="import" href="my-dialog.htm">

<my-dialog heading="A Dialog">Lorem ipsum</my-dialog>

Third — they’re modular and reusable. A standard format for building, implementing, and interacting with UI components means that it’s easy to use them across different frameworks and environments.

Fourth — they can provide encapsulation for components’ styles and HTML. So they play nice with other styles and things happening on a page.

The Specs

Web Components are made up of 4 separate specifications. They go together nicely, but you don’t have to use them all – you can pick and choose based on your situation. Custom Elements and Shadow DOM are most important; HTML Imports and Templates are really just handy.

We’re sticking to native code for now (no polyfills), so be sure to use a browser that supports the spec when looking at the demos.

Custom Elements

Custom Elements are the heart of Web Components. This API lets you create new elements, add public methods to them, and gives you 4 lifecycle callbacks to manage them.

All you have to do is create an object to be used as your element’s prototype, add the callbacks to it, and then register it with a hyphenated name (all custom elements must have a hyphen – that’s how you know they’re not native elements).

// <my-element></my-element>

var myProto = Object.create(HTMLElement.prototype);

// Lifecycle callbacks
myProto.createdCallback = function() {
    // initialize, render templates, etc.
};
myProto.attachedCallback = function() {
    // called when element is inserted into the DOM
    // good place to add event listeners
};
myProto.detachedCallback = function() {
    // called when element is removed from the DOM
    // good place to remove event listeners
};
myProto.attributeChangedCallback = function(name, oldVal, newVal) {
    // make changes based on attribute changes
};

// Add a public method
myProto.doSomething = function() { ... };

document.registerElement('my-element', {prototype: myProto});

This is fantastic, because it means that your components can both self-initialize and self-destroy.

Let’s say you have a page where a user action can open up a new widget. This particular widget adds some keyboard listeners to the page to check for shortcuts. Today, you’d probably call a JS function to initialize the widget. And when the user closes it, you’d need to call a destroy function that would remove the event listeners. Because you don’t want those listeners to stick around – using up memory and continuing to take action on events.

With Custom Elements – you (or your framework) don’t have to worry about those details. Just insert the element into the DOM to initialize it, and when you remove it from the DOM, it can clean up after itself. Awesome!
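Sketched out, that widget’s listener management might look like this (the element name and handler are made up):

```javascript
// The element manages its own document-level keyboard listener:
// added when it enters the DOM, removed when it leaves.
var shortcutProto = {
  attachedCallback: function () {
    this._onKeydown = function (e) {
      // check for shortcut keys here
    };
    this.ownerDocument.addEventListener('keydown', this._onKeydown);
  },
  detachedCallback: function () {
    this.ownerDocument.removeEventListener('keydown', this._onKeydown);
  }
};

// In a browser, build this on HTMLElement.prototype and register it:
// document.registerElement('shortcut-widget', {prototype: shortcutProto});
```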

You can also extend native elements, like this:

// <input is="my-input">
document.registerElement('my-input', {
    prototype: myProto,
    extends: 'input'
});

Custom Elements Demo

Shadow DOM

Shadow DOM encapsulates elements. It allows you to hide a number of elements inside of an element – much like browsers do with their native UI elements (e.g. the controls in a <video>). This prevents other code on the page from accidentally messing with your element – and vice-versa.

var shadowRoot = element.createShadowRoot();

You can add CSS inside a shadow root, and it won’t select elements outside of the shadow root (Note that you can’t put <link> tags in a shadow root. To reference an external stylesheet, use @import in a <style> tag). To target the element holding the shadow root, just use the :host selector.

And conversely, CSS selectors on the page won’t select elements inside a shadow root (and that goes for querySelector too). But those elements will still inherit inheritable properties (like font-family).

If the page needs to style something that’s in a shadow root, it’s still possible, and the intention of your CSS will be very obvious (which is a good thing):

my-element::shadow p {
    /* selects <p> tags in shadow roots of <my-element>'s */
}
body /deep/ p {
    /* selects all <p> tags - in shadow roots or not */
}

There’s one other thing you can do with Shadow DOM: leave an element’s contents outside of the shadow root – so they’re still accessible to the page – but visually reflow them as if they were in the shadow root. Just add a <content> element, and it will reflow any children of the root element:


In that example – you’ll see the lines ordered as “two three one” but only the “one” is actually encapsulated in the shadow root.

Note that if there wasn’t a <content> element, the “two” and “three” paragraphs would not be visible (a shadow root hides the other children).
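As a rough sketch of that setup (the tag name is made up):

```html
<!-- On the page - these children stay in the light DOM: -->
<my-list>
  <p>two</p>
  <p>three</p>
</my-list>

<!-- Inside <my-list>'s shadow root: -->
<content></content>
<p>one</p>
```

The `<content>` element pulls “two” and “three” into the rendered output first, followed by the encapsulated “one”.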

Shadow DOM Demo

HTML Imports

Imports give you a single place to put the styles, scripts, and templates required for a component, so pages only need to include one thing.

<link rel="import" href="dialog.htm">

CSS in an import will apply to the page, and scripts will execute in the usual global context.

Other regular HTML elements in the import will not be visible on the page or accessible to things like querySelector. Though you can access anything in the import if you need to, like this: linkElement.import.querySelector('#template');

It’s very important to note that Imports will block the rendering of your page (same as plain JS and CSS resources do) – unless you add the async attribute (you can listen for the load event). Helpfully, scripts in an async import will still execute in order.
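For example, an async import plus its load event might look like this (the filename is made up):

```html
<link rel="import" href="my-component.htm" async id="my-import">
<script>
  document.querySelector('#my-import').addEventListener('load', function () {
    // the import has finished loading; its scripts have executed in order
  });
</script>
```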

HTML Imports Demo

HTML Templates

For storing HTML templates, you may have used strings in JS, or perhaps a <script> tag with a non-standard type. But now there’s a dedicated element for it:

<template id="MyTemplate">
  <div>Some stuff</div>
</template>

Using it is pretty straightforward:

var clone = document.importNode(templateNode.content, true); // 2nd parameter for "deep" clone
// now you can append the clone wherever you like

That’s it – nothing too fancy.

And when it comes to Web Components – Templates are pretty useless without Imports (where else would you put them?).

HTML Templates Demo

All Together Now

Now that we’ve looked at the pieces in isolation, let’s see how they look together. This is a super-basic example of how you might build a Web Component without any frameworks or polyfills.

Web Components Demo

Fun stuff, but don’t get too excited just yet. In Part 2 we’ll talk about using Web Components in real life.

CSS Naming Patterns Compared

A few weeks ago, I heard about AMCSS, a newly formalized alternative to using classes for CSS hooks. I was pretty hesitant at first – and still am to some extent – if nothing else because it just feels odd.

The goal of this post is to briefly compare the most common methods of organizing CSS module classes – and see how AMCSS compares.


Multi-Class Pattern

<span class="badge">A Badge</span>
<span class="badge badge--large">A Large Badge</span>

.badge {
    display: inline-block;
    background: #555;
    color: #eee;
}
.badge--large {
    font-size: 1.5em;
}

/* overriding */
.darkheader .badge {
    background: #777;
    color: #eee;
}
  • Good: overrides are easy.
  • Bad: HTML can feel a bit repetitive.


Single-Class Pattern

<span class="badge">A Badge</span>
<span class="badge--large">A Large Badge</span>

.badge,
.badge--large {
    display: inline-block;
    background: #555;
    color: #eee;
}
.badge--large {
    font-size: 1.5em;
}

/* overriding */
.darkheader .badge,
.darkheader .badge--large {
    background: #777;
    color: #eee;
}
  • Good: HTML is shorter.
  • Bad: If you need more than one modifier you’re back to multi-class anyhow.
  • Bad: Overrides are messy because they have to be updated when a new modifier is added. Unless…

Single-Class with Single Selector: |=

[class|="badge"] {
    display: inline-block;
    background: #555;
    color: #eee;
}
  • Good: You don’t have to list every modifier for the base styles.
  • Bad: Can’t add any extra classes to the base class (class="badge extra" will fail). *dealbreaker*

Single-Class with Single Selector: ^=

[class^="badge"] {
    display: inline-block;
    background: #555;
    color: #eee;
}
  • Good: You don’t have to list every modifier for the base styles.
  • Bad: It always must be the first class.
  • Bad: You can’t have any other classes that begin with your base class (it would also match class="badgeAlt").

So far

So far, for me it’s pretty clear that the multi-class pattern comes out on top. Yeah, the HTML is a bit longer – but that’s better than the downsides of the alternatives.


Attribute Modules

When compared to the previous options, Attribute Module CSS (AMCSS) gets you the best of all worlds.

<span css-badge>A Badge</span>
<span css-badge="large">A Large Badge</span>
[css-badge] {
    display: inline-block;
    background: #555;
    color: #eee;
}
[css-badge~="large"] {
    font-size: 1.5em;
}

/* overriding */
.darkheader [css-badge] {
    background: #aaa;
    color: #444;
}

The HTML is short, overrides are straightforward, and there aren’t any caveats about how the “classes” can be ordered or mixed with others.

However – there’s one drawback you might not notice at first: JavaScript. Modern browsers have classList, and everyone’s familiar with jQuery’s methods for managing classes. You can’t use any of it to toggle attribute values. The spec authors suggest that this is a feature, and that you might try regular classes for state changes.

But if you don’t like the idea of adding regular classes back into the mix, it’s easy enough to make a JS utility for manipulating attributes in the same way: Here’s a gist that should do the trick.
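That utility can be quite small – here’s a sketch (the function name is mine, not from the gist):

```javascript
// Toggle a space-separated value on an attribute, classList-style:
function toggleAttrValue(el, attr, value) {
  var values = (el.getAttribute(attr) || '').split(/\s+/).filter(Boolean);
  var index = values.indexOf(value);
  if (index === -1) {
    values.push(value);
  } else {
    values.splice(index, 1);
  }
  el.setAttribute(attr, values.join(' '));
}

// toggleAttrValue(el, 'css-badge', 'large');
// <span css-badge>  ->  <span css-badge="large">
```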



Two years ago, Linds Redding passed away, about one year after being diagnosed with inoperable esophageal cancer. He was an art director and motion graphics designer in the advertising industry. He was 52 years old. I didn’t know him, but I recently read some of his blog, including a post entitled, A Short Lesson in Perspective. It’s worth a read – especially if you work in a "creative" job. These are the words of a man who knew his time was up.

It turns out I didn’t actually like my old life nearly as much as I thought I did. I know this now because I occasionally catch up with my old colleagues and work-mates. They fall over each other to enthusiastically show me the latest project they’re working on. Ask my opinion. Proudly show off their technical prowess (which is not inconsiderable). I find myself glazing over but politely listen as they brag about who’s had the least sleep and the most takeaway food. “I haven’t seen my wife since January, I can’t feel my legs any more and I think I have scurvy but another three weeks and we’ll be done. It’s got to be done by then. The client’s going on holiday. What do I think?”

What do I think?

I think you’re all f-----g mad. Deranged. So disengaged from reality it’s not even funny. It’s a f-----g TV commercial. Nobody gives a shit.

The other thing I did, I now discover, was to convince myself that there was nothing else, absolutely nothing, I would rather be doing. That I had found my true calling in life, and that I was unbelievably lucky to be getting paid – most of the time – for something that I was passionate about, and would probably be doing in some form or other anyway.

Countless late nights and weekends, holidays, birthdays, school recitals and anniversary dinners were willingly sacrificed at the altar of some intangible but infinitely worthy higher cause. It would all be worth it in the long run.

So was it worth it?

Well of course not. It turns out it was just advertising. There was no higher calling. No ultimate prize. Just a lot of faded, yellowing newsprint, and old video cassettes in an obsolete format I can’t even play any more even if I was interested. Oh yes, and a lot of framed certificates and little gold statuettes. A shitload of empty Prozac boxes, wine bottles, a lot of grey hair and a tumor of indeterminate dimensions.

…Oh. And if you’re reading this while sitting in some darkened studio or edit suite agonizing over whether housewife A should pick up the soap powder with her left hand or her right, do yourself a favour. Power down. Lock up and go home and kiss your wife and kids.


How Double-Equals Works in JavaScript

tl;dr — Don’t use double-equals!

When asked about the difference between double and triple-equals, a lot of JS developers will tell you that == compares values and === compares values and types. In other words, 1 == "1" will be true, and 1 === "1" will be false.

That explanation is somewhat true, but it belies a much more complicated reality.

Some of the confusion comes from thinking that == is somehow related to truthiness, which is all about how variables get coerced into booleans — which happens in an if statement like this:

if (something) {

In that case — 0, the empty string "", null, undefined, and NaN will all return false. Non-zero numbers, non-empty strings, and arrays and objects (empty or not) will all return true. Most devs have a pretty good handle on this.

But… does that mean that a non-zero number, or all non-empty strings will double-equal true? This is where things can get confusing.

if ('a') {           // true
if ('a' == true) {   // false
if ('a' == false) {  // false

if (2) {             // true
if (2 == true) {     // false
if (1 == true) {     // true

Hopefully that makes it clear that truthiness has nothing to do with ==.

Remember: truthiness is about coercion into booleans. With double-equals, nothing will ever get coerced into a boolean. So what’s really going on?

The answer is the Abstract Equality Comparison Algorithm. If the two types differ, JS follows a particular process for converting them into the same type, so that it can compare them. If types don’t match somewhere along the way — the endgame will be numbers.

  • First, booleans are converted to numbers. True becomes 1 and false becomes 0.
  • Next, objects will be turned into strings using .toString() (unless you modified .valueOf() to return a primitive). So [1] becomes "1", [1,2] becomes "1,2", and both {...} and [{...}] become "[object Object]".
  • If a string and a number are left, the string is converted to a number (so any string with non-number characters will become NaN — which, by the way, never ever equals anything, including itself).
  • null and undefined equal themselves and each other – and nothing else.

That’s the gist of it, but you can check out the spec for more details.
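Tracing those steps against the earlier examples shows why they come out the way they do:

```javascript
// 'a' == true:  true -> 1, then 'a' -> NaN, and NaN == 1 is false
console.log(Number(true));  // 1
console.log(Number('a'));   // NaN
console.log('a' == true);   // false

// 1 == true:  true -> 1, and 1 == 1 is true
console.log(1 == true);     // true

// [1] == '1':  [1].toString() is '1', and '1' == '1' is true
console.log([1] == '1');    // true
```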

So — do you need to remember all these rules? Absolutely not. As Felix Geisendörfer puts it, “Programming is not about remembering stupid rules. Use the triple equality operator as it will work just as expected.”

That’s why almost all JS style guides recommend using only ===. Some allow for an exception when checking for null or undefined (by using == null). But some folks would argue against even that.
