A No-Nonsense Guide to Web Components, Part 2: Practical Use
This is Part 2 of a 3-part series.
- Part 1: The Specs
- Part 2: Practical Use (Browser Support and Other Challenges)
- Part 3: Web Components in Angular, Ember, and React
In Part 1 we learned how to code pure Web Components. But of course, adopting new web technologies is rarely painless, and Web Components are especially complicated. In this post, we’ll look at the current state of browsers, polyfills, performance, accessibility, progressive enhancement, and options for practical implementations.
For the latest status, check out Are We Componentized Yet and caniuse. But here’s the overall situation as of December 2014:
All browsers (except for IE) have implemented HTML Templates. For the remaining specs:
Chrome completed and shipped all specs (enabled by default) by Chrome 36. Opera too.
Firefox has implemented Custom Elements and Shadow DOM to some degree, but they are disabled by default behind a preference flag.
Firefox will be shipping Custom Elements and Shadow DOM when they’re satisfied and confident in the specs. They will not be shipping HTML Imports, as they want to see what the landscape looks like after ES6 modules are shipped and utilized (12/15/2014).
Safari removed Google’s Web Components code (in 2/2014) leftover from before Blink forked, and has yet to begin any new implementations. While not opposed to implementing something in the future, they’re clearly not satisfied with the current specs.
“There is no way I’m implementing the version [of Custom Elements] that’s incompatible with ES6 classes” (11/1/2014)
Internet Explorer's status for all specs has been "under consideration" for about 6 months (as of 12/2014). They've been pulling out some great stuff for Spartan, but so far it seems safe to say that Web Components won't be there. However – they have said that they're keeping it in mind as they build Spartan, which is pretty encouraging.
What’s going on here?
When Chrome shipped Web Components – without a flag – they kind of ticked off all the other browser makers, who felt that the specs weren’t done baking yet and that their feedback hadn’t been fully considered. They’re fairly diplomatic about the Chrome situation – routinely saying things like, “mail me privately for my feelings on the matter,” because their feelings probably involve a number of expletives.
So Firefox and Safari aren’t shipping until they see the adjustments to the spec they want. Will those adjustments be possible when Chrome has already shipped? Will everyone get on the same page anytime soon? How’s it all going to end? I wish I had answers to these questions. Web standards are complicated things. Tune in next year (or the next) to find out!
In Part 1, we looked at an example of a pure, native Web Component. If you were hoping that you could write code like that, plug in a polyfill, and be good to go – you better sit down, because I’ve got some bad news for you:
It ain’t gonna happen. Some parts of Web Components simply aren’t possible/reasonable to polyfill.
But let’s examine each spec individually.
webcomponents.js (which was previously part of Polymer as platform.js) is definitely the biggest game in town when it comes to polyfills, so that’s mainly what we’ll be talking about.
Custom Elements (native support)
Custom Elements is relatively easy to polyfill – down to IE 9 and Android 2.2, if you use document-register-element for the polyfill (and it’s only 3KB gzipped too!). The webcomponents.js polyfill works down to Android 4.1.
Caveat: No support for the `:unresolved` pseudo-class.
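As a reminder of what's being polyfilled here, registering an element with the current (v0) API looks like this – the element name and script path are made up for illustration:

```html
<!-- Load a Custom Elements polyfill first (path is hypothetical). -->
<script src="/js/document-register-element.js"></script>
<script>
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    this.textContent = 'Upgraded!';
  };
  document.registerElement('x-note', { prototype: proto });
</script>
<x-note></x-note>
```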
HTML Templates (native support)
The webcomponents.js polyfill works down to IE 9 and Android 2 (and isn’t needed in other modern browsers).
Caveat: Polyfilled templates aren’t truly inert – resources like images will still download.
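To make that caveat concrete:

```html
<template id="user-row">
  <!-- In a native browser, nothing in here runs or loads until the template
       is stamped into the document. Under the polyfill, this image starts
       downloading immediately. (Image path is hypothetical.) -->
  <img src="/img/avatar.png" alt="">
</template>
```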
HTML Imports (native support)
The webcomponents.js polyfill works down to IE 9 and Android 4.4 (some things like CSS references work down to Android 4.1, but there’s other bugginess), and in other modern browsers.
Caveat #1: Polyfilled imports load asynchronously, even if you didn't add the `async` attribute.
Caveat #2: They load via XHR – which isn’t great for performance.
`document.currentScript` – which is needed in the import to access templates (or other elements) in the import – can't be polyfilled. It is, however, shimmed as `document._currentScript`. So, to write code that works in both native and polyfilled browsers, you must do this:
`document._currentScript || document.currentScript;`
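For example, here's a tiny helper that hides the difference (a sketch of my own – `getImportDoc` isn't a standard function, and the `doc` parameter exists only so it can be exercised outside a browser):

```javascript
// Resolve the import's document whether the browser is native or polyfilled.
function getImportDoc(doc) {
  var script = doc._currentScript || doc.currentScript;
  return script ? script.ownerDocument : doc;
}
```

Inside an import, something like `getImportDoc(document).querySelector('template')` would then grab the import's own template in both cases.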
Shadow DOM (native support)
It is not reasonable/possible to fully polyfill Shadow DOM, thanks to its fancy encapsulation features. You just can't fake the behavior of a shadow root. The shim instead approximates it – for example, by moving elements selected by a `<content>` tag to where the shadow root would be – which leads to a pile of caveats:
- CSS rules in the page will still apply to elements in the (fake) shadow root – there's no real style encapsulation.
- To use `::shadow` and `/deep/` CSS rules in the page, you must add the `shim-shadowdom` attribute to the relevant `<style>` tag.
- Even then, `::shadow` rules will behave like `/deep/` rules.
- To include CSS in a shadow root, you have to add some JS to the component to check whether the Shadow DOM shim is in effect – and if so, grab the CSS text, run it through a shimming function, add the resulting CSS text to the document, and delete the original CSS.
- You don’t have to do all this if you’re using Polymer and its wrapper/syntax. But native syntax doesn’t cut it.
- `::content` rules in the shadow root will apply to everything in the shadow root – not just children of `<content>`.
- When using `<content>`, the DOM hierarchy will be different in polyfilled browsers. Normally, elements selected by `<content>` would be children of the root element, but in polyfilled browsers they will be children of the `<content>`'s parent. You can certainly work around this – but it's an additional thing to keep in mind as you write your component's JS.
webcomponents.js offers a "lite" build covering Custom Elements, Templates, and Imports at 9KB, gzipped.
Adding Shadow DOM brings the polyfill total up to 30KB, gzipped (it’s 103KB minified, without gzipping, by the way. TJ VanToll has written about why it’s so large).
Performance & HTML Imports
An HTML Import for a component contains individual links to all of the component’s dependencies. This really isn’t consistent with today’s practice of concatenating JS and CSS files, to keep the number of HTTP requests down.
Imports were designed with HTTP/2 in mind (which basically makes it fine to skip concatenation, by way of multiplexing). Unfortunately, not every hosting provider, CDN, or server supports it yet. Browser support for SPDY (the predecessor to HTTP/2's multiplexing) isn't too shabby, but there are still some issues (mostly older IE, and IE 11 on Windows 7 and earlier). If you're ready to go all HTTP/2 – you're in good shape to use HTML Imports.
If not – Polymer does have a tool called Vulcanize that concatenates Import files – but there are some gotchas:
- Since all the Imports’ contents will get lumped together, you must ensure there are no duplicated element IDs (mainly used for templates) between Imports.
- `document.currentScript.ownerDocument` will point to the importing page's document, rather than the (original) import document.
- Anything in the import other than templates, CSS, and JS will be removed. Which is probably fine if you’re just using Imports for Web Components.
Bottom line: The general point of Imports is to give you an easy way to get a component’s dependencies on a page. If you already have a solution for that – then you can just stick with it. You won’t have a good place to store HTML Templates, but that’s okay because ES6 template strings will make templates in JS much less painful.
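For instance, once template literals are available to you, a component's markup can live comfortably in JS (names here are made up for illustration):

```javascript
// A component's markup as an ES6 template literal – multi-line strings and
// interpolation, with no <template> element required.
function userCardTemplate(user) {
  return `
    <div class="user-card">
      <h2>${user.name}</h2>
      <p>${user.role}</p>
    </div>`;
}
```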
Accessibility with Web Components really isn’t any different than accessibility with any other kind of UI components.
Yes, Web Components can make it convenient to reimplement native elements, but you certainly don’t have to, and people already do that without “Web Components” when the native element doesn’t provide the flexibility they want. We’ve seen custom dropdowns, range sliders, radios, checkboxes, buttons, and more. Some accessible, and some not so much.
The same principle applies, whether it’s a “Web Component” or not: You must add the appropriate accessibility features to anything you build (whether you’re reimplementing a native element or not). And that often means more work than just adding an ARIA role. Here’s a handy checklist, courtesy of Steve Faulkner.
And don’t forget – while Custom Elements lets you create new elements:
<ul> <crazy-li role="listitem">First</crazy-li> </ul>
It also allows you to extend native elements – saving you the trouble of reimplementing built-in accessibility features:
<ul> <li is="crazy-li">First</li> </ul>
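For reference, the type-extension form from that example is registered with the `extends` option (a sketch using the v0 API):

```html
<script>
  var proto = Object.create(HTMLLIElement.prototype);
  proto.createdCallback = function () {
    // The built-in <li> semantics (and accessibility) come for free.
    this.classList.add('crazy');
  };
  document.registerElement('crazy-li', { prototype: proto, extends: 'li' });
</script>
<ul>
  <li is="crazy-li">First</li>
</ul>
```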
Update 4/2015: Extending with `is=` will most likely be removed from the spec. It may return in a different form, in a future version.
Bottom line: if you can write an accessible regular component, then you can write an accessible Web Component. Whatever you’re doing – make it accessible!
Component renders its own internal HTML.
This is kind of the assumed default with Web Components.
- Page markup is clean, understandable, and simple
- Components’ internal HTML can be easily updated on all pages
- No JS = empty component
- A synchronous-loading component (in the `<head>`) will slow down the page's initial render time
- An asynchronous-loading component will pop into existence after the initial render (Flash of Loaded Component – FOLC? Or FOCL?)
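For context, the synchronous/asynchronous distinction comes down to the `async` attribute on the import link (the filename here is hypothetical):

```html
<!-- Blocks rendering until loaded (the default, without async): -->
<link rel="import" href="my-component.html">

<!-- Loads without blocking – risking the FOLC described above: -->
<link rel="import" href="my-component.html" async>
```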
Server includes the component’s internal markup.
This still lets you take advantage of Custom Elements’ lifecycle callbacks and element functions, while maintaining progressive enhancement.
- Faster initial render
- No FOLC
- All markup is there if JS fails
- Page markup is messy again
- Updating a component’s HTML means updating every page’s markup (unless you build something server-side to automate it)
- Shadow DOM will probably need to sit this one out
- The CSS needs to load in the `<head>`, of course. So if you're using an Import, it'll need to be synchronous (and maybe load its JS asynchronously, for performance).
- But remember the Imports polyfill doesn’t do synchronous loading.
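A sketch of this option – all element, class, and file names here are made up:

```html
<!-- The server renders the component's internals up front. -->
<user-card>
  <img src="/avatars/ada.png" alt="">
  <h2>Ada Lovelace</h2>
</user-card>

<script>
  // The Custom Element upgrades the existing markup instead of rendering it.
  var proto = Object.create(HTMLElement.prototype);
  proto.createdCallback = function () {
    // The internal HTML is already there (even if this JS never runs),
    // so we only wire up behavior.
    this.addEventListener('click', function () {
      this.classList.toggle('expanded');
    });
  };
  document.registerElement('user-card', { prototype: proto });
</script>
```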
What to Do?
First off – the lack of consensus between the browser makers is concerning, all-around. Firefox and Safari (and probably IE) want to see changes made before they ship – but we don’t know what those changes will look like.
Having said that, here are my overall conclusions:
- Custom Elements are helpful, and fairly easy to polyfill.
- HTML Imports have too many caveats right now (particularly around performance), polyfill browser support isn’t ideal, and Firefox isn’t going to do it (I kind of doubt Safari will either). They may be right that it’s too soon to try to finalize this solution. In the meanwhile – our current solutions for including resources will have to do.
- Polyfilled Shadow DOM has way too many caveats, and the polyfill is big (and especially slow on mobile devices). Shadow DOM will be useful someday when broad browser support is available.
So I’m left with Custom Elements (this was TJ’s conclusion as well). It’s the only spec that’s polyfillable on all the older platforms I’d like to support (IE 9, Android 4.3 and below, etc.) and I love the public API, the lifecycle callbacks, and the “semantic” and clean way you use them on a page.
My suggestion is to give Custom Elements a try, and see if they make life easier for you. However, if you're going to put Custom Elements into production – be sure to use a wrapper around the native API so that you can easily make changes if they become necessary. And it's probably best to avoid extending native elements (using the `is=` attribute), since that feature may not survive in its current form.
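As a sketch of the wrapper idea (the names here are mine, not from any library): funnel every registration through one function, so that if the API changes – or you swap polyfills – only one file needs updating. The `doc` and `base` parameters stand in for the globals `document` and `HTMLElement.prototype` purely so the helper can be exercised outside a browser.

```javascript
// Hypothetical wrapper around the v0 Custom Elements API.
function defineElement(doc, base, name, lifecycle) {
  if (!doc.registerElement) {
    throw new Error('No Custom Elements support - load a polyfill first.');
  }
  // Build the prototype from the supplied base and lifecycle callbacks.
  var proto = Object.create(base);
  Object.keys(lifecycle || {}).forEach(function (key) {
    proto[key] = lifecycle[key];
  });
  return doc.registerElement(name, { prototype: proto });
}
```

In a page you'd call it as `defineElement(document, HTMLElement.prototype, 'x-thing', { createdCallback: function () { /* ... */ } })`.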
To learn about using Custom Elements in JS frameworks, head over to Part 3.
A note on frameworks/libraries
Libraries like Polymer were developed to solve a number of common tasks related to Custom Elements (and the other specs). Things like easy attribute binding, smarter templating, and events.
However, they almost seem to violate one of the objectives of Web Components, which is reusability. If you want to build a component that can be reused in a variety of environments, keeping your dependencies to a minimum is usually a good thing. I’m not sure I’m comfortable with forcing another largeish (Polymer is ~37KB, gzipped) dependency on everyone who might want to use my component.
But if you want to develop components to be used in environments that you control, I’d feel much better about a library like Polymer – and it’d probably be fairly helpful. X-Tag is another alternative which provides a neat wrapper for creating Custom Elements (and they don’t even bother with Shadow DOM, which is fine by me). And if you really want to start writing CSS for Shadow DOM, you might want to take a look at Bosonic, which transpiles your CSS on the server (rather than in the browser, as Polymer does).
Meta: If I’ve gotten anything flat-out wrong, missed something, or if you have any suggestions, please leave a comment or let me know on Twitter!
Update 2/2015: Browser makers are having a hard time coming to an agreement on how Custom Elements should work under the hood – particularly around the extension of native elements (using the `is=` attribute). It's possible that they may decide to remove that feature so that they can ship a spec they all agree on. Anne van Kesteren warns that the updates "will likely be incompatible with what is out there today." See here for more details. There's also been a lot of Shadow DOM discussion and not a ton of agreement – here's the current status.