I Am Not Charles

W2E Day 1: HTML5 with Alex Russell

Posted in the Living Room by Joe on September 27, 2010

First impression: some guys behind me talking about which smartphones they should get for testing. “Not a Blackberry, though – I can’t stand doing web development on the Blackberry browser.” Sob. We were vindicated when Alex listed the browsers that could and could not handle HTML5 features, and carefully split BB into “Torch+” and “pre-Torch” in each list.

Second Impression: wow, dense. This was a dense, dense, dense, dense talk.

The first part of this talk was basically, “A browser engineer tells you how to build web pages.” Actually Alex’s epigram for it was more illustrative: “Think about your web pages the way a browser thinks about web pages.” About time! You wouldn’t believe how many optimizations we have to bypass because it would break some ridiculous crufty feature that page authors nevertheless make heavy use of. Like Javascript.

Many web developers just don’t realize how many side effects common techniques have. The most obvious is that Javascript is executed as soon as it’s parsed, and it blocks page loading until it’s finished. But Alex went through some more obscure and situation-dependent ones: for example, reading computed style info from Javascript requires the engine to do a layout pass to compute that info, which can be very slow, unless it’s already been done. So try to delay Javascript like this until after the page has already been laid out and rendered once (good luck reliably figuring out when that is). You especially want to avoid Javascript that alters the page structure, forcing it to be laid out again (unfortunately, this is the most useful thing for Javascript to do).
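For example (my own sketch, not from the talk): if you need to read sizes from a bunch of elements and then resize them all, do all the reads before all the writes, so the engine has to do at most one layout pass instead of one per element:

```javascript
// Equalize heights: read phase first (forces at most one layout),
// then write phase (invalidates layout, but nothing reads it back).
function equalizeHeights(elements) {
  var tallest = 0;
  for (var i = 0; i < elements.length; i++) {
    // Reading offsetHeight forces a layout if one is pending.
    if (elements[i].offsetHeight > tallest) {
      tallest = elements[i].offsetHeight;
    }
  }
  for (var i = 0; i < elements.length; i++) {
    // Writing styles only marks layout dirty; no read follows, so
    // the engine can batch the work until the next paint.
    elements[i].style.height = tallest + 'px';
  }
  return tallest;
}
```

Interleaving the reads and writes instead would force a fresh layout on every iteration.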

Another extremely important thing that’s often misunderstood is just how much DNS sucks. Because DNS is a giant bottleneck, and for other reasons that Alex went into at length but I won’t because this is long enough, opening an http connection is heavyweight. So for good performance you want to avoid downloading lots of small resources: put as much as you can into each resource, and try to prioritize so that the important ones (CSS that applies to every element on the page) don’t have to wait on the tiny images that form the rounded corners of your logo.

Hopefully the talk went into enough detail on this to give the audience a good conception of these issues. But I think it might have been a bit heavy on the browser developer jargon – it included an overview of webkit render trees (!) to explain how the various bits of timing interact.

Fortunately the take-away for general design can be boiled down to some simple points: put your CSS first; put your Javascript last; and use the defer and async script attributes as much as possible. (But don’t take my word for it, check out the slides, because it’s important to understand why this helps in order to generalize it.)
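In skeleton form, that advice looks something like this (the file names are placeholders, not from the slides):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- CSS first: the engine needs it before it can lay anything out -->
  <link rel="stylesheet" href="site.css">
</head>
<body>
  <p>Content parses and renders without waiting on any script.</p>
  <!-- Scripts last, and marked so they don't block parsing:
       async runs whenever its download finishes;
       defer runs, in order, after the document is parsed -->
  <script src="analytics.js" async></script>
  <script src="enhancements.js" defer></script>
</body>
</html>
```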

I spent the opening half of the presentation thinking, “It’s great how he’s showing all these web developers the complexities involved in the browser!” But then when he started talking about individual CSS properties, I started to zone out. “I can just look these up in the spec when I need them.” I wonder if the web developers in the audience had the opposite reaction?

Speaking of which, there wasn’t a lot of audience reaction. He kept asking for questions and getting none. I think it was more of a, “This is really cool but I’ll clearly need to play around with it myself to get a feel for it, so I don’t have anything to ask yet,” kind of silence, and less of a, “I have no idea what’s going on,” silence, but it’s hard to be sure because the second half of the talk flew through a whole bunch of new HTML5 features without pausing on any of them. Too densely packed to linger. So let’s repeat the whirlwind tour:


HTML has some new and shiny things:

New, cool input types that let you remove validation code (they have builtin constraints) and avoid writing complex user interaction objects (they have builtin themes and behaviour that’s more complete than traditional objects).

Canvas – lets you draw on the page with inline code instead of downloading tons of images which fight for network priority.

ContentEditable – turns any HTML element into a rich text editor.
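The input types above can be sketched in a few lines (the field names are my own example, not from the talk):

```html
<form>
  <!-- The browser validates the format and shows its own error UI -->
  <input type="email" name="address" required>
  <!-- A slider with builtin constraints; no custom widget code -->
  <input type="range" name="volume" min="0" max="10" step="1">
  <!-- Browsers that support it can show a real date picker -->
  <input type="date" name="arrival">
  <input type="submit">
</form>
```

Browsers that don’t recognize a type fall back to a plain text field, so this degrades gracefully.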

CSS has more new and shiny things:

Gradients, shadows and rounded borders: again, less use of images.

Layered backgrounds so you can draw your background with a canvas instead of, again, an image.

Transitions for hardware accelerated page effects without writing and parsing Javascript.

Transforms for morphing the display of items (although they’re not available everywhere, so use judiciously).

Animations for even more elaborate (and hardware accelerated!) page effects, although they are probably not finalized as the syntax is still not very clean compared to transitions.
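A sketch of what a transition looks like (class names are mine, and in 2010 you still need the -webkit- prefix):

```css
/* Fade and slide a panel in by toggling one class; the engine
   animates the change, potentially on the GPU. */
.panel {
  opacity: 0;
  -webkit-transform: translate(0, 20px);
  -webkit-transition: opacity 0.3s ease-in, -webkit-transform 0.3s ease-in;
}
.panel.visible {
  opacity: 1;
  -webkit-transform: translate(0, 0);
}
```

The only Javascript needed is one line to add or remove the “visible” class.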

Javascript and the DOM have new things that are less shiny, but useful:

First, let me digress to talk a bit about jQuery. It’s a horrible anchor around the neck of every web site. It’s a gigantic library that must be downloaded and parsed (synchronously) before even beginning to parse the rest of the page, and because everybody uses a slightly different version of it served from a different URL they can’t even share a cache of the thing! The only thing worse than using jQuery is not using it: can you imagine the bugs that would be caused if everyone tried to duplicate its functionality by hand?

So replacing jQuery with something making better use of inbuilt browser features is a worthy goal, because it would fix both of the problems above: the synchronous download-and-parse cost, and the cache fragmentation.

So when Alex says, “These new features let you do everything jQuery does in 30 lines of code!” I prick up my ears. Not being a web developer, I don’t know what jQuery actually does, so I’m probably being naïve, but here they are:

querySelectorAll is a new DOM method that ties in with the CSS declarative model: give it a string and it will return all the nodes which match that CSS selector. As I mentioned yesterday, CSS is not my thing so the selectors Alex demonstrated were all gibberish to me, but the concept seems pretty useful.
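Not his actual code, but the core of the idea is something like this toy helper:

```javascript
// A selector engine in a few lines: the heavy lifting is now builtin.
// querySelectorAll returns a static NodeList; copy it into a real
// Array so the usual array methods work on it.
function $(selector, scope) {
  var nodes = (scope || document).querySelectorAll(selector);
  return Array.prototype.slice.call(nodes);
}

// In the browser:
//   $('ul.menu > li a.active')  returns an Array of matching elements
```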

defineProperty is a new Javascript method that lets you add functions to any prototype, meaning if you stick something on the root of an object tree, it’s automatically available to every derived object (what most languages would call a “subclass”). This has always been possible, but there were side effects that made it dangerous to use in libraries; defineProperty does it cleanly.
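A sketch of the difference (the Widget type is a made-up example):

```javascript
// A hypothetical type for illustration.
function Widget(name) { this.name = name; }

// The old way: plain assignment works, but the method is enumerable,
// so it shows up in every for-in loop over every Widget. That's the
// side effect that made this dangerous for libraries to do.
Widget.prototype.describeOld = function () { return 'widget ' + this.name; };

// The new way: properties added with defineProperty default to
// enumerable: false, so they stay out of for-in loops.
Object.defineProperty(Widget.prototype, 'describe', {
  value: function () { return 'widget ' + this.name; }
});

var w = new Widget('gadget');
var seen = [];
for (var key in w) { seen.push(key); }
// seen contains 'name' and 'describeOld', but not 'describe'
```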

Some more Javascript goodness:

WebStorage lets you store persistent key-value pairs. Traditionally you would use cookies for this, but cookies are terrible for performance: they’re included in the headers sent with every request and response, and the headers can’t even be compressed. And incidentally, the cookie “spec” is a joke (it is fundamentally ambiguous, so it is not possible to write an algorithm to parse cookies correctly) so anything that gets rid of them is A-OK by me!
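A minimal wrapper of my own devising (WebStorage stores strings, so structured values need JSON-encoding on the way in and out):

```javascript
// Wrap any object with a setItem/getItem interface, such as
// window.localStorage, so it can hold structured values.
function makeStore(backend) {
  return {
    set: function (key, value) {
      backend.setItem(key, JSON.stringify(value));
    },
    get: function (key) {
      var raw = backend.getItem(key);
      return raw === null ? undefined : JSON.parse(raw);
    }
  };
}

// In the browser: var prefs = makeStore(window.localStorage);
// prefs.set('theme', { dark: true }) persists across visits, and
// never rides along in an http header the way a cookie would.
```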

Drag and drop is traditionally done with fragile and inefficient Javascript; now it’s built into the browser.

And, uh, some more, but by this point my notes are becoming illegible because my God that’s a lot of stuff. And there are a lot of intriguing points brought up about all of these, which this margin is too small to contain, so if I have time I’ll make more in-depth followup posts on some of them.

Of course, you can only use these features on bleeding-edge browsers – so, well, basically all of them except IE 6 to 8. Which still make up something like 30% of the market. Alex handwaved this a bit by talking up his Chrome Frame product – you can just ask your users to install an IE extension instead of a whole new browser – but it sounds pretty tenuous. I have no better solution for this, though – adding fallbacks defeats the whole purpose of using new builtins to make pages smaller and simpler.

Then there was a lab session, which Alex unsurprisingly ran out of time to show in detail. It showed a small sample app using these techniques, including a wrapper library called h5.js which makes using some features simpler, works around design flaws in others, and includes an async downloader for resources in less than 1K (gzipped). Source code is available here.

All in all, a very impressive presentation, covering lots of exciting ground, but could have used a little less scope and more depth. Fortunately, the afternoon’s session provided a very nice complement.

(While I’m attending this conference on behalf of Research In Motion, this blog and its contents are my personal opinions and do not represent the views of my employers, colleagues, family members, or pets.)

Web 2.0 Expo

Posted in the Foyer by Joe on September 27, 2010

I’m not much of a Web 2.0 kind of guy. Facebook confuses me, I don’t see the point of Twitter, and these are the only two rich web apps I can think of off the top of my head. (Ok, that’s not true, but it’s punchier than, “I’ve probably never heard of most of the stuff that you Web 2.0 type people get excited about.”)

Yet I’m on my way to the Web 2.0 Expo in NYC. What am I doing here?

Well, first off I’m representing my employer, whatever that means. Schmoozing, I guess.

I’m also going to be going to a lot of web development (as opposed to business or – ick – marketing) talks. I’m not a web developer, I’m a WebKit developer: I know quite a bit about the underpinnings of web technology, but I’m not so clear about how it’s used in the real world. So I want to find out things like: which of the cool new HTML5 features are web developers most excited about? Which will be seeing heavy use starting now, and which are still seen as not mature enough to rely on? What do people want to do that can’t be done with cross-platform web technologies yet, and how can we provide it? Does anybody actually care about Flash? And what do web developers want to see from the Blackberry, specifically, to make it a killer platform?

And rather than learn about all this stuff and then file it away somewhere, I’m going to blog about it as I go. That will let me feel like I’m being useful and not just schmoozing on my employer’s dime. Hey, it’s a blogging conference, and I have a handy blog sitting here unused. Seems like a good time to dust it off!

First off tomorrow are two workshops: “HTML5: Developing for the Desktop and Mobile”, by Google’s Alex Russell, and probably “Building Cross-Platform Mobile Apps”, by Jonathan Stark, which sounds pretty basic from the description, but I’m probably in need of some remedial CSS3. (It can’t be any harder than template metaprogramming, right?) More importantly right now, it’ll help me decide if I should go to Jonathan’s other talk, which conflicts with a bunch of other stuff I’d like to see.

(While I’m attending this conference on behalf of Research In Motion, this blog and its contents are my personal opinions and do not represent the views of my employers. Nutella is a registered trademark of Ferrero.)

Retiring a link

Posted in the Foyer by Joe on August 30, 2009

One of the links over on the right that nobody ever clicks on is “Torch Mobile”, my employer. It’s time to retire that link, and replace it with something from Research In Motion – my employer.

How’s Gimp doing, part 2

Posted in the Living Room by Joe on July 30, 2009

How’s Gimp doing these days?

Posted in the Living Room by Joe on July 22, 2009

I’ve always heard two opinions about Gimp: “It’s just like Photoshop, an incredibly complex and expensive piece of professional software, but it’s free! Behold the power of Open Source!” and “Gimp sucks. It’s better than nothing, I guess, but if you’re using Gimp instead of Photoshop you’re a chump.”

Of course, not having any need for a pro-quality image editor due to lack of artistic talent, it’s been 5 years since I paid any attention to this. My sister gives an update on how Gimp stacks up today.

O(log n) continues to beat O(n^2)

Posted in the Kitchen,the Utility Room by Joe on March 3, 2009

Two items:

1. I feel pretty bad about the several months of silence. I swore this time I wouldn’t start a nice professional journal and then let it languish. Oops.

2. I also didn’t want to post links with no original content but, well, this is a pretty cool result and I have nothing more to add to it. So here you go.

(Brief summary: my coworkers found a way to significantly speed up image tiling in Qt, using a simple algorithm that’s easily applicable to other toolkits and environments. Briefer summary: It makes painting faster, which is always good.)

Fresh data

Posted in the Living Room by Joe on September 26, 2008

So what runs better – OpenOffice or KOffice? Anecdotal evidence says KOffice is slimmer but OpenOffice works better, but it’s very hard to find actual hard numbers on memory and resource usage. And when you do find somebody who’s done a comparison, it invariably dates back to 1995 and it’s totally obsolete.

How about Evolution vs. KMail? Evolution’s much more popular, but there are a few blog posts that offhandedly mention it’s a huge hog compared to Kontact. Which is odd, because there are actual filed bug reports that claim Kontact leaks memory like a sieve. How can I find the truth without installing both and painstakingly testing them in a wide variety of common use cases?

Well, we know that Evolution’s more popular because of the Debian Popularity Contest, where Evolution has a commanding lead. (The “Vote” column is the useful one.) popularity-contest is a Debian (and Ubuntu) package which automatically collects stats on what software is installed and how often it’s used, and I urge everyone to install and enable it now so that it can give a more accurate picture of what’s popular and where developer resources are best spent.

It would be great to have a similar program that automatically collects resource usage statistics for all running programs. Suddenly we’ll have actual hard, fresh data to settle arguments about whether your favourite app is a bloated hog or no worse than everything else! This info would be invaluable for both developers and packagers.

Your clipboard isn’t broken, just confused

Posted in the Kitchen by Joe on July 4, 2008

This is kind of trivial, but it’s good to have it documented somewhere.

If you ever have to work with the Windows clipboard API directly (and it’s not too bad, as Windows APIs go) this might save you a lot of time: don’t try and step through any clipboard-related code in the debugger.

I was trying to figure out why pasting an image into my app didn’t work, so obviously the first thing to check is that the data is actually being retrieved from the clipboard correctly. I suspected it wasn’t being saved in the format I thought it was.

BOOL clipboardOpen = ::OpenClipboard(NULL);
if (clipboardOpen) {
    qDebug() << "Clipboard open";
} else {
    qDebug() << "Couldn't open clipboard: error" << ::GetLastError();
}

UINT format = 0;
while ((format = ::EnumClipboardFormats(format)) != 0) {
    qDebug() << "Clipboard contains format" << format;
}

qDebug() << "Last EnumClipboardFormat status was" << ::GetLastError();

MSDN is pretty clear on how these two functions work: OpenClipboard returns true if the clipboard’s open, and EnumClipboardFormats returns 0 when there’s an error (in which case GetLastError returns the error code) or if it’s out of formats (in which case GetLastError returns 0, SUCCESS).

Since I was too lazy to actually hook up the Qt debug logger I was just stepping through this in the Visual Studio debugger to examine the results. And the results were basically:

Clipboard open
Last EnumClipboardFormat status was 1418

Since my app is emphatically not multithreaded, I was pretty baffled about how “Clipboard open” could be immediately followed by 1418: ERROR_CLIPBOARD_NOT_OPEN. I thought my paste problem was because my clipboard was seriously broken (on any OS but Windows I’d have thought something that fundamental was impossible, but on Windows I never assume anything). Took me ages to realize that it worked fine if it wasn’t in the debugger.

The problem, I think, is that when you pass NULL to OpenClipboard it associates the clipboard with the current process, and when you’re stepping through in the debugger it’s switching back and forth between the application and Visual Studio. Somehow the system is getting confused about which process has the clipboard open. This example seemed to work if you pass an HWND to associate it with a specific window instead of a process, but I wouldn’t want to place any bets that more complicated code would keep working. On Windows I never assume anything.

Joel gives bad advice: details at 11:00

Posted in the Living Room by Joe on July 2, 2008

Am I blind, or does Joel on Software not allow comments any more? (Or did it ever?) Well, I guess I’ll respond to his latest article on disabling unusable menu items right here, even though that means he’ll never see it.

Don’t do this… Instead, leave the menu item enabled. If there’s some reason you can’t complete the action, the menu item can display a message telling the user why.

That would be incredibly annoying – I’d be constantly clicking on items I expect to work and getting a popup saying, “Sorry.” In an app I use often, I use the menus mostly by feel, so I’m not going to notice that now there’s a message saying, “Disabled because the moon is in the fourth quarter and you haven’t paid your phone bill.” Or if I do it’ll be in the instant before my finger clicks the button, so now I’ll just have time to realize I screwed up before it pops up the Box of Aggravation.

A better thing to do would be to disable the option, so if I click on it nothing will happen instead of the app yelling at me, and have feedback on why it’s disabled.

I’m a greasy little monkey

Posted in the Workshop by Joe on July 2, 2008

Wow, work’s really been kicking my ass lately. I’ve been meaning to update this blog for ages, but I’ve had no time. Finally got the day off and spent an hour or so learning to use Javascript and Greasemonkey. While we’re waiting for something more substantial, here’s my first script. You might even find it useful:

// ==UserScript==
// @name           Include Linked Images
// @namespace      ca.notcharles.greasemonkey
// @description    Add linked images to the end of a webpage.
// @include        http://www.wizards.com/*&pf=true
// ==/UserScript==

/*
Copyright (c) 2008 Joe Mason

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
*/


var body = document.getElementsByTagName('body')[0];
var anchors = document.getElementsByTagName('a');

for (var i = 0; i < anchors.length; i++) {
	var anchor = anchors[i];

	// only process anchors containing images
	if (anchor.getElementsByTagName('img').length == 0) continue;

	// add the target of the image to the end of the document
	var href = anchor.getAttribute('href');
	var hr = document.createElement('hr');
	var img = document.createElement('img');
	img.setAttribute('src', href);
	body.appendChild(hr);
	body.appendChild(img);
}
I won’t bother going through it because there are a million Javascript tutorials out there.

So what’s it useful for? Well, Wizards of the Coast have been releasing Dragon and Dungeon magazine articles online – free, for the time being. Sooner or later they’ll start charging for them so I’ve been downloading as many as I can and saving them as PDFs. (The best way to do this is to click on the “Printer Friendly” link at the bottom of an article, and then print to PDF. On Windows you’ll have to install an add-on for this – I like PDFCreator.)

The problem is that some of them have thumbnailed images which link to a full-sized version, and I’d really like the full images to end up in the PDF. So this script just finds every image which is a link, and appends that image to the end of the page. It only runs on the printable format page (the “pf=true” at the end of the @include line). It just occurred to me it should really be checking that the link actually leads to an image, but meh – that’s not very likely for these articles, and if it happens I’ll deal with it then.

This article is a nice simple example to try it on.
