I Am Not Charles


W2E Day 3: Morning Presentations

Posted in the Kitchen, the Living Room by Joe on September 30, 2010

JavaScript is the New Black – Why Node.js is Going to Rock Your World
by Tom Hughes-Croucher of Yahoo

Node is a Javascript interpreter that’s getting a lot of buzz. Basically it acts the same as a Python or Perl runtime (or as Tom said repeatedly, “Python or Ruby” – not a Perl fan apparently, which earns him some points with me), letting you run Javascript without a browser and putting it on the same level as the popular desktop or server-side scripting languages.

I’ve been wanting this for years: Javascript is a well-designed, powerful language with clean syntax, and there’s no reason it should be limited to embedding in browsers. And because it has a 100% lock on browser scripting, pretty much everybody has to learn it at some point anyway, so why switch back and forth between scripting languages for other tasks?

Tom makes this point more strongly, pointing out a huge number of job postings for Javascript programmers: web sites are now so complex that companies are not just hiring visual designers and expecting them to slap on some Javascript copied from a web site, they’re hiring full-fledged developers to code up their sites. Using Javascript on the server lets these developers write both back-end and front-end code rather than needing a separate team for each.

I don’t think this is a 100% win: every serious programmer should learn several languages so that they can distinguish the philosophy and structure of programming in general from the quirks of their particular language, so a pure Javascript developer who can’t pick up whatever language is being used on the server side isn’t much of a developer at all. But as long as you remain proficient in several languages – especially if they come from different paradigms – having to switch back and forth during day to day tasks which should be related does slow you down, so artificially limiting Javascript to the browser is a penalty even if it does help to discourage laziness.

The other big benefit touted by Tom is code reuse – which is a 100% win. There is often logic duplicated between client and server – form validation is a big example – and using Javascript on the server lets you use the exact same code, rather than having to rewrite the same algorithm in two different languages, a huge source of bugs.

In fact, using Javascript on the server enables shared logic at a level that would be infeasible if it had to be written twice: consider a page that writes a lot of its HTML dynamically through Javascript. In a technique Tom refers to as “Progressive Enhancement”, the first pass is done on the server, using the complete widget set and dynamic logic used on the client, so that as soon as the HTML is received it can be rendered instantly. But the dynamic Javascript is also repeated on the client side so that as the user interacts, the page is reconfigured in the browser without going back to the server. (The server-side and client-side code will never be 100% identical, but at least it will have the same base rather than trying to do the same thing twice from scratch.)

There is an example of this in the YUI/Express demo, with Yahoo UI widgets rendered first on the server without sacrificing client interaction. Tom demonstrated the Table View widget, which turned up a glitch in this scheme: the spacing generated on the server did not exactly match the client, so the widget originally rendered with header tabs squished together slightly and then spaced them out, leading to a slight UI flash. This is ugly and needs to be addressed (although I don’t know if it’s a systemic problem or just because the simplistic demo didn’t include any code to deal with it). Still, that split second when the initial layout was flashed would have been blank without server-side rendering.
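
To make the form-validation point concrete, here’s a minimal sketch of what shared logic could look like – the function name and rules are my own invention, not anything from Tom’s demo:

    // Hypothetical shared validation, usable in the browser and in Node.
    function validateSignup(fields) {
      var errors = [];
      if (!fields.email || !/^[^@\s]+@[^@\s]+$/.test(fields.email)) {
        errors.push('A valid email address is required.');
      }
      if (!fields.name || fields.name.length < 2) {
        errors.push('Name must be at least 2 characters.');
      }
      return errors;
    }

    // In Node, expose the same file as a module so the back end can
    // require() exactly the code that was served to the browser.
    if (typeof module !== 'undefined' && module.exports) {
      module.exports = validateSignup;
    }

The browser calls it before submitting the form; the server calls the same function again on the POSTed fields before trusting them.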

Under the hood, Node.js uses Google’s V8 engine and contains a non-blocking, event-driven HTTP server written in 100% Javascript which compares in performance with nginx. The performance graphs Tom showed were impressive and it seems to scale quite well (far better than Apache, for instance.) One big hole right now is that HTTPS support is sketchy, but this is being worked on.
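
For flavour, the hello-world server really is only a few lines (the port number is arbitrary):

    // A minimal Node HTTP server, roughly as shown in the Node docs of the time.
    var http = require('http');

    http.createServer(function (request, response) {
      response.writeHead(200, { 'Content-Type': 'text/plain' });
      response.end('Hello from Node\n');
    }).listen(8124);

    console.log('Server running at http://127.0.0.1:8124/');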

One interesting technical note Tom highlighted: to make use of multi-core hardware with an event-driven server, new threads or processes need to be spun off by hand for heavy work (as opposed to automatically for each connection as in Apache). Although Node does support the fork system call, it also implements the HTML5 Web Workers spec. That means rather than using slightly different concepts to spawn helpers on the client and the server, developers can reuse their knowledge when writing code in both places.
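
For reference, here’s roughly what the Web Workers pattern looks like on the browser side, per the HTML5 spec; if Node’s implementation mirrors the spec as Tom described, the same two-file shape should carry over to the server (I haven’t verified Node’s exact API, and the file names here are made up):

    // main.js – spawn a worker and hand it a job
    var worker = new Worker('crunch.js');
    worker.onmessage = function (event) {
      console.log('result: ' + event.data);
    };
    worker.postMessage({ start: 1, end: 1000000 });

    // crunch.js – runs off the main thread, no DOM access
    onmessage = function (event) {
      var sum = 0;
      for (var i = event.data.start; i <= event.data.end; i += 1) {
        sum += i;
      }
      postMessage(sum);
    };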

As a new language (in this context), Javascript doesn’t have as many 3rd-party libraries available as, say, Python and Ruby. But with the buzz it’s getting, more are popping up quickly: Tom showcased several, all available at GitHub:

NPM, the Node Package Manager
Mustache, a logic-less templating language (which Twitter currently uses in JS on the client but Ruby on the server)
Express, an MVC framework similar to the lower levels of the Rails stack
Paperboy, a static file server

As well as running as a web server, Node has an interactive shell just like Python’s or Ruby’s. Definitely going to be picking this up for my scripting needs, even though I don’t exactly do much server development.

Tom’s slides are online at http://speakerrate.com/sh1mmer.

When Actions Speak Louder Than Tweets: Using Behavioral Data for Decision-Making on the Web
by Jaidev Shergill, CEO of Bundle.com

Now here’s how to make a product focused presentation without sounding like a shill:

– Here are the resources we have that most people don’t (a large database of consumer behaviour data, including anonymized credit card purchases from a major bank, government statistics and nebulous “third party databases”)
– Here are some studies we did for our own information, whose results we think you’d find useful (“We tracked a group of people in detail and interviewed them to find out in depth how they make decisions”)
– Here’s a neat experiment we put together using these two pieces of information – we don’t even know if we’ll release it, we just wanted to find the results (and here they are)
– Oh, and here’s our actual product

Jaidev presented two theses, the first gleaned from interviewing study participants and the second from his own experience:

1. There’s more than enough information on the Web to make decisions, but 99% of it is useless for the specific person looking at it, because – especially when looking at opinions and reviews – people need to know how people like them feel about an option. (Here we are talking about subjective decisions like, “Is this a good restaurant?” or decisions with a lot of variables like, “Does this new device fit my exact needs?”)

2. Online user-generated content is nearly useless for finding opinions because it is not filtered right. For example, review sites tend to polarize between 5 star and 1 star reviews because only users with strong opinions bother to rate, so all reviews are distorted. Many people filter by their social circle, since their friends (and mentions on Facebook, Twitter, etc.) have things in common with them and so their recommendations carry more weight – but this means that recommendations are skewed towards options with the latest hype. It turns out people are much better at reporting new things they just found than what they actually use long term.

To illustrate this, Jaidev presented an experiment in which he used his company’s credit card database to build a restaurant recommendation system, drawing a map between restaurants based on where people spent their money, how often they returned, and how much they spent there. Type in a restaurant you like and the system returns a list of where else people who ate at that restaurant spend their money. Rather than a subjective rating, the tool returns a “loyalty index” quantifying how much repeat business each restaurant gets. Presumably this will be more useful to you than a general recommendation because the originators of this data share at least one important factor with you: a love of the original restaurant.
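
Bundle didn’t describe their actual formula, but as a rough illustration of the idea, a “loyalty index” could be something as simple as the fraction of a restaurant’s customers who come back – entirely my own sketch, not their method:

    // My own back-of-the-envelope sketch of a loyalty score from transaction
    // records for one restaurant; Bundle's real formula was not disclosed.
    function loyaltyIndex(transactions) {
      // transactions: [{ customerId: 'abc', amount: 42.50 }, ...]
      var visitsByCustomer = {};
      transactions.forEach(function (t) {
        visitsByCustomer[t.customerId] = (visitsByCustomer[t.customerId] || 0) + 1;
      });
      var customers = Object.keys(visitsByCustomer);
      var repeatCustomers = customers.filter(function (id) {
        return visitsByCustomer[id] > 1;
      });
      return customers.length ? repeatCustomers.length / customers.length : 0;
    }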

The result was that a restaurant which was highly recommended both on review sites and in Jaidev’s circle rated very low. Compared to restaurants with similar food and prices, customers returned to this one far less often and spent far less. Reading the reviews in depth revealed that, while the highest ratings praised the food quality, middling ratings said that the food was good but the management was terrible, with very slow service and high prices. Equally good food could be found elsewhere for less money and hassle. This information was available in the reviews, but hard to find since it was drowned out by the all-positive or all-negative ones.

So the main point to take away from the presentation is: hard data obtained through data mining is still more valuable than the buzz generated through social media. Which is obvious, but a good point to repeat at a conference full of people who are excited about adding social components to everything.

Jaidev did a great job of demonstrating the value of his company’s data set without actually sounding like he was selling it. He only demonstrated bundle.com itself briefly: it seems to be a money management site which allows users to compare their financial situation to the average and median to answer questions like, “Am I spending too much on these products?” and, “How much should I budget for this?”. The example Jaidev showed was an interactive graph of the cost of pet ownership. Looks like a useful site.

Alas, the equally useful looking restaurant recommender was only a proof of concept and is not released to the public. (And only covers Manhattan.) Email jaidev@bundle.com if you want to see it made public.

(While I’m attending this conference on behalf of Research In Motion, this blog and its contents are my personal opinions and do not represent the views of my employers. How does the unicorn breathe?)

W2E Day 2: Afternoon Presentations

Posted in the Living Room by Joe on September 29, 2010

What to Expect From Browsers in the Next Five Years: A Perspective
Panel discussion

This was a great discussion, although it really didn’t touch on its ostensible topic much! It was more a discussion of the current state of the browser. The panelists were:

Douglas Crockford, creator of JSON, active on the ECMAscript committee, and Javascript evangelist at Yahoo
Alex Russell, whose workshop I already went to on day 1
Brendan Eich, original inventor of Javascript and Mozilla CTO
Håkon Wium Lie, creator of CSS and Opera CTO

The discussion moved way too fast to take detailed notes, so here are some quotes (paraphrased):

On the IE9 beta:
Hakon: “I’ve dedicated my life to improving IE”
Doug: “They have no ECMAscript 5 – there’s no provision in the standard for subsets meaning they’ve chosen to be noncompliant”
Alex: “It’s not available on XP, which is a concern because XP still has so many users”

On how to differentiate a browser:
Hakon: “Opera has a good emphasis on JS performance, but getting on every device in Asia and Africa is another goal.”
Brendan: “Mozilla is behind on performance but improvements are ongoing. Integration with the user’s identity is what I’m interested in (password management, etc., so that the browser has your info and doles it out to sites rather than the user passing info to sites individually).”
Doug: “Do we work AT ALL – getting better.” (I have no idea what the context of this was any more.)
Alex: “Developer productivity. The big bottleneck is now network behaviour: we need more expressive API’s, because using Javascript to address HTML and CSS shortcomings makes the web less semantic.”

On data privacy with the browser knowing your identity:
Hakon: “I’m scared about evercookie. I would rather have the browser shield the user.”
Brendan: “The browser should be the first point of trust, providing API’s to the user, instead of Facebook providing API’s.”
Doug: “XSS is the biggest security problem: all scripts share a common global context, and there is a complicated language and protocol stack with different conventions at each level, making it impossible to reason about. HTML5 was irresponsible in adding new networking, local storage, and DB access before fixing XSS.”
Brendan: “We need to fix incrementally, or nobody will adopt. HTML5 has some security features.”
Alex: “We have theoretical solutions. The battle will be good vs usable: it has to be a system that everyone can use because that is the web’s strength.”

On the use of “HTML5” to refer to a set of features, many of which are actually CSS additions, rather than the actual spec:
Hakon: “This is a marketing problem: people want one name. The solution to ensure MS supports these features is Acid tests.”
Alex: “This is unproductive because lag time on creating ACID tests and getting browsers in hands is very long. I’m more interested in dynamics of getting browsers upgraded and released quickly.”
Doug: “This gets worse in the short term for web developers because the differences between browsers will increase, but longterm IE6 will die and it will become better.”
Brendan: “We need to use JS libraries for extensibility, because building everything into the browser is too slow to deploy.”
Hakon: “Yes, we need to use JS as a sandbox and move successes into the declarative side.”

On apps vs the Web:
Hakon: “Native apps will be a footnote.”
Alex: “The Chrome web store lets you monetize a web page. Targeting fixed hardware lets you target the edge of a device, but as hardware improves we can burn cycles and the Web becomes a cost reduction to avoid writing things 5 times.”

On Google’s NACL:
Somebody, I didn’t write who (maybe the moderator): “It’s a promising research and prototyping story to get better performance out of the browser.”
Brendan: “It’s too complex. People here are calling it ActiveG. Nobody wants it. I don’t want pthreads running in my browser.”

On the lack of audience questions:
Moderator: “I don’t think you understand – this man invented CSS. If you have any questions about why your transforms aren’t working, ask them now!”

On developer tools:
Everybody: “Our browser has a tool built in. This is its name.”

On XML compatibility:
Doug: “XML is obsolete, didn’t you get the memo?”
Brendan: “Firefox uses heavy XML. We bought into it heavily, but now we’ve ripped a bunch of it out.”

On Mozilla’s evolving role now that every browser is debuting new user features, and the fight to get IE to follow web standards is won:
Brendan: “We represent the user first and only.”
Hakon: “Competition is great. We need several rendering engines to verify the models.”
Brendan: “People think engines are too complex to maintain several, but it’s not true.”

On standards bodies:
Brendan: “We need smart people who can do both practical and theory to serve on them, and we need them to work together.”
Doug: “ECMAscript is the most important standard because if the others are broken, JS is the workaround.”

Some genius in the audience asked the best question of the session: “What’s the biggest unfilled hole in the HTML/CSS/Javascript stack?” On that:
Doug: “Security.”
Alex: “Integration. There are leaky seams between standards, and too many standards groups that don’t cooperate.”
Hakon: “CSS has an object model coming to help with that.”
Brendan: “The security problem lies in the leaky schemes.”

On plugin privacy concerns (users uniquely identifiable by their list of installed plugins):
Brendan: “We’re going to start turning off old plugins.”
Alex: “We’re trying to reduce the surface area of plugins. We need to leak some info so that they can be instantiated, but at least we need to hide their upgrade path and only keep the most recent versions installed.”

10 Things You Never Knew About Opera
by Håkon Wium Lie

I wanted to see this after seeing Hakon at the last panel, but it turned out to be pretty useless: basically a big ad for Opera. I guess that’s expected from the title, but I was hoping for some interesting new technology they incorporated or something. Instead it was all, “We employ 500 engineers,” and, “We’re big in Russia.” The only new thing mentioned was Opera Unite, which didn’t work when he tried to demo it. (My big question about it was how it dealt with firewalls and NAT – apparently it doesn’t, which is pretty useless.)

David Kaneda mentioned in the morning that he was required to make his presentation about general technology and not his particular product. I guess sponsors are immune.

Personal, Relevant, Connected: Designing Integrated Mobile Experiences for Apps and Web
by some drones from Microsoft

This was even worse. A lot of people walked out. I won’t waste my time by elaborating.

(While I’m attending this conference on behalf of Research In Motion, this blog and its contents are my personal opinions and do not represent the views of my employers. No, I would not like a pig.)

W2E Day 1: HTML5 with Alex Russell

Posted in the Living Room by Joe on September 27, 2010

First impression: some guys behind me talking about which smartphones they should get for testing. “Not a Blackberry, though – I can’t stand doing web development on the Blackberry browser.” Sob. We were vindicated when Alex listed the browsers that could and could not handle HTML5 features, and carefully split BB into “Torch+” and “pre-Torch” in each list.

Second Impression: wow, dense. This was a dense, dense, dense, dense talk.

The first part of this talk was basically, “A browser engineer tells you how to build web pages.” Actually Alex’s epigram for it was more illustrative: “Think about your web pages the way a browser thinks about web pages.” About time! You wouldn’t believe how many optimizations we have to bypass because it would break some ridiculous crufty feature that page authors nevertheless make heavy use of. Like Javascript.

Many web developers just don’t realize how many side effects common techniques have. The most obvious is that Javascript is executed as soon as it’s parsed, and it blocks page loading until it’s finished. But Alex went through some more obscure and situation-dependent ones: for example, reading computed style info from Javascript requires the engine to do a layout pass to compute that info, which can be very slow unless it’s already been done. So try to delay Javascript like this until after the page has already been laid out and rendered once (good luck reliably figuring out when that is). You especially want to avoid Javascript that alters the page structure, forcing it to be laid out again (unfortunately, this is the most useful thing for Javascript to do).
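
A minimal sketch of the delay-it workaround (the element name is made up): wait for the load event before reading computed style, so the read happens after layout has already been done rather than forcing one.

    // Batch style reads after the page has loaded and been laid out once.
    window.addEventListener('load', function () {
      var sidebar = document.getElementById('sidebar');    // hypothetical element
      var width = window.getComputedStyle(sidebar).width;  // layout already done
      console.log('sidebar width: ' + width);
    }, false);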

Another extremely important thing that’s often misunderstood is just how much DNS sucks. Because DNS is a giant bottleneck, and for other reasons that Alex went into at length but I won’t because this is long enough, opening an HTTP connection is heavyweight, so for good performance you want to avoid downloading lots of small resources: put as much as you can into each resource, and try to prioritize so that the important ones (CSS that applies to every element on the page) don’t have to wait on the tiny images that form the rounded corners of your logo.

Hopefully the talk went into enough detail on this to give the audience a good conception of these issues. But I think it might have been a bit heavy on the browser developer jargon – it included an overview of webkit render trees (!) to explain how the various bits of timing interact.

Fortunately the take-away for general design can be boiled down to some simple points: put your CSS first; put your Javascript last; and use the “defer” and “async” attributes as much as possible. (But don’t take my word for it, check out the slides, because it’s important to understand why this helps in order to generalize it.)
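
One common pattern along these lines – just a sketch, and the URL is made up – is to inject non-critical scripts from code so they never block the initial parse:

    // Load a script asynchronously instead of blocking the parser with it.
    function loadScript(url, onLoad) {
      var script = document.createElement('script');
      script.src = url;
      script.async = true;   // dynamically inserted scripts don't block parsing
      script.onload = onLoad;
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    loadScript('/js/widgets.js', function () {
      console.log('widgets loaded without blocking the initial render');
    });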

I spent the opening half of the presentation thinking, “It’s great how he’s showing all these web developers the complexities involved in the browser!” But then when he started talking about individual CSS elements, I started to zone out. “I can just look these up in the spec when I need them.” I wonder if the web developers in the audience had the opposite reaction?

Speaking of which, there wasn’t a lot of audience reaction. He kept asking for questions and getting none. I think it was more of a, “This is really cool but I’ll clearly need to play around with it myself to get a feel for it, so I don’t have anything to ask yet,” kind of silence, and less of a, “I have no idea what’s going on,” silence, but it’s hard to be sure because the second half of the talk flew through a whole bunch of new HTML5 features without pausing on any of them. Too densely packed to linger. So let’s repeat the whirlwind tour:

HTML5 ~= HTML + CSS + JS APIs

HTML has some new and shiny things:

New, cool input types that let you remove validation code (they have builtin constraints) and avoid writing complex user interaction objects (they have builtin themes and behaviour that’s more complete than traditional objects).

Canvas – lets you draw on the page with inline code instead of downloading tons of images which fight for network priority.

ContentEditable – turns any HTML element into a rich text editor.
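
As a quick illustration of those last two (my own toy example, not from the talk): a canvas badge drawn in a few lines instead of shipping an image, and a div turned into an editor with one property.

    // Canvas: draw a simple badge instead of downloading an image.
    var canvas = document.createElement('canvas');
    canvas.width = 100;
    canvas.height = 100;
    document.body.appendChild(canvas);

    var ctx = canvas.getContext('2d');
    ctx.fillStyle = '#336699';
    ctx.fillRect(10, 10, 80, 80);      // filled square
    ctx.strokeStyle = '#ffffff';
    ctx.lineWidth = 4;
    ctx.strokeRect(20, 20, 60, 60);    // inner border

    // ContentEditable: one property flip turns an element into a rich text editor.
    var editor = document.createElement('div');
    editor.contentEditable = 'true';
    document.body.appendChild(editor);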

CSS has more new and shiny things:

Gradients, shadows and rounded borders: again, less use of images.

Layered backgrounds so you can draw your background with a canvas instead of, again, an image.

Transitions for hardware accelerated page effects without writing and parsing Javascript.

Transforms for morphing the display of items (although they’re not available everywhere, so use judiciously).

Animations for even more elaborate (and hardware accelerated!) page effects, although they are probably not finalized as the syntax is still not very clean compared to transitions.

Javascript and the DOM have new things that are less shiny, but useful:

First, let me digress to talk a bit about jQuery. It’s a horrible anchor around the neck of every web site. It’s a gigantic library that must be downloaded and parsed (synchronously) before even beginning to parse the rest of the page, and because everybody uses a slightly different version of it served from a different URL they can’t even share a cache of the thing! The only thing worse than using jQuery is not using it: can you imagine the bugs that would be caused if everyone tried to duplicate its functionality by hand?

So replacing jQuery with something making better use of inbuilt browser features is a worthy goal, because it would fix both problems above: the blocking download-and-parse, and the wasted caching.

So when Alex says, “These new features let you do everything jQuery does in 30 lines of code!” I prick up my ears. Not being a web developer, I don’t know what jQuery actually does, so I’m probably being naïve, but here they are:

querySelectorAll is a new DOM method that ties in with the CSS declarative model: give it a string and it will return all the nodes which match that CSS selector. As I mentioned yesterday, CSS is not my thing so the selectors Alex demonstrated were all gibberish to me, but the concept seems pretty useful.
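
Something like this (the selector, id and class names are invented for illustration):

    // Find every unpaid row in an orders table and highlight it.
    var rows = document.querySelectorAll('#orders tr.unpaid');
    for (var i = 0; i < rows.length; i += 1) {
      rows[i].style.backgroundColor = '#fff3cd';
    }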

Object.defineProperty is a new Javascript method that lets you add functions to any prototype, meaning if you stick something on the root of an object tree, it’s automatically available to every derived object (what most languages would call a “subclass”). This has always been possible, but there were side effects that made it dangerous to use in libraries; defineProperty does it cleanly.
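
A small sketch of what I understand that to mean – the enumerable: false bit is what keeps the added method from polluting for..in loops, which was the classic danger of extending prototypes directly:

    // Add a 'last' method to every array without making it enumerable.
    Object.defineProperty(Array.prototype, 'last', {
      value: function () { return this[this.length - 1]; },
      enumerable: false,
      writable: true,
      configurable: true
    });

    console.log([1, 2, 3].last());   // 3
    var seen = [];
    for (var key in [1, 2, 3]) { seen.push(key); }
    console.log(seen);               // ["0", "1", "2"] – 'last' doesn't show up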

Some more Javascript goodness:

WebStorage lets you store persistent key-value pairs. Traditionally you would use cookies for this, but cookies are terrible for performance: they’re included in the headers sent with every request and response, and the headers can’t even be compressed. And incidentally, the cookie “spec” is a joke (it is fundamentally ambiguous, so it is not possible to write an algorithm to parse cookies correctly) so anything that gets rid of them is A-OK by me!
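
The API really is as simple as it sounds (the keys and values here are made up):

    // localStorage: persistent key-value pairs that never ride along in headers.
    localStorage.setItem('draft', JSON.stringify({ title: 'Untitled', body: '' }));

    var draft = JSON.parse(localStorage.getItem('draft'));
    draft.title = 'W2E notes';
    localStorage.setItem('draft', JSON.stringify(draft));

    localStorage.removeItem('draft');   // clean up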

Drag and drop is traditionally done with fragile and inefficient Javascript; now it’s built into the browser.
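
A rough sketch of the native version (the element ids are hypothetical):

    // Native drag and drop: mark the source draggable and listen for the events.
    var item = document.getElementById('photo');
    item.draggable = true;
    item.addEventListener('dragstart', function (e) {
      e.dataTransfer.setData('text/plain', item.id);
    });

    var target = document.getElementById('album');
    target.addEventListener('dragover', function (e) { e.preventDefault(); });
    target.addEventListener('drop', function (e) {
      e.preventDefault();
      var id = e.dataTransfer.getData('text/plain');
      target.appendChild(document.getElementById(id));
    });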

And, uh, some more, but by this point my notes are becoming illegible because my God that’s a lot of stuff. And there are a lot of intriguing points brought up about all of these, which this margin is too small to contain, so if I have time I’ll make more in-depth followup posts on some of them.

Of course, you can only use these features on bleeding-edge browsers – so, well, basically all of them except IE 6 to 8. Which still make up something like 30% of the market. Alex handwaved this a bit by talking up his Chrome Frame product – you can just ask your users to install an IE extension instead of a whole new browser – but it sounds pretty tenuous. I have no better solution for this, though – adding fallbacks defeats the whole purpose of using new builtins to make pages smaller and simpler.

Then there was a lab session, which Alex unsurprisingly ran out of time to show in detail. It showed a small sample app using these techniques, including a wrapper library called h5.js which makes using some features simpler, works around design flaws in others, and includes an async downloader for resources in less than 1K (gzipped). Source code is available here.

All in all, a very impressive presentation, covering lots of exciting ground, but could have used a little less scope and more depth. Fortunately, the afternoon’s session provided a very nice complement.

(While I’m attending this conference on behalf of Research In Motion, this blog and its contents are my personal opinions and do not represent the views of my employers, colleagues, family members, or pets.)

How’s Gimp doing, part 2

Posted in the Living Room by Joe on July 30, 2009

How’s Gimp doing these days?

Posted in the Living Room by Joe on July 22, 2009

I’ve always heard two opinions about Gimp: “It’s just like Photoshop, an incredibly complex and expensive piece of professional software, but it’s free! Behold the power of Open Source!” and “Gimp sucks. It’s better than nothing, I guess, but if you’re using Gimp instead of Photoshop you’re a chump.”

Of course, not having any need for a pro-quality image editor due to lack of artistic talent, it’s been 5 years since I paid any attention to this. My sister gives an update on how Gimp stacks up today.

Fresh data

Posted in the Living Room by Joe on September 26, 2008

So what runs better – OpenOffice or KOffice? Anecdotal evidence says KOffice is slimmer but OpenOffice works better, but it’s very hard to find actual hard numbers on memory and resource usage. And when you do find somebody who’s done a comparison, it invariably dates back to 1995 and it’s totally obsolete.

How about Evolution vs. KMail? Evolution’s much more popular, but there are a few blog posts that offhandedly mention it’s a huge hog compared to Kontact. Which is odd, because there are actual filed bug reports that claim Kontact leaks memory like a sieve. How can I find the truth without installing both and painstakingly testing them in a wide variety of common use cases?

Well, we know that Evolution’s more popular because of the Debian Popularity Contest, where Evolution has a commanding lead. (The “Vote” column is the useful one.) popularity-contest is a Debian (and Ubuntu) package which automatically collects stats on what software is installed and how often it’s used, and I urge everyone to install and enable it now so that it can give a more accurate picture of what’s popular and where developer resources are best spent.

It would be great to have a similar program that automatically collects resource usage statistics for all running programs. Suddenly we’ll have actual hard, fresh data to settle arguments about whether your favourite app is a bloated hog or no worse than everything else! This info would be invaluable for both developers and packagers.

Joel gives bad advice: details at 11:00

Posted in the Living Room by Joe on July 2, 2008

Am I blind, or does Joel on Software not allow comments any more? (Or did it ever?) Well, I guess I’ll respond to his latest article on disabling unusable menu items right here, even though that means he’ll never see it.

Don’t do this… Instead, leave the menu item enabled. If there’s some reason you can’t complete the action, the menu item can display a message telling the user why.

That would be incredibly annoying – I’d be constantly clicking on items I expect to work and getting a popup saying, “Sorry.” In an app I use often, I use the menus mostly by feel, so I’m not going to notice that now there’s a message saying, “Disabled because the moon is in the fourth quarter and you haven’t paid your phone bill.” Or if I do, it’ll be in the instant before my finger clicks the button, so now I’ll just have time to realize I screwed up before it pops up the Box of Aggravation.

A better thing to do would be to disable the option, so if I click on it nothing will happen instead of the app yelling at me, and have feedback on why it’s disabled.

Wild Zebras

Posted in the Living Room by Joe on May 10, 2008

Christian at Design Desires posted some simple Javascript to make dynamic zebra-striped tables. He asked for comments to be mailed to him – in keeping with last month’s rant about open discussion, I’m posting my response here instead:

No sir, I don’t like it.

I don’t find it adds anything. I’m perfectly able to read across the lines of a plain old static table, so highlighting one line doesn’t make it any faster to read that line. It also doesn’t help me pick out a line when scanning with the mouse, because I’m already looking at the line that gets highlighted – I put the mouse there myself!

I don’t know if it’s just my browser, but highlighting the line causes a lot of flicker. (The “more extreme” example from Neil Roberts is even worse – it actually resizes things while I’m mousing over them, jerkily. That’s terrible!) So I’d really hesitate to use this until it’s been tested on a lot of browsers and setups to make sure it’s smooth everywhere.

The only advantage I could see is if I look away and come back later. There might be some value in highlighting the line after the mouse has stopped there for a few seconds, which wouldn’t cause flicker when scanning quickly. It still might be more annoying than useful, though.

Advogato vs. the world

Posted in the Living Room by Joe on April 30, 2008

Now that I’m not using Advogato any more, I’ve taken a little time to think about what made me dislike it. Mostly it’s that the user interface is bland, but there’s also a bit of clunkiness that really annoyed me:

90% of the world’s blogs are laid out in the same way: if you have something to say about a post, you make a comment on it. If you have something to say that you think deserves a post of its own, you post in your own blog and make a trackback to the post you’re referencing.

Advogato works a bit differently: you can post, and that’s it. There are no comments and no trackbacks. On the surface, that’s because it’s more primitive, but the community’s had plenty of time to add these features. Either they just don’t care or they think their system’s actually better.

(Wait, that’s a misuse of worse-is-better – I should have linked to less is more.)

The advantage of the Advogato system is that people do comment on each others’ posts, but they do it in their own journals. You just start out with a link to the post you’re commenting on, in this format:

shlomif: did you actually read the Nichomachean Ethics? The thing is repulsive…

On most blogs, when referencing a post somewhere else the standard is to quote copiously – quote the parts you are replying to directly, quote the parts you find especially interesting, quote the parts that give the argument’s main points. Since you found the subject interesting enough to go to all this effort, you want to give your readers a good overview. But this gives the reader very little incentive to actually go back and read the original post you’re quoting, since they already have the highlights. The Advogato style gives no context, but readers can always click the link to find it. That means that, on Advogato, people who are hooked in by a post are actually more likely to go back and read the entire context, because there’s no other way to get it.

The other advantage of Advogato’s style is that I can find interesting discussions no matter who starts them, as long as someone I know takes part. When a discussion starts on one blog and unfolds entirely in the comments there, it can’t draw in anyone who isn’t already a reader. I only find new blogs when they’re widely linked to, which only happens when lots of different people have a lot to say on a subject, or when they feel that a post is so insightful that it needs to be shared. I miss out on the smaller, more specialized posts which may attract comments and observations but not wide dissemination.

On the other hand, the Advogato style discourages casual commenting, since it’s so much work to make a link – nobody comments unless they have something substantial enough to make a full post. Discussions tend to peter out after a few exchanges. And unless you already have a strong relationship with the person you’re replying to, how do you know they’ll even read your reply? There’s a list of recent posts throughout the entire site on the front page, but this doesn’t scale well at all.

So, no, the Advogato system doesn’t really work as-is. It’s pretty good for building a sense of community, but the poor scalability and lack of features in the interface make it unattractive for serious blogging, so many writers migrate elsewhere. (Take a look at the Advogato recent blog entries list and see how many are syndicated from other sites.) In fact, the community feeling is exactly the reason I switched to WordPress – I wanted more control over the layout, the name, and the style. I wanted a blog that I felt like I owned. But here I am complaining about how blogs are fragmented when everybody owns their own instead of participating in a communal site.

So what’s the solution? Act like a writer, not a reader. Unless what you want to say is really inconsequential, do it Advogato-style, in your own blog, with a simple link back to the post you’re referring to. That way more people will see what’s going on, and the whole community will grow. Just commenting on other people’s blogs is easier and faster, but it’s also less effective. Unfortunately, it’s what’s most encouraged by most interfaces.

(Now – should I disable comments, just for irony?)

Advogato again (7 replies)

Posted in the Living Room by Joe on April 27, 2008

I was just about done typing up a post about Advogato and how its community differed from most blogs, and then I went over there to check on something and realized my entire thesis was wrong. So let me ask a question to see if I can salvage it, or if I’m just a blind idiot:

I last used Advogato 4 years ago. I could have sworn that at the time there was no way to leave a comment on a post, leading to lots of people making posts in their own journals just to reply to a post in another journal. But the top post on advogato.org right now has 7 replies, so apparently it’s possible.

Has it always done that, or was it added recently? (That is, some time since 2004?)