Saturday, 11 February 2012

CSS Vendor Prefixes: Another Storm in Another Teacup - or a Rip in the Fabric of Space-Time?

Is the current concern over vendor prefixes in CSS something that can be solved by tweaking the current mechanisms and asking everyone to play fair in the interests of the "Open Web", or is it a sign of something darker, a sign of something deeply broken, that is only going to get worse as time goes on? I think there's a danger of the latter, and I think that this current discussion is indeed indicative of the deeper problems that exist.

The Current CSS Vendor Prefix Kerfuffle

But first: before I wring my Cassandra-like hands - I'm late-ish to the party in writing about this and you may have no idea what I'm talking about.
The dominance of Apple and Google mobile browsers is leading to a situation that's even worse for Web programming than the former dominance of Internet Explorer, a standards group leader warned today.
So reported CNET, based on a post by Daniel Glazman, chair of the CSS WG, asking everyone to change their Web sites and stop using the -webkit- prefix.

Lots of people have posted on this, and I thought that before ploughing in with an opinion of my own, I'd do a little research as to what the CSS Working Group says about vendor prefixes. I'd always thought that the vendor prefix was a crude namespacing mechanism to allow proprietary extensions to CSS. 

There are lots of good reasons for having a non-standardised extensibility mechanism, of course. If I was making a browser intended for operation at the depths of the ocean I can imagine a number of proprietary features that I wouldn't expect to be standardised. That seems fine. 

Equally, in a browser that operates using an operating system that has specific user interface metaphors, an author might want to control the properties of the rendering of user interface artefacts in that environment only. -iOS-triangular-button-pointyness could possibly be the right way to control the pointedness of a triangular button under iOS. 

Vendor Prefixes for Experimentation

The thing is that in addition to these use cases, the CSS Working Group recommends using vendor prefixes for implementation of experimental features which are under consideration for standardisation. 

Let's look at what the CSS working group says, in "Cascading Style Sheets (CSS) Snapshot 2010 W3C Working Group Note 12 May 2011" []:

3.3. Experimental Implementations 
To avoid clashes with future CSS features, the CSS2.1 specification reserves a prefixed syntax for proprietary and experimental extensions to CSS. 
Prior to a specification reaching the Candidate Recommendation stage in the W3C process, all implementations of a CSS feature are considered experimental. The CSS Working Group recommends that implementations use a vendor-prefixed syntax for such features, including those in W3C Working Drafts. This avoids incompatibilities with future changes in the draft.
[my emphasis]

Here's what the "prefixed syntax" link says []:
Vendor-specific extensions
In CSS, identifiers may begin with '-' (dash) or '_' (underscore). Keywords and property names beginning with '-' or '_' are reserved for vendor-specific extensions. Such vendor-specific extensions should have one of the following formats:
'-' + vendor identifier + '-' + meaningful name
'_' + vendor identifier + '-' + meaningful name
Authors should avoid vendor-specific extensions.
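In practice, that format yields declaration stacks like this (a sketch; border-radius is simply a convenient example of a property that spent years behind prefixes):

```css
/* One experimental property, named once per vendor identifier,
   plus the hoped-for standard name last so it wins when supported */
.rounded {
  -moz-border-radius: 8px;    /* Gecko's vendor identifier: "moz" */
  -webkit-border-radius: 8px; /* WebKit's: "webkit" */
  border-radius: 8px;         /* the eventual unprefixed standard */
}
```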
A few notes on the above.
  1. It would seem that in order to test the viability of features in upcoming standards (Recommendations, whatever) you have to get authors to use them, and you shouldn't name them the same as anyone else, you should name them according to a proprietary extension scheme. That's very silly. As ppk points out in his post, if you are going to do this then they should be called -experimental-foo (or something) by everyone. 
  2. You may also have noted that there is an inherent contradiction between recommending that vendors prefix experimental features and recommending that authors avoid vendor-specific extensions.
  3. It doesn't look to me as though it was intended in CSS 2.1 (or 2) that vendor prefixes should be used for not-yet-standardised features - i.e. the text in the quotation above makes no mention of experimentation. Is this a novel interpretation?
  4. The piece of the WG Note that says this "avoids incompatibilities" is almost funny in the current circumstances. It seems to me that vendors, in using prefixes, are only doing what they were asked to do - even though prefixes don't avoid future incompatibility, they guarantee it.
  5. Coming back to the -experimental-foo suggestion - caveat emptor. I'm reminded of lengthy discussions in the course of writing the "W3C Content Transformation Guidelines" about the idiom of X- prefixed HTTP headers, also widely understood to be for extensions and experimental features. Once in the wild, these things stay in the wild.
But there is no such thing as "experimentation"

The browsers in which these experimental features are unleashed are not called experimental browsers. Usually they are not even beta versions, let alone experimental. Let's be clear, then: in reality, once you unleash an experimental feature you have littered the Web environment with some kind of requirement for ongoing support. Authors who have used the feature are unlikely ever to change it. Users of any particular version of a browser may never update it; they may not know how to, and in certain common situations are not able to even if they wanted to.

Daniel Glazman's note asking Web authors to update their sites to use the non-prefixed version conjures up a picture in my mind of Web site owners getting out their editors, grepping the entire contents of all their sites and doing a replace operation. Simple. Well, possibly a bit unrealistic. I mean, first of all, you'd want to leave the prefixed version in, wouldn't you? Otherwise your site is going to break in some browsers it works in today.

Secondly, and probably more importantly, I find it very hard to imagine that the majority of Web sites are built that way at all. I find it very hard to imagine that anything but a very small number of Web sites are operated by anyone who has the slightest clue what CSS is. Let alone a vendor prefix. I'm guessing that almost all of them rely on libraries, frameworks, external vendor support - whatever it is, it is unlikely to be a tame developer sitting waiting to do a grep. i.e. it's not going to happen.

Standards Junk

The experimental features are in the wild; the genie is out of the bottle. It seems wholly unsurprising that vendors will want to implement them. Actually, it's not just that they "want" to - they need to copy each other's prefixes for interoperability. The process guarantees a legacy problem, the standards equivalent of space junk.

The outcome of all this is that all sites, in theory, need to include all variants of the name of every property that has been through this process. All browsers need to support every variant too - and not only as synonyms of each other, they need to support the browser specific quirks too.
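The gradient syntax illustrates the point (a sketch using the prefixed forms current at the time; the old WebKit syntax really was structurally different from the later drafts, so these are not mere synonyms):

```css
/* The same fade, written once per variant - note that the first,
   old-WebKit form takes a different argument structure entirely */
.fade {
  background: -webkit-gradient(linear, left top, left bottom,
                               from(#fff), to(#000));   /* old WebKit */
  background: -webkit-linear-gradient(top, #fff, #000); /* newer WebKit */
  background: -moz-linear-gradient(top, #fff, #000);    /* Gecko */
  background: -o-linear-gradient(top, #fff, #000);      /* Opera */
  background: linear-gradient(to bottom, #fff, #000);   /* draft standard */
}
```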

And all this has the intention of avoiding future incompatibilities. A standardisation process that guarantees the need to use non-standard extensions truly needs to be reconsidered.

What's Really Wrong

The fact that experimental features have got mixed up with browser extensions is a problem. 

However, there's other, rather more difficult stuff that I suggest needs fixing. The most important, I think, is that basic mechanisms for version negotiation and any kind of feature detection are missing.

The assumption, I think, is that any Web page and its representational variants are somehow a unitary whole. By using precedence, the cascade and some @media at-rules in the CSS, the client somehow figures out what it is supposed to do. But this is a problem, since a single page may have a very wide variety of representations depending on the version of a browser and its context. Is it really practical to think that an indefinite range of delivery contexts, feature support, bug workarounds and so on can be encoded into a single CSS stylesheet?

There's no way within CSS to express "if you support feature x, do all this; if not, do something else". Up to a point you might achieve that by browser sniffing at the server - CSS could help there, but doesn't, since there is no way within the scope of CSS for a server to discover what level of CSS the browser supports, which of the optional parts at any level are supported, or what modules are available.
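That kind of conditional logic is available only from script. A minimal sketch of script-side property detection (the function name and prefix list are my own, purely illustrative):

```javascript
// A sketch of script-side feature detection - the kind of conditional
// CSS itself cannot express. Names here are illustrative, not any
// particular library's API.
function supportedProperty(style, base, prefixes) {
  if (base in style) return base;                // unprefixed wins
  // "-webkit-border-radius" surfaces in the DOM as "WebkitBorderRadius"
  var capitalised = base.charAt(0).toUpperCase() + base.slice(1);
  for (var i = 0; i < prefixes.length; i++) {
    var name = prefixes[i] + capitalised;
    if (name in style) return name;
  }
  return null;                                   // no variant supported
}

// In a browser you would pass document.body.style; a stand-in here:
var fakeStyle = { WebkitBorderRadius: "" };
console.log(supportedProperty(fakeStyle, "borderRadius",
                              ["Webkit", "Moz", "O", "ms"]));
// -> "WebkitBorderRadius"
```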

CSS maintainability is terrible. SASS and the like go some way towards improving that, by allowing expressions and by allowing property values to be coupled explicitly (through assignment to a variable, or whatever).

With the advent of WebKit and its life-saving CSS inspector it has actually become possible to see the effect of the cascade and so on, and to do some real debugging. This is a relatively recent development in the life of CSS, though. And we could probably do with some more authoring tools to make the creation of CSS a deal more foolproof than it is.

There's some other stuff in relation to CSS too, some of which I probably should know the answers to but don't, some of which has been discussed to death, and some of which is a bit whimsical.

For example, why is the syntax of CSS so hokey? Given silent failure and recovery of CSS and silent failure and recovery of HTML(5) are the rules aligned? Should they be? Why are CSS selectors and XPath selectors so different? Aren't they in fact doing the same job? The list goes on ...

Finally and Incidentally

This isn't really very important, but in the course of looking into the history of the prefixed syntax, I noticed that CSS Level 2 contains [at] the following:
4.1.1 Tokenization

All levels of CSS — level 1, level 2, and any future levels — use the same core syntax.
However, that doesn't appear to be the case, since CSS 2 allows the prefixed syntax:
ident [-]?{nmstart}{nmchar}*
nmstart [_a-z]|{nonascii}|{escape}
nmchar [_a-z0-9-]|{nonascii}|{escape}
whereas CSS 1 [] doesn't:
ident {nmstrt}{nmchar}*
nmstrt [a-z]|{latin1}|{escape}
nmchar [-a-z0-9]|{latin1}|{escape}
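The difference is easy to demonstrate with rough, ASCII-only approximations of the two ident productions (these deliberately ignore the {nonascii}/{latin1} and {escape} alternatives, so they are illustrations, not faithful tokenizers):

```javascript
// CSS1 idents must start with a letter; CSS2 admits an optional
// leading '-' and allows '_' as a starting character
var css1Ident = /^[a-z][-a-z0-9]*$/i;
var css2Ident = /^-?[_a-z][_a-z0-9-]*$/i;

console.log(css1Ident.test("-moz-border-radius")); // false
console.log(css2Ident.test("-moz-border-radius")); // true
console.log(css1Ident.test("color"));              // true
```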
Ho Hum.

The Web, Time to retract the wheels?

It is said that one of the most important things about the growth and dissemination of the Web is the fact that HTML as well as CSS and Javascript are textual formats which can be read by human beings and copied.

The View Source Principle

This is considered so important a principle, by some, and so much a part of Web orthodoxy, that it has a name: "The View Source Principle".

This principle has been quoted over a long period by many highly respected Web pioneers.

For example:
The Web has mostly been built by hackers, originally for hackers, and is well-known to have spread virally via the “View Source” principle: find something you like, View Source, and figure it out.
Tim Bray 2003-06-03
However, I think the virtuousness of the principle needs to be questioned - both from a straightforward "even good ideas need to be tested" point of view, and also because time has moved on and things ain't what they used to be. Whether it ever should have been an overarching or over-riding principle is moot. Whether it still is, is more interesting.

Of course I'm not the first person to observe this. The discussion goes back a long, long way, for example:
The "view source principle" should be treated respectfully, but it must be weighed against other requirements and constraints on the Web architecture.
Mike Champion, 2003-10-23
Here are some reasons why the virtuousness of the "View Source Principle" may be suspect:

Usage is not Exemplification

The assumptions you make from live usage may be wrong. Examples illustrate a point and are made with a specific didactic purpose; live usage is not, in general, created with exemplification in mind. Even assuming the content was created by a human being, and that the human being was a competent (or better) practitioner of the Web arts, whatever they did may not be a good example for what you plan to do. There may be a better way of doing it, or the author may have balanced competing design principles in making their decisions - a balance that may not be at all applicable to your circumstances. Generally speaking, you can't tell by inspection.

Copying Good Practice is Good, Copying Bad Practice is Bad

A more extreme version of the above point is that the author may not have been an expert, or even competent. Their usage may be wrong, out of date, or for whatever reason not in line with good practice.

Times Change

The Web has moved on. Increasingly, it's not primarily composed of HTML that has been created in a text editor. It may be that already the majority of Web content is not. For example, this blog is composed using the standard "Blogger" tools, a Javascript-based somewhat wysiwyg editor.

Try doing a "View Source" on this page. You learn that for some reason <p> elements are not used. It would seem that for this post, at least, instead of using <p> elements, <div class="p"> is used instead. Presumably there's some CSS somewhere that specifies the same kind of visual representation of a div that a <p> would. Is that good practice that should be copied? Might you infer that a Web document should be composed of anonymous containers in a tree structure with appropriate visual styling? You might. Is that good practice? Not in my book, no.
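If so, the styling needed is mundane - something like the following hypothetical rule (a guess; I haven't dug the actual declaration out of Blogger's stylesheets):

```css
/* A hypothetical rule making <div class="p"> render like <p> */
div.p {
  display: block; /* divs are block-level anyway */
  margin: 1em 0;  /* the default vertical margins a <p> would get */
}
```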

If the viral nature of View Source is good, from some points of view, it's bad from others, since it can just as easily spread bad practice and misinformation as it can anything else.

Unit of Authorship

Originally, unitary Web pages were composed in textual editors and early Web sites often had the Web page as both the unit of authorship and the unit of consumption. Some things seem to follow from that, like having the same authoring language as delivery language (HTML). But it seems to me that the assumption that the authoring language and delivery language are the same is wholly open to challenge. More on that elsewhere, later, probably.

Most Web sites today - or many at least - do not have pages that correspond to units of authorship. This one (the one you're reading) is probably a little unusual, in fact, in that the majority of the page is a single unit of authorship (sorry to keep using that ugly term) with only a limited amount of site-wide framing and additional content.

But the structure of HTML seems to follow from this (unspoken) assumption. A good example: the <style> element was not allowed outside <head> in HTML until HTML5 (I don't know whether it was admitted in HTML5 to facilitate fragment processing). It's also true that you can't properly embed HTML and XML documents inside other documents without resorting to ugly - and actually impractical - escaping and commenting; you can't put a comment inside a comment, so that really doesn't work well at all.
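The comment-nesting problem, concretely (an illustrative fragment, not taken from any real page):

```html
<!-- an attempt to comment out an embedded document:
<html>
  <!-- the embedded document's own comment -->
  everything from here onwards sits OUTSIDE the outer comment,
  because the first "-->" above has already closed it
</html>
-->
```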

Even if you can work around this embedding problem, it ought to be the other way round, namely that the language should be designed to facilitate composing multiple discrete units of authorship into a unit of delivery/consumption.

It's not View Source, it's View Transfer Syntax

If you do View Source, what you're looking at is in fact unlikely to be what someone wrote directly (i.e. by hand, in HTML) to achieve the particular effect you want to copy. I might have created my Web page using PHP, and you might want to do something similar using Ruby on Rails - does View Source on the HTML produced by my executing PHP help you much? And does it help so much that it deserves to be considered a fundamental principle?

Bootstrapping the Web imposed certain requirements

Bootstrapping the Web successfully was probably at least partly a result of there being minimal dependencies on tools: you could create Web pages using a simple text editor, save them to file store and have them delivered untransformed to a browser.

Operating the Web imposes different requirements

Moving forward a few years, though, and a different view starts to predominate. Now, the Web consists of content that's transferred between computers in a form designed for human beings to create and read, but that is verbose, inconvenient to generate and inconvenient to process for computers. It's increasingly rare for a delivered page to correspond to an unprocessed piece of content mapped to file store. The content is rarely unprocessed server side and is likely also to be processed client side.

The virtue of being able to capture content in transit and interpret it using only a minimal tool is an extremely limited virtue. Debugging is important, but in general the robustness and efficiency of a live, operational system are of equal or greater concern. To compromise the efficiency and robustness of live systems in the interests of being able to use a trivial debugging tool seems out of balance.

Debugging Tools

Using a text editor on raw content is in any case a matter of habit rather than priority. If you're debugging markup you're likely to be a lot better off using a validating parser to check the content. And given the inherent complexity of such a tool whether the markup was or was not originally human readable is moot. How hard would it actually be to have tools that provided a more easily human digestible view of the transfer syntax given that it was not originally so?

View Source in a browser is useful for debugging; what's even more useful is inspecting the DOM as built by the browser, as exposed by Firebug or by Inspect Element in WebKit terms. That is, View Source doesn't tell you what the browser has done with the content - it tells you what the author wrote before the browser sorted it out into something it understands.

False Friend

There's an insidious aspect to this supposed virtue of "View Source" - the strong implication that you "can", or even "should", create the human-readable content by hand.

Although we probably accept that being able to use a simple content creation tool available on any platform was an important ingredient to the early Web, it's actually quite hard for even quite adept humans to write HTML correctly. Always has been.

And today it's even harder than ever, given that the Web has become more sophisticated and the components of it are more numerous. Today you have to master 4 syntaxes - (X)HTML, CSS, Javascript and URLs (URIs or IRIs) - none of which appear to have given any thought to harmonious co-existence with any of the others. And that's just the syntax. Never mind more consequential issues relating to grammar, the DOM and other things.

I think it's fair to say that even the most skilled practitioners cannot create more than an extremely simple modern Web site without error by using basic tools. It's way too much to ask for practitioners who are merely "functionally skilled".


And as for teaching it to unskilled people who'd like to have even basic skills and are trying to attain functional skills or better? Well, that's really the whole story behind this sequence of posts.

I haven't researched this and can't in any sense prove it, but I imagine that the teaching of basic mathematical skills is one of the longest-term and hardest undertakings that we routinely attempt with children - well, those children who are fortunate enough to receive a systematic education as a matter of expectation, of course.

Most will speak. Many will write. A large number fall by the wayside of mathematics despite over 10 years of tuition. The notational systems of mathematics have always seemed to me moderately coherent - though undoubtedly this hasn't always been the case. If you had to learn four different notations to reach even a modest level of competence, that would only make things so much harder, wouldn't it?

It's important to understand most of the aspects of mathematics that we teach. Arithmetic is "essential" for everyday life for most people. Being able to create a Web page is not essential in the same way, but in order for "The Web to reach its full potential" it ought to be within the reach of most high school educated people to be able to create a Web presence - beyond a most basic "hello world" (with the obligatory yellow-on-purple sideways scrolling ticker, of course - so sad that that still exists.)

Priesthoods and all That

I don't think for a moment that the current set of Web technologies was deliberately created to be hard. In fact, if anything, quite the opposite. However, it's worth wondering whether there are vested interests in making tools and services that depend to some degree on it being as hard as it is. Would Web consultants, educators and others rejoice in a simpler Web more open to less skilled people? Just because this is a paranoid point of view doesn't mean we shouldn't look at it :-)

What we should be looking for is the continual deskilling of routine tasks. This allows skills to be applied to higher level and higher value activities. It used to take an expert to make a Web page at all. It still takes an expert to make a Web page that stands a half-way decent chance of rendering in a usable way across a range of delivery targets. The business of building moderately functional cross-platform Web sites is something that is beyond the capabilities even of most priests.

Where are the tools then?

It doesn't matter, though, you may say. Look, you are using Blogger to create this page and its companion pages, and to do so you haven't used any of the Web knowledge you may profess to have.

That's fair, up to a point. However, using this tool I routinely create Web pages that pull in information from a number of different sources, and which I duplicate across various destinations. I could not create those pages without a knowledge of HTML. I need that knowledge to sort out the tag soup that results from the cross-pasting necessary to make those compositions. If I don't, I end up with a mish-mash of styles, line spacing and so on that is - well - awful. And it's not just because of the <div> <br /> HTML used here.

I don't know of any widely accepted tool set for Web creation that covers more than a niche aspect of the market or use cases for Web site creation. At best we seem to have syntax aware highlighting and optional validation. Deployment and testing cycles seem extremely poorly catered for.

One possible interpretation of this situation is that since the current set of standard components was not designed for creation by tools, it turns out that creating tools for them is really hard. A "straightforward" HTML editor needs a split personality if it wants to provide both the expected wysiwyg creation experience and a desirable logical view.

(I accept that I've never used the Adobe tools which apparently are quite good. It would be nice to think that this was an area for a vibrant market and open competition, though).

(I guess, to be fair, I'm also not aware of any very good wysiwyg non-Web word processor, either. At least not one that allows me to pick up a document edited by someone else and continue editing it, oblivious of the assumptions they've made in creating it).

Starting Conditions vs Conditions for Growth and Scalability

I don't know what, specifically, Mike Champion had in mind by way of "other requirements and constraints on the Web architecture" in his quote above. I should try to ask him - I don't want to have wrongly co-opted his point in support of my own. Meanwhile, here are some further thoughts on why this - and related vestiges of the Web's beginnings - should not be considered inviolate, and should instead be regarded rather like the coccyx: interesting from a historical perspective only. The coccyx, though, isn't usually considered harmful. I'm suggesting that View Source is.

View Source may well have been a significant contributor to the Web taking off. But the Web has now taken off, and just as an airplane in flight doesn't need wheels, the Web today doesn't need View Source. Wheels get in the way of efficient flight; the current set of standard components (HTML, CSS, JavaScript and URL syntax), in their present form and together with View Source as a design principle and delivery format, get in the way of efficient creation and operation of Web pages.

Deep respect, View Source, you showed the way - may your retirement be long and happy.

Thursday, 9 February 2012

Calendar Reform and other Good Ideas

In saying that something is broken, and proposing an alternative, you open yourself up to ridicule and worse. "If it ain't broke, don't fix it" is maybe the mildest form of response. "It is broke, but can't be fixed, or isn't worth fixing" is maybe another.

Since a lot of what I plan to write about here may look a bit like Cassandra wringing her hands, wanting to fix something that other people simply don't see as a problem, it's worth starting with a discussion of some things that could be better, but aren't.

The Decimal System

We count in 10s and look at 10 as a round number. But it really is inconvenient. It's only evenly divisible by 5 and 2. How much better would it be if we counted in 12s. Divisible by 2, 3, 4 and 6. Or maybe better still, 60, whose divisors are 2, 3, 4, 5, 6, 10, 12, 15, 20 and 30. And how lucky we are to tell time in 60s.

Calendar reformists have been at it for a while - some wanted to divide the year into 10 months. Doh! How would we get nice thirds or quarters of a year?

In the UK we used to have 12 pennies in a shilling and 20 shillings in a pound. That's 240 pennies in a pound, right? And it means that each pound was evenly divisible into halves, thirds, quarters, fifths, sixths, eighths, tenths and twelfths. With 100 pence in a pound nowadays, we only get halves, quarters, fifths and tenths. That was a step backwards.
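The arithmetic is easy to check with a throwaway script (purely illustrative):

```javascript
// Which unit fractions - halves up to twelfths - divide a pound evenly?
function evenParts(n) {
  var parts = [];
  for (var k = 2; k <= 12; k++) {
    if (n % k === 0) parts.push(k);
  }
  return parts;
}

console.log(evenParts(240)); // [2, 3, 4, 5, 6, 8, 10, 12]
console.log(evenParts(100)); // [2, 4, 5, 10]
```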

I propose that we return to a system of 240 pennies in the pound.

Thanks for your idea, Jo, we'll call you, don't call us.

Keyboard Reform

It is widely said that the current keyboard layout was designed because mechanical typewriters had a tendency to jam when two adjacent keys were pressed in rapid succession. So the layout of the keyboard evolved to limit jamming, by placing apart the keys for letters that commonly occur next to each other in actual words - effectively, to slow typists down.

Now that mechanical typewriters are a thing of the past, wouldn't it be a good idea to rearrange the keyboard for maximum typing efficiency?

Yes, this has all been said and really hasn't taken off - despite the patenting of the Dvorak keyboard in 1936. In this context see also the chorded keyboard, famously demonstrated by Douglas Engelbart, who also invented the mouse (which did take off). Maybe hope is not yet lost for this project.

Calendar Reform

And returning to calendars - a subject that engages passionate support among a small number of people. It's true that it would be more convenient for some purposes if the months were of even lengths, and if the 2nd of March were always a Tuesday - we'd know where we were, wouldn't we? Of course there would be disadvantages too: if you were born on a Monday, your birthday would always fall on a Monday, which would not be much fun for people who don't like Mondays.

A decimal calendar was adopted in France, in 1792, as well as decimal time, but was abandoned in 1805, reverting to our eccentric, but accepted, Gregorian calendar.

The Lesson Being?

Accept the wrong turns in history and continue. The so-called standard gauge (the distance between the rails) for railway lines is 4 feet 8 1/2 inches (Imperial). A broader gauge would have advantages for speed and stability on main line railways. It's an accident; live with it, we can work around it.

In short, calendars and other human artefacts are messy, untidy and in the end quite complex. Any proposed improvement - in all probability - overlooks significant drawbacks and would anyway be hard to justify in terms of investment and disruption.

Er, but not so fast!

This is not about calendars, it's about the Web. It's about the untidiness of the Web today, about the importance of the Web continuing to be open-ended and the powerful "generative" tool it is today. 

It's also about inter-operability, the Web of applications, the basic failure of the Web to provide a meaningfully "working" experience, Javascript errors, usability errors, performance errors.

It's about the increasing difficulty in accommodating an increasing variety of devices, user contexts and types of content. 

Not least it's about the random nature of the technology choices that underpin what we do, how they don't cohere with each other as technology choices and how that limits the "learnability" of the technology as well as its adaptability.

If we are not going to engage in "calendar reform" - we should at least look at what the problems are with what we have today and what the possible benefits and costs of other approaches are, shouldn't we?

Wednesday, 8 February 2012

Unexpected, a nice gesture

I don't really know why, but ever since I first heard of the W3C I'd wanted to participate in it. Quite out of the blue, roughly 10 years later, I got the opportunity in 2005 when I started representing dotMobi on the Mobile Web Best Practices Working Group.

I went on to edit, co-edit or somehow to create or co-create 8 W3C documents and also to co-chair the working group. In the course of all that I got to research many Web related topics and I got to think about the Web in a way that I hadn't had the opportunity to before.

The Web is quite the most remarkable thing. It continues to grow and to expand. Its social consequences are immense. Only retrospect will tell us exactly how immense.

Despite all this, I have some concerns. They are born of the experience I mention above. There's quite a lot about the Web that could be better. There are some aspects of how it works that may prevent it from - as W3C puts it - achieving its full potential.

I've been meaning to write about those concerns ever since closing down the working group in December 2010. I didn't make a start immediately, partly because I wanted to think about what I was saying with the benefit of distance. It was also partly because, although my desire - or need - to write some of this stuff comes from quite a few moments of head-scratching over how to represent "Best Practice", by and large I have not been a participant in the proceedings that created what we have today, so it has to be said that I speak mainly in ignorance of what has been considered before. However, the fact that the Web is what it is, without all that much structured explanation of why, is certainly noteworthy.

It was partly also because it's easy to be a critic and really not so easy to propose remedies.

Then, just the other day the following popped through the post:

A rather sweet gesture I thought. And a sign. Time to start writing.

I still don't have many remedies. But then it would be kind of presumptuous of me to think that I was in an instant going to solve the world's problems. It's maybe a starting point for some discussions. I don't claim that what I write about hasn't been thought about elsewhere, or expressed more clearly elsewhere.

And I don't expect this will all be about the Web, either.


Those documents:

W3C Mobile Web Best Practices Author/Editor
W3C mobileOK Basic Tests co-Author/co-Editor
W3C Device Description Repository Simple API Lead Author/Editor
W3C DDR Core Vocabulary co-Author/Editor
W3C Content Transformation Landscape Author/Editor
W3C mobileOK Scheme co-Author/co-Editor
W3C Content Transformation Guidelines Author/Editor