I once worked with a UI designer who insisted it was impossible to create a product that was both within the Web Content Accessibility Guidelines (WCAG) and aesthetically pleasing. ‘It can be pretty, or it can be accessible,’ he said, ‘you can’t have both.’
The belief that good design and accessibility are at odds is surprisingly common, but it’s a tension I don’t really believe exists. In this case, my colleague viewed the WCAG Level AA standard as stifling, and felt the constraints made it impossible for him to create effective solutions. The problem with this framing is it misses something fundamental about how good design works.
One of my favourite definitions of ‘design’ is ‘solving a given problem within a given set of constraints.’ The way we solve a problem is shaped by the constraints we’re working with. The same problem under different constraints may have a fundamentally different solution, and experience has taught me that tight constraints often push you toward better solutions.
This was illustrated vividly by a project I worked on several years ago, which became the work I’m most proud of in my professional career. Our customer was based in Africa, operating in a country with pretty poor telecoms infrastructure. They had an existing website, from which we were able to pull analytics and see exactly what sorts of devices their customers were using.
The results would have sent many teams running screaming, but fortunately we were just ignorant enough not to grasp the scale of the problem we were taking on. The users were mostly on mobile, but barely any of them had iPhones. Some users had Android devices, mostly running Ice Cream Sandwich or Gingerbread. There was 3G and some 4G coverage, but it was very spotty, and the most reliable connection was EDGE. More significantly, around 40% of their customers weren’t using a smartphone at all. They were using feature phones like the Nokia 2700. These devices had no touchscreen, navigated using a D-pad, had a display that was just 240px wide, and ran Opera Mini as a browser.
With this context, we established the design constraints for the project:
The 128K Rule

The entire application (HTML, CSS, JavaScript, images, fonts) had to fit within a 128KB page budget. This wasn’t arbitrary; it represented what we could realistically load on an EDGE connection within a reasonable time. Any page requested with a cold cache had to stay within this budget.
Extreme Responsiveness

There is responsive design, and then there’s responsive design. For our project, the same codebase had to work beautifully on 240px-wide feature phones, and scale all the way up to 4K desktop displays, looking great the whole way. We had to design mobile-first, but desktop couldn’t be a second-class citizen.
Universal Compatibility

The app had to run on Opera Mini, to support those feature phones. At the time, Opera Mini provided around two seconds of JavaScript execution time, and only on load. This meant we had to keep client-side JavaScript execution to a minimum, and full server-side rendering was mandatory. The application had to run properly with full page replacement, semantic markup, and progressive enhancement for more capable devices.
Opera Mini works by proxying your requests through Opera’s servers. The server fetches the web page you have requested, renders it, and then compresses it to Opera’s proprietary OBML format. The server will wait for up to 2.5 seconds during rendering for JavaScript to execute, before the page state is freeze-dried and pushed to the device. This worked well for us as OBML made the most of the EDGE connection, improving overall loading times. But it also meant that we couldn’t rely on real-time interactivity at all.
No FOUT About It

There were some hard choices to make immediately. The first thing we discarded was webfonts, as these were bytes we simply didn’t have to spend.
font-family: -apple-system, ".SFNSText-Regular", "San Francisco", "Roboto",
  "Segoe UI", "Helvetica Neue", sans-serif;

Discarding webfonts and instead using the system font on the device had three benefits for us.
First, it meant we didn’t have to worry about a flash-of-unstyled-text (FOUT). This happens when the browser renders the text before the font is loaded, and then renders it again after loading, resulting in a brief flash of text in the wrong style. Worse, the browser may block rendering any text at all until the font loads. These effects are exacerbated by slow connections, and so being able to eliminate them completely was a major win.
Second, leveraging the system font meant that we were working with a large glyph set, a wide range of weights, and a typeface designed to look great on that device. Sure, customers on Android (which uses Roboto as the system font) would see a slightly different layout to customers on iPhone (San Francisco, or Helvetica Neue on older devices), or even customers on Windows Phone (Segoe UI). But, how often do customers switch between devices like that? For the most part, they will have a consistent experience and won’t realise that people on other devices see something slightly different.
Best of all, we got all of this at the cost of zero bytes from our page budget. System fonts were an absolute no-brainer, and I still use them today.
Going Framework Free

Jake Archibald once described the difference between a library and a framework like this: a library is something your code calls into, a framework is something which calls into your code.
Frontend web development has been dominated by frameworks at least since React, if not before. SproutCore, Cappuccino, Ember, and Angular all used a pattern where the framework controls the execution flow, and it hooks into your code as and when it needs to. Most of these would have broken our 128KB page budget before we had written a single line of application code.
We looked at libraries like Backbone, Knockout, and jQuery, but we knew we had to make every byte count. In the days before libraries were built for tree-shaking, almost any library we bundled would have included wasted bytes, so instead we created our own minimal library, named Whizz.
Whizz implemented just the API surface we needed: DOM querying, event handling, and AJAX requests. Much of it simply smoothed out browser differences, particularly important when supporting everything from IE8 to Safari 9 to Android Browser to Opera Mini. There was no virtual DOM, no complex state management, no heavy abstractions.
The design of Whizz was predicated on a simple observation: the header and footer of every page were the same, so re-fetching them when loading a new page was a waste of bytes. All we really needed to do was fetch the bits of the new page we didn’t already have.
We then handled updates with a very straightforward technique. An event listener would intercept the click, fetch the partial content via AJAX, and inject it into the page. (These were the days before the Fetch API, when we had to do everything with XMLHttpRequest. Whizz provided a thin wrapper around this.)
{
  "title": "Document title for the new page",
  "content": "Partial HTML for just the new page"
}

The AJAX request included a custom header, X-Whizz, which the server recognised as a Whizz request and returned just our JSON payload instead of the full page. Once injected into the page, we ran a quick hook to bind event listeners on any matching nodes in the new DOM.
function onClick(event) {
  var mainContent;

  event.preventDefault();
  mainContent = WHIZZ.querySelector("main");
  WHIZZ.load(event.target.href, function (page) {
    document.title = page.title;
    WHIZZ.replaceContent(mainContent, page.content);
    WHIZZ.rebindEventListeners(mainContent);
  });
}
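The Whizz source itself is long gone, but a minimal reconstruction of the WHIZZ.load wrapper, under the assumptions that the header value and the error fallback worked roughly as described above, might have looked something like this:

// A reconstruction, not the original source: fetch a partial page over
// XMLHttpRequest, flag the request with the X-Whizz header so the server
// returns the JSON payload, and hand the parsed result to a callback.
WHIZZ.load = function (url, callback) {
  var xhr = new XMLHttpRequest();

  xhr.open("GET", url, true);
  xhr.setRequestHeader("X-Whizz", "1"); // header value assumed
  xhr.onreadystatechange = function () {
    if (xhr.readyState !== 4) {
      return;
    }
    if (xhr.status >= 200 && xhr.status < 300) {
      callback(JSON.parse(xhr.responseText));
    } else {
      window.location.href = url; // fall back to a full page load
    }
  };
  xhr.send();
};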
This really cut down on the amount of data we were transferring, without needing heavy DOM manipulation or fancy template engines running in the browser. Knitted together with a simple loading bar (just to give the user the feeling that stuff is moving along), it really made navigation, well, whizz!
Imagine That

Probably the most significant problem we faced in squeezing pages into such a tiny payload was images. Even a small raster image, like a PNG or JPEG, consumes an enormous number of bytes compared to text. Text content (HTML, CSS, JavaScript) also gzips well, typically halving the size on the wire, or more. Images, however, are already compressed formats and gain little from gzip. We had already committed to using them sparingly, but reducing the absolute number of images wasn’t enough on its own.
While we started off using tools like OptiPNG to reduce our PNG images as part of the build process, during development we discovered TinyPNG (now Tinify). TinyPNG did a fantastic job of squeezing additional compression out of our PNG images, beyond what we could get with any other tool. Once we saw the results we were getting from TinyPNG, we quickly integrated it into our build process, and later made use of their API to recompress images uploaded by users.
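That upload hook is another piece of code I no longer have, but with today’s official tinify client for Node the equivalent looks roughly like this (the file paths and API key are placeholders, not values from the project):

// Recompress an uploaded PNG via the Tinify API. A rough modern
// equivalent of our upload hook; paths and key are placeholders.
var tinify = require("tinify");

tinify.key = "YOUR_API_KEY";

tinify.fromFile("uploads/avatar.png")
  .toFile("uploads/avatar.min.png")
  .then(function () {
    console.log("Recompressed avatar.png");
  })
  .catch(function (err) {
    console.error("Tinify failed:", err.message);
  });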
JPEG proved more of a challenge. These days Tinify supports JPEG images, but at the time they were PNG-only so we needed another approach. MozJPEG, a JPEG encoder tool from Mozilla, was pretty good and was a big improvement over the Adobe JPEG encoder we had been using. But we needed to push things even further.
What we came up with involved exporting JPEGs at double the scale (so if we wanted a 100×100 image, we would export it 200×200) but taking the JPEG quality all the way down to zero. This typically produced a smaller file, albeit heavily artefacted. However, when rendered at the expected 100×100, the artefacts were not as noticeable.
The left image is at 100% scale, with quality set to 0. The right image is the same image at half size.

The end result used more memory in the browser, but spared us precious bytes on the wire. We recognised this as a trade-off, and I’m still not 100% sure it was the best approach. But it was sparingly used, and effective for what we needed.
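In markup terms the trick is nothing more exotic than exporting at double the dimensions and letting the img element scale the result back down (the file name here is illustrative):

<!-- Exported at 200x200 with quality 0, but always displayed at 100x100,
     which visually softens the heavy compression artefacts. -->
<img src="promo-200x200-q0.jpg" width="100" height="100" alt="Promotional image">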
The real wins came from embracing SVG. SVG has the advantage of being XML-based, so it compresses well and scales to any resolution. We could reuse the same SVG as the small and large versions of an icon, for example. Thankfully, it was also supported by Opera Mini.
That isn’t to say SVG was all plain sailing. For one thing, not all of our target browsers supported it. Notably, Android Browser on Gingerbread did not have great SVG support, so our approach here was to provide a PNG fallback using the <picture> element.
Browsers which supported <picture> would fetch the SVG, but browsers without <picture> support (or without SVG support) would fetch the PNG fallback instead. This was an improvement over JavaScript-based approaches, as we always fetched either the PNG or the SVG and never one then the other, as some polyfills might. We were fortunate too that none of those devices had HiDPI (aka Retina) screens, so we only needed to provide fallbacks at 1x scale.
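Reduced to its essentials, the pattern looked something like this (file names and dimensions are illustrative):

<!-- SVG for browsers that understand <picture> and SVG;
     everything else falls back to the 1x PNG. -->
<picture>
  <source type="image/svg+xml" srcset="icons/search.svg">
  <img src="icons/search.png" width="24" height="24" alt="Search">
</picture>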
The larger problem we had with SVG was a more unexpected one: it turned out that vector design tools like Adobe Illustrator and Inkscape produce really noisy, bloated SVGs. Adobe Illustrator especially embeds huge amounts of metadata into SVG exports, along with unnecessarily precise path coordinates. This was compounded by artefacts resulting from the way graphic designers typically work in vector tools: hidden layers, deeply nested groups, redundant transforms, and sometimes even nested raster images. Literally, PNG or JPEG data embedded in the SVG, which you would never see unless you opened it in a code editor.
The result was images which should have been 500 bytes coming in at 5–10KB, or larger. If we were going to pull this off, we needed to very quickly become experts at SVG optimisation.
Optimising SVG: Part One

SVGO, the SVG optimisation tool, was relatively nascent at the time, but did a grand job of stripping away much of the Adobe cruft. Unfortunately, it wasn’t good enough on its own.
Many hours of experimentation took place, just fooling with the SVG code in an editor and seeing what that did to the image. We realised that we could strip out most of the grouping elements without changing anything. We could strip back most of the xmlns attributes on the root <svg> element too, as most of them were redundant. We also often found CSS transforms inlined in the SVG, which we had to figure out how to remove effectively.
When fooling with the code wasn’t enough, we started working with the designers to merge similar paths into a single <path> element, which often produced smaller files. We worked toward a goal of only ever having a single path for any given fill colour. This wasn’t always possible, but was often a great start at reducing the size of the SVG.
We would typically round path definitions to one or two decimal places, depending on what worked best visually. We found that the simpler we made the SVG, the greater the chance was that it would render consistently across various devices, and the smaller the files would get.
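To give a flavour of the difference (this is a made-up example, not one of the real assets), a typical editor export and its hand-tuned equivalent might look like this:

<!-- Before: typical editor output, with redundant namespaces,
     nested groups, and over-precise coordinates. -->
<svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"
     xmlns:dc="http://purl.org/dc/elements/1.1/" width="24px" height="24px"
     viewBox="0 0 24 24">
  <g id="Layer_1" transform="translate(0.000000, 0.000000)">
    <g id="icon">
      <path fill="#1A1A1A" d="M11.9999999,2.00000001 L21.9999999,21.9999999 L2.00000001,21.9999999 Z"/>
    </g>
  </g>
</svg>

<!-- After: one path per fill colour, rounded coordinates,
     and only the namespace the browser actually needs. -->
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 24 24">
  <path fill="#1A1A1A" d="M12 2 L22 22 L2 22 Z"/>
</svg>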
Unfortunately, it was an especially labour-intensive process which didn’t lend itself very well to automation. These days, I would be more relaxed about just letting SVGO do its stuff, and SVGO is a more capable tool now than it was then. But I still wince when I see unoptimised SVGs dropping out of Figma and landing in a project I’m working on, and will often take the ten minutes or so needed to clean them up.
Universal Minification

Minifying CSS and JavaScript has been standard practice for over a decade, but some developers question the utility of minification. They argue that once gzip/deflate is introduced, the wins from minification are trivial. Why go to all the trouble of mangling your code into an unreadable mess, when gzip offers gains an order of magnitude larger?
We didn’t find these arguments especially persuasive at the time. For one thing, on the budget we had, even saving 3–4KB was considered a win and worth our time. But more than that, gzip/deflate support was pretty spotty on mobile browsers of the time. Opera Mobile (distinct from Opera Mini) had pretty poor gzip support, and Android Browser was reported as inconsistent in sending the required Accept-Encoding content negotiation header. (In hindsight, perhaps this reported inconsistency was overstated, or even FUD, but even if that’s true, we didn’t know it then.)
Introducing minification prior to compression meant that even if the client did not support gzip or deflate encoding, they still enjoyed reduced payloads thanks to the minification. We were using Gulp as our build tool, which at the time was the shiny new hotness and presented a code-driven alternative to Grunt.
Gulp’s rich library of plugins included gulp-minify-css, which reduced CSS using the clean-css library under the hood. We also had gulp-uglify to minify the JavaScript. That was effective in reducing the size of our assets, but with only 128KB to play with, we were always hammering home this mantra that every byte counts. So we took things one step further and added minification to our HTML.
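The build setup itself was nothing fancy; a stripped-down gulpfile along these lines captures the idea (task names and paths are illustrative, not the original build file):

// Simplified Gulp 3-era gulpfile: minify CSS with clean-css via
// gulp-minify-css, and JavaScript with UglifyJS via gulp-uglify.
var gulp = require("gulp");
var minifyCSS = require("gulp-minify-css");
var uglify = require("gulp-uglify");

gulp.task("css", function () {
  return gulp.src("src/css/*.css")
    .pipe(minifyCSS())
    .pipe(gulp.dest("dist/css"));
});

gulp.task("js", function () {
  return gulp.src("src/js/*.js")
    .pipe(uglify())
    .pipe(gulp.dest("dist/js"));
});

gulp.task("default", ["css", "js"]);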
I don’t know that anyone is routinely doing this these days, precisely for the reasons outlined above. Gzip/deflate gets you bigger gains, and HTML (unlike JavaScript) doesn’t lend itself to renaming variables, etc. But there were a few techniques we were able to use to reduce the payload by even a few hundred bytes.
There were early wins from replacing any Windows-style newlines (\r\n) with UNIX-style ones (\n). We were also able to strip out any HTML comments, excepting IE conditional comments, which had semantic meaning to that browser. We could safely remove whitespace from around block-level elements such as <div>, <p>, and <ul>. Multiple whitespace around inline elements, such as <a>, <span>, and <em>, was rationalised to just a single space.
This often left us with HTML that was smaller, but all on one line. That turned out not to be well tolerated by some browsers, which disliked the very long lines, so we introduced an additional newline before the first attribute of every element, in order to break things up.
The HTML minifier also had to ensure it didn’t fool with the whitespace within inline scripts, inline styles, or <pre> and <textarea> elements, but overall it bought us a few extra kilobytes.
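Our minifier hasn’t survived, but a heavily simplified sketch of the core passes might look like this (it deliberately skips the careful protection of <pre>, <textarea>, inline scripts, and inline styles that the real thing needed):

// A heavily simplified sketch of the HTML minification passes described
// above; not the original code, and it ignores whitespace-sensitive
// elements which the real minifier had to protect.
function minifyHTML(html) {
  return html
    // Normalise Windows-style newlines to UNIX-style ones.
    .replace(/\r\n/g, "\n")
    // Strip HTML comments, but keep IE conditional comments.
    .replace(/<!--(?!\[if)[\s\S]*?-->/g, "")
    // Remove whitespace between tags entirely.
    .replace(/>\s+</g, "><")
    // Collapse any remaining runs of spaces and tabs to a single space.
    .replace(/[ \t]{2,}/g, " ")
    // Re-introduce a newline before the first attribute of each element,
    // so no single line grows unmanageably long.
    .replace(/<([a-zA-Z][a-zA-Z0-9-]*)\s+/g, "<$1\n");
}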
The Brand Police

The first demo to the customer went incredibly well, and they were thrilled with the work we had done. However, they had one important piece of feedback. Their brand team had just spent a significant amount of time and money with Saatchi & Saatchi building up a new brand identity and advertising campaign for the company. This featured a specific typeface, with a thick border around each letter.
This is not the font or the style, but it gives a flavour of what we were asked for.

Despite our protestations, they insisted that we use this as the primary heading style across the whole site.
Our initial thought was to return to webfonts. We would still leverage system fonts for body text and subheadings, but could we perhaps use a heavily subset webfont to render our titles? We found ourselves back in FOUT territory, wondering how expensive it might be to inline the font as base64 to avoid FOUT issues.
Unfortunately, it quickly became clear that even this wasn’t going to be an effective approach, as CSS at the time didn’t have a text-stroke property to render the thick outline from their brand guide. Being unable to persuade the customer otherwise, we had to render this using images.
The existing techniques we had developed worked pretty well. We could provide a heavily optimised PNG of the title (with alt-text set appropriately) alongside an SVG of the same for supported devices. But once again, we were stymied by the need for a stroke around the letters.
SVG supports stroke, but unfortunately the brand guide required the stroke to feature only on the outside, and SVG strokes are always centred on the shape’s path. Our designer found a way to work around this, faking the stroke instead by overlaying two shapes. The base shape was a few points larger, and rendered in the stroke colour. Overlaid on top were the letters themselves, making it look like an outer stroke.
However, this doubled the size of every SVG title we had, as each letter now required two shapes. We had to do better.
Optimising SVG: Part Two

We had learned enough about the structure of SVG files to be able to tackle the problem in a smarter way, one which leveraged shape definitions to avoid doubling up the data.
SVG supports a <defs> element, where you can define a reusable path to reference later on. Elements inside <defs> are not rendered, but they can be given a unique name and referenced by a <use> element. To create the final SVG titles, we first defined the shape for the letters as a <path> within <defs>. We then referenced this path twice. The first time, we applied a heavy stroke to the <use> element, double what we needed, but centred on the path so it extended the correct amount outside the shape’s boundary. The second time, no stroke was applied, but the shape was layered over the top, concealing the unwanted ‘inner’ portion of the stroke.
This gave us what we needed visually, at the cost of only a few extra bytes to wrap our path in <defs> and reference it with <use>.
The same path rendered in two ways, overlaid on itself, gave the effect we wanted
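A stripped-down illustration of the structure looks like this (a simple triangle stands in for the real lettering paths, and the colours are arbitrary):

<!-- The lettering path is defined once in <defs>, then referenced twice:
     first with a double-width centred stroke, then unstroked on top,
     hiding the inner half of the stroke. -->
<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 64 64">
  <defs>
    <path id="title-text" d="M32 8 L56 56 L8 56 Z"/>
  </defs>
  <use xlink:href="#title-text" fill="none" stroke="#003366" stroke-width="8"/>
  <use xlink:href="#title-text" fill="#ffffff"/>
</svg>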
The Unexpected Results

The final product was lightning-fast even on the weakest data connections and most underpowered devices. Showing it off at trade shows, we found that competing products were still displaying a loading spinner when our application was already loaded, rendered, and running.
But perhaps the most remarkable thing was that it worked on nearly everything we tried. At first, as a bit of fun, we tried running it on the text-based browser Lynx. To our delight, it ran almost perfectly. The D-pad navigation translated very well to keyboard-based navigation, and our choice of small payloads, semantic markup, and progressive enhancement meant that Lynx handled it like a champ.
There were a couple of minor bugs, but they turned out to be simple enough to fix. Once we saw it working on Lynx, we wanted to see it running on everything. It was perfectly usable on the PSP and PlayStation 3 browsers (though Sony, we discovered, renders certain elements using the PlayStation font, regardless of your CSS!) It even ran on the weird webOS browser installed on the television in the office.
Of course, it didn’t always work first time on every device; we’d usually come away with a short bug list and have it patched by the afternoon.
It also worked on every creaky old flip-phone we could try it on, provided it ran Opera Mini. I once spent a fun afternoon in a cell phone store in Clearwater, Florida trying our application on every phone they had.
Equally, it ran in a variety of network conditions. It ran great on dodgy airport wifi; it worked well on a speeding train, even as the device had to hop from mast to mast. It ran on the crowded backhaul of a large conference centre and was even decently usable on dial-up (we checked).
The Paradox of Constrained Design

The constraints on this project didn’t limit our creativity; they channelled it toward solutions that worked universally rather than just for the privileged few with fast connections and the latest devices. A persistent fallacy in web development circles is that customers have up-to-date devices and fast Internet. But even someone with the latest iPhone has a poor connection when they’re on the edge of signal at a train station, or at a bus stop in the rain.
At no point did the constraints make the product feel compromised. Users on modern devices got a smooth experience and instant feedback, while those on older devices got fast, reliable functionality. Users on feature phones got the same core experience without the bells and whistles.
The constraints forced us to solve problems in ways we wouldn’t have considered otherwise. Without those constraints, we could have just thrown bytes at the problem, but with them every feature had to justify itself. Core functionality had to work everywhere, and without JavaScript crutches, proper markup became essential.
This experience changed how I approach design problems. Constraints aren’t a straitjacket, keeping us from doing our best work… they are the foundation that makes innovation possible. When you have to work within severe limitations, you find elegant solutions that scale beyond those limitations.
As I often point out to teams I’m working with, the original 1993 release of DOOM weighed in at under 3MB, while today we routinely ship tens of megabytes of JavaScript just to render a login form. Perhaps we can rediscover the power of constraints, not because we have to, but because the results are better when we do.