Your UI is not your father’s UI.
Back in the late 1980s, during the days of Windows 2.0, the big push was to make your applications SAA/CUA compliant. SAA (Systems Application Architecture) was IBM’s strategy for enterprise computing in the late 1980s and early 1990s; it defined three layers of service, one of which was Common User Access (CUA). CUA became the basis for the look of many graphical user interfaces under Windows, the Mac, and the like.
I even started a company to develop an add-on tool for Clipper developers (CUAccess) back in 1990, which allowed developers to create compliant user interfaces for text-based applications (mainly to mimic Windows apps in an easier environment to develop under at the time).
Microsoft then came out with what became my bible for UI, The Windows Interface Guidelines for Software Design. Following these standards was all the rage. You could create consistent user interfaces. You only needed to teach a user one application following these guidelines, and they’d know how to use all the others. Everything was supposed to conform. Developers didn’t have to worry as much about how to lay out their presentation as about what the application should do. It was all hunky-dory.
But then the rumblings started. I believe it first came out of the gaming world. “Business application UIs should be as rich as these games,” the pundits would say. People intuitively knew how to navigate through games, no matter how different they looked from one another. It was obvious that to use a weapon, you needed to pick it up. To open a door, you just needed to click on it. To view a map, just click on something that looked like it could be a map. To change armor, just look for an armor icon and click on it.
Ah! Little pictures are all we need! So everyone started using toolbars in their Windows apps. These toolbars were made up of tiny “icons,” which grew more numerous and kept getting smaller and smaller, because you still needed room for those boring forms and controls.
Ah! I know! We’ll let the users customize their own toolbars, and choose from several standard groups, and they can put them anywhere on the screen!
And then came the attack of Microsoft Word…
So Microsoft went back to the drawing board, and said “I know where we went wrong! We don’t need toolbars, we need ribbons!” And then there were ribbons…
And Microsoft looked. And they said it was good. And so they slept.
The problem is, they missed the past dozen years. The browser changed everything.
Once upon a time, the software industry was yelling, “conform, conform!” Heck, I was yelling, “conform, conform!” I admit it — I was a “pixel counter”. But although I learned about the benefits of an intuitive user interface, I now somewhat disdain those ideas. I believe we missed the point, and we’re now starting to realize it.
The true goal of a rich user experience is an intuitive and attractive user interface, one that makes people feel comfortable and even excited about using software to get their real work done. With technologies such as AJAX, Silverlight, and WPF, and tools such as Microsoft Expression Studio, the industry is finally realizing that creating satisfying user experiences takes the cooperative efforts of both developers and designers. Developers used to try to go it alone, and the earlier standards were necessary given the skill sets of the people writing those systems. But the world may be changing for the better.
When the Web started taking off, we reverted somewhat to garish user interfaces, using the excuse that the Web was, once again, the Wild West where anything goes. The industry rebelled against the conforming constraints that desktop UI development standards had imposed on us. But over the past few years, I believe we’ve started to find balance. We had our chance to play and experiment, and we’re finally finding a middle ground, so we’ll start seeing the most usable software the industry has seen so far.
At least that is my hope.