If you really think about it, design can definitely change the world.
The degree, the direction, and the ethical value of that change may vary, so what the phrase “change the world” usually means (as in “make a difference”) may rarely be accomplished. But change, in the abstract sense of “making something different”, is inherently what design does: just by existing, design is by definition changing the world.
I’m even compelled to argue that design is, by definition, “the process of changing the world”. So whenever design happens, there’s a change in the world.
If you’re not comfortable with these arguments, perhaps it’s because you’re thinking of a much narrower interpretation of “changing the world”. That, as with many design problems, comes down to a poor definition of the problem to be solved.
Perhaps, instead of uttering short, emphatic, compelling phrases like “design can change the world”, we should dedicate more time to really defining what we mean by it, as in “can interaction and visual design make a large part of the world’s population live a better life?”. The answer to that question being a resounding “Not at all. It requires much more than those disciplines to create a solution that can achieve such a goal”.
(Inspired by A New Yorker walks into a San Francisco start up…)
I read The Hamburger Menu Doesn’t Work.
I thought it a very interesting article, hinging on a topic that deserves more discussion and thought. However, I found the title rather misleading. Here are my thoughts.
I think the point is not that the “hamburger” menu doesn’t work (with the icon implicitly blamed for the misuse), but that any catch-all drawer menu doesn’t work, by hiding actions under a drawer. That includes the drawer menus we’re starting to see in larger-screen implementations, even when they’re properly labelled “Menu”.
A similar issue happened years ago with the prominence of drop-down and pop-up menus, and for the same reason: trying to cram a crap-load of sections into the smallest space possible. In my opinion, drop-down (and pop-up) menus are useful when they hold items of the same category (i.e. a list of countries, a list of references, a list of sharing services), since the content is then easier to understand through appropriate labelling (like the aisles of a well-organised supermarket). But when those same menus hold items of varied (and dubious) provenance, they lack the power to suggest, and thus effectively hide content from users. And user behaviour and focus being as flimsy as they are, visitors end up never seeing options they might well like to explore but just aren’t aware of. Imagine arriving at a supermarket with three sections, “dairy”, “produce” and “drinks”, plus a big sign that reads “other stuff”. Would you be tempted to go to such an overwhelming, under-appealing area? Who knows. And wouldn’t you get more customers to buy more by showing them more products and guiding them down more suggestive aisles? Probably so.
It makes sense even on desktop, where, for example, research we did (at an AOL property, on a large sample of the top iOS/Android segment of the US population) showed that bringing sections into the main menu, and thus making them visible, improved engagement.
In a recent article (decrying the new Apple user experience) on why Apple products are confusing so many, Don Norman mentions three core principles of good design, among which is “discoverability”. He argues that discovering (and then remembering) what an interface does is of vital importance to users. I’d say “hamburger” menus do the disservice of hiding content, and thus miss the opportunity to suggest to users what to do next or where to go next.
To me, after many years, the truest definition of IxD/IA/UX is “helping people make the next decision”. Showing them useful stuff instead of hiding it is hence better, since it helps them decide where to go and what to do next.
The big problem with the case for or against everyone learning to code is the word “everyone”.
“Many of us”, “most of those involved with design”, “an increasing amount of people”, “a growing number of the world population”, perhaps.
Just not “everyone”.
Machine writing will exist, sooner than we think.
Algorithm-driven writerbots, clumping together predefined phrases (or ones dissected from previous writings), spiced with words that computer-driven testing and big-data crunching have found to enhance “catchiness”, headed by titles that extensive testing has proved engaging and “click-worthy”.
(To be honest, current headlines on sites like BuzzFeed, HuffPost or Upworthy already feel bot-generated, almost indistinguishable from each other.)
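The clumping-and-spicing idea above can be caricatured in a few lines of code. This is only a toy sketch: the templates, the “catchy” words and the function name are all invented for illustration, standing in for what a real system would derive from testing and data crunching.

```python
import random

# Invented "phrase templates" and "catchy" words, standing in for what a
# real writerbot would mine from previous writings and A/B testing.
TEMPLATES = [
    "{hook} {topic} That Will {payoff}",
    "You Won't Believe These {number} {topic}",
    "{number} {topic} Everyone Is Talking About",
]
HOOKS = ["Shocking", "Unmissable", "Mind-Blowing"]
TOPICS = ["Design Tricks", "Menu Patterns", "Icon Facts"]
PAYOFFS = ["Change How You Work", "Blow Your Mind"]

def generate_headline(rng: random.Random) -> str:
    """Clump a random template together with randomly chosen 'catchy' parts.

    str.format ignores keyword arguments a template doesn't use, so every
    template can draw from the same pool of parts.
    """
    template = rng.choice(TEMPLATES)
    return template.format(
        hook=rng.choice(HOOKS),
        topic=rng.choice(TOPICS),
        payoff=rng.choice(PAYOFFS),
        number=rng.randint(5, 25),
    )

if __name__ == "__main__":
    rng = random.Random(42)
    for _ in range(3):
        print(generate_headline(rng))
```

Swap the hard-coded lists for phrase fragments mined from a corpus and rank outputs by click-through rate, and you have the (depressing) outline of the real thing.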
Recently a friend asked:
“Imagine a phone… Now tell me: why does a phone icon still look like a landline handset?”
Icons, as visual abstractions referring to specific concepts, come from an old paradigm: that of the mechanical (mail envelope) and electronic (landline phone set) era, where objects’ looks and shapes came from their use and function.
Now we’re in the digital paradigm, where objects are untethered from function and don’t need a defined shape; worse, being purely digital, they don’t even have a shape anymore. So most yield images that are vaguer and less recognisable than those of their mechanical counterparts.
So symbology has to borrow from the physically recognisable world, and thus from old shapes derived from old paradigms.
If you think about it, a phone is really no longer a “phone”. “Phone” is just one of its functions; my iPhone is more my personal computer (I work and play through it) than my phone (I hate talking on the “phone”).