Will devices of the future be just as moral (or immoral) as our friends, family, and coworkers? Will they aid us in upholding our own sense of honesty?
In her panel yesterday at South by Southwest, Genevieve Bell posed the following question: “What might we really want from our devices?” In her field research as a cultural anthropologist and Intel Fellow, she surfaced themes that might be familiar to those striving to create the next generation of interconnected devices. Adaptable, anticipatory, predictive: tick the box. However, what happens when our devices are sensitive, respectful, devout, and perhaps a bit secretive? Smart devices are “more than being context aware,” Bell said. “It’s being aware of consequences of context.”
Our current devices are terrible at determining context, especially with regard to how we relate to other people via our existing social networks. Today’s devices “blurt out the absolute truth as they know it. A smart device [in the future] might know when NOT to blurt out the truth.” They would know when to withhold information.
This vision may seem attainable in the next decade, considering the research efforts that exist in this space. However, the limiting factor for consequence awareness is the human race, and our hardwired, tribal notions of social relations.
Taking Tips at the Dinner Party
We are anything but predictable, and we struggle with context all the time: at work, at home, in our romantic relationships, even in whether we’re running late to a dinner party.
Ben McAllister and Kate Canales of frog design led a panel today called “Unwritten Rules: Brands, Social Psychology, and Social Media,” which dug into how companies have adopted vehicles such as Twitter, but have struggled to understand how to communicate through those vehicles effectively.
The crux of their panel was two scenarios that inspire a gut reaction from most people:
Scenario 1: You go to a fancy restaurant, have a fantastic evening, and in thanks give your waiter a $100 tip.
How would your waiter react?
They’d say: “Wow, you’re so generous.”
Scenario 2: At a friend’s house for a dinner party, the food was amazing and you had fabulous conversation.
You tell them thank you, and hand them $100 for their trouble.
How would your friend react? Uh, awkward.
To explain why the second scenario is so awkward, they dug into research by Steven Pinker and Dan Ariely that outlined different types of fundamental human relationships. In the physical, a.k.a. “real,” world, we have relationships based on authority, exchange, and communality. Ben and Kate’s theory is that people constantly shift between these modes of relationship, often in a matter of moments. So the people who run marketing channels for brands need to understand these shifts in behavior, and move from promoting themselves (exchange) to listening and sharing (communality).
Designing the very small gestures provided through those channels can often go a long way for a company. Compare it to when you show up at someone’s house for a dinner party and provide them a bottle of wine. No matter whether it’s $4 Chuck or a fancy Bordeaux, it will take you far—though it won’t save you from spending all evening talking about how great you are.
Giving Up on Being Honest
Can “smart devices” ever understand our intent in the range of ways we communicate with others? Can they understand when we are trying to be communal, rather than be an authority? And can they communicate in a manner that feels communal?
Genevieve noted in her talk that as human beings, we tell 2 to 200 lies a day. And while most of them are insignificant, the lies are often what smooth over friction in human relations.
But what kind of lies are these? Dan Ariely, in a somewhat unrehearsed session today with Sarah Szalavitz, walked the audience through his ongoing research into human dishonesty.
What he uncovered is that humans have a “fudge factor,” a level of dishonesty we’re willing to engage in and still consider ourselves honest. That insight alone isn’t huge, as we’ve all been caught in white lies (perhaps more often than we’d care to admit). What’s interesting is that the fudge factor is rooted in what’s considered acceptable based on context and consequence.
In one example, he ran an experiment where people were given a test with far more questions than could possibly be solved in the five minutes allotted. When time was up, the people would grade their own tests, run them through a shredder in the back of the room, then tell the facilitator how many answers they got right.
The shredder, however, was rigged not to shred the tests, so the researchers could compare what people claimed against how many answers they actually got right.
From this experiment, they saw that most people only lied just a little—if they only solved four problems, they’d say six. Makes sense, right?
But in a separate experiment, Dan tested whether people would cheat after being asked to recall the Ten Commandments. In that case, no one cheated. One finding that came out of that research was that when we are reminded of our own morality, we become more honest. But the honor code must come before we engage in an activity, not after it. Otherwise, we will be tempted to cheat.
The third experiment he related was the following: You see two empty boxes, and then a couple of dots flash on the screen within those boxes. You are asked, “Are there more dots on the right or the left?” You receive 10 cents if you say right and one dollar if you say left, regardless of the correct answer. This is repeated a hundred times with each research subject.
In the lab, they saw that people cheat a little bit through the process. But at some point, 80% of the people lose it, and they start cheating all the time. Different people switch at different points, depending on the context.
Dan called this the “what the hell” effect. In people’s minds, they’re saying: “I’m a cheat, I might as well enjoy it.”
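The dynamic Dan describes can be sketched as a toy simulation. Everything here is an illustrative assumption on my part (the lie rate, the tipping-point threshold, the behavioral model itself), not the study's actual design or data; the sketch just shows how a small fudge factor plus a "what the hell" threshold produces wholesale cheating:

```python
import random

def run_dots_task(trials=100, lie_rate=0.15, what_the_hell_after=5, seed=0):
    """Toy model of the dots task: saying 'left' pays $1.00 and 'right'
    pays $0.10, regardless of the true answer. The simulated subject
    shades an occasional answer toward 'left' (the fudge factor), and
    once their lie count crosses a personal threshold they flip to
    cheating on every remaining trial (the 'what the hell' effect)."""
    rng = random.Random(seed)
    earnings, lies, gone_rogue = 0.0, 0, False
    for _ in range(trials):
        more_on_left = rng.random() < 0.5           # true state of the screen
        if gone_rogue:
            answer = "left"                          # cheat every time now
        elif not more_on_left and rng.random() < lie_rate:
            answer = "left"                          # occasional small lie
        else:
            answer = "left" if more_on_left else "right"
        if answer == "left" and not more_on_left:
            lies += 1
        earnings += 1.00 if answer == "left" else 0.10
        if lies >= what_the_hell_after:
            gone_rogue = True                        # threshold crossed
    return earnings, lies

earnings, lies = run_dots_task()
print(f"earned ${earnings:.2f} after {lies} lies in 100 trials")
```

Varying `what_the_hell_after` per subject mimics Dan's observation that different people switch to full-time cheating at different points.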
Creating Devices that Get Creative
This made me think about whether future devices will understand these nuances of human dishonesty, and ever be able to model them accurately.
Really, are we asking too much of smart devices? Can they ever be aware of intent, of consequence, of when we say “what the hell” and take part in behaviors that may be potentially destructive? Can they let us fudge things without assuming we’ve made an error about whom we’re meeting for dinner, or that next big meeting, or a terribly scandalous rendezvous?
Dan believes that confession “is very useful for curtailing the ‘what the hell’ effect,” but can you imagine treating your device like a human being, a guidance counsellor, or a therapist? This is one of the major struggles we’re seeing in designing systems for positive behavioral change. On the other end of any critical exchange, you’ll usually find another human.
It should be obvious that in this new moral space for “smart devices,” designers must be extraordinarily sensitive and aware of the behavioral context of what we create. What may be less obvious is how to design systems that shape, accommodate, or deflect the actions of people saying “what the hell,” without turning us into robots.
This is not a technology problem, as technologies are just tools made by people (until the inevitable robot uprising). It means that “smart devices” are going to need to know when to hold their tongue. We’re going to need to trust our devices to tell stories that aren’t truthful, but instead a little creative.
Can smart devices really do that? Dan Ariely believes that creative people can tell better stories about flexing their morality, for better or for worse. But are we creative enough to make devices that understand how good we want to be as people?
via Design Mind
Conceived for people who rely on sign language and face constant difficulty communicating with the hearing population, Signtel is a new device that promises to lend them a helping hand. Normally, hearing-impaired individuals must rely on an interpreter or on written language, and neither is always handy. The sign language interpreter conceptualized by UK-based designer Viktoria Volosin converts spoken language into sign language and, in the other direction, converts sign language into spoken or written language for effective two-way communication.
It pairs speech recognition, to convert spoken words into signs, with sign language recognition, to convert signs into speech or text. This could be a groundbreaking invention that helps deaf individuals communicate more freely with others and be more spontaneous in conversation.
A few weeks ago while talking about visual perception and memory, I mentioned how the mental models your audience holds affect how they perceive your designs. Today I want to expand on the topic and consider the conceptual model of the designer as well as the interaction model, where designer and audience meet.
Before getting to the details let’s quickly define each of the 3 models.
- Mental model—how users think a system will work
- Conceptual model—how designers develop a system to work
- Interaction model—how people actually interact with a system
A mental model represents a person’s thought process for how something works. It’s built through past experiences, incomplete facts, intuition, and a general understanding of how we think the world around us works.
We all use mental models to predict how systems work. They set a context that helps shape our behavior and actions with the system and they influence our visual perception by suggesting where we should look and what we pay attention to.
For example, if you sit inside a car, you have a mental model about how that car should work.
You expect to find an ignition, which you’ll likely turn on with a key. You expect a steering wheel that you’ll turn clockwise or counter-clockwise to turn the car right or left. You expect to find gas and brake pedals as well as many other things common to most cars.
Even though you’ve never been in that particular car, it shouldn’t take you more than a few seconds to figure out how to turn it on and drive it. You have a mental model for a car, which is easily transferred from one car to another.
Mental models are fluid. We create them quickly and modify them as new information comes in. We base them on:
- Prior experiences with similar systems
- Assumptions based on general knowledge
- Observations of the new system
The first 2 allow us to form models quickly and the last allows us to modify them.
The main thing to understand is that we form mental models to help us make sense of the world and interact with unfamiliar things. Our mental models influence our perceptions and our perceptions influence our mental models.
A conceptual model represents how something is designed to work. It’s the mental model of the designer put into action.
Consider again a car. There’s no reason we need to turn a key to start the ignition. A designer’s conceptual model of a car could suggest that a push button ignition and joystick controls are better ways to operate the car.
Push button and joystick break our mental model of a car. We do have mental models for what to do with buttons and joysticks in general, and we can use them to figure out how to turn on and drive the car.
Still it’s easy to see how some or even many would have trouble operating the push button and joystick car, because conceptual model and mental model disagree.
Every design decision creates either agreement or disagreement between mental model and conceptual model.
- Disagreement—leads to a system that’s harder to learn. It will typically create user frustration, more errors, and be less usable when first encountered.
- Agreement—leads to a system that’s easier to learn. It will typically be highly usable and create fewer errors and less user frustration when first encountered. It will likely be seen as more intuitive.
Designing for Disagreement
While agreement makes a system easier to learn, it may not always be desired. Consider touch screen devices. When touch interfaces first appeared they broke the mental model for interacting with a computer.
Users typed using software instead of hardware. They swiped, tapped, and pinched without a mouse. While new and not part of anyone’s mental model at the time, all are still pretty easy to learn.
A good conceptual model allows people to predict the effects of an action. We expect to pull on a door handle. We assume things will move in the direction we swipe. A good conceptual model reveals itself through its interface.
2 principles we can use to help users predict how our designs work:
- Affordance—The physical characteristics of a design element suggest how to use it. A door handle suggests pulling. A door knob suggests turning. A button suggests pushing.
- Mapping—The relationship between design controls, their movements, and their effects on the element(s) they control. Moving a joystick to the left should result in something moving left.
We can also build in constraints to help prevent errors and generally build a forgiving design in order to encourage exploration.
Designing for Agreement
When developing a conceptual model we naturally consider our own mental model for how something should work, but as we’re designing for an audience we want to also consider the mental models they’ll likely bring to our design.
Usually we have different groups of people in our audience, each with different mental models: for example, beginner, intermediate, and advanced users.
Someone very familiar with your subject or object is going to bring a very different mental model than a complete beginner.
We can use personas to see potential mental models for our audience and then design accordingly. We can also try usability testing and simple observation of people using our designs while we’re developing them.
Interaction models are how people actually interact with a system. Communication between mental model and conceptual model occurs through the interface of the system.
Designers have complete and accurate conceptual models. However, designers have weak interaction models early on, as we don’t know in advance how people expect our designs to function. We can use personas, etc. to predict mental models, but we won’t really know them.
Users start with an interaction model based on their mental model and then refine that model based on actual use. Through experience users can gain a complete and accurate interaction model.
A system’s interaction model could be completely different from either mental model or conceptual model, though in time users come to understand the interaction model through experience.
Optimal design occurs when we create an interface where interaction model and conceptual model meet. To do this we want to:
- Use the system—To understand how a design works in practice we need to become users of the design. However, as designers, we need to be aware that we aren’t typical users, since we created the conceptual model. Using our own designs won’t always reveal problems in the interaction model.
- Observe others using the system—By watching others we again gain information about how our designs are used in practice. The advantage here is that we get to see how people unfamiliar with our conceptual model use our designs.
Designing with Conventions
Since an interaction model will begin as a user’s mental model, we can take advantage of standard mental models through design conventions.
Nothing says a link on a web page needs to be blue and underlined, but by making it blue and underlined we align conceptual, mental, and interaction models.
It’s a good idea to use conventions where possible because of this agreement between the 3 models.
However don’t force design into convention just to take advantage of the model. It’s better to have people learn to form a new mental model than require they use a mental model that doesn’t really fit the design.
Swiping pages on a touch device could have instead been a horizontal scroll bar that we tap, hold, and move, but aren’t you glad designers opted not to keep this model?
Contrast and Similarity
Consider the simple web page text link. The expectation (mental model) is clicking a link takes you to a new page. That’s at least the predominant model for a link.
The first time a link opened a new window or took you to a new place on the same page or triggered an ajax request it broke that predominant mental model.
Each of these different types of links is useful, but because they are different they should be designed differently to indicate in advance their different behavior.
Perhaps a different color for links that make ajax requests or an icon when a new window will open. The first time someone clicks those links they may still expect the default going to a new page, but by their second or third click they will have learned the different behavior.
The subtle change builds a new mental model and creates a new convention and trains our audience to use our design.
At the same time all these links share some characteristics. The standard link, new window link, and same page link all take you to a new location.
It makes sense therefore that each share some design characteristics. Maybe all 3 use a cool color and remain underlined.
Using contrast and similarity this way we can help people understand our conceptual models. Contrast with convention where there is disagreement in models and similarity with convention when there is agreement.
We all build mental models to help us predict how unfamiliar systems will work. We build them based on experience using similar systems, assumptions based on general knowledge, and observations of the new system.
Designers use their mental models to build things. Our mental model becomes the conceptual model of the thing we build.
When people use our designs they build an interaction model for how the design actually works. Over time they can develop an accurate and complete interaction model through experience with the system.
When conceptual model and mental model agree our designs are intuitive to use. When the models disagree the design needs to be learned.
We should do our best to understand the likely mental models our audience will bring to our designs. Where possible we should take advantage of conventional design patterns to indicate the model is correct. Where the model needs to be altered we should indicate that as well, by diverging from convention.
Via Van SEO Design
It’s always a good tactic to look for examples of how a particular advantage or gap has been addressed in products or services outside of the situation you’re focused on.
The problem is that the most easily conceived ideas are the most familiar ones, the ones you’ve experienced most often. As a result, more often than not, the first ideas out of people’s mouths are stale clichés—and the fundamental sin of any disruptive idea is to be a cliché.
It reminds me of Robert McKee’s advice to would-be filmmakers:
“Cliché is at the root of audience dissatisfaction…. Too often we close novels or exit theaters bored by an ending that was obvious from the beginning, disgruntled because we’ve seen these cliché scenes and characters too many times before.”
McKee could just as accurately be describing the first ideas to arise from a typical brainstorming session in a corporate boardroom. To break away from cliché-thinking, you need to develop a habit of looking for alternative ideas instead of immediately accepting the most obvious approaches.
Inspiration for alternative ideas often happens in the periphery, in analogous but not necessarily traditionally competitive categories. This is a powerful exercise, because it’s possible that you could take an idea that was developed in a completely unrelated field and directly apply it to your situation.
Think about the Nintendo Wii, whose handheld controller integrates the movements of a player directly into the video game. The inspiration for the motion controller didn’t come from looking at what other video consoles were doing; it came from a completely unrelated source: the accelerometer chip that regulates the airbag in your car.
Airbags respond to sudden changes in movement caused by accidents. Nintendo wondered if it would be possible to combine the accelerometer used by airbags with a handheld controller used to play video games. In other words, if you swung the controller like a tennis racket, could a “virtual you” on the screen swing as well?
The goal is to look closely at the unconnected example and figure out how you could apply the entire idea, or part of it, to your needs. As New York Times columnist and author Thomas Friedman puts it:
“The further we push out the boundaries of knowledge and innovation, the more the next great value breakthroughs—that is, the next new hot-selling products and services—will come from putting together disparate things that you would never think of as going together.”
via Design Mind
[This is a follow-up to Helen’s previous article on design thinking, The Seven Deadly Sins That Choke Out Innovation]
Recently, Kevin McCullagh of the British product strategy consultancy Plan organized a two-day event for executives to wrap their heads around the concept of design thinking—and, in particular, to think about how they might go about implementing it within their own organizations. Kevin invited me along to give an overview of some of the things I’ve been thinking recently. “Don’t hold back,” he advised. So I came up with a talk entitled, “Design Thinking Won’t Save You,” which aimed to outline what design thinking is *not* in order to help attendees figure out a practical way forward. Here’s an edited version of what I said:
Ladies and gentlemen, let me break this to you gently. Design Thinking, the topic we’re here to analyze and discuss and get to grips with so you can go back and instantly transform your businesses, is not the answer.
Now before you throw down your coffee cups and storm out in disgust, let me explain that I’m not here to write off design thinking. Really, I’m not. In fact, I’ve been a keen observer of the evolution of the discipline for a number of years now and I’m still curious to watch where it goes and how it continues to evolve as its influence spreads throughout industries and around the world. So to be clearer, I suppose I should say that design thinking won’t save you, but it really might help:
First, some context: Until July of 2010, I was the editor of innovation and design at Bloomberg BusinessWeek. Before that, I’d worked consistently in design journalism both here in New York and in London. The reason that I wanted to join BusinessWeek in the first place was precisely because it struck me as being the one place that had its eye on both camps, on the creative industries and on the business world writ large. And it struck me that it’s at this nexus and intersection that the thriving businesses of the future will be built.
I joined the magazine back in 2006, which was a time when design thinking was really beginning to take hold as a concept. My old boss, Bruce Nussbaum, emerged as its eloquent champion while the likes of Roger Martin from Rotman, IDEO’s Tim Brown, my new boss Larry Keeley and even the odd executive (AG Lafley of Procter and Gamble comes to mind) were widely quoted espousing its virtues.
Still, in the years that have followed, something of a problem emerged. For all the gushing success stories that we and others wrote, most were often focused on one small project executed at the periphery of a multinational organization. When we stopped and looked, it seemed like executives had issues rolling out design thinking more widely throughout the firm. And much of this stemmed from the fact that there was no consensus on a definition of design thinking, let alone agreement as to who’s responsible for it, who actually executes it or how it might be implemented at scale.
And we’d be wise to note that there’s a reason that companies such as Procter & Gamble and General Electric were held up time and again as being the poster children of this new discipline. Smartly, they had defined it according to their own terms, executing initiatives that were appropriate to their own internal cultures. And that often left eager onlookers somewhat baffled as to how to replicate their success.
This is something that I think you need to think very carefully about as you look to implement design thinking within your company. Coming up with ways to implement this philosophy and process throughout your organization, developing the ways to motivate and engage your employees along with the metrics to ensure that you have a sense of the real value of your achievements are all critical issues that need to be considered, carefully, upfront.
Designers often bristle when the term design thinking comes up in conversation. It’s kind of counterintuitive, right? But here’s why: Having been initially overjoyed that the C-suite was finally paying attention to design, designers suddenly became terrified that they were actually being beaten to the punch by business wolves in designer clothing.
Suddenly, designers had a problem on their hands. Don Norman, formerly of Apple, once commented that “design thinking is a term that needs to die.” Designer Peter Merholz of Bay Area firm Adaptive Path wrote scornfully: “Design thinking is trotted out as a salve for businesses who need help with innovation.” He didn’t mean this as a compliment. Instead, his point was that those extolling the virtues of design thinking are at best misguided, at worst likely to inflict dangerous harm on the company at large, over-promising and under-delivering and in the process screwing up the delicate business of design itself.
So let’s be very clear. Design thinking neither negates nor replaces the need for smart designers doing the work that they’ve been doing forever. Packaging still needs to be thoughtfully created. Branding and marketing programs still need to be brilliantly executed. Products still need to be artfully designed to be appropriate for the modern world. When it comes to digital experiences, for instance, design is really the driving force that will determine whether a product lives or dies in the marketplace.
Design thinking is different. It captures many of the qualities that cause designers to choose to make a career in their field, yes. And designers can most certainly play a key part in facilitating and expediting it. But it’s not a replacement for the important, difficult job of design that exists elsewhere in the organization.
The value of multi-disciplinary thinking is one that many have touched upon in recent years. That includes the T-shaped thinkers championed by Bill Moggridge at IDEO, and the I-with-a-serif-shaped thinker introduced by Microsoft Research’s Bill Buxton, right through to the collaboration across departments, functions and disciplines that constitutes genuine cross-disciplinary activity. This, I believe, is the way that innovation will emerge in our fiendishly complex times.
Just as design thinking does not replace the need for design specialists, nor does it magically appear out of some black box. Design thinking isn’t fairy dust. It’s a tool to be used appropriately. It might help to illuminate an answer but it is not the answer in and of itself.
Instead, it turns up insights galore, and there is real value and skill to be had from synthesizing the messy, chaotic, confusing and often contradictory intellect of experts gathered from different fields to tackle a particularly thorny problem. That’s all part of design thinking. And designing an organizational structure in which this kind of cross-fertilization of ideas can take place effectively is tremendously challenging, particularly within large organizations where systems and departments have become entrenched over the years.
You need to be prepared to rethink how you think about projects, about who gets involved and when, about no less than how you do things. The way that you approach innovation itself will probably need to change. This might seem like a massive undertaking, but if you’re after genuine disruption more than incremental improvement, these kinds of measures are the only way to get the results that you need.
Design thinking is not a panacea. It is a process, just as Six Sigma is a process. Both have their place in the modern enterprise. The quest for efficiency hasn’t gone away and in fact, in our economically straitened times, it’s sensible to search for ever more rigorous savings anywhere you can. But design thinking can live alongside efficiency measures, as a smart investment in innovation that will help the company remain viable as the future becomes the present.
Somehow, for a time there it seemed like executives thought that if they bought into a program of design thinking then all their problems would be solved. And we should be honest, many designers were quite happy to perpetuate this myth and bask in their new status. Then the economy tanked and as Kevin wrote in a really brilliant article published on Core77, “Many who had talked their way into high-flying positions were left gliding… Greater exposure to senior management’s interrogation had left many… well, exposed. The design thinkers had been drinking too much of their own Kool-Aid.”
The disconnect between the design department, the D-suite, if you will, and the C-suite is still pretty pronounced in most organizations. Designers who are looking to take a more strategic role in the organization, who should really be the figures one would think of to drive these initiatives, need to ensure that they are well versed in the language of business. It’s totally reasonable for their nervous executive counterparts to want to understand an investment in regular terms. Fuzziness is not a friend here. And yet, as I’ll get into in a moment, sometimes there’s no way to overcome that fuzziness. Leaps of faith are necessary. But designers should do everything they can to demonstrate that they have an understanding of what they’re asking, and put in place measurements and metrics that are appropriate and that can show they’re not completely out of touch with the business of the business, even if they can’t fully guarantee that a bet will pay off.
The two worlds of design and business still need to learn to meet half way. Think of an organization in which design plays a central, driving role, and there’s really only one major cliché of an example to use: Apple. But what Apple has in Steve Jobs is what every organization looking to embrace design as a genuine differentiating factor needs: a business expert who is able to act as a wholehearted champion of the value of design. In other words, Jobs has been utterly convinced that consumers will be prepared to pay a premium for Apple’s products, and so he’s given the design department the responsibility to make sure that every part of every one of those products doesn’t disappoint.
He is also notorious for his pickiness. I’ve talked with Apple designers who say he would scrap a project late in the game in order to make sure something was exactly as he thought it should be. Now I don’t know about you, but how often does a project come back and it’s not quite how you wanted it but it’s okay and it’s really too late to make the changes to make it great and so you go with it? I know I’m guilty of doing that. Jobs doesn’t countenance that approach. And he’s set up processes to ensure that problems are caught, early, and the designers have enough time to get back to the drawing board if necessary. This commitment to excellence has helped turn Apple into the world’s most valuable technology company.
Note too Jobs’ approach to customer research: “It isn’t the consumers’ job to know what they want.” Jobs is comfortable hanging out in the world of the unknown, and this confidence allows him to take risks and make intuitive bets that for the past decade or so have paid off every time. And he’s instilled this spirit in his team. New company leader Tim Cook is renowned for the creative way in which he worked on supplier issues.
So now we get into something of a problem of terminology, because more than likely, Steve Jobs doesn’t consider Apple’s approach to be “design thinking”. Yet he’s the consummate example of one who’s built an organization on its promise. This approach of risk taking, of relying on intuition and experience rather than on the “facts” provided by spreadsheets and data, is anathema to most analysis-influenced C-suite members. But you need this kind of champion if design thinking is to gain traction and pay off.
I once heard a discussion between the current director of the Cooper-Hewitt museum, Bill Moggridge, and Hewlett Packard’s VP of Design, Sam Lucente. Sam was talking about how design thinking had helped him and his team rework one particular product that had done badly in the marketplace into a later, more successful version. The way he told the story, design thinking meant that this couldn’t be seen as a failure, because every moment had been one of wonder and learning. My initial interpretation was a little less poetic: that in fact design thinking no more guarantees a product’s success in the marketplace than any other tool or technique.
But actually, reframing failure in terms of learning is not just a kooky, quirky thing to do; in and of itself, it may be a useful exercise. By taking the pressure off design thinking and not expecting it to be the bright and shiny savior of the world, those trying out its techniques will be empowered to use it to its greatest advantage: to help introduce new techniques, to give new perspectives, to outline new ways of thinking, or to develop new entries to market.
In fact, I would argue, beware the snake-oil salesmen who promise you’ll never take another wrong step if you buy into design thinking. While some executives have been running their businesses according to its principles for years now, the formal discipline is still pretty new, and individual companies really have to figure out how it can work for them. There’s no plug-and-play system you can simply install and roll out. Instead, you have to be prepared to be flexible and agile in your own thinking. You’ll likely have to question and rethink internal processes. For there to be a chance of success, you’ll have to decide which metrics you’ll use to judge whether a program has succeeded. And you’re going to have to figure out how to allocate resources so that an initiative even has a chance of taking off.
I know some of you are familiar with the work and thinking of Doblin’s Larry Keeley, with whom I’m working now. For a long time, Larry has been at the forefront of the movement to transform the discipline of innovation from a fuzzy, fluffy activity into a much more rigorous science. His thinking in that arena holds for design thinking too. It’s time to move beyond the either/or discussions so often entertained within organizations. This isn’t about left brain vs right brain. This is about the need for analysis and synthesis. Both are critically important, from data analytics to complexity management to iteration and rapid prototyping. But even with all of this, there’s never going to be a way to 100% guarantee success. The goal here is to be able to act with eyes wide open, to have a clear intent in mind and to have systems in place that allow you to reward success and quickly move on from disappointment—and to make sure that your organization learns from those mistakes and thus does not repeat them.
What will the world be like in the future? We spend a lot of time on such conjecture, especially in relation to the effects of climate change, but somehow, the possibilities never seem real. The future, however, is just around the corner, and cities like London could be altered dramatically by rising sea levels, drought, temperature extremes and food scarcity.
Artists Robert Graves and Didier Madoc-Jones vividly illustrate the horrifying changes that could occur in London within decades with their digital art series Postcards from the Future, which just wrapped up an exhibition at the Museum of London.
Concerned that the reality of impending climate change just wasn’t sinking in for many people, the London-based pair identified 10 iconic views of London’s greatest landmarks that are repeated again and again in postcards of the city and modified those views to reflect disastrous scenarios like ‘London as Venice’ (top image).
Buckingham Palace is suddenly hemmed in by a vast shantytown packed with poor Londoners as well as climate refugees from around the world, particularly those from the now scorched and uninhabitable equatorial regions. The ‘Gherkin’, once one of the city’s most eye-catching modern office towers, has now become a high-density slum.
But London has its own extremes: painfully dry and hot in the summer, and so cold in the winter that the Thames freezes over completely, building up long-term ice. Each winter, as the slowing Gulf Stream ushers in a new mini ice age, temperatures drop lower and lower.
“We didn’t want to create the stuff of nightmares although we did make images showing the potential disasters London could suffer,” say the artists. “We also strove to show how resourceful we could be as a capital, and how by adapting we could rise to meet the challenge of our changing environment.”