Innovation focuses on creating radical change in existing fields. Innovation aspires to shape the future using the tools of the present. While innovation is one of the ways we can “invent the future”, it is also shaped by existing circumstances and changes in the world. With increasing environmental, economic and political uncertainty, it should be apparent that technological innovation alone cannot “fix” things. Simple “solutions” usually create unintended consequences. Creating anything in such turbulent times brings with it a sense of agency, but also responsibility. When we intervene in complex systems, our future-shaping actions are hard (or perhaps impossible) to predict. How can engineers, entrepreneurs and technopreneurs embrace complexity and uncertainty, in order to act in meaningful ways, whatever the future may bring? This article includes a series of suggestions, propositions and open questions. They’re based on our work with futures and grounded in our experience at FoAM, a transdisciplinary network of labs for speculative culture.
“Innovation matters”, according to the EPFL website, because “Innovation improves the quality of life and provides solutions that help deal with grand societal challenges.” How can innovation truly matter in a world marked by increasing complexity and uncertainty? A world that is messy, unruly and unpredictable. A world in which age-old constants can morph unexpectedly into stochastic variables. A world which is in need of radical change and innovation. However, it is also a world where there is a healthy fear of change, economic precarity and rising social inequality. We have front-row seats to accelerationism and mass extinction. It’s a beautiful and terrifying world, filled with both hope and despair. A world that cannot be reduced to a neatly ordered PowerPoint presentation. It refuses to conform to carefully laid out plans, even those based on extremely well-researched strategies. Life is in constant motion. It is non-linear and multilayered. Sometimes it matches up with your expectations, sometimes it doesn’t. How can you, in this world, be sure that any innovation improves quality of life? That it does so for everyone? It’s impossible. Your work could improve the quality of life for some people, some of the time, in a specific context. And that’s already something amazing. Yet it might simultaneously reduce the quality of life for some other people, or at a later time, or inadvertently break down, causing problems in different contexts.
Innovation, like any other human endeavour, is a situated practice. It is grounded in your experience as innovators and in the context in which you are working. In your backgrounds, in your personalities, your interactions with other people and situations. Even the most straightforward technological innovation will have your own assumptions about the world built into it. For example, some years ago we met an engineer who proudly showed us his new invention, a haptic interface for “Chicken Touching” at a distance. It included a harness for the chicken and either gloves or a chicken “doll” for the human. The haptic feedback of touching could be felt both by the chicken and the human. While this might sound like a case of animal cruelty, the young engineer designed it with the best intentions. He explained that he comes from a remote village in China where he grew up surrounded by chickens. He was missing his chickens and he was sure that the chickens were missing him. Furthermore he assured us that this is a widespread concern among many immigrants who have left rural life as students, refugees or economic migrants. His invention worked well for him and his friends, so he was convinced that it was going to improve the mental health and wellbeing of separated humans and animals alike, all over the globe. Since the humans would be happier, they would work more efficiently, leading to increased economic growth and productivity. All due to the simple innovation of Internet Chicken Touching.
Incorporating your assumptions into the process of innovation doesn’t have to be a problem, as long as you are aware that your assumptions are not generalisable facts, and, as long as you are willing to seriously take other perspectives into account. One such perspective is that innovation has become fetishised for its own sake, with little regard for the underlying ethics, politics and worldviews that enable it. Innovators can perpetuate unsustainable lifestyles and social injustices, often without even being aware of it. For example, the household innovations of the 1950s, such as the vacuum cleaner or the dishwasher, certainly improved the quality of life for the housewife, without questioning a social order where women were expected to be housewives. The automated dishwasher was patented in 1886 by Josephine Cochran, in her search for alternative ways of washing her heirloom china after some precious dishes were chipped by reckless servants. However, the dishwasher only became a household fixture with the ready availability of water heaters in the 1950s. Yet neither Josephine nor the engineers of the 1950s seriously questioned the availability of servants and housewives.
IBM, through a German subsidiary, provided “punch card solution technology” during the second world war which was pivotal in the Nazis’ efforts to identify, isolate, and ultimately destroy the country’s Jewish minority. More recently we have Facebook and Cambridge Analytica, selfie-related accidents, and governments worldwide spending more on fossil fuel subsidies than on healthcare.
If you’re at all interested in the internet of things, you cannot afford to ignore @internetofshit on Twitter. Here’s a recent thread commenting on an article in the New York Times, about IoT technologies being repurposed for domestic abuse…
Test your innovation by exploring all the ways in which things can go wrong.
“Don’t simply focus on what would be ideal or critique the status quo,” says technology scholar Danah Boyd. “Genuinely examine how what you’re seeking could also be corrupted and abused. I believe, more than anything, that deep empathy and self-reflection is critical for us to build a healthier future.”
In a world of context-aware technologies, we could benefit from more context-aware innovation. If innovation is about introducing new technologies, new methods, and new approaches to old questions, it should also ask new questions to solve current problems from new perspectives.
If you want innovation to matter, you must consider not only its direct impact, but also its reverberating implications and unintended consequences in the world. Design for how your innovation can go wrong. What does User Centred Design look like when “The User” is a genocidal dictator or a psychopathic CEO with the resources of a small nation? If such possibilities are not taken into account during planning, you may face consequences ranging from death and destruction to the minor banalities of evil once your innovation is in production. On the other hand, if an innovation can help prevent its own misuse, it’s more likely to remain a force for good.
Innovation aims to shape the future using the tools of the present. In the process the tools themselves are inevitably transformed. A tool designed to shape a particular future is in turn influenced by the visions of that future. This can create positive feedback loops, but it can also be a dangerous path. It is essential to question a default future, the presumption that the future can only go a certain way, that it will be mostly like the present, just a little bit better, faster and brighter. The default future is often assumed without being critically examined or tested in practice. Some Asian cities, for example, were planned based on the ideals of western car-centric cities which are now becoming obsolete. When urban planners transplant ideas from another continent without taking local culture into account, the result can be the creation of slums, the disappearance of a healthy street culture and the loss of green public spaces.
Ask yourself — Where does my image of the future come from? Is it an image I’ve created, borrowed or been given? Is this a desirable future? Is it mine, or is it someone else’s future? If you attempt to innovate without challenging your own assumptions about the future, your work will likely perpetuate the same problems you may be trying to solve. Without discussing what the future might be like, you might be working towards a future your users don’t want, or a future that will never happen. Remember the promises of jetpacks, flying cars, Atomium-shaped homes, space elevators, Xanadu, or immortality… As much as transhumanists would like to believe we’re headed exponentially towards a technological singularity, we’re living in a world with more of the character of cyberpunk dystopias. The future is always messier than we think.
Innovation requires innovative approaches to futures. You will notice that we use the plural, futures. Innovation can matter to more people in multiple contexts if we work with the understanding that there is not one but many possible futures.
So how do you challenge your default future?
“In our accelerated world,” says Beth Comstock, “we’re best served by taking stock of our assumptions and transforming as many as possible into hypotheses. (…) [T]wo questions I try to ask more often are ‘What’s the hypothesis?’ and ‘How will we know if it’s true?’ Thinking this way takes the pressure off, because we don’t feel like we have to know something that isn’t yet knowable. We’re free to let the future be the future.”
Once you are aware of your hidden assumptions and develop them into hypotheses, it’s possible to design experiments to test their validity. You can get a better empirical understanding of the intricate complexities of change. Instead of attempting to bend the world to fit your ideas, look for ways you can make your innovation truly matter in a changing world, today.
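As a sketch of what such an experiment might look like in practice, here is a minimal permutation test in Python. The scenario, cohorts and numbers are entirely invented: we state a hypothesis (“the new onboarding flow improves completion”), gather data for two cohorts, and estimate how often a difference at least this large would appear by pure chance.

```python
# Hypothetical example: does a new onboarding flow improve completion?
# A permutation test estimates how surprising the observed gap is if
# cohort labels were assigned at random. All data here is made up.
import random
from statistics import mean

def permutation_test(a, b, n_perm, rng):
    """Approximate two-sided p-value: the fraction of random relabelings
    whose mean gap is at least as large as the observed one."""
    observed = abs(mean(a) - mean(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        gap = abs(mean(pooled[:len(a)]) - mean(pooled[len(a):]))
        if gap >= observed:
            hits += 1
    return hits / n_perm

rng = random.Random(1)
old_flow = [0, 0, 1, 0, 1, 0, 0, 1, 0, 0]  # 30% completion (invented)
new_flow = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # 80% completion (invented)
p = permutation_test(old_flow, new_flow, n_perm=2000, rng=rng)
print(f"approximate p-value: {p:.3f}")
```

A small p-value suggests the difference is unlikely to be chance alone; a large one means the hypothesis isn’t supported yet, and the experiment, not the assumption, has told you so.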
“We’re solarpunks because the only other options are denial or despair,” says Adam Flynn. “Solarpunk is about finding ways to make life more wonderful for us right now, and more importantly for the generations that follow us — extending human life at the species level, rather than individually. Our future must involve repurposing and creating new things from what we already have. Our futurism is not nihilistic like cyberpunk and it avoids steampunk’s potentially quasi-reactionary tendencies: it is about ingenuity, generativity, independence, and community.”
As solarpunk illustrates, innovation isn’t necessarily always about progress — at least not in the modernist sense of progress as linear growth. Sometimes the most innovative thing you can do is nothing at all. Other times, you might create exactly what you want, developing the most innovative technology imaginable, only to realise that it’s completely useless until the worldview and the society around it changes. Sometimes we can forget that innovation isn’t an end in itself. It’s a process, a means to an end.
Take for example Project Drawdown, a wide-ranging effort to evaluate solutions which address climate change. Some of the solutions are innovative, others are more concerned with changing mindsets and policies. “The research revealed that humanity has the means and techniques at hand,” claim the founders of Drawdown. “Nothing new needs to be invented, yet many more solutions are coming due to purposeful human ingenuity.”
It’s interesting to look at the list of Drawdown goals, which ranks the 100 most important areas of innovation for reversing global warming. The top ten of course include new solutions for solar and wind energy, alongside innovations around refrigeration, plant-rich diets, girls’ education, tropical forests, food waste, silvopasture and family planning. Drawdown advocates understand that what we need is an ecology of practices, rather than single solutions. Innovation can only thrive in a healthy environment.
Thinking about context
While innovation is one way to “predict the future by inventing it”, innovation is also shaped by the circumstances of the world around it. Understanding the context in which we innovate is essential for making innovation matter. Contextualisation takes externalities into account.
“Contextualization,” says futurist Scott Smith, “helps not just in making enormous leaps or moonshots, or finding breakthrough innovations alone, but instead it helps to build a richer, more valuable picture of these innovations within more complex future worlds. By considering the context, your innovation strategy can better leverage emergent opportunities and surface risks that may not even be apparent today.”
How do you know what’s worth disrupting? And at what cost? By considering the wider context we can become aware of risks that might otherwise be missed. Technological innovation looks quite different when we attempt to understand it from a wide range of social, political, economic and environmental perspectives. A “solution” can take on new dimensions when we consider the many possible causes behind the problem. When we design for situations we can’t anticipate. If we aren’t willing to consider underlying causes or wider systemic issues, a simple solution can lead to devastating consequences.
Let’s look at two very different examples of the importance of contextualisation in engineering: the Boeing 737 MAX disaster, and the solar engineering course at the Barefoot College in Rajasthan.
We assume most of you have heard about the recent 737 MAX crashes and the resulting grounding of the planes. In short, the two aircraft that crashed experienced upward and downward speed fluctuations, the nose pitched downward and the pilots were unable to regain control. It was determined that both planes had an automated anti-stall flight control system (“MCAS”) that was overcorrecting. Since the planes did not have warning lights installed for this situation, the pilots had no way of knowing what the MCAS was doing and there was no way of shutting it down.
The opinion machine cranked up and quickly blamed the crashes on a software problem. However, considering the wider context we can see a cascade of compounding problems. It began with an economic and environmental problem: the existing engines were using too much fuel. The solution was to install more efficient engines with bigger fans. Since the engines were larger, this solution led to problems with the airframe. It was more economical for Boeing to use the existing 737 airframe rather than building, testing and approving a new one. But the existing airframe did not have enough clearance for the new engines. As the design of the airframe couldn’t be modified, the solution was to mount the engines higher. This in turn changed the aerodynamics of the plane. The higher mounting of the engines meant the airframe no longer had sufficiently stable handling at high angles of attack. The new planes could therefore not be certified. Boeing then designed an anti-stall system to electronically correct this handling deficiency of the aircraft. The design of this system had to be as simple as possible in order to fit the existing systems architecture, to reduce the amount of rework for the engineers and to minimise any new training for pilots and maintenance crews. The simplest fix was to add features to the existing Elevator Feel Shift system. Like the old system, the MCAS relies on non-redundant sensors to decide how much to correct the angle of attack (AoA). But unlike the Elevator Feel Shift system, the MCAS could make bigger “nose down” adjustments.
On both flights that crashed, the non-redundant sensors were unreliable and gave incorrect readings. Boeing does sell an option package with an additional AoA vane and an AoA warning light to let the pilots know when there is a problem. Neither of the two aircraft that crashed had this additional package installed. No 737 MAX with the warning system installed has ever crashed. To compound the problem, there were human failures, since the pilots were not sufficiently trained on the new system. On the Lion Air flight, the pilots had not even been told about the MCAS, and had not done any simulator training for this potential failure. Furthermore, the previous crew had experienced similar problems but didn’t record them in the maintenance logbook.
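The fragility of relying on a single sensor can be illustrated with a toy sketch. This is not Boeing’s actual design: the threshold, function names and voting rule are all invented. The point is only that a control law trusting one sensor lets one faulty reading drive the output, while a simple median vote over three redundant sensors masks a single outlier.

```python
# Toy illustration (hypothetical thresholds and names, not real avionics):
# why a single angle-of-attack (AoA) sensor is a fragile input for a
# control decision, and how median voting over redundant sensors helps.
from statistics import median

MAX_SAFE_AOA = 15.0  # invented stall threshold, in degrees

def single_sensor_command(aoa_reading: float) -> str:
    """Trusts one sensor: a faulty reading directly drives the command."""
    return "nose down" if aoa_reading > MAX_SAFE_AOA else "hold"

def voted_command(aoa_readings: list[float]) -> str:
    """With three redundant sensors, the median ignores one outlier."""
    return "nose down" if median(aoa_readings) > MAX_SAFE_AOA else "hold"

# One stuck vane reports an absurd 74.5 degrees while the true AoA is ~5.
faulty, true_a, true_b = 74.5, 5.0, 5.1

print(single_sensor_command(faulty))            # the faulty value wins
print(voted_command([faulty, true_a, true_b]))  # the outlier is outvoted
```

The voted version is not “the fix” for the MCAS; it simply shows how redundancy changes the behaviour of the same decision rule under a single sensor failure.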
Without looking at the wider context, we might be tempted to think the 737 MAX tragedies happened due to the MCAS software failure or a lack of hardware redundancies. Yet when we zoom out, it becomes obvious that the software was only a small piece of the puzzle. No matter how brilliant a technological innovation might be on its own terms, in a complex system its success relies on how it interacts with other internal components under unpredictable external influences.
A very different approach to contextualisation can be found in the development of a solar engineering course at the Barefoot College in India. To begin with, the core concern was to provide education for girls in rural Rajasthan. Even in places with enough schools, the number of girls attending classes was alarmingly low. Researchers found that while boys were sent to school, girls were expected to help at home and in the fields. The only time the girls had for formal education was in the evening, after all the chores were done. Night schools therefore attracted a higher number of girls. However, as many villages were without electricity, in the evening there was usually not enough light to learn by.
In this case an education problem called for an engineering solution. The villages weren’t on the electrical grid, but they had plenty of sunlight. So the Barefoot college started a solar engineering course. The Barefoot engineering lab focuses on designing and developing systems which utilise solar power to provide ambient lighting and cooking facilities for remote, off-the-grid villages. Lighting is the most crucial part of their work, as it allows for night-time education, an important aspect of self-empowerment for the villagers in poor rural areas of India and other parts of the world.
People are trained to assemble electronic circuits using a simple “look and match” technique. This is important, as many of the students are illiterate women. The programme became so successful that it spread beyond the borders of India. Visual learning has proven to be a scalable technique, allowing people who speak different languages to learn together. The course is designed to be rapid, comprehensive and hands-on. It is also designed to spread informally, through peer-to-peer mentoring. People with no prior education carry out a whole series of engineering tasks, from constructing electronic components (such as inverters and transistors) to putting together complex PCBs.
After six months the students return to their villages, electrify them and help set up student-governed night schools. The previously disadvantaged women often become sought-after engineers, setting up small businesses and travelling around the region to help other villages and teach other women. What began as a social problem led to innovation in technology, education and entrepreneurship, contributing to a healthier society. At the Barefoot College a solar lantern is much more than a technological innovation. It is part of a larger ecosystem of solutions that lead to improvements in women’s emancipation, reduction of poverty and increases in literacy. The Barefoot engineers are a successful example of situated innovation, where a simple technology applied in an appropriate context enabled a cascade of multiple benefits.
Engineers are in high demand in today’s techno-materialist society. You are high value assets. As such, you have the agency to challenge the ends your innovations are used for. When you collaborate across disciplines and cultures, aware of the wider context and each other’s assumptions, you have the power and the responsibility to make your innovation meaningful today and for generations to come.
Working with complexity
In a globalised world, the contexts in which most of us live and work are increasingly complex and entangled. Even if your innovation focuses on a single solution for a seemingly isolated problem, once it is out in the world, it will become part of a complex ecology of different systems and stakeholders. When you’re engaging with a complex system, unexpected things will always happen.
For example, in 1935 the Bureau of Sugar Experiment Stations introduced cane toads from Hawaii to Australia, in an attempt to solve the problem of native beetles that were damaging sugar cane crops. 102 cane toads were released into the “wild”. Not only is there no evidence that the toads reduced the number of beetles, but as they secrete a poison, they continue to contaminate wells and ground water, spread diseases and disrupt local biodiversity. Since the cane toads had no predators in Australia, their population has grown exponentially to over 200 million today, making eradication near impossible.
A more current example of a reductionist approach to complexity is a drone designed to replace the bees dying off due to colony collapse. While this may seem like an easy fix to increase pollination, it ignores the rest of the ecosystem and will inevitably introduce new problems. Futurist Jose Ramos describes this mindset as “A lack of fundamental understanding of the complexity of biological systems. An inability to see humans as part of the web of life rather than engineers on it or masters of it.”
This does not instill confidence in any of the current geoengineering schemes, almost all of which involve a single large-scale intervention to “fix” the Earth’s climate: pouring thousands of tonnes of iron filings into the sea, spraying sulphur compounds into the sky to dim the sun, and so on. Based on the current scientific consensus, the most effective thing we could do to manage climate change is to decarbonise the global economy, but that’s a bit more of a challenge. It requires approaching the problem from a whole-systems perspective, tackling causes rather than merely dealing with the symptoms. The scope of the problem includes social and economic systems, individual and collective behaviours, mindsets, lifestyles, worldviews and even the mythologies underlying the culture of endless consumption and extraction.
So how do we deal with problems of complex systems? We can think about relationships, rather than single solutions. We can acknowledge the interdependence between our small contribution to a system, its many existing and emerging parts, along with the wider environment in which the system exists. Stable inputs, isolated interactions and outputs that behave predictably in simple systems are likely to behave very differently when faced with chaotic behaviours, emergent properties and self-organisation. Modelling and prediction can only take us so far. Working with complex systems requires a capacity for open-ended experimentation, tinkering, feedback and adaptation.
How can we balance long-term visions with short-term responses, such as adaptation, resilience or revolution? How can we keep our visions alive, while also responding appropriately to unexpected change? An answer may lie in tighter feedback between vision and adaptation, where knowing how to respond to a situation emerges from iterative prototyping of a wide range of futures and failures. Such “visionary adaptation” shows that we can go further than making things resilient, towards what Nassim Taleb calls “antifragility”.
An antifragile system (such as evolution or international air travel) grows stronger when faced with uncertainty and adversity. The overall safety and reliability of air travel has been due to improvements made by investigating and learning from every aircraft accident, whether minor or catastrophic. At a smaller scale, Netflix uses their Chaos Monkey to perform intense randomised stress-testing of infrastructure to help prevent catastrophic failure. “By building a server architecture that expects failure, the system as a whole can learn how to withstand bigger and tougher obstacles even if they don’t know exactly when or how they will occur in real life.”
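The chaos-monkey idea can be sketched in a few lines. Chaos Monkey itself terminates real cloud instances; this toy `Cluster` class and its availability rule are invented for the illustration: randomly kill instances in a simulated cluster and check after each failure that the system still serves.

```python
# A minimal, hypothetical sketch of chaos-monkey-style failure injection.
# This is not Netflix's tool; it only mimics the principle: inject random
# failures deliberately and assert the system degrades gracefully.
import random

class Cluster:
    def __init__(self, n_instances: int):
        self.alive = set(range(n_instances))

    def kill_random_instance(self, rng: random.Random) -> int:
        """Terminate a randomly chosen live instance (the 'monkey')."""
        victim = rng.choice(sorted(self.alive))
        self.alive.discard(victim)
        return victim

    def can_serve(self) -> bool:
        # Invented availability rule: any surviving instance will do.
        return len(self.alive) > 0

rng = random.Random(42)  # seeded so the run is reproducible
cluster = Cluster(n_instances=5)

# Inject failures one at a time; availability must survive each one.
for _ in range(4):
    cluster.kill_random_instance(rng)
    assert cluster.can_serve(), "cluster lost availability too early"

print(f"{len(cluster.alive)} instance(s) still serving")
```

The value is not in the simulation itself but in the habit it encodes: failure is treated as a routine input, so every component is forced to be written with failure in mind.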
Taleb suggests several strategies for antifragility, starting with being curious and getting out of your comfort zone. Considering that you can’t rely on predictable outcomes, he suggests maintaining a range of options (which he calls “optionality”), conducting lots of small experiments and committing to tinkering. Further, he cautions not to always trust data, and to collaborate using approaches that combine both long- and short-term strategies. While innovation can be part of an antifragile approach, it is just as important to respect the old, to understand things that have persisted over time and survived. Yet we must also take into account the things that have failed, gone extinct or fallen into ruin. This negative information (or “via negativa”) can provide valuable insights about the wider context of success. This may sound reactionary, but in complex systems disruptive innovation is rarely the only answer.
How do we respond to the fact that life on Earth is threatened by mass extinction? Innovative green tech could be as important as learning indigenous approaches to sustainable agriculture. We know that as individuals we can’t provide an adequate response to address the scale of this threat. At the same time, every response has the capacity to change the conditions for the future.
In the words of philosopher Adam Nocek, even if “the problematic field doesn’t go away, it generates new conditions for learning and responding. [We can learn to craft] responses that are always local, always situated, and always risky. Each adjustment, each pull, changes the nature of the composition of the problem, which is why attention and care are so essential to learning from problems. With one wrong adjustment, the field of potential action changes and the milieu can become “poisoned”. One must always be attentive to dosages.” We need to do more than just come up with solutions, we need to figure out the amounts and timing that might be needed. While a small amount could act as medicine, too much can become toxic.
When working with complex systems we have to be careful not to become overconfident. We can’t be sure that any of our innovations will reduce the effects of climate change. We can’t even be certain about how to evaluate our proposed solutions. But we can work with specific questions and experiment with provisional answers. We can contribute to changing the conditions to make future responses possible.
We can learn from sympoietic systems, such as cultures or edge habitats. These systems evolve through connections and feedback. They adapt when new information becomes available and exist in dynamic balance. Designing in such decentralised systems works best when applying simple rules, incorporating redundancy and multiplicity, working with randomness as an inherent characteristic and using noisy heuristics. It requires engagement, communication and collaboration.
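One of these design principles, simple local rules combined with randomness, can be illustrated with a toy gossip-averaging sketch. The node values and round count are invented: each node knows only its own value, yet repeated random pairwise averaging drives the whole network towards the global mean with no central coordinator.

```python
# Hypothetical illustration of decentralised design: gossip averaging.
# One simple local rule (average with a random peer), applied repeatedly,
# produces global agreement without any central control.
import random

def gossip_average(values: list[float], rounds: int,
                   rng: random.Random) -> list[float]:
    values = list(values)
    for _ in range(rounds):
        i, j = rng.sample(range(len(values)), 2)  # random pair of nodes
        avg = (values[i] + values[j]) / 2          # the one local rule
        values[i] = values[j] = avg
    return values

rng = random.Random(0)
start = [0.0, 10.0, 20.0, 30.0]  # invented initial node values
end = gossip_average(start, rounds=200, rng=rng)

# Each exchange conserves the sum, so the mean stays at 15 while the
# spread between nodes shrinks towards zero.
print(end)
```

Redundancy and randomness are not flaws here but the mechanism itself: no node is special, so the system tolerates noise and keeps working when parts change.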
Communication, however, can become slower and more difficult across diverse networks and ambiguous fields. If you find yourself working in transdisciplinary contexts, you will almost certainly encounter “wicked problems”. These are problems of many dimensions, which include social, political or environmental issues. Problems like universal healthcare, nuclear weapons, plastic waste, land degradation, or the refugee crisis. Problems that are urgent and important, but involve complicated disagreements between a large number of stakeholders.
Wicked problems are unique and often contradictory; they can be symptomatic of wider issues, and they’re often interconnected. The people who should be a part of the solution are often also a part of the cause. Any attempt to solve a wicked problem will usually not make it go away, but can potentially improve the situation or make it much worse. The tricky thing is that a complete solution to a wicked problem can’t be tested in a lab. You only have one chance to try out a potential solution. If your one-shot solution doesn’t work, it will change the problem space and you will have to try something different. Any successful engagement needs to be ongoing and adaptive.
Collaboration across disciplinary and organisational boundaries is key when tackling wicked problems. In successful transdisciplinary collaborations the power to make decisions and implement them is distributed among a wide group of committed stakeholders. Everyone involved should acknowledge that no one has a complete answer. The approaches tend to be holistic rather than linear, looking not just at single issues but more importantly, their interrelationships. Collaboration takes time and resources, and collaborative skills tend to be in short supply in situations of specialisation and competition. Collaboration requires openness, adaptability and flexibility from the individuals and organisations involved.
In their book Handling the Wicked Issues, researchers Clarke and Stewart say that “The style is not so much of a traveller who knows the route, but more of an explorer who has a sense of direction but no clear route. Search and exploration, watching out for possibilities and inter-relationships, however unlikely they may seem, are part of the approach. There are ideas as to the way ahead, but some may prove abortive. What is required is a readiness to see and accept this, rather than to proceed regardless on a path which is found to be leading nowhere or in the wrong direction.”
When we intervene in complex systems, our future-shaping actions will have increasingly uncertain outcomes. Faced with the uncertainty of an ever more interconnected world, a common response is to rely on increasingly elaborate predictions. This is a tempting but often misleading path.
“The most calamitous failures of prediction usually have a lot in common,” according to statistician Nate Silver. “We focus on those signals that tell a story about the world as we would like it to be, not how it really is. We ignore the risks that are hardest to measure, even when they pose the greatest threats to our well-being. We make approximations and assumptions about the world that are much cruder than we realize. We abhor uncertainty, even when it is an irreducible part of the problem we are trying to solve.”
Innovation (as we currently understand it) works well when the problem space can be clearly articulated. But what if the problem space itself is uncertain? Or, to take it a step further, when the locus of uncertainty becomes uncertain? When the unknown mutates into the unknowable? When you “don’t know what you don’t know”. This can involve ethical uncertainties, rapid social change, shifting supply chains, etc. By taking what appears to be a calculated risk for you (and your investors) you could create unacceptable risks for others. Innovating in turbulent times brings with it a sense of agency, but also responsibility.
Sometimes innovation is about going forward. Other times it’s about going sideways, going backwards or just staying in place and tinkering in the present. In some cases disruptive innovation is necessary to break with the status quo. Other times you’d better take history into account to avoid unnecessary damage. Sometimes it’s good to throw out the old, other times it’s better to repair, reclaim and repurpose. Sometimes knowledge is power, other times too much prior knowledge can get in the way of finding viable alternatives.
People working successfully in future-oriented fields, including engineering and entrepreneurship, often value thinking in terms of multiple futures. Futures that can drastically diverge from your own ideals.
Imagine, say, a future in which startup culture has become obsolete as a way of creating value. Where the ideals and infrastructure of Silicon Valley are considered old-fashioned and embarrassingly silly. What alternative approaches to innovation have developed? Who have become the new stars? Are they clustered geographically, or by some other means? Who would you be and what would you be doing in this future? What would innovation mean for you?
Or another future in which nation states are less important, and entrepreneurs govern a technologically advanced yet culturally conservative society. What would disruptive innovation mean in this world? How could it fail? What role would you take in its governance? What technologies would be used daily? What would be suppressed? What does venture communism look like in contrast to venture capitalism? What would your life be like in this future?
Challenging your default future — the image of the future you take for granted — can be unsettling, but can also lead to surprising opportunities. You don’t need to invent fantastic futures either; it’s enough to keep an open mind in the present. William Gibson’s famous maxim, “the future is already here, it’s just not evenly distributed”, encourages us to look more closely at the present and question the means of distribution. Futures are fluid: if you don’t take one path for granted, other possible paths can open up. Innovation can be found in the most unexpected places.
Within the maze of possible and probable futures, it can help to think about what your preferred future would be like. Even though this future might not come to pass as you imagine it, what aspects of your vision could you try prototyping in the present? What experiments could you design and test? Who should be involved? What could you learn from these experiments? How would you select the most promising experiments? What would be an appropriate scale to develop the experiments into concrete products, services or initiatives?
Techniques from action research, futuring, the lab approach or even permaculture might provide some interesting heuristics to navigate uncertainty. All of these approaches begin by taking time to observe the wider situation before engaging. They value collaboration, accept difference and acknowledge the importance of feedback and adaptation.
Nine heuristic principles for meaningful innovation in uncertain times
In conclusion, we’d like to return to the core question of this lecture: how can engineers, entrepreneurs and technopreneurs embrace complexity and uncertainty, in order to act in meaningful ways, whatever the future may bring? We’ve compiled a list of nine heuristics, rough principles that have proven their validity in our work and life over and over again.
When working with complexity and uncertainty, we would suggest that you…
- Challenge your assumptions
- Improve your collaborative skills
- Contextualise your innovation
- Experiment with futureproofing
- Embrace serendipity
- Prototype and iterate
- Design for failure
- Build antifragile systems
- Cultivate interconnected approaches
Challenge your assumptions. To give your ideas a solid foundation for meaningful change, translate your assumptions into hypotheses and test their validity in real life and with other people. To understand what might be going on in a system, observe, then interact. Take your time to understand, and take in different perspectives before interfering. Correlation is not causation; causation is not mechanism.
Improve your collaborative skills. Acknowledge that creativity comes in many guises and should be valued as a shared resource. Collaboration will improve your chances of dealing with complex problems in meaningful ways. One person’s weakness is likely someone else’s strength; when combined, they can complement and support each other. Work with people you are able to disagree with.
Contextualise your innovation. Understanding the context will allow you to innovate from different perspectives and for multiple benefits. When you take externalities into account you will be able to surface less apparent risks and unexpected opportunities.
Experiment with futureproofing. Question where your ideas about the future come from and design with multiple futures in mind. Improve your capacity to deal with the uncertainty of the future by building tighter feedback loops between vision and adaptation.
Embrace serendipity. Some of the most important discoveries in human history happened by chance. Acknowledge that seemingly aimless exploration can lead to surprisingly innovative results. Leave room for the unexpected, the unplanned and the inconsequential.
Prototype and iterate. To make your innovation more adaptive to changing conditions, start with prototyping: think about the simplest way to test your idea with minimal resources and in the shortest time possible. To reduce risk, iteratively scale up or change the scope of your experiments. Think about dosages.
Design for failure. Being aware of the problems your solutions could cause can help reduce unintended consequences. To minimise the damage caused by your ideas being corrupted and abused, design with worst-case scenarios in mind. Failure is important, but only if you learn from your mistakes.
Build antifragile systems. Designing systems that become better through adversity will improve their capacity to thrive in uncertain conditions. When building or engaging with complex systems, apply the tactics of decentralisation, redundancy and overcompensation to avoid catastrophic failures.
Cultivate interconnected approaches. To reap the intangible, long-term benefits for everyone involved, engaging with complex problems requires a change of mindset from reductionist to relational. To ensure a wider take-up of systemic outcomes, open your sources and share your findings with a wide community of stakeholders.
Because, in order to act in meaningful ways in our complex and uncertain times, the most important thing to remember is that we are all in this together.