Nudge, Dark Nudge, Sludge and Dealing with the Ethics of BE
After several invigorating days of planning the global launch of BVA Nudge Unit at a beautiful chateau outside of Paris, I approached Richard Bordenave (co-founder of BVA Nudge Unit) to discuss something. And well, let’s just say things got very dark…
JM : Richard, as I embark on my exciting journey to lead BVA Nudge Unit USA, which launched this fall, I look back to my previous experience working with the private sector. I have observed companies embrace the principles of behavioral economics (BE), as showcased and applauded through nudging-for-good awards.
But at the same time, I heard from the BVA Nudge Unit team in France that this has reactivated the debate in the press around the ethics of nudge. Is this because marketers are often seen as less trustworthy?
RB : I think that this is oversimplifying the debate; stating that “nudging for good” is an exclusive domain for public actors is implicitly saying private companies are nudging for bad. Moral judgement can be made on both a public nudge and on a private one. And the more we work with private companies, the more we create nudges that are neither for good nor for bad, but just to make things work seamlessly. A nudge to facilitate new product usage, or to ease the flow of a customer journey is neither good nor bad per se, it is just helping a transactional or relational behavior to happen smoothly.
JM : But still, we have seen Richard Thaler & Cass Sunstein pointing out what they call “sludges”: processes that prevent people from getting a benefit they are entitled to, such as the use of unnecessary red tape when claiming a rebate. Is adding “sludge” to a process not a form of manipulation?
RB : Well, yes, you can manipulate a customer’s behavior by adding unnecessary barriers to discourage them from engaging in behaviors that are costly for a company, such as unsubscribing or claiming a refund. But you can also encourage self-serving behaviors by doing the exact opposite: creating a slippery slope that makes it too easy to buy one more item, or reducing the pain of paying so that people fall into a trap without thinking. These become condemnable when they put psychological pressure on people based on false information or threats: “one ticket left”, “most people take insurance”… So dark nudges, or “nudges for bad”, are not limited to sludge on the journey; it is the moral orientation that matters. Sometimes an administration or the people in charge of a process can create “sludge” without even noticing, so intent also makes all the difference. You can even “sludge” for good, adding friction to help people think twice about a high-stakes decision that could be derailed by emotions.
JM : Don’t you think these negative uses tend to raise suspicion around nudge?
RB : Well, let’s take a step back. Any scientific knowledge can be used for a positive purpose or a condemnable one. It is the intention of the originator that is to be questioned. Ethical debates in marketing and science existed long before nudge was created, and abusive selling also started before the digital era. Now I fully acknowledge that it is frightening for society to see that the bad guys have adopted behavioral insights faster than the good ones, including legislators. This can be seen today in the abuses and algorithmic biases prevalent on platforms and social networks. That is why it is important that academic researchers feed the debate with their thinking, in order to help the general public, the authorities, and key players define codes of conduct, or to design “anti-dark nudge” legislation, just as abusive advertising laws were once designed. Banning opt-out questions when consumers are asked to share personal data under GDPR, or preventing confectionery manufacturers from selling their products at check-out, are illustrations of legislative trends in Europe. To date, the most important ethical thinking on the public side has been published by Cass Sunstein in the US and Alberto Alemanno or Anne-Lise Sibony in Europe. But little has been made available to make it easy for the private sector to embrace ethical thinking in its practices. We need to nudge the ethics of nudge!
JM : Maybe the work of Pelle Hansen, with his typology (transparent/non-transparent, reflective/automatic) for discriminating an ethical nudge from an unethical one, is a step in the right direction?
But how do you think we can help marketers not fall into the dark side of nudge on a day-to-day basis? How would you draw the line between a nudge for good and a dark nudge?
RB : Well, extremes are easy to tell apart, but there is a critical grey zone, just like the nuance between influence and manipulation, and this is where the inflexion point occurs. And transparency, although key to telling them apart, is too broad a concept to be turned into concrete managerial implications. Basically, nudges for good and dark nudges have a lot in common: they are interventions based on behavioral insights; they are easy, simple, and cost-effective; and they are meant to change behaviors… but the purpose of their outcomes is of a very different nature.
I have therefore tried to design a simple cheat-sheet to help marketers embrace ethical questions. It is not a scientific scoring tool, but more of a thought process for challenging the different dimensions in balance. If too many criteria fall on the wrong side, it clearly shows the nudge is on the dark side, but it also gives marketers the opportunity to rebalance it by improving their design. Of course this tool can be improved, as it is a work in progress, but most often it helps them realize that they can create social value beyond the individual marketing “benefit”. I insist on showing that this can also enrich their brand value, gain consumer respect and create a win-win. Most marketers also understand that short-term benefits gained by abusing company power in one transaction translate into a poor reputation if they are denounced publicly by consumers. Dark nudges can kill the rest of their marketing efforts and destroy long-term brand trust, and in the short term this often motivates them more than higher-level ethical debates.
JM : Can you describe how your cheat-sheet works?
RB : Well, it’s pretty easy. There are 5 questions corresponding to the 5 steps of the nudge design process. For each step, you try to evaluate which side of the scale the intervention falls on. Of course the answers are often debatable, and you shouldn’t be judge and jury, so a team evaluation is required if you don’t have an ethics committee.
Question 1: Purpose. This challenges the originator’s intent. If it serves morally condemnable goals (such as abusive selling or cheating people), you don’t need to go further: it’s easy to call it a dark nudge. You can find many digital examples at: https://darkpatterns.org/
But there is a wide range of purposes that are completely acceptable even though they are not directed toward welfare goals or saving the planet. By including behavioral insights in design, marketing touchpoints, or human interactions with a company’s reps, you can help prevent errors, direct people to the most relevant contact channel, or better manage their waiting time. These fall on the light side because you are improving the customer’s journey and reducing complexity.
Question 2: Beneficiaries. Again, if a nudge is centered on a brand’s or company’s interests only (i.e. no benefit for consumers, such as abusive red tape for rebates), you can be sure it’s a dark nudge.
But as in any commercial transaction, many nudges benefit both parties. The question then becomes: where does the balance fall? Was the nudge designed first to serve the individual, or even better their community (of clients, users, citizens), or first the brand? Medication compliance nudges can sometimes be questioned on which side they fall; like any cost-benefit analysis, they require looking at the broader scope of stakeholders: individuals, healthcare insurers, manufacturers…
Question 3: Choice. If a nudge frames choices to serve a purpose but still allows choosers to select true alternatives or to opt out without friction, then it preserves their liberty and dignity. Any framing that reduces choice falls on the dark nudge side: for example, hiding cheaper options based on a customer’s identified purchasing power, or surging prices when a customer revisits a site by using their personal navigation data. Taking out options, or making the preferred choice difficult (such as unsubscribing), are definitely dark nudges.
Question 4: Outcome. The best way to judge the social acceptance of a nudge is to involve your target audience in its design, or to ask their opinion after they have experienced it in full transparency… Do they approve of it as serving a goal defined “by themselves”, as proposed by Sunstein? That’s what we do in all of our pretests. Any nudge that, once exposed, generates a feeling of being unduly cheated falls in the dark zone. If it abuses people without their realizing it, it’s even worse. That is why nudges aimed at sensitive populations (such as kids or people with disabilities) should be evaluated with the help of their guardian bodies, parents or NGOs.
Question 5: Monitoring. Publishing is proof of goodwill and transparency. A nudge for good requires some proof that it actually delivers the intended good; without testing, says Thaler, it’s just an idea. We have seen numerous backlash effects that proved the value of testing even when the nudge was designed with welfare in mind. So documenting results through public channels (papers, the brand’s site…) is a step in the right direction. Even if you can’t measure in the most rigorous way, giving access to the field data can help you learn and improve ahead of a potential roll-out. No one will be blamed for experimenting, whatever the outcome. But rolling out blindly, or nudge-washing without proof, falls into the dark zone, as does hiding a nudge’s impact.
In the end, if too many criteria end up in the dark part, you can visually spot that you are on the dark side of nudge, which should prompt you to question your sense of ethics. But even if some criteria are only a bit greyish, this gives you the opportunity to rethink your nudge design in order to improve it.
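For readers who think in code, the aggregation logic Richard describes can be sketched as a small script. This is purely an illustration of the thought process: the dimension names follow the five questions above, but the three-level scale, the verdict wording, and the threshold of two dark criteria are my own assumptions, not the actual BVA cheat-sheet, which is a team discussion tool rather than a scoring algorithm.

```python
from enum import Enum

class Shade(Enum):
    """Which side of the scale a criterion falls on (illustrative three-level scale)."""
    LIGHT = 0
    GREY = 1
    DARK = 2

# The five dimensions, mirroring Questions 1-5 above.
DIMENSIONS = ["purpose", "beneficiaries", "choice", "outcome", "monitoring"]

def assess(ratings, dark_threshold=2):
    """Aggregate a team's ratings into a rough verdict.

    `ratings` maps each dimension to the Shade the team agreed on after
    debate; `dark_threshold` is an arbitrary illustrative cut-off.
    """
    missing = set(DIMENSIONS) - set(ratings)
    if missing:
        raise ValueError(f"unrated dimensions: {sorted(missing)}")
    dark = sum(1 for s in ratings.values() if s is Shade.DARK)
    grey = sum(1 for s in ratings.values() if s is Shade.GREY)
    if dark >= dark_threshold:
        return "dark nudge: rethink the design"
    if dark or grey:
        return "grey zone: rebalance the greyish criteria"
    return "nudge for good: document and test it"
```

For example, a nudge rated light everywhere except a dark Purpose and a dark Choice would come back as a dark nudge, while a single greyish criterion only flags an opportunity to rebalance the design, which matches the spirit of the cheat-sheet as a prompt for discussion rather than a verdict.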
JM : So, don’t you think that behavioral science has a lot to lose if it is now associated with evil or manipulation, in a System 1 way?
RB : Social acceptance is a major risk that could prevent nudge’s transparent adoption across many domains. And although academics have tried to embed normative criteria in its definition, the fact that Thaler now signs every book with “nudging for good” demonstrates that using nudge for social welfare requires continuous publicity. Naming the “evil” side with a word like “sludge” is also a great way of debunking abuses without letting them be associated with the original “nudge for good” concept. This reminds me of when the advertising industry suffered from its “subliminal effect” reputation in the 1950s. In fact, James Vicary later admitted that it was an agency stunt to attract his own clients. Nudge may endure the same from its current promoters: too many “magic promises” from sorcerer’s-apprentice agencies without scientific backing will destroy its credibility and result in “nudge washing”. So it is important that we at BVA Nudge Unit continue to educate our clients with the latest ethical thinking, bridging academic research and practical managerial decision-making. This is where our value lies, particularly because managers are not interested in definitions, but in efficiency and moral choices! We can also help organize more debates (like the Nudge France ones) and publications, like Eric Singler’s books, demonstrating how nudge can contribute to a better world if mastered by those who want to change it for good.