Hi there, everyone. I would like to start by asking you a simple question: Who of you wants to build a product that is as captivating and engaging as, say, Facebook or Twitter? If you think so, please raise your hand. Something that's as captivating and engaging as Twitter. Please keep your hands up. Now, those of you who have kept your hands up, please keep your hands up if you feel that you're spending more time than you should on sites like Facebook or Twitter, time that would be better spent with friends or spouses, or doing things that you actually love. Okay. Those who still have their arms up, please discuss after the break. (Laughter)

So, why am I asking this question? I am asking it because today we are talking about moral persuasion: What is moral and immoral in trying to change people's behaviors by using technology and design? And I don't know what you expect, but when I was thinking about that issue, I realized early on that what I'm not able to give you are answers. I'm not able to tell you what is moral or immoral, because we're living in a pluralist society. My values can be radically different from your values, which means that what I consider moral or immoral based on them might not necessarily be what you consider moral or immoral. But I also realized there is one thing that I could give you, and that is what this guy behind me gave the world... Socrates. It is questions. What I can do and what I would like to do with you is give you, like that initial question, a set of questions to figure out for yourselves, layer by layer, like peeling an onion, getting at the core of what you believe is moral or immoral persuasion. And I'd like to do that with a couple of examples, as was said: a couple of examples of technologies where people have used game elements to get people to do things.

So, here's the first example, leading us to our first question: one of my favorite examples of gamification, Buster Benson's Health Month. It's a simple application where you set yourself health rules for one month. Rules like, "I want to exercise six times a week," or, "I don't want to drink any alcohol." And every morning you get an email asking you, "Did you stick to your rules or not?" And you say yes or no to the different questions. Then, on the platform, you can see how well you did, and you can earn points and badges for that. That information is shared with your peers, and if you don't stick to a rule, you lose a health point, but your friends can chip in and heal you. A beautiful example, and I believe most of you will agree with me that it kind of sounds like ethical persuasion, right? It sounds like something that is good to do.

Here's another example. Very similar in the kind of thinking behind it, but a very different example: Lockerz. It's a social platform where people set up profiles, and on them, the main thing they do is put up product pictures, pictures of products they like, and link their profiles with their friends'. Every time I click on one of those products on your page, you get points; every time you click on a product on my page, I get points; and if you actually buy something, I get a lot of points. And both of us can then exchange these points for percentages off those products. Now, I don't know how you feel, but personally I think that Health Month is something that feels to me very benign, a good piece, a moral piece of technology, whereas there's something about Lockerz that makes me feel a little queasy.
When I thought about what exactly makes me feel queasy in this case versus the other, the answer was very simple, and that is, well, the intention behind it, right? In one case, the intention is, "That site wants me to be healthier," and in the other, "That site wants me to shop more." So the first very simple, very obvious question I would like to give you is: What are your intentions if you are designing something?

And obviously, intentions are not the only thing, so here is another example of one of these applications. There are a couple of these kinds of eco dashboards right now... dashboards built into cars... which try to motivate you to drive more fuel-efficiently. This here is Nissan's MyLeaf, where your driving behavior is compared with the driving behavior of other people, so you can compete for who drives a route the most fuel-efficiently. And these things are very effective, it turns out... so effective that they motivate people to engage in unsafe driving behaviors, like not stopping at a red light, because if you stop, you have to restart the engine, and that would use quite some fuel, wouldn't it? So despite this being a very well-intended application, obviously there was a side effect.

Here's another example of one of these side effects: Commendable, a site that allows parents to give their kids little badges for doing the things that parents want their kids to do, like tying their shoes. And at first that sounds very nice, very benign, well-intended. But it turns out, if you look into research on people's mindsets, that caring about outcomes, caring about public recognition, caring about these kinds of public tokens of recognition is not necessarily very helpful for your long-term psychological well-being. It's better if you care about learning something. It's better if you care about yourself than about how you appear in front of other people. So the kind of motivational tool that is used here actually, in and of itself, has a long-term side effect, in that every time we use a technology that uses something like public recognition or status, we're actually positively endorsing this as a good and normal thing to care about... and that way, possibly having a detrimental effect on the long-term psychological well-being of ourselves as a culture. So that's a second, very obvious question: What are the effects of what you're doing... the effects you're having with the device, like less fuel, as well as the effects of the actual tools you're using to get people to do things... public recognition?

Now is that all... intention, effect? Well, there are some technologies which obviously combine both: both good long-term and short-term effects and a positive intention, like Fred Stutzman's "Freedom," where the whole point of that application is... well, we're usually so bombarded with constant requests by other people that, with this device, you can shut off the Internet connectivity of your PC of choice for a preset amount of time, to actually get some work done. And I think most of us will agree that's something well-intended, and it also has good consequences. In the words of Michel Foucault, it is a "technology of the self." It is a technology that empowers the individual to determine its own life course, to shape itself. But the problem is, as Foucault points out, that every technology of the self has a technology of domination as its flip side.
As you see in today's modern liberal democracies, the society, the state, not only allows us to determine our self, to shape our self, it also demands it of us. It demands that we optimize ourselves, that we control ourselves, that we self-manage continuously, because that's the only way in which such a liberal society works. In a way, devices like Fred Stutzman's "Freedom" or Buster Benson's Health Month are technologies of domination, because they want us to be (Robotic voice) fitter, happier, more productive, comfortable, not drinking too much, regular exercise at the gym three days a week, getting on better with your associate employee contemporaries. At ease. Eating well. No more microwave dinners and saturated fats. A patient, better driver, a safer car, [unclear] sleeping well, no bad dreams.

SD: These technologies want us to stay in the game that society has devised for us. They want us to fit in even better. They want us to optimize ourselves to fit in. Now, I don't say that is necessarily a bad thing; I just think that this example points us to a general realization, and that is: no matter what technology or design you look at, even something we consider as well-intended and as good in its effects as Stutzman's Freedom comes with certain values embedded in it. And we can question these values. We can question: Is it a good thing that all of us continuously optimize ourselves to fit better into that society? Or, to give you another example, the one from the initial presentation: What about a piece of persuasive technology that convinces Muslim women to wear their headscarves? Is that a good or a bad technology in its intentions or in its effects? Well, that basically depends on the kind of values you bring to bear to make these kinds of judgments. So that's a third question: What values do you use to judge?

And speaking of values: I've noticed that in the discussion about moral persuasion online, and when I'm talking with people, more often than not, there is a weird bias. And that bias is that we're asking: Is this or that "still" ethical? Is it "still" permissible? We're asking things like: Is this Oxfam donation form, where the regular monthly donation is the preset default, and people, maybe without intending it, are encouraged or nudged into giving a regular donation instead of a one-time donation... is that "still" permissible? Is it "still" ethical? We're fishing at the low end. But in fact, that question, "Is it 'still' ethical?" is just one way of looking at ethics. Because if you look at the beginning of ethics in Western culture, you see a very different idea of what ethics also could be. For Aristotle, ethics was not about the question, "Is that still good, or is it bad?" Ethics was about the question of how to live life well. And he put that in the word "arete," which we, from [Ancient Greek], translate as "virtue." But really, it means "excellence." It means living up to your own full potential as a human being. And that is an idea that, I think, Paul Richard Buchanan put nicely in a recent essay, where he said, "Products are vivid arguments about how we should live our lives." Our designs are not ethical or unethical in that they're using ethical or unethical means of persuading us. They have a moral component just in the kind of vision and aspiration of the good life that they present to us.
And if you look at the designed environment around us with that kind of lens, asking, "What is the vision of the good life that our products, our designs, present to us?", then you often get the shivers, because of how little we expect of each other, of how little we actually seem to expect of our life, and of what the good life looks like. So that's a fourth question I'd like to leave you with: What vision of the good life do your designs convey?

And speaking of design, you'll notice that I've already broadened the discussion, because it's not just persuasive technology that we're talking about here, it's any piece of design that we put out there in the world. I don't know whether you know the great communication researcher Paul Watzlawick who, back in the '60s, made the argument that we cannot not communicate. Even if we choose to be silent, we choose to be silent, and we're communicating something by choosing to be silent. And in the same way that we cannot not communicate, we cannot not persuade: whatever we do or refrain from doing, whatever we put out there as a piece of design into the world has a persuasive component. It tries to affect people. It puts a certain vision of the good life out there in front of us, which is what Peter-Paul Verbeek, the Dutch philosopher of technology, says. No matter whether we as designers intend it or not, we materialize morality. We make certain things harder and easier to do. We organize the existence of people. We put a certain vision of what good or bad or normal or usual is in front of people, by everything we put out there in the world.

Even something as innocuous as a set of school chairs, like the chairs that you're sitting on and I'm standing in front of, is a persuasive technology, because it presents and materializes a certain vision of the good life: a good life in which teaching and learning and listening is about one person teaching and the others listening; in which learning is done while sitting; in which you learn for yourself; in which you're not supposed to change these rules, because the chairs are fixed to the ground. And even something as innocuous as a single designer chair, like this one by Arne Jacobsen, is a persuasive technology, because, again, it communicates an idea of the good life: a life that you, as a designer, consent to by saying, "In a good life, goods are produced as sustainably or unsustainably as this chair. Workers are treated as well or as badly as the workers who built that chair." The good life is a life where design is important, because somebody obviously took the time and spent the money on that kind of well-designed chair; where tradition is important, because this is a traditional classic and someone cared about this; and where there is such a thing as conspicuous consumption, where it is OK and normal to spend a humongous amount of money on such a chair, to signal to other people what your social status is.

So these are the kinds of layers, the kinds of questions I wanted to lead you through today: What are the intentions that you bring to bear when you're designing something? What are the effects, intended and unintended, that you're having? What are the values you're using to judge those? What are the virtues, the aspirations that you're actually expressing in that? And how does that apply, not just to persuasive technology, but to everything you design? Do we stop there? I don't think so.
I think that all of these things are eventually informed by the core of all of this, and that is nothing but life itself. Why, when the question of what the good life is informs everything that we design, should we stop at design and not ask ourselves: How does it apply to our own life? "Why should the lamp or the house be an art object, but not our life?" as Michel Foucault puts it.

Just to give you a practical example from Buster Benson, whom I mentioned at the beginning: this is Buster setting up a pull-up machine at the office of his new start-up, Habit Labs, where they're trying to build other applications like "Health Month" for people. And why is he building a thing like this? Well, here is the set of axioms that Habit Labs, Buster's start-up, put up for themselves on how they wanted to work together as a team when they're building these applications... a set of moral principles they set for themselves for working together... one of them being, "We take care of our own health and manage our own burnout." Because ultimately, how can you ask yourselves, and how can you find an answer to, what vision of the good life you want to convey and create with your designs, without asking the question: What vision of the good life do you yourself want to live? And with that, I thank you. (Applause)