
Technology Innovation, but Make It Ethical: An Interview with Elizabeth Hofvenschiöld on Automated Vehicles, Values, and the Future

Mobility systems are key to our lives: how should we build them to reflect core values such as equity, sustainability, and accessibility? And what role should technology play in this vision?

These were questions we explored during the summer academy “Expanding Innovation Horizons - Creating a Mobility System that Works for the 21st Century,” organised by the Swiss Study Foundation with Katja Schechtner and Wolfgang Gruel. We spent the week creating a vision of a future mobility system and the role that Automated Vehicles (AVs) should have in it—following the idea that society should shape technology to create the future we imagine, rather than let technology shape our future.

How exactly should we shape them, and how could this be done? These are questions that Elizabeth Hofvenschiöld has been working on for years. Elizabeth is an expert in the field of Applied Ethics and is currently a professor at the European School of Business at Reutlingen University, with experience as an applied ethics practitioner in the automotive and communications industries. She is also a futurist: somebody who anticipates change with the help of different tools and investigates how emerging technologies might shape our future.

In this interview, she talks about the importance of including ethics in product development, why diversity matters in the context of ethics, and how we can include a little futurism in our daily lives.

Sonja Leyvraz: Beth, you work in the field of ethics for AVs – why do we need ethics for AVs? Are traffic laws not enough to regulate the behaviour of AVs?

Elizabeth Hofvenschiöld: It is important to apply ethics to any form of technology, but especially to emerging technologies. Ethics is already hardwired into the decisions of the designers and the developers – but they are not aware of it. By explicitly incorporating ethics into the product development process, the designers and engineers can make much more conscious decisions and understand the potential consequences of those decisions – for technologies that will only reach the market five to ten years from now.

Social media, I believe, is a great case study for understanding the importance of ethics in design decisions. Back in the 2000s, people really did not think about the consequences of what they were doing; there was this culture of “move fast and break things, we can correct it later”. But they did not understand that their design decisions, like making things addictive so people stay online or optimising the advertisements people see, had social and even physical impacts. And we have seen really disturbing statistics coming out of this: the rate of suicide among girls between the ages of 12 and 18 in the US increased by 50% from about 2010 to about 2020 – and there is a correlation between this rate and their use of social media. That is why it is so crucial to think about ethics from the very beginning.

In the specific context of AVs, ethics is also important because it fills a gap: traffic laws were created for human beings and for behaviour that has been codified over more than a hundred years. But now we are talking about driverless vehicles going through situations that cannot be dealt with by the existing traffic laws, because those laws were made from a human perspective, based on human behaviour. So, ethics helps to bridge the gap and to think about the consequences of your technology.

SL: That makes sense. What do you do on a daily basis to achieve that?

EH: When I was responsible for the ethics of automated driving at Mercedes-Benz Group AG, I worked on three levels: a very high level, a middle level, and a low level. The low level is the actual creation of the requirements, for example how to react when you see a particular traffic sign. Let us take the sign for deer crossing, which indicates that deer regularly cross the road there. Some humans would slow down; some would keep their speed, because perhaps the weather is good and they trust their reflexes. The law says it is recommended to slow down in such situations, but this really differs from country to country. So how do we interpret that? Should we make it dependent on the weather, because the AV systems are so good that they would be able to detect animals as quickly as humans can, or maybe even faster? But perhaps from an ethical perspective – especially if you are introducing the technology – you need to use or mimic culturally acceptable behaviour. Automated systems are trusted less than human beings even though they perform more consistently, and if we want to build trust towards AVs and a particular brand, then we might have to behave better than the average driver, closer to the ideal driver. So, there might be the decision to reduce the speed by 10 or 20% for a certain amount of time, although that is not hardwired in the law and you could maintain your normal speed.

On the middle level, it is about embedding ethics into the governance processes within the organisation or corporation – about getting people to think about it, even when they have time constraints. Speaking to the ethicist about your new product or service idea at the very beginning of development, or even before, should be just as normal as speaking to the marketing person.

On a very high level, I worked on international projects like the development of an ISO standard on ethical considerations for AVs. This is important because many countries around the world do not necessarily have laws in this area, and those countries tend to look at what existing standards and guidelines there are. So, it is helping to shape how AVs will function in different societies in the future, and doing so in an ethical way, so that it is not just based on efficiency, cost, or a reading of existing laws.

SL: What do you then base those decisions and guidelines on? During the summer academy, you mentioned multiple schools of thought.

EH: There are so many different schools of thought, each with its own understanding of what is good and bad and what is right and wrong, and I touched on only five during the summer academy. The classic two used in Western philosophy are Deontology and Consequentialism. Consequentialism essentially thinks about the consequences of a decision and asks what the best outcome is – where “best” could mean the most appealing, or what is perceived as “good” by the majority group in a society. Deontology is much more rule-based and focuses on duty, which is why I think that in the Western world, especially in German-speaking areas, ethics is often associated with Deontology. However, there are also many other schools: for instance, Virtue Ethics, which is driven by the idea that you make decisions based on what a virtuous person would do. This differs from Deontology in that it is not as rule-based and can be more inward-focused rather than outward-focused.

I also spoke about Ubuntu Ethics, which is rooted in African traditions and looks at the human being in their relationship to others within a community: you make decisions based on how they would impact relationships within the whole community, and whether they are good for the community at large. We also looked at Shinto Ethics, which I find very fascinating, because it really looks at automated systems – especially if they have a physical representation, like a robot – as having a spirit, no different from a tree or an animal or a rock having its own spirit. It just happens to be a representation crafted by a human being. Therefore, what rights does this being have? Does it have the same rights as the human beings we know?

With these different schools of normative ethics to choose from, how do you move forward? For many areas of applied ethics, I would say it is a combination of Deontology and Virtue Ethics, because much of the field is very Western-based – although that is changing now, thankfully. Then, perhaps one of the most challenging steps is to translate what is considered ‘good’ in whatever school of normative ethics you want to use into values and principles, so that it can be applied in a practical manner.

SL: But in the end, does it make such a big difference which school of thought we use? Are the ideas of ‘good’ and ‘bad’ really that different? Do we need to look into Ubuntu Ethics or Shinto Ethics to formulate principles for an AV system in Europe?

EH: That is a huge and very good question: why is it important to go beyond the typical schools of Western normative ethics? First, because automated products, for example AVs, are going to be deployed around the world. It is easy to say that because we used ethics, the result should be universally acceptable – but do we really know that? Or is it only acceptable from our European perspective? Some people would go so far as to say that if you take a purely Eurocentric perspective in the ethics and development of the vehicles themselves, you cannot call yourself ethical, because you did not take everybody’s perspective into account.

Second, if you realise you could take different perspectives into account but consciously did not, for whatever reason, you are in a way extending a form of colonialism into your product development. As somebody who comes from both the East and the West, that irritates me on a very human level. I find it extremely arrogant, because there is so much we can learn; it is rather ignorant to use only the Western perspective and assume that it is alright for the rest of the world.

A good example of this was the creation of the IEEE 7000 standard series; the very first document they created was on ethically aligned design. The first version was created by a hundred different specialists, mostly in the area of ethics, but also engineers and developers. They thought they had a robust document – until some of the people on this team sent it to colleagues around the world, who saw so many red flags, so many things they could not agree with. They realised that the document they had thought so robust actually had many holes in it. In the next round, they did not just have a hundred but two hundred experts, and the second hundred were people with experience in non-Westernised, non-US-based thinking. I think the final document is incredible and extremely robust and relevant precisely because of that extra input. It became a truly inclusive document, or as inclusive as it could be.

To come back to interpreting traffic signs and the treatment of animals, I think a really good example is the question: when do you brake? Maybe engineers would say we should only brake for a medium to large-sized animal, because a large animal will impact the trajectory of the car and could injure the passengers. But maybe someone outside of Europe would say: what about small animals? What about insects? It is important for us to protect biodiversity, so it is important to think about ways to avoid killing as many insects as possible. The programming of driving behaviour could be very different according to what you consider important, what you consider good, and what you consider bad.

SL: And what if we look just at Europe? Does it then make sense to stick to our ‘own’ schools of thought?

EH: There is huge diversity even in a relatively homogenous community, and if we look at the Western schools, they were mostly conceived by a very homogenous group of people: white, educated men. Even if you just want to protect your ‘own’ community, you have to think about the different world perspectives of different groups within one society or one country. About feminist perspectives. Or the perspectives of children. A horrifying statistic is that the most deaths in traffic occur among children – they are the most vulnerable group of traffic participants out there. So even if Shinto Ethics with its spirits is a little too far out there for you and you are struggling to understand it, it is still important to view your decision from the perspective of a small seven-year-old trying to walk to school on a dark and rainy day. How would you decide when and how to brake, or how to communicate with different vulnerable road users, different human beings, on the street?

Many people say that technology, and especially automated systems, will be safer because they never get tired, they react much faster, and so forth. However, if that only applies to the 50th percentile, or even the 95th percentile, of the population, and life becomes even worse for those who were already extremely vulnerable before the introduction of automated vehicles, should you really be introducing them into the market? This is a question I would really pose to people who develop automated systems. What gives you the right to make life even worse for a vulnerable group, even if it makes life way better for everyone else? Everyone else already had it better anyway, so there is no equity there. I think that looking at different schools of thought might help us understand these different perspectives better.

SL: You said before that it is changing, that it is becoming less focused on Western schools of thought. Why is it changing?

EH: I think it is changing because people are becoming more aware of different schools of thought. And I also want to say that even if you just use a Western school of thought, it is already a step in the right direction. I do not want to discredit people who say: “But we are doing ethics, at least we are doing something.” That is good! But we can do better. We also see many other movements for equity in the world – movements like Black Lives Matter, feminist movements... It helps.

It also helps that the teams developing these automated systems are more diverse and bringing their voices to the table. There are a hundred studies showing that diverse teams in which everybody has a voice simply produce better products, have a better company culture, and have fewer people on sick leave. It does not just make ethical sense; it makes business sense. To be honest, I do not understand why it is not implemented more in many organisations. We are talking about a global market, so even from a financial profit perspective, it makes sense to incorporate as many perspectives as possible.

SL: One last question: what would you say to someone who has nothing to do with ethics or AVs – is there anything you would like them to think about in their daily life?

EH: That is a big question. I think it is to be aware of the decisions you make in life – the decision to buy something, like a car, or clothes. Ask yourself: why am I making this decision, and what is it based on? To increase your awareness – maybe it sounds a little cheesy, but to become more mindful of the decisions you make, because they truly impact your behaviour in the long term. Be more aware of how you create your future, because as you move towards it, you are acting towards it, doing things that make one future more likely than another. Ask yourself once in a while: “Am I aware that I am doing this? Do I want to move in this direction?” And maybe: “What kind of impact does this have on my community? On society? Do I contribute to what I think is good, or not?” If everybody did that, it would be fantastic. I think it is doable.

As a futurist, I believe everybody is capable of anticipating change, and the biggest thing I love about futures thinking is the opportunity to stretch and use your imagination. Especially for the younger generations, it is not always about doom and gloom – as it often is in the media. You have the opportunity to just dream and then try to make those dreams a reality. You can say: “I do not like this narrative, I want a new one. This is it, and this is how we will get there.” That is what futures thinking is about.

About Elizabeth Hofvenschiöld

Elizabeth Hofvenschiöld is an expert in the field of Applied Ethics, currently a professor at the European School of Business at Reutlingen University, with experience as an applied ethics practitioner in the automotive & communications industries.

About the Author

Sonja Leyvraz works at the European Environmental Bureau in the field of circular economy. She holds a degree in Environmental Management and Policy from the International Institute for Industrial Environmental Economics (IIIEE) in Lund, and in International Studies from Leiden University.

The posts on the Reatch blog reflect the personal opinions of their authors and do not necessarily correspond to those of Reatch or its members.
