Team:Wageningen UR/Ethics

Xylencer

Ethical Considerations

Ethics

Besides developing a product that meets the needs of all stakeholders, we believe it is important to develop a responsible product that does good in the world. We discussed how to go about this within the team, with several experts on ethics, and with biotechnology students in a workshop format. As self-spreading phages are a potentially controversial topic, we wanted to consider our approach carefully. This page contains our reflections on those discussions and the ethical considerations we see in our project. We used this to improve our project.

Approach


To familiarize ourselves with ethical reasoning, we attended two lectures on the ethics of biotechnology, given by Dr. Zoe Robaey and Dr. Martin Sand.

  • Report of Lectures

    Dr. Martin Sand introduced different kinds of responsibility. There is forward-looking responsibility, or duty, which means that one must make sure that a certain state of affairs comes about. For example, you can have the duty to inform your customers about the risks of using a certain product. There is also backward-looking responsibility. This means that when something goes wrong, one can be blamed for it and is accountable for making things right again. For example, you can be responsible for the actions of your children, meaning that if they break something, you must pay for it. The main question of backward-looking responsibility in technology is who is to blame when a technology does harm of some sort. We kept this in the back of our minds during the formation of Xylencer.

    Furthermore, Sand spoke about autonomous technologies. These are technologies that are controlled by artificial intelligence and therefore make their own decisions. A mainstream example is the self-driving car. This sparked our interest because autonomous technologies appear to have similarities with synthetic biology applications. Both adapt themselves to their situation, one by algorithm, the other by evolution. Many questions that arise in the ethics of autonomous technology also arise in biological technology, for example the built-in lack of control over, and potential risks of, entities that evolve. The respective fields of ethical research could learn a lot from each other in their approach to these problems. In the section “Responsibility and autonomous technology” we reflect further upon this similarity.

    Dr. Zoe Robaey spoke on the ethical status of life, discussing questions such as what it means to own life, and what our duties are to the life we own. This made us realise that an interesting avenue of exploration may be to find out how the question of life ownership is solved legally, especially in the case of our phages, which may evolve so rapidly that it is difficult to determine what needs to be patented.

    We also started to wonder what the difference is between an artificial product and a natural product. Is the Phage Delivery Bacterium (PDB) or the adhesion phage natural, or synthetic? Whether an organic farmer would accept the use of our product on their farm, for example, might depend on this.

    Finally, we discussed how our values shape the kinds of products we make, and how we can incorporate good values in the design of our product. This goes under the name of “value-sensitive design”, which we later discussed with prof. dr. Vincent Blok.

At these lectures, we got into contact with prof. dr. Vincent Blok, a professor of philosophy specializing in, among other things, the philosophy of technology and responsible innovation. We discussed our project with him, and he proposed that we invite what we believe to be critics of our project, and of biotechnology in general, to examine it. He suggested we use the Product Impact Tool to guide our discussion with them [1]. The Product Impact Tool is a workshop template that is highly suitable for discussing technology with individuals from different backgrounds who have different worldviews. We envisioned that those individuals might give feedback on our project that we ourselves had not yet discovered.

  • Product Impact Tool

    The Product Impact Tool was created by Dr. Steven Dorrestijn. It is meant to facilitate socially responsible innovation by offering a systematic way to identify socio-ethical issues. By comparing the current situation with four different categories of effects that technology has on society, and society on technology, a discussion of this impact can be neatly guided into different sub-discussions.

    The four categories of effects, in the order that we discussed them, are “Above-the-head”, “Behind-the-back”, “Before-the-eye” and “To-the-hand”.

    “Above-the-head” signifies the general influence that technology has on humans, and broadly divides opinions of this influence into Utopian, Dystopian and Ambivalent views. Discussing this influence is important even before discussing any of the specific effects of a particular technology, to prevent differences in view from decreasing the efficiency of the discussion. It is also important to reflect on the views that members of the group hold compared to society at large. For example, our workshop consisted of students of biotechnology, and to put the results of our efforts into a societal context, we should remember that we will generally have a more favorable outlook on technology than average.

    “Behind-the-back” signifies all influences that technology has indirectly through the environment, and not through any direct contact. The most well-known example is (unwanted) social side-effects, but the Product Impact Tool also emphasizes that every technology relies on background conditions. For example, Xylencer relies on an infrastructure of production (factories) and delivery (roads, trucks, gasoline, etc.) in order to function. The final example is that of technological determinism. We address this influence prominently under “Responsibility”.

    “Before-the-eye” signifies all influence that technology has by communicating some information to us, by sight or through other senses. Technology is often shaped or coloured in some way so as to indicate a particular use or otherwise change user behavior. For example, car handles are often shaped such that one can directly see they are meant for gripping and opening. An example for our project could be printing the amount of land for which one dosage is appropriate in large letters on the packaging of our product.

    “To-the-hand” signifies effects that technology has by physical interaction that is non-informative. For example, the technology of a fence physically blocks you from going to certain places. To-the-hand interactions can be touch-based, like a fence, or more subtle, like a high tone meant to be intentionally annoying, especially to young people [1]. An example in our project could be the packaging of Xylencer in appropriate dosages, so that users are guided towards using the correct dosage on their plants.

The Product Impact Tool and its associated graphic were developed by Dr. Steven Dorrestijn. The use of this graphic was personally approved by him. www.productimpacttool.org

We tried to engage with several different critics over the course of our project. Over time, we directly approached eight different institutions and individuals to invite them to our ethics workshop. Unfortunately, all either refused our invitation or did not respond. As it was important to us to include critical voices, we also reached out internationally to individuals and groups that might be critical. Although our conversations with them were insightful, they preferred not to appear on our wiki. Lastly, via our outreach with the general public at MENSA and the Freemasonry, we found critical voices and discussed our project with them. As a result, we came to a better understanding of the ethical concerns surrounding our project, which we have outlined below. Although it was difficult to engage with critics of synthetic biology, their feedback was extremely helpful for our project.

This did, however, mean that we had not yet been able to use the Product Impact Tool. We next contacted Dr. Marcel Verweij to further discuss our project and find another way to make use of the Product Impact Tool workshop. He suggested that we organize our workshop for Bachelor students of Molecular Life Sciences. As this audience would probably have a very different perspective on biotechnology than we had first anticipated, we reflected at length on our position in society as students of biotechnology, and how it shapes our perspective on the technology.

Through these different means, we got feedback on ethical concerns of our project. We have selected the most recurrent and relevant concerns and reflected upon them in the section below. Finally, in the last section, we will reflect on the responsibilities we as developers of biotechnology have.

Ethical Considerations


Through our discussions with these experts on ethics and the workshop we gave, we identified several possible ethical concerns in our design. We have selected the most relevant criticisms of our project and outlined them as best we can. Furthermore, we have provided lines of argumentation that may refute these criticisms. It is ultimately up to social discourse to determine whether our product is deemed acceptable to apply in real life, and we hope this section may help inform a dialogue that determines exactly that. We as scientists play an important role in that discussion.

Coexistence

First of all, our attention was drawn to the fact that our proposed self-spreading phages may violate the coexistence principle. Coexistence, or “freedom of cropping”, is defined as “the ability of farmers to make a practical choice between conventional, organic, and GM crop productions” [2]. Coexistence is thus all about choice: in our case, the choice of farmers to decide for themselves whether or not to use GMO products. If our phages turn out to have spreading capacity beyond the boundaries of one farm, the application of Xylencer on one farm may lead to the appearance of Xylencer phages on farms that have not given their explicit consent to their presence.

In addition, if the spread of Xylencer phages is even more efficient and they can spread across the borders of jurisdictions, problems may occur when those jurisdictions differ in their laws regarding the use of Xylencer. For example, if the use of Xylencer phages were allowed in the Italian region of Puglia but forbidden in the neighbouring region of Basilicata, a problem arises when phages spread from Puglia to Basilicata.

Schematic of the concept of coexistence. The phages from the yellow farm might spread to the other farms, even though those farms may not want them.

We cannot fully circumvent these problems. We are currently modeling the spread of the phages to see how far they will spread. One dangerous pitfall to avoid is considering our solution Xylencer outside the context of its application. Preventative measures currently in place already violate, in some way, the absolute autonomy of farmers to do with their crops as they please. Quarantine measures violate their autonomy to move and sell their property, and destruction of plant material as an emergency measure by the government violates their property rights in the most extreme way: by destroying their property. This governmental interference is in place for a very good reason: to protect the greater good. Without it, the disease could spread and seriously damage the agriculture of entire regions. The harm done by violating the autonomy of individual farmers is outweighed by the greater good of disease-free farms for many other farmers in the region. That said, the autonomy of individuals to decide what to do with their property, and the right of states to interfere with this, is the subject of longstanding philosophical debate.

To become legally accepted, our product would need to go through extensive testing for, among other things, side-effects that affect safety. Our product would only become accepted if those tests indicated that little to no such effect exists. One could then argue that the presence of our phages, if tested harmless, violates farmers' property rights to a far smaller degree than preventative measures currently in place, such as destruction of plant material and quarantine.

Creation of Need

Another potential issue brought to our attention was that Xylencer may create the need for similar solutions: among others, solutions that require spreading GMOs in the environment, solutions that require spreading bacteriophages, and solutions that require the release of GMOs in the environment in general.

We encountered this issue when discussing technological determinism in the Product Impact Tool: the idea that history is determined by the material and technical circumstances of the time. For example, a 1998 survey indicated that people did not feel the need to own a mobile phone, yet when asked 16 years later, the majority of people did indicate this need. The presence of a certain technology can thus create a need that was not there prior to its introduction.

One could argue that the acceptance of Xylencer, even if it is not itself dangerous, normalizes the acceptance of other synthetic biology applications by setting a precedent, including for applications in which engineered biological entities are spread into the environment. This is not in and of itself a criticism of Xylencer, but one could develop it into one by arguing that governments would respond by becoming laxer in their application of safety measures. This may lead to the acceptance of some other application that could do serious harm to the environment.

This line of argumentation against the application of Xylencer and synthetic biology in general is an example of a slippery slope argument. In general, slippery slope arguments start with a minor claim, from which ever bigger inferences are made. For example:

Table 1: Two examples of slippery slope arguments

Example 1:
  1. If you don't go to college, you won't get a degree.
  2. If you don't get a degree, you won't get a good job.
  3. If you don't get a good job, you won't enjoy life.
  4. But you should enjoy life.
  5. Therefore, you should go to college.

Example 2:
  1. If Xylencer is accepted, this sets a precedent for future applications.
  2. Through this precedent, more applications of spreading GMOs may be accepted.
  3. This acceptance may cause governments to be lax in their application of safety measures.
  4. Laxness in the application of safety measures may lead to a serious disaster.
  5. Therefore, we cannot accept Xylencer.

Any slippery slope argument risks being a case of the so-called “slippery slope fallacy”: a slippery slope argument in which the intermediate causal claims (statements 2-4 in the examples above) are vague or dubious, which undermines the argument [3].

In Example 1, depending on the ability of the person in question to restrain themselves, any of statements 2-4 could be dubious. In the argument regarding Xylencer, one could argue that the third, and especially the fourth, statement is dubious. We consider it doubtful that government organizations would become lax in their application of safety measures, as we have found no precedent for such a scenario.

Market forces

Finally, our attention was drawn to the issue that various market effects may jeopardize effective application of our solution or have unwanted side-effects.

First of all, there is significant precedent for the development of monopolies through the application of biotechnology in agriculture. For example, if a company patents a crop strain with superior resistance to a particular disease, it may demand exorbitant payments from farmers who live in areas where that disease is prevalent, financially harming those farmers in the process.

One should first note that this problem is not specific to Xylencer, but a general feature of a system in which only patented technologies are financially feasible to develop, and patents grant their holders a temporary monopoly. This, however, does not exempt Xylencer from the criticism. One could argue that Xylencer, as a self-spreading biocontrol product, would require oversight in its application, and that its production and application would therefore likely be regulated by a government entity. This would limit the effect of a temporary monopoly granted by a patent. In addition, our product was developed to be open source so as to fit within the context of iGEM; Xylencer therefore cannot be patented.

If Xylencer phages were able to travel beyond one farm, another potential issue is a situation in which no farmer wants to pay for and apply our solution, in the hope that the farmers around them will. In this scenario, it is favorable for every individual farmer not to apply the solution, whereas for the group of farmers it is favorable to apply it. A similar situation can be seen in vaccination: vaccination rates drop as people feel safe because the majority of society is vaccinated, which makes the disease appear less often. This kind of situation is referred to in the literature as the tragedy of the commons or, more broadly, the prisoner's dilemma [4].
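This incentive structure can be made concrete with a toy payoff model. All numbers below (cost, benefit, spillover protection) are hypothetical illustrations we chose for this sketch, not field data:

```python
# Toy payoff model of the free-rider problem described above.
# COST, BENEFIT and the 0.8 spillover factor are illustrative
# assumptions, not measured values.

COST = 3.0      # price of buying and applying the phage product
BENEFIT = 10.0  # value of having one's farm protected from the disease

def protection(applies: bool, neighbour_rate: float) -> float:
    """Chance a farm ends up protected: certain if it applies the
    product itself, otherwise partial, via phages spreading in from
    the fraction of neighbours that do apply it."""
    return 1.0 if applies else 0.8 * neighbour_rate

def payoff(applies: bool, neighbour_rate: float) -> float:
    """Expected payoff for one farmer, given their own choice and
    the fraction of neighbouring farms that apply the product."""
    cost = COST if applies else 0.0
    return BENEFIT * protection(applies, neighbour_rate) - cost

# If 90% of neighbours apply, free-riding pays more for the individual:
assert payoff(False, 0.9) > payoff(True, 0.9)   # 7.2 > 7.0
# Yet a region where everyone applies beats one where nobody does:
assert payoff(True, 1.0) > payoff(False, 0.0)   # 7.0 > 0.0
```

With these numbers, defecting is individually rational once enough neighbours cooperate, even though universal cooperation leaves every farmer better off than universal defection, which is exactly the commons dilemma described above.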

However, extensive precedent exists for farmers self-organizing and cooperating on similar issues of biocontrol, for example the use of pheromone traps to disrupt the communication of pest insects, which only works if a large share of farmers in an area cooperate [5].

Responsibility

From the ethics lectures we learned that two major kinds of responsibility exist. Briefly, forward-looking responsibility concerns the duties one has, and backward-looking responsibility concerns the ways in which one is accountable. In this section, we reflect on our duties and our accountability.

  • Types of Responsibility

    In ethics, two views of responsibility can be distinguished. One is called “forward-looking responsibility”: the duties one has. Specifically, having a forward-looking responsibility means that one ought to act in some way, or ensure that some future state of affairs comes about. It asks: “What do I have to do?”. “Backward-looking responsibility” is when one is accountable for something. It concerns the past and asks: “Who is to blame?”.

Our responsibility

We have reflected on what duties we have and in what ways we are accountable. For one, we believe we have the duty to represent stakeholders truthfully. We have therefore represented every stakeholder on our human practices page as faithfully as we can, and we have respected the preference of some stakeholders not to appear on our website. Secondly, as we have had the privilege of receiving support on a level we could not have previously imagined (see: attributions, sponsors), we believe this imparted upon us the duty to make good use of that support, both by developing ideas we believe will make the world a better place and by doing so with our full effort. This wiki as a whole is a record of our fulfillment of that duty. Thirdly, moving on to our application, we believe we have the duty to create a product that is safe for users and the environment. Our efforts towards fulfilling this duty are outlined on our safety page. Finally, we believe we have the duty to prevent, as much as possible, the application of our research for nefarious purposes. Our efforts in this regard are outlined on our Biosecurity page.

Responsibility and autonomous technology

In addition, we would like to draw your attention to a form of responsibility that exists for developers of biotechnology, but that has, to our knowledge, not yet been considered.

  • A case study of autonomous technology in ethics

    Recently, much attention has been paid in applied ethics to self-driving cars. Their objective is to keep the driver and other traffic safe and to minimize any harm done. Usually the means to achieve this goal are straightforward: stay on the road, don't hit pedestrians, and so on. But one can imagine scenarios in which the correct way to achieve this objective is much more ambiguous.

    Imagine, for example, a situation in which a crash is unavoidable. Should your autonomous car sacrifice your life by swerving into a tree in order to save multiple pedestrians? Should it swerve into an older or sick person instead of a young one? Or consider the scenario in which a car can swerve into a motorcyclist wearing a helmet, or one without a helmet. The chance of a fatal accident would be minimized by hitting the helmet-wearing motorcyclist, but many people would find it offensive to reward the undesirable behavior of not wearing a helmet in this way.

    When humans control vehicles in crash situations, little responsibility is attributed to the driver, as decisions are generally made in a split second. In an autonomous car, however, these decisions can be considered months in advance, and many argue that this gives the developer at least some responsibility for their outcome.

The autonomy of a technology is defined as “the ability of the system to respond to the environment ‘by itself’, without a prior ‘script’” [7]. We noted that, according to this definition, biotechnology is also a form of autonomous technology: all living beings make decisions on how to respond to their environment, much like a computer makes decisions based on input variables. A computer program makes those decisions based on its code; we see a direct parallel in that the microbes we engineer make decisions based on their genetic code.

Furthermore, biotechnology adapts to its environment through evolution, much like many forms of autonomous technology adapt to their environment through artificial intelligence. Whereas artificial intelligence adapts to meet some objective set by the developer (usually minimizing a cost function), biotechnology adapts through evolution to survive its environment (maximizing its fitness).

Although issues like those in autonomous driving are not yet as universal in synthetic biology as in the field of autonomous technology, the prevalence of cases in which engineered biomachines are faced with ethical decisions can be expected to rise as the applications of synthetic biology grow broader and more sophisticated.

The broad consensus in the field of autonomous technology is that the developer is responsible for the behavior of their technology. We believe it follows that the synthetic biology community has the duty to consider possible ethical issues resulting from the autonomous nature of its technology.

Imagine, for example, a synthetic biology case of biomachines meant to degrade multiple toxins in natural water. If one bacterium is used to degrade multiple toxins, and toxin levels rise so high that not all of them can be degraded, which toxin should it prioritize? Should it preferentially degrade compounds toxic to humans over those toxic to other life? Should it prefer toxins that cause birth defects over toxins most damaging to the elderly? If a human oversaw the cleaning of the water, these decisions could be made by that human actor, who would carry the ethical weight of the consequences. As it stands, these are choices with a strong ethical dimension that our biotechnology would be forced to make, based on instructions that we, the developers, have either explicitly or implicitly coded into its DNA.


  • References
    1. Dorrestijn, S. (2012). The Product Impact Tool. Design for Usability Methods & Tools, 111.
    2. European Commission. (2003). Guidelines for the development of national strategies and best practices to ensure the coexistence of genetically modified crops with conventional and organic farming, 2003/556/EC. Official Journal of the European Communities, 36-47.
    3. Hansen, H. (2019). Fallacies. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Fall 2019 Edition).
    4. Lacey, N. (2008). The Prisoners' Dilemma. Cambridge, UK: Cambridge University Press.
    5. Kydonieus, A. F. (2019). Insect Suppression with Controlled Release Pheromone Systems (Vol. 1). CRC Press.
    6. Müller, V. C. (2012). Autonomous cognitive systems in real-world environments: Less control, more flexibility and better interaction. Cognitive Computation, 4(3), 213
    7. Vincent, N. A., Van de Poel, I., & Van Den Hoven, J. (Eds.). (2011). Moral responsibility: beyond free will and determinism (Vol. 27). Springer Science & Business Media.