Sensemaking for a plural world

Perspective Map

Synthetic Biology and Gene Editing: What Each Position Is Protecting

March 2026

In December 2023, the United States Food and Drug Administration approved Casgevy — the first CRISPR-Cas9 gene-editing therapy to receive U.S. approval (the UK's regulator had cleared it weeks earlier). The treatment works by editing patients' own bone marrow cells to reactivate fetal hemoglobin production, effectively curing sickle cell disease and beta-thalassemia. Victoria Gray, the first American patient to receive the therapy in a 2019 clinical trial, remained free of the debilitating pain crises that had defined her life for more than a decade. By early 2025, physicians at Children's Hospital of Philadelphia had gone further: they used a personalized CRISPR base editing therapy to treat a baby with carbamoyl phosphate synthetase 1 (CPS1) deficiency, a rare enzyme disorder that is very often fatal in infancy. The child survived. These were not incremental advances. They were, by any reasonable measure, miracles of applied science.

Five years earlier, a researcher named He Jiankui had used the same underlying tools to edit the germline of human embryos — changes that would be heritable, passed to all future descendants of the twin girls born from that experiment. He had done it, he said, to protect them from HIV. He had not disclosed the work to regulators or obtained meaningful informed consent. He was sentenced to three years in prison. The scientific community's condemnation was nearly unanimous — but the reasons varied. Some were appalled by the recklessness of the specific act. Others were appalled by the category of act: heritable editing, regardless of how carefully done, crosses a line that had not been crossed before. The distinction mattered, because the approved therapy and the condemned experiment used the same molecular scissors. What separated the miracle from the crime was not the technology. It was where it was pointed, and who decided.

What biomedical optimists and therapeutic advocates are protecting

The lives of people who are suffering now — and the recognition that deferring powerful therapies on precautionary grounds is itself a choice with moral costs that tend to be invisible because the people who bear them have no voice in the governance conversation. Sickle cell disease affects approximately 100,000 Americans, the majority of them Black, and roughly 8 million people worldwide. Beta-thalassemia is among the most common inherited blood disorders in the world, concentrated in populations across South Asia, Southeast Asia, and the Mediterranean. Before Casgevy, the only curative option was a bone marrow transplant requiring a matched donor — difficult to find, expensive, immunologically risky — available to a small fraction of patients. CRISPR-based therapies are a single treatment rather than a lifetime of blood transfusions. They transform the prognosis of diseases that have historically fallen below the research and development priority threshold because their patients were poor, non-white, or both. Advocates for therapeutic development are protecting the recognition that the precautionary principle, applied asymmetrically, reliably protects the healthy and the privileged by slowing therapies that would most benefit the chronically ill and the marginalized. Kiran Musunuru, who led the team that treated the baby with CPS1 deficiency, has argued that the ethical framework for gene therapy must grapple seriously with the cost of inaction — not as an abstract utilitarian calculation, but as an acknowledgment that every year of regulatory delay represents real patients who did not receive a treatment that existed.

The logic of the therapeutic pipeline — and the argument that germline editing, done carefully and for severe hereditary conditions with no other treatment options, is a continuation of medicine's existing commitment to preventing suffering rather than a departure from it. Not all advocates in this space endorse He Jiankui's act. But some argue that the categorical prohibition on germline editing — treating it as a line that cannot be crossed regardless of indication, careful governance, or patient benefit — is itself a position that requires defending, not one that can simply be assumed. Proponents of carefully regulated germline research point to the existing practice of preimplantation genetic testing: selecting embryos in IVF cycles that do not carry severe genetic conditions is already routine, already legal, and does not generate comparable moral alarm. If selecting against a heritable condition is acceptable when done at the embryo selection stage, the argument goes, why is editing the gene that causes the condition in a wanted embryo categorically different? The 2020 Heritable Human Genome Editing report, produced by an international commission convened by the U.S. National Academies and the UK's Royal Society, did not conclude that heritable editing should be permanently prohibited — it concluded that current technologies did not yet meet the precision standard required for clinical use, and proposed criteria under which a responsible path forward could eventually be established. Therapeutic advocates are protecting the space for that path to remain open.

What biosafety and biosecurity advocates are protecting

The recognition that synthetic biology creates a class of risk that is categorically different from most industrial hazards — because engineered biological agents can self-replicate, evolve, and spread — and that the same knowledge enabling medicine enables catastrophe. Kevin Esvelt, who directs the Sculpting Evolution group at MIT and who, importantly, helped develop gene drive technology before becoming one of the most vocal advocates for restricting information about it, has argued that synthetic biology represents a fundamental change in the biosecurity threat landscape. The horsepox virus — closely related to smallpox — was reconstructed by a Canadian research team in 2016 from mail-order DNA fragments, at a cost of approximately $100,000, without any specialized facilities beyond a standard virology laboratory. The research was published in full in 2018. Esvelt's concern is not that any specific actor will immediately weaponize this knowledge; it is that the pool of people with the technical ability to attempt it is growing rapidly, that published research continuously lowers the technical barrier, and that in a world of eight billion people, the probability of catastrophic misuse approaches certainty over sufficiently long time horizons if the knowledge continues to spread without constraint. Biosecurity advocates are protecting against a risk that does not announce itself in advance — that looks like ordinary scientific progress until it does not.

The norm of scientific openness — and the painful recognition that it may need to be partially revised for a specific category of research where unrestricted publication creates information hazards with mass-casualty potential. The open publication of scientific findings is one of science's foundational commitments: reproducibility requires it, credit assignment requires it, and the collective accumulation of knowledge depends on it. Biosecurity advocates are not arguing against this norm in general. They are arguing that certain specific categories of research — primarily research that enhances the transmissibility or lethality of potential pandemic pathogens, and research that describes methods for synthesizing dangerous agents from commercially available components — may require a different publication framework: one in which findings are shared with relevant biosafety authorities before or instead of full public disclosure. The gain-of-function research controversy, which predates synthetic biology but has been sharpened by it, illustrates the tension: experiments that modify influenza strains to test pandemic potential generate both useful scientific knowledge and a published recipe for a more dangerous pathogen. Esvelt's SecureDNA project is a specific proposal in this direction: cryptographic screening of DNA synthesis orders to prevent the creation of dangerous sequences, without requiring synthesizers to hold a plaintext list of what is dangerous. What biosecurity advocates are protecting is the possibility of governing a technology whose risks grow as its capabilities do — establishing the framework before a catastrophe makes it necessary, rather than in its wake.
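The screening idea can be sketched in miniature. The toy below is not SecureDNA's actual protocol (the real system uses distributed cryptographic infrastructure and different parameters); the 20-base window length, the plain SHA-256 digests, and all sequence strings here are illustrative assumptions. It shows only the core shape: a synthesizer checks every fixed-length window of an incoming order against precomputed digests of hazard windows, so the hazard list never needs to circulate as plaintext sequence.

```python
import hashlib

WINDOW = 20  # illustrative window length, not SecureDNA's actual parameter


def digest(seq: str) -> str:
    """Hash one DNA window; only digests, not sequences, are stored."""
    return hashlib.sha256(seq.upper().encode()).hexdigest()


def build_hazard_set(hazard_seqs):
    """Precompute digests of every window of each hazard sequence."""
    hashes = set()
    for seq in hazard_seqs:
        for i in range(len(seq) - WINDOW + 1):
            hashes.add(digest(seq[i:i + WINDOW]))
    return hashes


def screen_order(order: str, hazard_hashes) -> list:
    """Return positions of windows in an order that match a hazard digest."""
    return [i for i in range(len(order) - WINDOW + 1)
            if digest(order[i:i + WINDOW]) in hazard_hashes]


if __name__ == "__main__":
    # Hypothetical stand-in sequences, not real pathogen DNA.
    hazards = build_hazard_set(["ATGCGTACGTTAGCCGATCCATTGCA"])
    clean = "GGGTTTAAACCCGGGTTTAAACCCGGGTTT"
    flagged = "AAAA" + "ATGCGTACGTTAGCCGATCC" + "TTTT"
    print(screen_order(clean, hazards))    # no hits: []
    print(screen_order(flagged, hazards))  # hit at position 4: [4]
```

A real deployment would also have to handle reverse complements, near-match variants, and deliberate recoding of hazardous sequences, which is where the hard cryptographic and curation problems actually live.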

What ecological precaution advocates are protecting

The integrity of wild ecosystems against a new category of irreversible anthropogenic change — one that operates at the genetic level and cannot be recalled once released. A gene drive is a genetic system that spreads through a wild population faster than Mendelian inheritance would allow: it copies itself into the matching chromosome in every individual that carries it, ensuring near-100% transmission instead of the 50% expected for normal genetic variants. First proposed formally by Austin Burt at Imperial College London in 2003, gene drives were made technically feasible by CRISPR — which provided the precise editing machinery the concept had always required but never had. Target Malaria, funded in part by the Gates Foundation, has been developing gene drives designed to suppress mosquito populations that carry malaria in sub-Saharan Africa — a disease that kills more than 600,000 people annually, the majority of them children under five. The therapeutic argument for gene drives is, in its humanitarian terms, as compelling as any argument in this collection. Ecological advocates are not dismissing it. They are protecting the recognition that releasing a self-propagating genetic modification into a wild population is a one-way door: unlike a field trial, it cannot be stopped once it begins to spread. The population genetics models that predict drive spread are well-validated in laboratory conditions and small cage experiments. What they cannot tell us is what happens to ecosystems when a mosquito species is suppressed or modified at continental scale — because no experiment of that scale has ever been run, and running it is, by definition, irreversible.
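The quantitative claim here, near-100% transmission versus the Mendelian 50%, can be made concrete with a standard deterministic recursion from population genetics. The sketch below is illustrative only: the starting frequency, the perfect homing efficiency, and the absence of fitness costs are simplifying assumptions for exposition, not parameters of any real drive.

```python
def next_freq(p, homing=1.0):
    """One generation of an idealized homing gene drive.

    Random mating gives genotype frequencies p^2 (drive/drive),
    2pq (drive/wild), q^2 (wild/wild). A heterozygote transmits
    the drive allele with probability (1 + homing) / 2 instead of
    the Mendelian 1/2, because homing copies the drive onto the
    matching chromosome in the germline. With homing = 0 the
    recursion reduces to p' = p (ordinary Mendelian inheritance).
    """
    q = 1.0 - p
    return p * p + 2.0 * p * q * (1.0 + homing) / 2.0


def trajectory(p0, generations, homing=1.0):
    """Drive-allele frequency over successive generations."""
    freqs = [p0]
    for _ in range(generations):
        freqs.append(next_freq(freqs[-1], homing))
    return freqs


if __name__ == "__main__":
    # A rare neutral Mendelian allele stays rare...
    mendelian = trajectory(0.01, 12, homing=0.0)
    # ...while an idealized drive sweeps from 1% of the population
    # to above 99% in roughly ten generations.
    drive = trajectory(0.01, 12, homing=1.0)
    for gen, (m, d) in enumerate(zip(mendelian, drive)):
        print(f"gen {gen:2d}  mendelian {m:.4f}  drive {d:.4f}")
```

Real drives have homing efficiencies below 1 and impose fitness costs on carriers, both of which add terms to the recursion and can slow, stall, or reverse the sweep; that is exactly why the laboratory and cage-experiment measurements mentioned above matter so much to release decisions.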

The precautionary principle in a form that takes genuinely seriously the asymmetry between the timescale of ecological understanding and the timescale of regulatory approval — and the recognition that the communities most exposed to both malaria and to potential ecological disruption have had limited voice in the governance discussions that will determine whether drives are released in their environments. Gene drive governance has been the subject of sustained attention from international bodies, including the Convention on Biological Diversity and the International Risk Governance Council, both of which have emphasized that current governance frameworks were not designed for a technology that is both transnational (genetic modifications do not respect borders) and potentially irreversible. The governance gap is not primarily technical — it is procedural and democratic. The communities in the Sahel, in West Africa, in Southeast Asia who would be most affected by both malaria and by drive release have institutional systems and knowledge frameworks that are poorly integrated into the regulatory processes being conducted in London and Cambridge. Ecological precaution advocates are protecting the principle that the people most exposed to a risk should have meaningful authority over whether it is taken — not merely consultation, but decision-making power. Target Malaria has invested significantly in community engagement in Mali, Burkina Faso, and Uganda, and ecologists on the project genuinely believe the work is being done carefully. What precaution advocates are protecting is the difference between engagement and consent.

What justice and governance advocates are protecting

The recognition that the ability to edit human heritable traits at scale — once accessible — will reproduce and deepen existing inequalities, because access to powerful reproductive technologies has always followed the distribution of wealth. Casgevy, the first approved CRISPR therapy, costs approximately $2.2 million per patient in the United States. That price reflects development costs and market logic, not the actual cost of the therapy's manufacture — but it illustrates the pattern. Disability rights advocates and bioethicists at the Center for Genetics and Society, including Marcy Darnovsky, argue that a world in which heritable genome editing is technically available but economically restricted to the wealthy is worse — not better — than a world in which it is prohibited. The concern is not primarily about individual treatments for severe genetic conditions; it is about the trajectory of the technology once it has been normalized for any use. A governance framework that permits germline editing for sickle cell disease creates the infrastructure, the clinical practice, and the social license for germline editing for complex traits — intelligence, height, disease predisposition — as those targets become technically tractable. Disability advocates are protecting against the predictable consequence: a society in which certain categories of human variation are progressively edited out of the population, not by force, but by the cumulative weight of individual choices made available only to those who can afford them.

The lives and standing of disabled people — and the argument that the expansion of heritable gene editing implicitly communicates that lives like theirs are less worth living, regardless of whether that is the intention of any individual who makes an editing decision. The philosopher and disability activist Adrienne Asch articulated what has come to be called the expressivist objection: selecting against a disability is morally distinct from ending a pregnancy for other reasons, because it is a selection against a specific trait — against the kind of person the fetus would become — and so expresses a social message about which kinds of human lives are worth bringing into the world. Disability communities have been making this argument in the context of prenatal testing and selective termination for decades; synthetic biology amplifies its stakes substantially. The Autistic Self Advocacy Network's 2019 submission to the National Academies on heritable genome editing argued that the framing of autism and many disabilities as "diseases to be eliminated" reflects a medical model of disability that is contested within disability communities — that the suffering associated with many conditions is, in significant part, a consequence of inaccessible and inhospitable social environments, not of the disability itself. Justice advocates are also protecting the Global South's position in this governance conversation: the primary risks of gene drive technology are concentrated in tropical regions, the primary benefits of synthetic biology are being captured by pharmaceutical companies headquartered in North America and Europe, and the governance frameworks being designed are primarily shaped by institutions in wealthy countries. Gabriela Arguedas-Ramírez has called this dynamic "techno-scientific colonialist paternalism" — the pattern by which powerful biotechnology is governed by those who benefit from it, deployed in the territories of those who bear its risks, and framed as progress for all humanity.

What the argument is actually about

The synthetic biology debate is, at its core, a debate about whether the same tools can be safely governed differently for different purposes — and whether the moral logic that justifies therapeutic use can be kept from migrating into territory where it produces outcomes most of the involved parties would, on reflection, reject. The therapeutic case for CRISPR is genuinely compelling. So is the biosecurity concern. So is the ecological caution about gene drives. So is the justice critique of heritable editing. What makes this debate structurally difficult is that these concerns do not align into two teams — they cut across each other. Biosafety advocates and therapeutic advocates are not opponents; Kevin Esvelt and Jennifer Doudna share a laboratory tradition, and Esvelt has been explicit that his biosecurity concerns are not arguments against CRISPR therapeutics. Justice advocates and ecological precaution advocates are not the same community; disability rights objections to germline editing are about human lives, while gene drive concerns are about wild ecosystems, and they require different governance responses. The tendency to collapse this into a single "pro-synthetic biology / anti-synthetic biology" axis obscures the fact that most thoughtful participants in this debate accept some uses and oppose others — and the hard work is precisely in specifying which.

Whether the distinction between somatic editing (changes to an individual's own cells) and germline editing (heritable changes to all future descendants) is morally fundamental or merely technically convenient — and whether governance frameworks that permit the first and prohibit the second can hold as the technologies become more capable and less expensive. The current international consensus — expressed in the WHO advisory committee recommendations, the National Academies reports, and the policy frameworks of most high-income countries — treats somatic editing as acceptable in principle (subject to clinical evidence standards) and germline editing as not yet appropriate for clinical use (subject to future revision if criteria are met). This distinction has considerable moral logic: somatic editing affects only one consenting patient; germline editing affects all future descendants, none of whom can consent. But governance frameworks built on technical distinctions have a poor track record of stability when the underlying technology continues to advance. Base editing, prime editing, and other CRISPR variants are progressively improving the precision of germline modification. The cost of DNA sequencing has fallen by many orders of magnitude since the 1970s, and synthesis costs are on a similar, if slower, trajectory. What is technically difficult today becomes technically routine in a decade. The question is whether governance frameworks designed for the current state of the technology will be robust to the state of the technology in 2040 — or whether they will require continuous renegotiation as capabilities advance, leaving the most consequential decisions to be made under the least favorable conditions.

Whether the Asilomar precedent — scientists voluntarily pausing certain recombinant DNA experiments in 1974 to allow governance to catch up — can be replicated for a technology that is simultaneously more powerful, more dispersed, more commercially developed, and being advanced by actors in dozens of countries with different regulatory philosophies. The 1975 Asilomar Conference is the canonical example of scientists exercising precautionary self-restraint before a governance crisis forced it. A group of molecular biologists, recognizing that recombinant DNA techniques raised safety questions they could not yet answer, voluntarily halted certain classes of experiments and convened a conference to develop guidelines. The moratorium worked, and the field emerged from it with a regulatory framework — NIH guidelines for recombinant DNA research — that held for decades. Whether this model is available for synthetic biology is genuinely uncertain. Asilomar worked because the relevant researchers were a small, relatively coherent community concentrated in a handful of American institutions, at a moment before the commercial biotech industry had developed significant interests in the field. Synthetic biology in 2026 involves university researchers, pharmaceutical companies, defense contractors, startups, iGEM student competitions, and government programs in dozens of countries. The community is not small. The interests are not aligned. And the technology has already migrated far enough from the research laboratory that a voluntary pause would be incomplete at best. What Asilomar offers is not a model to be replicated but a reminder that deliberate governance choices made before crises are more likely to hold than governance choices made in their wake.

Synthetic biology is unusual in this collection because the same tools that make it dangerous make it miraculous — and there is no version of the therapeutic promise that does not also carry the biosecurity risk, no gene drive that cures malaria that does not also demonstrate how to spread engineered traits through wild populations without consent. Most debates here involve a genuine conflict between values that can, at least in principle, be distributed to different institutions or different people. This one does not. The knowledge is the same. The choice is not between using it and not using it — that choice was made in 1972, in Paul Berg's Stanford laboratory. The choice now is about whether the governance of an irreversibly powerful technology can be designed before the worst applications of it are demonstrated, or whether, as has happened before with nuclear weapons and gain-of-function research, the governance arrives only after something has gone wrong. The answer to that question is not in the technology. It is in whether the communities with the most to gain, the most to lose, and the least institutional voice all end up in the room.

Further Reading

  • Jennifer Doudna and Samuel Sternberg, A Crack in Creation: Gene Editing and the Unthinkable Power to Control Evolution (Houghton Mifflin Harcourt, 2017) — the most accessible account of CRISPR's development by one of its co-inventors; Doudna traces the scientific discovery, the therapeutic implications, and her own evolving discomfort with the pace at which the technology was moving toward clinical and commercial application; the book is an unusually honest account of what it feels like to create something you do not fully control, written before both He Jiankui's announcement and Casgevy's approval, and it holds the therapeutic promise and the governance fear in genuine tension.
  • Walter Isaacson, The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race (Simon & Schuster, 2021) — a narrative account of the competitive race to develop CRISPR and the patent dispute between the Doudna lab at Berkeley and the Zhang lab at the Broad Institute; Isaacson's reporting on the He Jiankui case and its aftermath is among the most detailed available in accessible form; the book is especially valuable for understanding how commercial and reputational incentives shaped the governance conversation — and how the researchers at the center of the technology navigated the gap between their scientific work and its social implications.
  • Kevin Esvelt, "Inoculating Science Against Potential Pandemics and Information Hazards," PLOS Pathogens 14, no. 10 (2018): e1007286 — Esvelt's most direct statement of his biosecurity argument, explaining why he believes certain classes of published synthetic biology research constitute information hazards and why the scientific community's open publication norm requires revision for this specific domain; essential for understanding why a researcher who developed gene drives became one of the most prominent advocates for restricting information about them — and for the argument that the Asilomar precedent needs to be updated for a context in which the relevant research community is global and the commercial interests are substantial.
  • National Academies of Sciences, Engineering, and Medicine, Heritable Human Genome Editing (National Academies Press, 2020) — the definitive international scientific and ethical assessment of whether, and under what conditions, heritable human genome editing might be ethically permissible; the 18-member commission from ten countries concluded that current technologies do not meet the precision standard required for clinical use, proposed specific criteria that would need to be met before clinical application could be responsibly considered, and explicitly left open the question of whether meeting those criteria would be sufficient — rather than treating heritable editing as permanently prohibited; the report's framing of conditions rather than prohibitions is the most important single document for understanding what responsible governance in this area looks like.
  • Kiran Musunuru, The CRISPR Generation: The Story of the World's First Gene-Edited Babies (BookBaby, 2019) — written in direct response to the He Jiankui case by one of the leading researchers in therapeutic CRISPR applications, this is the most detailed clinical account of what He actually did, why it failed by the standards of both science and ethics, and what a responsible path to heritable editing would have required; Musunuru's perspective is that of a physician-scientist who strongly supports somatic CRISPR therapies and is deeply committed to preventing He's recklessness from foreclosing a legitimate scientific program — making him an unusually useful voice for understanding the fault lines within the pro-therapeutic community.
  • Adrienne Asch, "Disability Equality and Prenatal Testing: Contradictory or Compatible?" Florida State University Law Review 30, no. 2 (2003): 315–342 — the most rigorous philosophical statement of the expressivist objection to genetic selection against disabilities; Asch argues that selecting against disability traits is morally distinct from terminating a pregnancy for other reasons because it expresses a judgment about a category of person rather than a judgment about timing or readiness for parenthood; the argument anticipates the synthetic biology context and has become the foundational text for disability rights engagements with genome editing governance.
  • Austin Burt, "Site-Specific Selfish Genes as Tools for the Control and Genetic Engineering of Natural Populations," Proceedings of the Royal Society B 270, no. 1518 (2003): 921–928 — the paper that formally proposed gene drives as a concept, predating CRISPR but describing the theoretical mechanism; reading this alongside the current Target Malaria and Island Conservation gene drive programs traces the twenty-year arc from theoretical proposal to near-clinical application, and makes visible how long the ecological governance questions have been known and how inadequately they have been addressed in that time.
  • Center for Genetics and Society, "Proposed Moratorium on Heritable Genome Editing Is a Welcome First Step" (2019) — a concise public-interest statement of the Center for Genetics and Society's position after the He Jiankui scandal; the statement argues that heritable editing cannot be governed as a private reproductive choice because its consequences are social and multigenerational; it proposes an international governance framework with democratic legitimacy and explicit prohibitions on enhancement uses, and explains why the Center believes the current trajectory — in which clinical applications are governed primarily by the discretion of well-meaning researchers and institutional review boards — is insufficient for a technology with population-level implications.
  • International Risk Governance Council, Gene Drives: Environmental Impacts, Sustainability and Governance (IRGC, 2022) — the most comprehensive policy analysis of gene drive governance gaps; the report identifies three core governance challenges — the transboundary spread of engineered traits across national borders, the absence of an international framework for environmental release decisions, and the inadequacy of existing biodiversity conventions for a technology that modifies rather than introduces organisms — and proposes a set of governance principles including community consent requirements for affected populations and staged release frameworks with monitoring and exit criteria; essential for understanding why gene drive governance is harder than most other forms of environmental risk governance.
  • Sheila Jasanoff, The Ethics of Invention: Technology and the Human Future (W. W. Norton, 2016) — a political scientist's account of how technological governance works in practice: not through rational deliberation about risks and benefits but through institutions, power structures, and the cultural assumptions embedded in regulatory frameworks; Jasanoff's concept of "civic epistemology" — the ways different societies assess what counts as legitimate knowledge and credible expertise — is essential for understanding why synthetic biology governance looks different in the United States, the European Union, China, and sub-Saharan Africa, and why harmonization of governance across those contexts is harder than the shared molecular biology might suggest.

Patterns in this map

This map illustrates several recurring patterns in how contested positions work:

  • The same tools, different fears: What makes synthetic biology governance unusual is that the same molecular tools generate multiple independent concerns that do not reduce to one another. CRISPR as a therapeutic tool raises access and equity concerns. CRISPR as a research tool raises biosecurity concerns. Gene drives raise ecological concerns. Germline editing raises justice and disability concerns. These are not the same argument in different clothes — they require different institutional responses. The tendency to frame the debate as "is synthetic biology good or bad?" obscures this multiplicity and makes governance harder by forcing allies into adversarial positions.
  • Consent across time: Germline editing and gene drives share a governance problem that appears rarely elsewhere in this collection: the people most affected by a decision — future descendants of edited embryos, future generations living in ecosystems where a drive has spread — cannot consent to it, because they don't exist yet. This creates a structural challenge for utilitarian governance frameworks, which typically aggregate preferences across affected parties. When the affected parties include all future humans or all future members of a wild population, the aggregation is not possible. This is a variant of the long-term governance problem that appears in climate policy and deep-sea mining, but in synthetic biology it is especially acute because the change is encoded at the level of inheritance itself.
  • Information as irreversible release: Kevin Esvelt's biosecurity argument introduces a governance challenge that has no clean precedent: the "release" that biosecurity advocates want to prevent is not primarily the release of a physical agent but the release of information about how to create one. Once a synthesis pathway or a gain-of-function method is published, it cannot be unpublished. The governance question is therefore not "should this be done?" but "should the details of how it was done be published?" This is a form of the dual-use problem that has been discussed in nuclear physics and chemistry for decades, but synthetic biology makes it more urgent because the barrier between information and capability is lower — a published method can be reproduced in a moderately equipped laboratory in a way that a nuclear weapons design cannot.
  • The therapeutic exception and its migration: Governance frameworks that permit technologies for clearly therapeutic purposes and prohibit them for enhancement purposes rely on a distinction that erodes as the technology becomes more capable. What counts as treatment and what counts as enhancement is unstable: is editing a predisposition to depression treatment or enhancement? Is adding malaria resistance to a germline in a region where malaria is endemic treatment or enhancement? This erosion has a known trajectory in medicine — cosmetic surgery, hormone therapy, and antidepressants have all followed a path from narrow medical indications to broader social uses — and governance frameworks that do not account for it tend to be overtaken by it. The question is not whether the therapeutic exception will hold forever. It is whether the governance transition from "therapy only" to "other uses too" can be made deliberately and democratically, or whether it will happen through the accumulation of individual clinical decisions with no moment of collective choice.
Structural tensions in this debate

Three tensions that the body text names but does not fully resolve:

  • The dual-use entanglement. The biosecurity problem in synthetic biology is not a problem of misuse by bad actors — it is a problem built into the knowledge itself. The same molecular logic that allows CRISPR to correct sickle cell mutations describes how to reconstruct dangerous pathogens from commercially available components. Kevin Esvelt's core argument is not that researchers will misuse their findings; it is that a published method cannot be unpublished, and that in a world of eight billion people, the gap between "technically possible" and "catastrophically misused" closes over time regardless of the intentions of the researchers who established the possibility. This means the therapeutic and the biosecurity debates cannot be separated: the governance that enables the medicine also enables the catastrophe. There is no version of "continue therapeutic research" that does not also mean "continue building out the knowledge base from which mass-casualty applications could be derived." The question of whether the therapeutic benefits justify the biosecurity exposure is real and genuinely difficult — but it cannot be avoided by separating the two discussions, because the knowledge is the same discussion.
  • The consent impossibility. The two most contested applications of synthetic biology — germline editing and gene drives — share a governance property that creates a formal problem for any framework grounded in consent: the people most affected by the decision cannot be consulted, because they don't exist yet. Future descendants of an edited embryo will inherit a genetic change they had no voice in. Future generations living in ecosystems where a gene drive has spread through wild populations will live with an ecological reality decided before they were born. Consent frameworks work by aggregating preferences across affected parties. When the most-affected parties are future people or future populations that don't yet exist, the aggregation is formally impossible. Existing governance responses — ethics boards, precautionary principles, community engagement processes — are institutional workarounds for an impossibility, not solutions to it. Target Malaria's extensive engagement with communities in Mali and Burkina Faso is a serious effort, and it still cannot obtain consent from the people who will actually bear the consequences of continental-scale gene drive release decades from now. What it can do is create accountability structures for the consent that is possible, in full awareness of the consent that is not.
  • The therapeutic exception as a ratchet. Governance frameworks that permit synthetic biology tools for clearly therapeutic purposes and prohibit them for enhancement or non-clinical uses rely on a distinction whose stability is inversely proportional to the technology's capability. This is not a theoretical concern: the path from IVF embryo selection (in clinical use since 1990, legally uncontroversial in most jurisdictions) to somatic CRISPR therapy (approved in 2023) to germline editing (currently prohibited for clinical use) follows a trajectory in which each step creates the clinical infrastructure, commercial investment, and social normalization that makes the next step easier. The justice critique of heritable editing is precisely that permitting it for sickle cell disease builds the platform from which editing for intelligence or height predispositions becomes technically feasible, commercially rational, and socially normalized before any democratic moment of collective choice occurs. The ratchet dynamic means that governance designed for the current state of the technology tends to be overtaken by the next state. Whether a democratic society can make deliberate, collective choices about which uses are acceptable — rather than discovering after the fact what it has allowed — is the governance question underlying all the others.

See also

  • Who bears the cost? — the framing essay for arguments about who absorbs the risks of innovation when the benefits and harms of synthetic biology are distributed unevenly; the same underlying question appears here in disputes over who receives breakthrough therapies, who bears dual-use and ecological risk, and whose consent counts when biological interventions can spill across borders and generations.
  • AI Safety and Existential Risk — a parallel debate about transformative technology with catastrophic potential, where a similar fault line exists between those focused on near-term concrete harms and those focused on low-probability, civilization-scale risks; the biosecurity argument in synthetic biology and the existential risk argument in AI share the same structural challenge of governing risks that are difficult to quantify and may manifest slowly.
  • Animal Rights and Factory Farming — synthetic biology offers routes to cultured meat and animal product replacements that could substantially reduce factory farming's scale; the ethics of engineering biology to reduce animal suffering is part of the debate about what legitimate uses of these tools look like.
  • Disability Rights in Employment — the accommodation and accessibility framework that disability advocates apply to employment is grounded in the same social model of disability that informs the expressivism objection to germline editing; understanding how disability communities reason about inclusion in society helps clarify what is at stake in their critique of genetic selection.
  • Deep-Sea Mining — a parallel case of irreversibility under time pressure: both gene drives and deep-sea mining involve decisions whose consequences will persist on geological or evolutionary timescales, made by institutions whose accountability horizons are measured in years; both involve communities in the Global South bearing the primary risks of interventions designed to solve problems that affect everyone.
  • Indigenous Land Rights — the gene drive governance conversation about community consent in affected populations is structurally similar to the indigenous land rights debate about free, prior, and informed consent for resource extraction; both involve externally developed technologies being deployed in ways that affect communities whose governance systems are not recognized by the international frameworks making the decisions.
  • Bioweapons Governance — the downstream governance challenge that synthetic biology's dual-use problem most directly amplifies: the Biological Weapons Convention was designed before the democratization of the life sciences, and synthetic biology has lowered the barriers to biological weapons development faster than the BWC's governance architecture has adapted; the information hazard problem that biosecurity advocates raise in synthetic biology is the same problem that dual-use research governance advocates are trying to solve in the bioweapons context.