Blogging Ethics


12/02/2020 - 9:26pm

The Care Quality Commission is the independent regulator of health and adult social care in England. It is conducting an investigation into the use of do-not-resuscitate orders. Its report is due in early 2021.

12/02/2020 - 11:39am

By James Toomey

If you want to understand America, you must understand our politics of abortion. And if you want to understand our politics of abortion, you must read Mary Ziegler’s recent legal history, “Abortion and the Law in America: Roe v. Wade to the Present” (2020).

In comprehensive detail and in a singularly fair and thoughtful way, Ziegler tells the story of American regulation of abortion from the Supreme Court’s historic Roe v. Wade decision to the present, and looks ahead to an uncertain future. Through vignettes of activists who have dedicated their lives to one side of the debate or the other, Ziegler shows that, notwithstanding the superficial constancy of the abortion debate — one side proclaiming the constitutional, essential rights of the fetus, the other the similarly irreducible right of bodily autonomy — the character of the debate, and the kinds of arguments made, have shifted over the course of the last fifty years.

At different times and in different ways, partisans on both sides have sought out arguments about what Ziegler terms the costs and benefits of abortion — that is, arguments about whether society is better or worse off with legal abortion, as measured by some exogenous metric. These debates, distinct from rights claims, are about whether abortion helps women or harms them, helps or harms minorities, strengthens or fragments the American family. But more than that, the arguments are about epistemology and the “good life” — how we can know whether something is a benefit or harm, and under what theory of value it is good or bad.

Reading Ziegler’s book, it is hard not to see connections to the broader context of intellectual history in which the abortion debate has taken place. For example, Ziegler relates that in the 1990s, as the medical profession, and particularly the American College of Obstetricians and Gynecologists, took a more consistent line that abortion is a safe and ordinary medical procedure, pro-life activists came to challenge medical science and scientists themselves. They argued that the medical profession was conducting bad science out of self-interest, suppressing dissent and framing scientific facts to accord with its philosophical commitments, and that the media was helping it do so. Moreover, they contested that science was the appropriate epistemological framework for answering these questions at all, relying increasingly on testimonials from women who regretted abortions and the notion that emotional responses to graphic images of abortion are a way of knowing whether it is beneficial or harmful.

The compatibility between these arguments and nearly identical moves among skeptics of climate change at nearly the same time is obvious. But reading the book today, it’s hard not to see the similarity also between these kinds of arguments and contemporary skepticism of public health and scientific authority in the coronavirus pandemic, which we know is concentrated on the right.

Like many histories, Ziegler’s book leaves us with at least as many questions as answers. She doesn’t opine, for example, on what might be going on with this confluence of arguments. It is at least plausible that similar arguments from similar people come from a consistent underlying epistemology. But if so, where did that epistemology come from? Was it the motivated result of the structure of the abortion debate? Or did it come first? And is it on to something? Are there good reasons for those with certain substantive normative views to be skeptical of science in certain fields? Maybe in some contexts, but not others? I don’t know, but Ziegler has given us a research agenda for a decade.

Amid this fascinating uncertainty, Ziegler’s book does clearly show something relevant for understanding partisanship more broadly. Specifically, she shows that shifts in the discussion to the purportedly utilitarian costs and benefits of abortion have not been correlated with a reduction in the intensity of the debate or the distance between the combatants.

There is, I think, something of a tendency in conversations about partisanship to see part of the problem as a proclivity to talk about big questions rather than little ones, sweeping claims of rights rather than utilitarian policy tinkering, whether the United States is a metaphysically racist country rather than the composition of the school board. If we talked more about real policy, the argument goes, we might realize that we’re not all that far apart at all, that we want the same things, that we have reasonable disagreements about how to get there.

Ziegler’s book tells us this isn’t so. And it makes sense. As with the costs and benefits of abortion, our views on the composition of the school board come from our answers to the big questions. And we disagree on our answers to the big questions. Moving the conversation to narrow issues of policy, then, doesn’t set aside our underlying disagreements but obscures them. Indeed, by not talking about the big questions underneath, perhaps it is harder to understand where our opponents got their policy positions, and harder not to see them as evil or stupid.

What to do with this observation is not obvious. After all, Ziegler also points out that discussing the fundamental philosophical questions implicated by abortion hardly resolved them. For those of us who worry about the future of a country united by hatred, it is easy to find in the inescapability of these hard normative disagreements a kind of nihilism.

But maybe that’s okay. Maybe the point is that we have to understand that we do disagree about very important things, that we can’t silence or ignore or get rid of those we disagree with, and that we probably won’t persuade them. We have to live with them.

It’s a far cry from the civic republican ideal of 1950s fantasies and vague recollections of Rome. But it might be better than what we’ve got.

The post Book Review: Mary Ziegler’s ‘Abortion and the Law in America’ appeared first on Bill of Health.

12/02/2020 - 7:00am

By Ifeoma Ajunwa

As scientists develop increasingly accurate tests for COVID-19 immunity, we must be on guard against potential inequities arising from their use, particularly their potential application as a prerequisite for returning to the workplace.

A focus on immunity as a yardstick for return to work will only serve to widen the gulf of economic inequality, especially in countries like the U.S., which has severe racial health care disparities and uneven access to effective care. This focus could also diminish societal support for efforts to further understand and curtail the disease.

On November 12th, 2020, the New York Times reported that a new type of blood test that detects T cells could be more accurate in gauging a person’s immunity to the coronavirus. The new blood test, developed by Adaptive Biotechnologies, is superior to antibody testing because it can detect a T cell response for at least six months, whereas antibodies may become undetectable sooner. Just a few days later, on November 18th, 2020, the Times published the findings of a new, not-yet-peer-reviewed study suggesting that COVID-19 immunity could last for years.

Antibody, or serology, tests are currently used as an imperfect measure to presume COVID-19 immunity. These tests check for the presence of antibodies thought to result from infection with SARS-CoV-2, the virus that causes COVID-19. However, as the FDA has noted, at this time researchers are not certain that the presence of such antibodies means that the individual is immune to the coronavirus. Furthermore, in some cases, antibodies were found where there had not previously been a SARS-CoV-2 infection. In these instances, the presence of antibodies was attributed to other, similar viral infections and certainly could not be relied on as a sign of COVID-19 immunity.

The obvious utility of the new T cell test is for public health purposes – determining immune response and possible immunity to the coronavirus is helpful for combating its spread. But another use case, determining immunity prior to a return to the workplace, is fraught with ethical considerations.

Early on in the pandemic, several governments called for literal “immunity passports” to be accorded to those with detected antibodies to SARS-CoV-2, which would allow those individuals to travel freely and return to work. However, in April 2020, the World Health Organization (WHO) released guidance noting that, given the lack of adequate scientific evidence about the effectiveness of antibody-related immunity, the efficacy of an immunity passport could not be guaranteed. In fact, there is a heightened risk that individuals who assume they have immunity from the coronavirus, due to inaccurate antibody testing, may be more likely to flout public health guidelines, leading to more infections.

This guidance misses an important point. Even as we develop better tests to detect COVID-19 immunity, the important question is not how accurate those tests are. The more important quandary is how society should treat individuals who have either genetic or acquired immunity.

History has shown that immunity to disease as passport to work can draw a dividing line based on both socio-economic factors and racial group memberships. Writing for Slate, Rebecca Onion notes that “[w]hen yellow fever ravaged 19th-century New Orleans, wealthy white people who ‘acclimated’ [i.e., developed immunity] were rewarded.” White people who had survived yellow fever benefited from “immunoprivilege,” while others suffered social and economic repercussions. In the 21st century, a focus on coronavirus immunity rather than prevention of infection could play out similarly.

To acquire immunity, an individual must first survive the disease. Surviving the disease necessitates adequate health care. Yet access to health care services in the U.S. is unequal. While some COVID-19 patients, like President Trump, are able to receive high levels of care (and even experimental drugs), others lower on the socio-economic spectrum do not have health insurance and can only receive emergency care. Thus, immunity as passport to work would only serve to increase inequality, as it would reward those who could afford the care needed to survive.

It is also worth noting that people of color are generally more likely to die of the disease than their white counterparts. Could this lead to a social (even if not scientifically proven) view that white people have greater immunity to the disease than others? If so, imagine how this social view could play out in racial employment discrimination as businesses re-open. Past research shows that racial minorities have had to contend with genetic discrimination in the workplace.

Even as more accurate tests are developed to detect COVID-19 immunity, society must continue to grapple with the ethical questions surrounding the use of those tests. We must remain on guard to ensure that immunity to the coronavirus is not used as a wedge to further separate the haves and the have-nots and to widen the chasm of inequality.

 

Ifeoma Ajunwa is an Associate Professor (with tenure) at Cornell University’s Industrial and Labor Relations School and Cornell Law School and a Faculty Associate at the Berkman Klein Center at Harvard Law School.

The post COVID-19 Immunity as Passport to Work Will Increase Economic Inequality appeared first on Bill of Health.

12/01/2020 - 12:29pm

Ariane Lewis and I co-edited the December 2020 issue of the AMA Journal of Ethics, focusing on “Socially Situated Brain Death.”

From the Editor...

12/01/2020 - 11:43am

By Ana Santos Rutschman

As several pharmaceutical companies approach the U.S. Food and Drug Administration (FDA) seeking authorization to bring COVID-19 vaccines to market, concerns about vaccine mistrust cloud the prospects of imminent vaccination efforts across the globe. These concerns have prompted some commentators to suggest that governments may nudge vaccine uptake by paying people to get vaccinated against COVID-19.

This post argues that, even if potentially viable, this idea is undesirable against the backdrop of a pandemic marked by the intertwined phenomena of health misinformation and mistrust in public health authorities. Even beyond the context of COVID-19, paying for vaccination is dubious public health policy likely to backfire in terms of (re)building public trust in vaccines.

The Problem of Vaccine Trust

The ongoing problem of diminished trust in vaccines is not specific to the COVID-19 pandemic. Already in 2019, the World Health Organization listed vaccine hesitancy – defined as “reluctance or refusal to vaccinate despite the availability of vaccines” – as one of the top threats to global health. Several pre-COVID studies showed that public confidence in vaccines had been declining for years. As a result of growing vaccine hesitancy, vaccination rates in several geographical areas or communities have dropped, and vaccine-preventable diseases like measles have caused multiple outbreaks in recent years.

The COVID-19 pandemic has introduced a renewed sense of urgency in debates about how to best increase vaccination rates among populations indicated to receive COVID-19 vaccines. This debate has unfolded in an environment marked by heightened mistrust in many of the institutions operating in the public health space, and, in particular, those involved in vaccine regulation, such as the FDA.

To be sure, there have long been trust issues surrounding public perceptions of public health-oriented agencies in the United States. But a series of recent faux pas at the FDA – largely in connection with emergency use authorizations, the same regulatory pathway through which the use of COVID-19 vaccines will likely be greenlighted – has further compromised public trust in emerging COVID-19 vaccines. As a result, multiple studies have found it likely that rates of COVID-19 vaccination will remain well below the levels required to achieve herd immunity – even after initial problems of vaccine scarcity are overcome. This lack of trust is especially pronounced among the Black and Latino communities, which are also the communities on which COVID-19 has taken a disproportionately high toll.

In the face of these problems, proposals suggesting that governments should pay people to get vaccinated against COVID-19 have gained ground in late 2020. However, this seemingly simple potential solution is misguided on many levels.

Tracing the Idea: Incentivizing COVID-19 Vaccination Through Payment

One of the earliest proponents of a COVID-19 vaccine payment nudge in the United States was Robert Litan, who laid out the concept in a Brookings op-ed in August 2020. Reacting to then-emerging reports of mistrust in COVID-19 vaccines, Litan presented the idea as an “adult version of the doctor handing out candy to children.”

In his view, the prospect of a payment would incentivize otherwise vaccine-hesitant individuals to receive the required shots of a given COVID-19 vaccine. Litan concedes that the proposal would likely be prone to overcompensation, as individuals willing to be vaccinated would still be able to reap the reward, but his focus is on capturing vaccine-hesitant individuals – and from that perspective, Litan argues, governments should be prepared to overspend, as opposed to dealing with the protracted economic and public health effects of the pandemic.

Litan’s proposal assumes that it would be desirable to vaccinate 80% of the United States population, or 275 million individuals (even though COVID-19 vaccine clinical trials for children are far behind other trials, and ongoing trials have excluded pregnant adults and other subpopulations). Litan proposed a payment amount of $1,000 per individual, which would mean a grand total budget of US $275 billion for the incentive system.
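
For readers who want to check the figures, the arithmetic behind these proposals is simple enough to spell out. The following is a minimal sketch: the 275 million target and the $1,000 and $1,500 amounts are those reported in this post, and applying the Delaney $1,500 figure to Litan’s target population is purely for comparison.

```python
# Back-of-envelope arithmetic for the payment proposals described above.
# Figures (275 million people; $1,000 per Litan, $1,500 per the Delaney
# variant) are taken from the post; pairing the $1,500 amount with the
# same 275 million target is illustrative only.

def incentive_budget(people: int, payment_per_person: int) -> int:
    """Total cost of paying every targeted individual a flat amount."""
    return people * payment_per_person

target_population = 275_000_000  # ~80% of the U.S. population, per Litan

for payment in (1_000, 1_500):
    total = incentive_budget(target_population, payment)
    print(f"${payment:,} per person -> ${total / 1e9:,.1f} billion total")

# Output:
# $1,000 per person -> $275.0 billion total
# $1,500 per person -> $412.5 billion total
```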

When pondering how much a given individual should receive in exchange for agreeing to be vaccinated against COVID-19, Litan freely admits that “I know of no hard science that can answer that question, but my strong hunch is that anything less than $1,000 per person won’t do the trick.” This means that a family of four “would get $4,000 (ideally not subject to income tax) – a lot of money to a lot of families in these difficult times, and thus enough to assure that the country crosses the 80 percent vaccination threshold.”

Litan’s proposal was heartily embraced by economist N. Gregory Mankiw, who wrote an opinion piece in the New York Times in September, arguing that in order “to get the economy back on track” Congress should pass a law implementing the payment incentive scheme immediately.

By November, as the number of COVID-19 cases began to sharply rise again, the idea of a payment nudge gained further traction. In early November, Oxford philosopher Julian Savulescu published an article in the Journal of Medical Ethics arguing that payment for vaccination is preferable to penalties for failure to comply with vaccination mandates. The piece quickly permeated popular media, often being quoted in support of payment nudges outside the context analyzed by Savulescu. Throughout November, permutations of Litan’s proposal were seemingly ubiquitous. For example, in former presidential candidate John Delaney’s version, described in the Washington Post, the federal government should pay every American $1,500 upon proof of vaccination against COVID-19.

These proposals rest on thin math: the initial proposed amount of $1,000 appears to have been conjured up with no support from behavioral economics or any kind of data. And the estimated number of individuals to whom payment would be offered (and hence budgeted for) appears to include sub-populations for which a COVID-19 vaccine might not even be authorized in 2021. Nonetheless, it is the non-economic aspects of these proposals that are the most troubling.

Problems with Payment-based Incentives to Vaccination

Vaccines have long been one of the most polarizing types of health technology. From the enduring legacy of the now discredited Wakefield study to the ways in which the National Vaccine Injury Compensation Program is often misconstrued in anti-vaccine discourse, this remains a highly idiosyncratic field. Any intervention designed to promote vaccine uptake cannot ignore this reality. The proposals for COVID-19 vaccination payment nudges in the United States, however, do not pause to reflect on the underlying roots of vaccine mistrust, the probable spectrum of reactions to being offered payment in exchange for vaccination, and the post-COVID-19 effects of establishing a payment system for vaccination.

While highly heterogeneous, vaccine mistrust is most commonly rooted in hesitancy about the technology itself or the entities mandating or endorsing vaccination, such as federal or state agencies or departments in the United States, international actors like the World Health Organization, and, more recently, funders of vaccine development and vaccination campaigns, like the Gates Foundation. In recent years, social media have fueled the spread of content that is disproportionately anti-vaccine. Some of the same groups that spread misinformation about United States electoral and political themes also spread content about vaccines – in some cases spreading both pro- and anti-vaccine content on social media, with the sole goal of using vaccine debates to further increase political divisiveness in the United States.

COVID-19 further complicated this landscape. Anti-vaccine groups and accounts on social media have grown exponentially since the beginning of the pandemic. In the United States, the politicization of vaccine debates reached levels never seen before. And the reputation of the FDA – the market gatekeeper for COVID-19 vaccines – has sustained tremendous damage during the pandemic, with public health experts criticizing the data the agency relied on to issue emergency use authorizations for hydroxychloroquine, chloroquine, and convalescent plasma, not to mention the FDA Commissioner’s overstating of data on convalescent plasma. As a result, the states of California, Washington, Oregon, Nevada, and New York have announced their intention to conduct an independent review of any COVID-19 vaccines authorized or approved by the FDA. Similarly, citing distrust of the FDA, the National Medical Association announced a committee of Black doctors that will also review COVID-19 vaccines.

Against this complex backdrop, imagining that individuals or communities that experience profound distrust towards vaccines or the authorities that endorse vaccination will be moved by a $1,000 nudge seems disconnected from what is happening on the ground. Passing a law establishing a $1,000–$1,500 “reward” for getting vaccinated against COVID-19 would likely accomplish little in terms of increasing vaccine uptake. More importantly, though, it would likely cause great damage to both short- and long-term efforts to build trust around vaccines.

In discounting behavioral complexity, the idea of payment in exchange for vaccination is bound to be instrumentalized in anti-vaccine discourses. Consider the compensation system for vaccine-related injuries, currently in place not only in the United States but also in several other countries: its mere existence has been used by several commentators espousing anti-vaccine views to suggest that the government uses monetary compensation as a mechanism to impose a harmful practice (vaccination), hush up criticism of governmental action, and pursue deep-state or otherwise hidden agendas through vaccination. A law establishing compensation for COVID-19 vaccination would immediately join the ranks of the arguments traditionally presented to challenge the public health value of vaccines, or be spun as “proof” of vaccine-related conspiracies. And it would do so in ways that would outlive the pandemic.

Adding to instrumentalization issues, payment-based solutions to vaccination problems also reinforce paternalism towards lower-income individuals and communities, as well as racial minorities – again, groups that have endured the largest public and personal health toll of the COVID-19 pandemic. As expressly stated in the Litan formulation of the proposal, the amount offered by the government is meant to somehow override either the belief or inertia of an individual or family. Wealthier individuals, who might not consider $1,000 an amount that will influence their behavior, are presumably free to ignore the nudge, while poorer individuals and communities are theorized to switch behaviors in exchange for money. Because many of the poorer individuals and communities across the United States are disproportionately non-Caucasian, the vaccination “reward” taps into societal worldviews that indirectly differentiate populations – and individuals – according to their race and socioeconomic status.

These problems are not limited to the current pandemic. The ways in which we seek to build vaccine trust during the COVID-19 pandemic will help shape vaccine sentiments for years and possibly decades to come. They will affect public and individual responses to vaccines targeting other pathogens – and overall levels of vaccine trust were already waning before COVID-19. A monetary reward for vaccination will likely be instrumentalized to reinforce suspicion and conspiracism, while embodying a form of paternalism that misguidedly draws on socioeconomic and racial vulnerabilities. Yes, we face a problem of vaccine trust in the COVID-19 pandemic. Paying for vaccination, however, is not the solution.

 

Ana Santos Rutschman is an Assistant Professor of Law at Saint Louis University School of Law.

The post Why the Government Shouldn’t Pay People to Get Vaccinated Against COVID-19 appeared first on Bill of Health.

12/01/2020 - 8:37am

The American Journal of Bioethics is celebrating its 20th anniversary. To commemorate this event, join them on December 15, 2020, for a webinar reflecting on the past, present, and future of the field of bioethics. 

12/01/2020 - 7:00am

By Mary Ziegler

Once again, we’re talking about whether abortion counts as health care. The COVID-19 pandemic has sparked new efforts to limit access, from the government’s unwillingness to lift in-person requirements for medication abortion to the introduction of stay-at-home orders blocking access altogether. The campaign to frame abortion as a moral, not medical, issue began decades ago. The pandemic has revealed the broader stakes of this campaign — and what it might mean for access to care well after the worst of the pandemic is behind us.

For antiabortion leaders, there are obvious strategic reasons to insist that abortion is not health care. The stigma surrounding abortion is real and durable. Notwithstanding recent increases, many obstetric programs do not provide comprehensive abortion training (if they provide any training at all). A 2020 study in PLOS ONE found that a majority of patients believed that they would be looked down upon “at least a little” for having had an abortion. This perceived stigma affects those refused abortions — and causes longer-term adverse mental health outcomes. Stigma has long been an effective tool for the antiabortion movement. The pandemic has done nothing to change that.

But, put in historical context, today’s effort to treat reproductive services as unessential means much more. That campaign is part of a broader agenda to undermine the idea of an autonomy-rooted abortion right — and lay the groundwork for overturning Roe v. Wade.

The campaign to frame abortion as something other than health care gained ground with the Hyde Amendment, a measure banning Medicaid funding for abortion. Proponents of the ban argued that most patients treated abortion as a matter of convenience and therefore would suffer little if Congress cut funding. Even after the Hyde Amendment passed, the idea that abortion was not a real health service continued to define the terms of debate. Each year, Congress battled about whether to allow any exceptions, such as for rape and incest. Lawmakers on both sides of the aisle dignified a small handful of abortions but branded the vast majority as frivolous.

Even with exceptions written in, the Hyde Amendment caused the number of Medicaid-funded abortions to plummet. Patients dreaded an invasive and cumbersome process of self-justification and sometimes avoided even requesting aid. Treating most abortions as unessential also made it easier to argue for an outright ban. Even if lawmakers believed some procedures were justified, abortion foes insisted that women would lie about their true motives for ending a pregnancy.

The Hyde Amendment was a major success for the antiabortion movement. It was no surprise that the movement tried to apply its logic more broadly. In the late 1980s and early 1990s, antiabortion leaders championed bills banning all but a handful of “important” abortions, including those in cases of rape, incest, or a severe health threat. These laws served a political and legal aim, presenting abortion as immoral and delivering up a perfect opportunity to reverse Roe. So-called reasons bans are back, this time prohibiting abortions chosen based on fetal race, sex, or disability.

Demands for religious liberty in the context of healthcare refusal also rely on the idea that both abortion and contraception are unessential. Conscience claims require courts to strike some balance between the moral qualms of the objector and the policy that offends them. Recently, religious conservatives have described ever more indirect forms of involvement as burdensome. In the context of the Affordable Care Act, this argument has worked because courts and politicians are willing to discount the importance of access to birth control in the first place. Treating other forms of care as unessential will make religious-liberty arguments even more effective.

Most visibly, the campaign to reverse Roe v. Wade relies on claims that abortion is not an essential medical service. In 1992, the Supreme Court turned away a request to reverse Roe, emphasizing that women and pregnant people relied on abortion to take advantage of new opportunities and achieve more equal lives. Ever since, major antiabortion groups like Americans United for Life have worked to show that patients cannot rely on abortion. Why? Abortion foes insist that far from counting as essential health care, the procedure causes everything from an increased risk of cancer to profound psychological distress.

The history of abortion makes one thing clear: no one intends the campaign to frame abortion or contraception as a moral issue — and an unessential service — to end with the COVID-19 pandemic. If antiabortion leaders succeed, those arguments will end with the complete dismantling of abortion rights, and it might not stop there.

 

Mary Ziegler is the Stearns Weaver Miller Professor at Florida State University College of Law and the author of Abortion and the Law in America: Roe v. Wade to the Present (Cambridge, 2020).

The post The COVID-19 Pandemic Reveals the Stakes of the Campaign Against Abortion appeared first on Bill of Health.

11/30/2020 - 1:37pm

By Dorit Rubinstein Reiss

As promising data emerges for COVID-19 vaccines in clinical trials, two manufacturers of these vaccines, Pfizer and Moderna, have submitted requests for Emergency Use Authorizations (EUA).

An EUA would allow vaccines to be used before full FDA approval, during the time that COVID-19 is an emergency.

The promise of a safe, effective vaccine offers a glimmer of hope not just for individuals around the world affected by the pandemic, but also for businesses large and small that have struggled with closures and public health-related changes to operations. A natural question that has emerged as private businesses contemplate a return to normalcy is whether they can mandate that employees and customers receive these vaccines authorized for emergency use.

In the past, officials at the FDA and CDC have said that the answer is no. However, the answer may not be so clear. This post looks at the relevant statutory provision to examine whether an EUA can accommodate mandates.

The provision is section 564 of the Federal Food, Drug, and Cosmetic Act, codified at 21 U.S.C. § 360bbb-3, “Authorization for medical products for use in emergencies” – specifically § 360bbb-3(e)(1)(A)(ii)(III), which says:

(e) Conditions of authorization

(1) Unapproved product

(A) Required conditions

With respect to the emergency use of an unapproved product, the Secretary, to the extent practicable given the applicable circumstances described in subsection (b)(1), shall, for a person who carries out any activity for which the authorization is issued, establish such conditions on an authorization under this section as the Secretary finds necessary or appropriate to protect the public health, including the following:

(ii) Appropriate conditions designed to ensure that individuals to whom the product is administered are informed—

(III) of the option to accept or refuse administration of the product, of the consequences, if any, of refusing administration of the product, and of the alternatives to the product that are available and of their benefits and risks.

On its face, this seems to suggest that the Secretary of HHS should require, in the conditions of the EUA, that individuals be told that they can refuse the product – in this case, the vaccine. This would imply that mandates are not allowed. This disclosure is a “required condition” of an EUA, which means the issue needs to be directly addressed in the authorization.

In support of this interpretation is the fact that this is how the FDA interprets the act, and agency interpretations are, in some cases, given deference by the courts (more on that below). (See this guidance document, at p. 24.)

Further supporting that interpretation is the fact that another statutory provision – 10 U.S.C. §1107a – allows the President to waive, for members of the armed forces, the requirement that people be told that they can accept or refuse the product, but only if “…the President determines, in writing, that complying with such requirement is not in the interests of national security.” The specific waiver implies that for those not in the armed forces, and in other circumstances, the condition cannot be waived, and a mandate cannot be imposed.

There are also good policy reasons to suggest that for products approved on an Emergency Use Authorization, where full data from clinical trials is not yet available, a mandate is undesirable. Imposing a mandate when there is still substantial uncertainty about the risk/benefit profile of the product is much trickier than for a licensed product. With more uncertainty, people deserve a choice that is not too onerous, and having to choose between a job – especially in this economy – and a vaccine, or between, for example, flying abroad and a vaccine, may be a very onerous choice.

There are, however, arguments against this position.

First, the statute requires the Secretary to address the issue – but does not clearly say the Secretary has to allow consequences-free refusal of the product. In fact, because the statute says that the Secretary needs to inform people of the “consequences, if any, of refusing administration of the product,” it suggests that the Secretary has the discretion to allow such consequences – and hence, that the Secretary has discretion to permit mandates (something the Secretary may or may not do).

Second, while agencies do get deference for their interpretations in some circumstances, in this case the discretion to set the conditions is given to the Secretary, not the FDA, so there may be questions about whether the FDA’s position here deserves deference.

Further, the interpretation by FDA is embodied in a guidance document – not a formal rule. In administrative law terms, this is an interpretive rule. And such interpretive rules do not automatically get Chevron deference. The jurisprudence on when they do or do not get Chevron deference is very unclear – that is the line of cases not so fondly named “The Mead Mess” by scholars.

Third, following the FDA’s interpretation means the statute – in passing, and without saying it directly – would require the Secretary to prohibit private businesses across the United States from setting employment conditions or safety conditions. That’s not impossible – the federal government has the power to approve these products, fully or through an EUA, and the power to limit use is part of that approval – but it is also not obvious, because it is a large imposition on private business that may not always be justified. A discretionary grant of authority is just as reasonable.

Further, the policy arguments can also go both ways. The United States federal government arguably has mishandled limiting the spread of COVID-19. States have imposed limits on gatherings and business activity, and businesses have suffered. The federal government offered some support, but it is arguably less than what is needed. As a result, businesses are in a position where they may have been forced to close, and are likely losing customers who are afraid to venture in because of fears of infection. If vaccine mandates can improve safety, increase customer willingness to come in, and help businesses reopen, refusing them permission to impose mandates – while not giving them sufficient financial aid to stay afloat – is unfair. Finally, not all EUAs are equal in terms of the level of data provided; in this case, the data behind the Moderna and Pfizer vaccines appear unusually promising. That might make allowing businesses to choose whether to impose mandates more reasonable.

What can we say, then?

At the very least, the act seems to require the Secretary to expressly address, in the conditions of the EUA, whether or not people may refuse vaccines, and what the consequences of refusal may be.

That means that as part of granting an EUA, the Secretary has to directly address whether private businesses may impose mandates, and with what exceptions. This will not be an area where businesses have the usual freedom to impose conditions.

There is an argument that the Secretary does not have the discretion to allow mandates. But there is an argument, too, that the Secretary has the discretion to allow businesses to require vaccines and impose consequences for refusal.

I think there is enough ambiguity here to allow the Secretary discretion. But whether or not the Secretary should allow mandates is a policy question, and a hard one.

The post Under an EUA, Can Businesses Require Employees and Customers to Get Vaccinated? appeared first on Bill of Health.

11/30/2020 - 12:00pm

The Health Law Policy, Bioethics, and Biotechnology Workshop provides a forum for discussion of new scholarship in these fields from the world’s leading experts.

The workshop is led by Professor I. Glenn Cohen, and presenters come from a wide range of disciplines and departments.

In this video, Gabriel Scheffler gives a preview of his paper, “Health Care Reform and Two Conceptions of the Right to Health Care,” which he will present at the Health Law Policy workshop on November 30, 2020.

The post Two Conceptions of the Right to Health Care: Video with Gabriel Scheffler appeared first on Bill of Health.

11/30/2020 - 9:24am

By Allison M. Whelan*

The COVID-19 pandemic has given renewed importance and urgency to the need for racial and gender diversity in clinical trials.

The underrepresentation of women in clinical research throughout history is a well-recognized problem, particularly for pregnant women. This stems, in part, from paternalism, a lack of respect for women’s autonomy, and concerns about women’s “vulnerability.” It harms women’s health as well as their dignity.

Over the years, FDA rules and guidance have helped narrow these gaps, and recent data suggest that women’s enrollment in clinical trials that were used to support new drug approvals was equal to or greater than men’s enrollment. Nevertheless, there is still progress to be made, especially for pregnant women. In the context of COVID-19 research, one review of 371 interventional trials found that 75.8% of drug trials declared pregnancy as an exclusion criterion, a concerning statistic given that recent data suggest that contracting COVID-19 during pregnancy may increase the risk of preterm birth.

But if we probe further, we see that not all women are treated equally in the context of medical research. Rather than being viewed as needing protection, women of color have long been subjected to unethical and exploitative medical experiments and procedures. This spans from experimentation during human enslavement carried out by doctors like James Marion Sims who, although often touted as “the father of modern gynecology,” abused and terrorized Black women with excruciatingly painful gynecological procedures; to Henrietta Lacks, whose cells were taken without her consent and continue to be sold and used in the development of countless medical advancements; to recent allegations of medical abuse and forced sterilizations of women held at an immigration detention center in Georgia.

Excluding women from clinical trials is problematic, but including a subset of women in ways that are unethical, exploitative, and harmful is just as problematic, if not more so. There are many potential consequences, including a hesitancy to enroll in research and distrust in the medical products developed through research.

This has particular import during the COVID-19 pandemic because of the disproportionate number of cases, hospitalizations, and deaths among people of color. And although FDA has been adamant that it “will not cut corners” when reviewing COVID-19 products such as vaccines, there is significant concern about whether the U.S. population, particularly people of color, will be willing to get vaccinated. For example, one study found that only 17% of Black adults would “definitely get” vaccinated, compared to 37% of white adults. And another poll found that only 14% of Black Americans and 34% of Latinx Americans mostly or completely trust that a vaccine will be safe.

Scholars have long documented such distrust and its many consequences. Thus, these concerns are not new—even though they seem to be systemically undervalued and ineffectively addressed.

Until we better address and mitigate the consequences of history, a problematic cycle will continue—trials will be inadequately diverse, people of color will question whether they are being used as “guinea pigs” and whether medical products are safe and effective for their communities, and health care disparities will remain.

The COVID-19 pandemic provides a prime opportunity to reignite important discussions and the search for solutions. The political and social atmosphere in which the development of COVID-19 drugs is taking place has undermined Americans’ trust in the process, particularly among people of color. Myriad factors play a role in whether, and to what extent, people of color feel they can trust COVID-19 research, treatments, vaccines, and the government’s overall response to the pandemic, all of which must be addressed to defeat COVID-19 and the many other health care disparities plaguing our nation.

Where do we go from here? These issues do not have a single cause and thus cannot have a single solution. Any solution must be multifaceted and include changes that target clinical trials specifically, as well as much broader societal changes. A few ideas, discussed in greater detail in a forthcoming issue of the Cornell Law Review, include:

  • Statutory and regulatory mandates.
  • FDA guidance/policy.
  • Private and public funding/monetary incentives to support and reward outreach, education, and recruitment/enrollment of diverse clinical trial populations.
  • Greater transparency and de-politicization of the drug approval process.
  • Broader societal/structural changes.

The COVID-19 pandemic has shined a brighter light on already-known disparities in medicine, such as access to and enrollment in clinical trials and trust in medicine, and has given renewed importance and urgency to these issues. This article has barely scratched the surface of the significance of these issues and potential solutions to consider. These issues are not new, but the COVID-19 pandemic has made them all the more salient. Distrust in medicine and the government will make it difficult, if not impossible, to defeat this virus. There is thus no time like the present to revisit these issues with a renewed sense of passion, purpose, and urgency.

*Allison M. Whelan, J.D., M.A., is an attorney at Covington & Burling LLP, Washington D.C. The views expressed in this article are the author’s own and do not represent those of any past, present, or future employer.

The post Unequal Representation: Race, Sex, and Trust in Medicine — COVID-19 and Beyond appeared first on Bill of Health.