**4. Addressing disparities in clinical trial accrual**

Policy initiatives have been implemented in an attempt to address the lack of diversity in clinical trials. The National Institutes of Health (NIH) Revitalization Act of 1993 created a mandate for the appropriate inclusion of minorities in all NIH-funded research projects [26]. Twenty years after the implementation of this act, the accrual and reporting of minorities within clinical trials remains inadequate. A recent review found that the reporting of race/ethnicity ranged from 1.5% to 58%, with only 20% of RCTs in high-impact journals reporting subgroup analyses by race/ethnicity [27]. The failure of the 1993 Revitalization Act to address disparities in minority clinical trial accrual may be related to its impacting only NIH-funded studies. From 1990 to 2000, a period that includes implementation of the 1993 NIH Revitalization Act, 61.5% of cancer clinical trials used funding solely from the NIH or other federal funding sources [2]. During this same period, the percentage of trials using only pharmaceutical funding was 13.2% [2]. However, in the next decade (2001–2010), the percentage of cancer clinical trials relying solely on NIH or federal funding decreased to 35.9%, while the percentage relying solely on pharmaceutical funding increased to 50.4% [2]. The research landscape has therefore changed with respect to primary funding sources. Any policy attempting to change how research is done must consider the primary source of funding for clinical trial research today.

The 1997 Food and Drug Administration Modernization Act (FDAMA) [28, 29] is an example of policy recognizing funding sources and utilizing them to implement positive change in clinical trial accrual. This law included a "Pediatric Exclusivity Provision," which provided an additional 6 months of patent protection or marketing exclusivity in return for performing studies specified by the FDA [28]. This economic incentive was then extended by the "Best Pharmaceuticals for Children Act" of 2002 [28, 30]. Subsequently, the "Pediatric Research Equity Act" of 2003 allowed the FDA to require pediatric studies of certain drugs and biological agents [31]. The purpose of these laws was to provide financial motivation for pharmaceutical companies to engage in pediatric clinical research. Children, like minorities, the elderly, the poor, and women, were a group often not included in clinical trials. In return for their willingness to provide drug labeling information on children and to bring more biologics for children to market, pharmaceutical companies received financial benefits. These laws resulted in critical changes in drug labeling for pediatric patients, since unique pediatric dosing is often necessary owing to the growth and maturational stages of pediatric patients [31]. Furthermore, since the implementation of these incentive programs, a majority of newly approved biologics (vaccines, anti-toxins, and insulin) include pediatric information in their labeling [32].

Provider characteristics also contribute to the lack of diversity within clinical trials. Provider attitudes towards a patient's age and comorbidities [20, 21], physician perception of patient mistrust of researchers [21, 22], and lack of physician awareness of clinical trials [21, 23] all contribute to disparities in clinical trial research. Furthermore, studies targeting under-represented minorities (Hispanics and blacks) have found that provider miscommunication, including lack of compassion, lack of respect, and perceived mistrust, has contributed to minorities' hesitation to engage in clinical trial research [21]. The legacy of the Tuskegee syphilis experiment still resonates among blacks in the United States when confronted with issues around clinical trial research. The Tuskegee study involved 400 African American males with syphilis who were systematically denied treatment from 1932 to 1972 even though a known treatment existed [24, 25]. It was the longest nontherapeutic experiment on human beings in medical history [25]. These men were largely illiterate and uninformed about the risks and benefits of the study despite the existence of US policy to protect clinical research subjects [24]. The social and political significance of the US Public Health Service performing unethical research on African Americans, continuing even after the Civil Rights Movement of the 1960s, was monumental and still contributes to mistrust of clinical research among African Americans today [24, 25].


A major concern with providing financial incentives for pharmaceutical companies to engage in pediatric clinical research is monetary. There is concern that the pharmaceutical industry could reap great financial reward from the patent extensions, and many question the ethics of rewarding companies for doing what is arguably the morally correct thing to do [28]. Complicating the debate is the fact that the financial data are mixed. Li et al. [28] performed a cohort study of nine drugs granted pediatric exclusivity. They found that exclusivity did not guarantee a financial windfall, with the distribution of net economic return for 6 months of exclusivity varying widely [net return ranged from (−)\$8.9 million to (+)\$507.9 million] [28]. At times, however, the financial incentives provided appear disproportionate to the cost of the research being done: the profit ratio for certain blockbuster medications (anti-hypertensives) can be as high as 17:1 [28, 33–35].

Within 10 years of implementation of the Pediatric Exclusivity Provision in 1997, more than 300 studies had been conducted and more than 115 products had labeling changes to account for pediatric use [31, 33, 34]. However, many of the drugs studied were not considered important targets for the pediatric population [33, 34]. The pediatric exclusivity studies have also tended to focus on more profitable drugs, and the drugs most frequently studied are more likely to be important targets for the adult population [33, 34, 36]. The literature also raises important concerns about the quality of the clinical trials conducted under the exclusivity provision [37]. Patent extensions under the provision are granted regardless of the quality or outcome of the clinical trial, and many of the results of these studies are never published [38].

The data on the use of financial incentives to increase clinical trial diversity are thus mixed, with evidence of success in terms of new drug labeling and evidence of failure in terms of poor research quality and financial gains for industry. Nevertheless, financial incentives to increase diversity in clinical trials may be a viable and aggressive means of eliminating these health disparities if implemented in a thoughtful and ethical way. Men make up more than two-thirds of the participants in clinical trials of cardiovascular devices [10]. African Americans and Hispanics make up 12% and 16% of the US population, respectively, yet only 5% and 1% of clinical trial participants [10]. There are clear benefits and costs associated with the use of financial incentives, and these must be carefully weighed when implementing any policy that uses financial incentives to increase clinical trial diversity.

An alternative to financial incentives for improving clinical trial diversity would be a stronger stance by governing and regulating bodies such as the NIH and FDA. The FDA recently implemented the "Food and Drug Administration Safety and Innovation Act" (FDASIA). Section 907 of this act directs the FDA to investigate how well demographic subgroups (sex, age, race, and ethnicity) are included in the clinical trials supporting applications for medical products, and whether subgroup-specific safety and effectiveness data are available. While this does not create a mandate for inclusion like the NIH Revitalization Act of 1993, it is an important step forward in improving clinical trial accrual of under-represented populations. Given the size of the disparity, however, a more aggressive stance is required. Laws implemented by the NIH and FDA 10 years ago would have had potentially significant effects on diversity within clinical trials because the government (NIH) was the major funder of clinical trials at that time. Today, however, the major funder of clinical trials in the United States is the pharmaceutical industry. If the goal is to create policies that change research behaviors in clinical trials today, those policies must take a stronger stance on requirements for new drug approval or creatively and ethically consider what is valued by the main funder of clinical trials: industry. The literature has previously discussed in great detail the issue of financial incentives to increase clinical trial diversity. Now is the time to begin a discourse on the role of an FDA mandate on clinical trial diversity.

A mandate on diversity by the FDA would be a powerful motivator for the pharmaceutical industry to increase clinical trial diversity. An FDA-mandated minimum level of diversity (or diversity benchmark) within clinical trials could be applied to research in areas of medicine with known health disparities (hepatitis C, cardiovascular disease, diabetes). Through the creation of an expert panel, the need for specific emphasis on diversity within certain populations could be assessed. For example, an FDA- or industry-appointed expert panel on cardiovascular disease could mandate that phase 3 clinical trials of hypertension medications establish a minimum level of African American participation, given the disparities that exist in hypertension control among African Americans. Trials that successfully reach the diversity benchmark could be eligible for an expedited approval process, while those that do not would receive the equivalent of a black box warning concerning the lack of data in diverse populations. Exceptions could be made in scenarios where earnest attempts at accruing diverse populations were made but unsuccessful.

The benefits and risks of improving diversity through any mandate or benchmark would have to be weighed. There is potential for impeding clinical research because of the time required to achieve diversity, and financial resources would be required to invest in the accrual of diverse populations. The expertise of thought leaders would be key to identifying which clinical trials should be required to meet diversity benchmarks and what precisely those benchmarks should be, and achieving consensus on these questions would require close collaboration between regulatory entities (FDA) and the pharmaceutical industry. Although there would be pros and cons to implementing such a policy, previous policies that have taken a less assertive stance have not addressed this important issue, and the pharmaceutical industry has not shown a propensity for addressing it on its own.

Finally, a core ethical issue in clinical research is justice. The lack of diversity in clinical trial participation represents an injustice. The clinical trial population should reflect the population affected by the disease being studied. Not doing so risks the possibility that new therapies are efficacious and safe in only a small proportion of the population at risk for the disease. Or worse, as has historically been the case in hepatitis C, clinical trials could lack reasonable representation of those individuals at greatest risk for the disease and with the worst outcomes (African Americans). This system creates lower quality research, and the knowledge gained from such clinical trials has less value than findings from clinical trials that truly reflect the genetic diversity of the disease process being studied.

As has been the case throughout US history, addressing social injustice requires aggressive policy to ensure that significant and positive progress is achieved. Here, we have discussed the potential for financial incentives as well as an FDA mandate, in conjunction with appropriate product labeling, to increase diversity in clinical trials. We urge a discourse on these issues because industry is now the major funder of clinical trials and previous policies to address diversity in clinical trials have failed. Any cost associated with financial incentives or an FDA mandate pales in comparison to the cost of lives lost because of unjust and inferior clinical research trials.

Scientific and Ethical Considerations for Increasing Minority Participation in Clinical Trials

http://dx.doi.org/10.5772/intechopen.70181

**Author details**

Julius M. Wilder

Duke Division of Gastroenterology, Duke Clinical Research Institute, United States

Address all correspondence to: julius.wilder@duke.edu

**References**

[1] Colon-Otero G, Smallridge RC, Solberg LA Jr., Keith TD, Woodward TA, Willis FB, et al. Disparities in participation in cancer clinical trials in the United States: A symptom of a healthcare system in crisis. Cancer. 2008;**112**(3):447-454

[2] Kwiatkowski K, Coe K, Bailar JC, Swanson GM. Inclusion of minorities and women in cancer clinical trials, a decade later: Have we improved? Cancer. 2013;**119**(16):2956-2963

[3] Shepard CW, Finelli L, Alter MJ. Global epidemiology of hepatitis C virus infection. The Lancet Infectious Diseases. 2005;**5**(9):558-567

[4] Armstrong GL, Wasley A, Simard EP, McQuillan GM, Kuhnert WL, Alter MJ. The prevalence of hepatitis C virus infection in the United States, 1999 through 2002. Annals of Internal Medicine. 2006;**144**(10):705-714

[5] NIH consensus statement on management of hepatitis C: 2002. NIH Consensus and State-of-the-Science Statements. 2002;**19**(3):1-46
