Reevaluating Anemia Diagnosis: Moving Beyond Race-Based Standards in Medicine
Wingel Xue, B.S., and David S. Jones, M.D., Ph.D. Published June 4, 2025
In the decades since the human genome was first sequenced, scientists have repeatedly and boldly declared that “the concept of race has no genetic or scientific basis.” Yet medical and research communities continue to struggle with how best to understand race, human diversity, and their role in medicine. This question is complex, and it forces researchers to try to disentangle the social consequences of race from differences that might arise from genetic ancestry. Awareness of stark racial health disparities has motivated efforts to identify root causes and implement appropriate interventions. Debate over one contentious issue has been especially dynamic: the use of race-specific cutoffs and coefficients in clinical practice and their implications for health equity.
Analyses tracing the history of race adjustments in clinical algorithms have revealed how the legacy of slavery created false, yet deeply held, notions of racial biologic difference, as highlighted by the often-uncontroversial adoption of race adjustments in multiple medical specialties from the 1990s into the 2000s. One important case tells a different story: during this period, many professional committees debated whether to use race-specific thresholds for hemoglobin levels to diagnose anemia. Reviewing the same body of evidence, the Institute of Medicine (IOM, now the National Academy of Medicine), the Centers for Disease Control and Prevention (CDC), and numerous specialty organizations came to divergent conclusions about whether to stratify anemia thresholds according to race. Just as we can learn by examining how and why medicine introduced race into its tools, we can learn by studying the rarer cases in which U.S. experts chose to leave race out.
Analysis of the experience with anemia elucidates how a racial difference was characterized, became established in the scientific literature, and made its way to policy and clinical practice. The rich debates over race-specific hemoglobin thresholds reveal physicians’ changing conceptions of racial equity and provide insight into the factors that drove a particular clinical race adjustment to be hotly debated, rejected by some, and unevenly — and temporarily — implemented by others. In the end, American medicine turned away from race-specific standards for anemia because researchers came to understand that a simplistic Black–White dichotomy did not adequately reflect the complex distribution of hemoglobin levels in different populations.
Debating a Racial Difference: The Case of Hemoglobin
In the mid-1960s, with the poverty rate in the United States approaching 20%, national priorities converged on the War on Poverty. As part of this effort, public health authorities turned their attention to malnutrition. The CDC launched early iterations of the National Health and Nutrition Examination Survey (NHANES) to characterize national nutritional needs. With the national focus on nutrition and, for the first time, a publicly available data set containing comprehensive demographic, nutritional, and laboratory information, researchers analyzed trends across populations to identify areas of unmet need.
In 1973, epidemiologic analyses of childhood nutrition revealed that Black children’s serum hemoglobin levels were on average 0.5 g per deciliter lower than those of White children. Researchers were uncertain whether this disparity reflected racial differences in physiology or nutritional differences. Drawing on data from various state and national surveys, analysts found the same disparity even after stratifying people according to age, socioeconomic status, and self-reported iron intake.
Throughout the 1970s, other confirmatory work used different study populations and varied methods to control for hemoglobinopathies, iron deficiency, and socioeconomic status. All the analysts reported a race difference in both children and adults. Over the next decade, this difference in hemoglobin levels, which ranged from 0.5 to 1.0 g per deciliter, spurred proposals that researchers and clinicians should use separate, race-specific hemoglobin cutoffs to define anemia in epidemiologic screening and clinical practice.
The push for race-specific cutoffs reflected the prevailing and long-standing understanding that race was biologic. Race specificity was also framed as part of the campaigns for racial justice that emerged from the civil rights movements of the 1960s and sought to direct attention to health problems faced by Black Americans. Researchers worried that if lower hemoglobin levels reflected “normal” biologic variation, a uniform hemoglobin cutoff would cause overdiagnosis of anemia in Black populations. Such overdiagnosis, in turn, would bring a range of harms, including unnecessary iron supplementation and the potential psychological toll of inappropriately being labeled ill. At a national level, advocates also argued that preventing overtreatment reduced “considerable financial burden placed on either the government or the individual.”
But arguments for race-specific cutoffs were met by strong protests. As the discussions gained prominence in the late 1970s, scientists who doubted that racial categories were biologically distinct challenged the rigor of the existing literature on hemoglobin differences. Critics pointed out that even those previous analyses that had attempted to control for socioeconomic status and diet had not done so adequately. When researchers reanalyzed data sets to account for various aspects of poverty, they found that differences between racial groups were, in fact, no longer significant. Given findings that cast doubt on whether any “true” racial difference existed, they argued that there was no justification for implementing a race-specific threshold. If lower hemoglobin levels in Black children reflected anemia (e.g., from malnutrition), then separate cutoffs would both normalize untreated anemia in Black populations and “entrap fluid sociological constructs and present them as true biological assemblages.” Meanwhile, people who advocated for the race adjustment because it would save money were criticized for supporting a reform that would take government aid from Black children and redirect it to non-Black children.
The discussion of anemia thresholds was particularly important because anemia, as defined by low hemoglobin or hematocrit levels, was (and remains) a qualifying criterion for nutritional risk under the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC). Researchers knew that changing what counted as a “normal” hemoglobin level would have consequences for WIC eligibility. Some acknowledged this effect directly. As George M. Owen and Anita Yanochik-Owen wrote in 1977, “If separate norms for Black and White children were developed…there would be a reduction in the proportion of Black infants and children considered to be anemic and at nutritional risk and who therefore necessarily require special food supplementation or medical intervention.”
With the completion of NHANES I and II in 1980, researchers hoped that this new, large national data set would resolve the debate about whether racial differences in hemoglobin levels existed. Analyses of the NHANES data again consistently revealed a racial gap in hemoglobin values, but its magnitude ranged from 0.2 to 1.0 g per deciliter, and interpretations of the data continued to be contested. Whereas some researchers viewed the average difference as generalizable, other analysts argued that the skewed distribution of lower hemoglobin values in Black adults indicated that the difference was attributable to a subgroup of people with anemia (i.e., people with sickle cell trait or other hemoglobinopathies) and was “not the result of a generalized reduction among the entire black population.” Therefore, it would have been inappropriate to use a race-adjusted cutoff that would affect all Black people, not just those with the trait.
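The subgroup interpretation is fundamentally a statistical claim, and it can be illustrated with a simple simulation. The parameters below are hypothetical (they are not the NHANES estimates): a population identical to a reference group except for a small subgroup with lower hemoglobin shows a lower mean and a heavier left tail, even though most of its members are unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration: a reference population with mean Hgb 13.5 g/dL,
# and a mixed population that is identical except for a 10% subgroup
# (e.g., undiagnosed hemoglobinopathies) centered 2.0 g/dL lower.
n = 100_000
reference = rng.normal(13.5, 1.0, n)

majority = rng.normal(13.5, 1.0, int(n * 0.9))      # same as reference
subgroup = rng.normal(11.5, 1.0, n - int(n * 0.9))  # lower-Hgb subgroup
mixed = np.concatenate([majority, subgroup])

# The 10% subgroup shifts the population mean by about 0.1 * 2.0 = 0.2 g/dL
# and skews the lower tail, with no "generalized reduction" in the other 90%.
print(f"mean gap: {reference.mean() - mixed.mean():.2f} g/dL")
print(f"5th percentile: reference {np.percentile(reference, 5):.2f}, "
      f"mixed {np.percentile(mixed, 5):.2f}")
```

Under these assumptions, a uniform cutoff applied to the mixed population flags the subgroup, whereas a race-adjusted cutoff would lower the threshold for everyone in that population, which is precisely the objection the analysts raised.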
The fact that national data on serum hemoglobin levels did not provide conclusive evidence pushed investigators toward functional analyses of varying hemoglobin levels. A new body of work tracked how serum hemoglobin levels corresponded to clinical outcomes in pregnant women. The research showed that among those with lower hemoglobin levels, Black women had fewer adverse fetal outcomes than White women. This finding was interpreted as “indicating lower optima for Hgb and Hct in blacks,” implying that lower hemoglobin values were physiologically normal for Black populations rather than reflecting anemia. Other people who analyzed the test characteristics of hemoglobin cutoffs for epidemiologic screening argued that race-adjusted hemoglobin cutoffs for anemia provided better sensitivity and specificity than a uniform cutoff. But despite extensive discussion through the 1980s, experts did not reach a stable consensus.
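The test-characteristic argument can be sketched in a few lines. Given a gold-standard label for true iron-deficiency anemia, the sensitivity and specificity of any hemoglobin cutoff follow directly; all distributions and numbers below are hypothetical, chosen only to show the trade-off that drove the debate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical screening scenario: hemoglobin values for people with and
# without true iron-deficiency anemia (by a gold-standard diagnosis).
anemic = rng.normal(10.5, 1.0, 5_000)    # true anemia, lower Hgb on average
healthy = rng.normal(13.5, 1.0, 45_000)  # no anemia

def screen(cutoff):
    """Sensitivity and specificity of 'Hgb < cutoff' as an anemia screen."""
    sensitivity = np.mean(anemic < cutoff)    # true positives / all anemic
    specificity = np.mean(healthy >= cutoff)  # true negatives / all healthy
    return sensitivity, specificity

# Raising the cutoff catches more true anemia (higher sensitivity) at the
# cost of more false positives (lower specificity). Where to set the
# threshold -- and for whom -- was the crux of the race-adjustment debate.
for cutoff in (11.0, 11.5, 12.0):
    sens, spec = screen(cutoff)
    print(f"cutoff {cutoff:.1f} g/dL: sensitivity {sens:.2f}, "
          f"specificity {spec:.2f}")
```

The claim that race-adjusted cutoffs "provided better sensitivity and specificity" amounts to saying that, if the two populations truly have different hemoglobin distributions, a single cutoff cannot sit at the optimal point of both trade-off curves at once.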
Diverging Verdicts: The CDC and Other Professional Committees
The debates over race-adjusted hemoglobin levels became prominent enough, especially because they affected WIC eligibility for women and children, that the CDC took up the question in the late 1980s and early 1990s. Researchers involved in the studies that established a racial difference in hemoglobin levels presented their case to CDC administrators, arguing that separate standards should be implemented for Black and White women. After a series of meetings, discussions ultimately stalled. CDC experts could not reach consensus about the key question of whether racial differences reflected biologic (e.g., genetic) variation or environmental differences (Jackson RT: personal communication). Researchers noted that it remained unclear whether the lower average hemoglobin levels seen in Black people reflected a subset of the population that had lower hemoglobin levels owing to medical conditions other than anemia. If that were the case, it would not make sense to impose a lower anemia cutoff for all Black people.
Without conclusive evidence regarding racial differences in hemoglobin levels, clinicians and researchers continued to use a uniform cutoff. In 1992, the CDC asked the IOM to convene an expert panel to evaluate the effectiveness of the CDC’s nutritional interventions for iron-deficiency anemia over the previous two decades. Taking average values from NHANES II, the IOM committee reported a racial hemoglobin gap of 0.3 g per deciliter among children and 0.8 g per deciliter among adults. It recommended implementing race-specific cutoffs, with a hemoglobin threshold for anemia of 11.0 g per deciliter for White children and 10.7 g per deciliter for Black children, with similar adjustments to the cutoffs for iron supplementation. For Black women, the committee recommended a similar downward adjustment of 0.8 g per deciliter.
The IOM committee did not address the extensive literature arguing against biologic racial differences. Instead, it cited NHANES data and the evidence of improved sensitivity and specificity of separate hemoglobin cutoffs for anemia screening. Although the committee acknowledged that the race-adjusted cutoff “could be seen as racially stigmatizing,” it concluded that “the advantage of fewer false-positive diagnoses is a strong argument for making an appropriate downward adjustment in hemoglobin and hematocrit cutoff values.”
This conclusion was not the last word. In 1998, the CDC again reviewed evidence compiled by the IOM committee and convened a separate expert panel to reassess recommendations on surveillance for and treatment of iron-deficiency anemia. Although the CDC committee acknowledged the risk of overdiagnosing iron-deficiency anemia in Black populations when using a uniform hemoglobin cutoff, the CDC once again did not recommend race-specific thresholds. As the committee explained, “[T]he reason for this disparity in distributions by race has not been determined.” In addition, during committee discussions, it was again decided that the left-skewed distribution of national data on hemoglobin levels in Black populations reflected undiagnosed hemoglobinopathies (Yip R: personal communication). Though it did not recommend a race-adjusted hemoglobin cutoff, the committee still advised that “health-care providers should be aware of the possible difference in the positive predictive value of anemia screening for iron deficiency among blacks and whites and consider using other iron status tests…for their black patients.”
The conflicting sets of guidelines issued by the IOM and the CDC set the stage for both international organizations and U.S. medical specialty associations to produce their own divergent recommendations regarding clinical consideration of the hemoglobin disparity. New data from NHANES encouraged the World Health Organization (WHO) to update its screening recommendations for iron-deficiency anemia, and in 2001 the WHO recommended that “for populations of African extraction,” hemoglobin values would require a downward adjustment of 1.0 g per deciliter. A 2004 update stratified anemia diagnosis according to ethnic group even more granularly, providing a table of hemoglobin cutoffs for African, East Asian, Hispanic, and Japanese Americans, as well as for Jamaican girls and people from Indonesia, Thailand, Vietnam, and Greenland. In 2024, however, an updated WHO guideline recommended against race-adjusted cutoffs because of a lack of evidence regarding the mechanism underlying hemoglobin differences and the difficulty of operationalizing “genetic ancestry/ethnicity/race” as a variable.
In the United States, citing the IOM’s recommendations, the American Academy of Pediatrics (AAP) recommended an adjustment of 0.3 g per deciliter for Black children in the 4th edition of its Pediatric Nutrition Handbook in 1998. Similarly, in 2008, the American College of Obstetricians and Gynecologists (ACOG) published a practice bulletin reiterating the IOM’s recommendation that the hemoglobin cutoff for anemia should be lowered by 0.8 g per deciliter in Black pregnant women. In 2009, researchers again reported a functional difference in the NHANES data: the “haemoglobin threshold below which mortality rises significantly” was lower in non-Hispanic Black people than in non-Hispanic White people and Mexican Americans. They called on physicians to adopt race-specific standards for anemia diagnosis. But when the AAP reconsidered its 1998 decision, it instead removed the recommendation for a race-specific adjustment from the 7th edition of its nutrition handbook, published in 2013. ACOG also reversed its race-specific guideline, in an updated practice bulletin in 2021.
Meanwhile, when the National Kidney Foundation (NKF) updated its clinical guidelines for treating anemia in patients with chronic kidney disease in 2006, it did not adopt a race-adjusted cutoff. Echoing the CDC’s reasoning, the NKF guidelines stated that because no mechanism for the racial difference had been discovered, it was possible that the observed difference was attributable to undiagnosed anemia in Black populations.
Finally, although recommendations on screening and treatment for iron-deficiency anemia would have fallen within the purview of the American Society of Hematology (ASH), the organization never addressed the debates on racial differences in hemoglobin levels. Internal fact gathering at ASH confirmed that no official recommendations were ever published on the issue (Frustace P: personal communication).
Defining Normal Ranges Today
The current controversies about the use of race in medicine have shed light on how race adjustments were adopted in clinical algorithms. Though these adjustments are often discussed as examples of misinformed practices resulting from outdated notions of race, the story of anemia reflects a subtler process. First, confronted with empirical evidence of population-level differences in hemoglobin levels, many expert groups actively debated what that difference might mean. Was it evidence of a race-specific difference that justified race-specific thresholds? Was it evidence of the influence of a confounding factor (e.g., nutritional status)? Or was it evidence of diversity within a “racial” group (e.g., some Black people have low hemoglobin levels because of sickle cell trait or thalassemia, and that affects the population average)? The answer to those questions would determine whether race-specific thresholds for anemia were appropriate.
Second, between 1990 and 2008, different groups (the CDC, IOM, AAP, ACOG, NKF, and others) reached different answers to those questions, leading to divergent policies. During that period, a combination of nuanced data analysis, advocacy against the race adjustment, and the convictions of key stakeholders in decision-making committees all converged to cast doubt on the necessity of race correction.
Race-specific practices should not be understood simply as residual artifacts of past beliefs. Instead, they often resulted from active debate and decisions in cases where careful study and discourse could have produced different outcomes. As new genomic techniques reshape our understandings of ancestry, this history offers an important reminder to maintain a critical perspective on the ways in which social groupings both influence and are influenced by scientific research. The history of anemia thresholds highlights the reality that practices in medicine, rather than arising from objective, rational science, are historically contingent and often reflect the varying convictions of well-positioned people who have disproportionate effects on critical decisions.
Finally, although understandings of race have changed drastically since the 1970s, the fundamental question of how to group populations in a clinically meaningful way remains unresolved. Although race-adjusted hemoglobin cutoffs have been reconsidered and are no longer recommended by either the AAP or ACOG, discussions about when and how to stratify normal ranges are ongoing. For example, consideration of the Duffy-null phenotype, a red-cell antigen variant that is more prevalent in Black populations than in White populations and results in neutropenia without increased risk of infection, has raised questions about whether to establish a Duffy-null–specific normal range. Similarly, instead of proposing race-specific thresholds for anemia, which inevitably rely on simplistic and socially contingent categories, clinicians can increasingly draw on the growing ability to detect genetic variation to generate more precise stratification of risk. As clinicians continue to navigate new understandings of genetic ancestry and its consequences, history reminds us of the stakes of debates over the best ways to achieve health equity.