
Bias in the Methods Used to Make Decisions Regarding Medical Care

2023-01-01  Maliyah Mah

How people of color, Latinx people, and low-income individuals are denied access to healthcare

Sometimes the most unexpected settings reveal implicit racial bias in medical care. Take, for instance, the clinical decision tools that are currently utilized to play an essential part in the testing, diagnosing, and treatment of today's patients.

Racial Bias in Medical Care

These tools feature algorithms, which are step-by-step methods that are typically automated, for determining parameters such as the risk of heart disease, the necessity for a chest X-ray, and the dosage of a prescribed medication. It is possible to employ artificial intelligence to search through medical records and billing systems in order to create the necessary data sets.

On the surface, each of these considerations appears to be quite objective. But recent research has shown that the data analysis employed in these algorithms can be skewed in significant ways against particular ethnic and socioeconomic groups. The repercussions include negative effects on both the quantity and the quality of medical treatment provided to people in those groups.

KEY TAKEAWAYS

A significant part of the process of testing, diagnosing, and treating patients in today's world is carried out with the assistance of medical decision support systems that are based on algorithms.

Algorithms that infer how sick a person is from how much they have spent on medical care can underestimate the severity of poor and minority patients' illnesses. This occurs when lower medical spending reflects a lack of access to medical care rather than a lack of need for medical treatment.

The body mass index (BMI) algorithm used to classify patients as overweight or obese labels more Black women than Hispanic or White women as overweight, and it has contributed to an atmosphere of weight shaming and distrust between patients and doctors.

Input data and outcomes are now being screened for bias based on race, ethnicity, income, gender, and age. This is being done in order to identify discrepancies and make necessary adjustments to algorithmic processes.

The Most Critical Patients Are Affected by Racial Bias

A study conducted in 2019 on an algorithm that is frequently used by hospitals and insurers in the United States to allocate additional health management help revealed that the algorithm routinely discriminates against people of African descent.

When patients in both ethnic groups were equally ill, the decision tool was more likely to refer White individuals to care-management programs for complicated medical needs than it was to refer Black people to the same programs.

The fundamental cause of the bias was connected to the fact that the algorithm assigned risk scores to individuals based on the amount of money they had spent on medical care in the preceding year. The working hypothesis was that identifying patients with higher prices would also identify those with the greatest medical requirements.

On the other hand, many Black patients have less access to medical treatment, less ability to pay for medical care, and less trust in the medical care that they do receive than White patients who have the same illness. Their reduced out-of-pocket medical expenses did not, in this particular case, accurately predict their state of health.
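The mechanism described above can be sketched in a few lines. This is an illustrative toy, not the algorithm from the study: the patient data, the $50,000 scaling cap, and the 0.5 enrollment cutoff are all invented for demonstration.

```python
# Toy illustration (hypothetical numbers): a risk score built from prior
# spending ranks two equally sick patients very differently when one of
# them has had less access to care and therefore lower costs.

def cost_based_risk_score(prior_year_spending, cap=50_000):
    """Proxy risk score: prior spending scaled to the range 0-1."""
    return min(prior_year_spending / cap, 1.0)

# Two patients with the same illness burden (same count of chronic
# conditions) but different access to care, hence different spending.
patient_a = {"chronic_conditions": 5, "prior_year_spending": 40_000}
patient_b = {"chronic_conditions": 5, "prior_year_spending": 15_000}

score_a = cost_based_risk_score(patient_a["prior_year_spending"])  # 0.8
score_b = cost_based_risk_score(patient_b["prior_year_spending"])  # 0.3

# Equal need, unequal scores: only patient A clears a 0.5 cutoff for
# enrollment in a care-management program.
```

The flaw is not in the arithmetic but in the proxy: spending stands in for need, so anyone whose low costs reflect barriers to care is scored as low-risk.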

To meet the complicated needs of the sickest patients, care-management programs employ a high-touch approach that may include phone calls, visits to the patient's home by nurses, and prioritized doctor appointments. These programs have been shown to improve outcomes, reduce emergency-room visits and hospitalizations, and cut overall medical expenses. Because the programs themselves are so costly, participation is limited to the individuals with the highest risk scores. Scoring methods that steer the sickest Black patients away from this care may therefore be a key element in the elevated mortality risk that Black people face from a variety of ailments.

An Analysis of the Role of Race in Kidney Disease

An algorithm can be biased even if it does not take race into account; some tools, however, factor in race on purpose. Consider the eGFR score, which evaluates kidney function and is used to establish whether someone requires a kidney transplant. In the 1999 study that set the criteria for the eGFR score, researchers found that Black participants had, on average, higher levels of creatinine (a byproduct of muscle breakdown) than White participants. The researchers speculated that the higher levels were the result of greater total muscle mass, and they adjusted the scoring accordingly, which in essence meant that people of African descent needed a lower eGFR value than people of European descent in order to be classified with end-stage renal disease. As a direct result, people of African descent have had to wait until a more advanced stage of kidney disease before becoming eligible for treatment.
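The adjustment is visible in the 4-variable MDRD equation, one of the formulas historically used to estimate GFR. The coefficients below follow the published IDMS-traceable MDRD study equation; this sketch is for illustration only, not clinical use.

```python
# Sketch of the 4-variable MDRD eGFR equation, including the race
# coefficient discussed above (illustrative only, not for clinical use).

def mdrd_egfr(serum_creatinine_mg_dl, age_years, female, black):
    egfr = 175.0 * serum_creatinine_mg_dl ** -1.154 * age_years ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212  # race coefficient: reports higher kidney function
    return egfr

# Identical creatinine, age, and sex -- only the race flag differs.
without_race = mdrd_egfr(2.0, 60, female=False, black=False)
with_race = mdrd_egfr(2.0, 60, female=False, black=True)

# The coefficient reports about 21% better kidney function for the Black
# patient, so the same lab result crosses disease thresholds later.
```

Because eligibility cutoffs (for example, for transplant listing) are defined as eGFR falling *below* a threshold, inflating the reported value delays eligibility.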

Recently, a student of medicine and public health at the University of Washington School of Medicine in Seattle observed that eGFR levels are not reliable for assessing the severity of renal disease in Black patients. She campaigned for, and succeeded in, having race removed from the algorithm. UW Medicine agreed that the use of race as a variable in medical diagnostic tools was ineffective and did not meet the standards of scientific rigor.

The National Kidney Foundation and the American Society of Nephrology have formed a joint task force to examine the use of race in eGFR, and they intend to offer an initial recommendation on its use before the end of 2020.

The Body Mass Index, as well as the Bias

Even the most basic medical decision-making tool that does not take race into account can still reflect social bias. For instance, the body mass index (BMI) is calculated by dividing a person's weight in kilograms by the square of their height in meters. It is used to classify patients as underweight, overweight, or obese.
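As a quick illustration, the calculation and the standard category cutoffs (18.5, 25, and 30) look like this minimal sketch:

```python
# BMI: weight in kilograms divided by height in meters squared,
# then mapped onto the standard category cutoffs.

def bmi(weight_kg, height_m):
    return weight_kg / height_m ** 2

def bmi_category(value):
    if value < 18.5:
        return "underweight"
    if value < 25.0:
        return "normal weight"
    return "overweight" if value < 30.0 else "obese"

value = bmi(80, 1.75)
print(round(value, 1), bmi_category(value))  # 26.1 overweight
```

Note how blunt the tool is: two fixed numbers and one formula, applied identically to every body type.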

The National Institutes of Health established a connection between an individual's body mass index (BMI) and the definition of obesity in 1985. In 1998, an expert panel established guidelines based on BMI, and as a result, 29 million Americans who had previously been classified as normal weight or just overweight were moved into the overweight and obese categories.

According to the body mass index, the majority of people in all three groups (Black, Hispanic, and White) are currently either overweight or obese. The percentages of obese men in 2018 ranged from 31.2% to 34.2% and were about the same for Black, Hispanic, and White men. According to BMI, however, the following percentages of women are classified as obese:

44.2%—Black

35.4%—Hispanic

28.7%—White

An Environment of Body Shaming and Mistrust

The categorization of such huge percentages of populations as overweight or obese has contributed to an environment that stigmatizes weight and fosters mistrust between patients and their physicians. People who are overweight often feel that their doctors do not address the health issues or concerns that led them to seek medical attention in the first place. Instead, medical professionals point the finger at the patient's weight as the cause of the patient's health problems and recommend weight loss as the remedy. Because of this, Black and Hispanic patients are more likely to avoid visiting healthcare practitioners, and as a result, they may miss opportunities to prevent problems or detect them at an earlier stage.

In addition, it is becoming increasingly clear that being overweight or obese does not always indicate a health problem. Obesity is associated with an increased risk of several significant conditions, including COVID-19 hospitalization, high blood pressure, heart disease, stroke, and type 2 diabetes.

However, heavier people have shown superior survival rates in a variety of other circumstances, including recovery from serious injuries, cancer, and heart surgery.

Fresh, Updated Recommendations from Canada

Indeed, the new obesity guidelines for Canadian clinicians issued in August 2020 emphasize that medical professionals should not classify patients based solely on BMI. Under the new recommendations, a person should be diagnosed with obesity only if their body weight negatively affects their physical health or mental well-being, and treatment should take a holistic approach rather than focusing solely on weight loss. The guidelines also state: "People living with obesity confront severe bias and stigma, which contribute to increased morbidity and mortality independent of weight or body mass index."

Eliminating Unfair Prejudices in Decision-Making Tools

Medical algorithms are not the only ones that can be prejudiced. In 2018, for example, Amazon discontinued a recruitment tool that was biased against women. The tool had learned to favor male applicants because it was trained on ten years of data about Amazon's recruiting practices, gathered during a period in which the company hired mostly men.

Machine learning is frequently applied in the medical field and depends heavily on electronic health records. Low-income and minority patients may receive fragmented care across multiple facilities. They are also more likely to be seen in teaching clinics, where data entry or clinical reasoning may be less precise, and they may lack access to the online patient portals where outcomes are recorded. As a result, these patients' medical records can contain inaccurate or missing information, and the algorithms that power machine learning may end up excluding low-income and minority patients from data sets, and from the medical care they need.

The growth in awareness of biases in healthcare algorithms over the past few years is certainly cause for celebration. Input data and results are being examined for bias related to race, ethnicity, income, gender, and age. Once disparities have been identified, the algorithms and data sets can be adjusted to become more objective.
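An audit of this kind can start as simply as comparing an algorithm's decision rate across groups. The records below are hypothetical; a real audit would use the actual model outputs together with protected-attribute data.

```python
# Toy audit (hypothetical data): compare the referral rate of a decision
# tool across two demographic groups.

from collections import defaultdict

records = [  # (group, referred_to_care_program)
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

totals = defaultdict(int)
referred = defaultdict(int)
for group, was_referred in records:
    totals[group] += 1
    referred[group] += was_referred  # True counts as 1, False as 0

rates = {g: referred[g] / totals[g] for g in totals}
# rates == {"A": 0.75, "B": 0.25}: a gap this large would flag the tool
# for review of its input data and proxy variables.
```

A gap in raw rates is only a first signal; the follow-up is to check whether equally sick patients in each group receive the same decision, as the care-management study above did.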
