Q-Rant! Portland Business Journal’s Sensationalized Headline: “White males have the easiest time getting doctor’s appointments”


A Slow News Day.
 
It’s Friday, November 4, the end of a good week, and I’m closing up shop, looking forward to a relaxing weekend of doing absolutely nothing. As I start to turn off my computer, I make a last-minute scan of my email. A Portland Business Journal feed catches my eye; it screams:

White males have the easiest time getting doctor’s appointments

Even though I would be late getting to the gym, I had to find out how it was that I, being one of the ‘usual suspects’, had an easier time getting a doctor’s appointment than others in the queue. Short of an emergency, I assumed I had to wait in line just like everyone else, regardless of gender, race, or ethnicity. I don’t remember calling the doctor’s office for an appointment and reception telling me: “Before we can set your appointment, we need to know your race and/or ethnicity.”

Needing to know more, I reluctantly sat back down to read the article, just so I could get another public dousing of white guilt going into the weekend. In short, here’s my takeaway from this article: It was a slow news day on the old medical beat, so what better way to sex it up than to run a sensational headline that demands readers click through to the article to see why white males get to cut in line.

After the headline, the author, Elizabeth Hayes, Staff Reporter for the Portland Business Journal, led off with the following attention-grabbing statement [which is a gross over-generalization, as I address below]:

A white, male patient has a much greater chance of getting a doctor’s appointment than a Hispanic woman on Medicaid.

That was the finding of a new study by economists from Portland State University and Oakland University.

Hmmm. Where’s the study? What was the methodology? My interest is piqued – I need to know more! Did the Business Journal writer actually read the study that reached such an outrageous conclusion? If it was published, it must be available somewhere online. Perhaps the writer was on deadline, and simply didn’t have a chance to vet the study to see whether its conclusions actually supported the attention-grabbing headline she selected for it.

Well, so much for sourcing a story before publishing it. After a little research, here’s what I know: This article appears to have been simply taken (without attribution) from a piece in the Portland State News, November 30, 2015 (here). The PSU article did not link to the study either, even though its co-author is a professor at the same university. But at least the headline was more sedate: “Research shows access to primary care doctors lacking for some.” Apparently, the sin of journalistic hyperbole is an acquired trait that manifests only after one is out of school and writing for a living. Ms. Hayes, are you listening?

But the PSU article did shed a little more light on the methodology used for the study:

Using the patient profiles,[1] student researchers made phone calls to a national random sample of physicians to request information on appointment availability.

Gosh, that’s all they did? A bunch of PSU students who needed a few bucks for an evening at the Cheerful Tortoise[2] called doctors’ offices across the country and pretended they wanted appointments? Sounds pretty simple – perhaps a little deceitful though; they used fake health profiles, fake names, fake insurance, and requested fake appointments. Hmmm, sounds like ‘pretexting’ to me.

With that kind of scientific precision, it’s no wonder the National Institutes of Health gave the co-authors of this “Longitudinal Access to Physicians Study” $435,000 to blow through in two years. Pretty heady stuff – certainly worthy of a headline accusing white males of having an easier time getting doctors’ appointments than the rest of the population.

Riddle Me This, Batman. How was the study able to conclude a relationship between gender/race/ethnicity and appointment times? None of those criteria are part of any appointment scheduling procedures I’ve ever encountered. And even though it was written up in two newspapers, the co-authors of the study appear disinclined to share it with the reporters, or to explain how they reached their single most sensational conclusion – if, in fact, that was their conclusion.

The Story Behind The Story. Apparently, an associate professor at Portland State, Rajiv Sharma, as a part of his “publish or perish” blood oath taken upon entering academia, decided to create a “study” based not upon actual data or public record information, but upon fictionalized data [social scientists would probably prefer a less pejorative and fuzzier characterization, such as a “simulation”]. But I’m not a social scientist, and this is my blog post, so let’s call it what it is.

Since neither the Business Journal nor the PSU News published the study, I followed up at Economics Letters, the website where the study was said to have been published.

While I admittedly know nothing about scientific publications, I was surprised to learn that the Economics Letters website charges folks to “publish” their work…and then charges readers to download it. Not a bad gig.

The papers purport to be “peer reviewed,” and not all are accepted by Economics Letters, and I do not suggest otherwise. The organization seems to have an impressive list of names associated with it. But I had just assumed that when someone’s academic work was “published”, that meant it was picked up in some noted industry journal, like JAMA or The Lancet – not that one had to pay $65 to a website to get one’s articles posted online and call it “published”. But, I suppose it’s worth $65 to build up one’s curriculum vitae.

So, for the princely sum of $35.95, I got to peek behind the curtain to learn the metrics and methodology in this study that led to the sensational conclusion shouted out by the Business Journal as it approached its weekend deadline.

The Longitudinal[3] Access To Physicians Study. I read the entire report, its methodology and data, and will not address all of it here. [I did not attach a link to it, due to copyright concerns.] Some of it borders on the mind-numbing:

We estimated 3 regression models for both probability of appointment offers (logistic) and wait-to-appointment (linear on log-transformed days to appointment) to examine the effects of dividing non-white and Medicaid patients into successively finer sub-groups. Independent variables in all regressions included indicators for the month and day of week when the call was completed, as well as caller fixed effects. Data on wait-to-appointment were highly skewed and a Box–Cox analysis indicated that a logarithmic transformation would be appropriate.
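For the layperson, here is roughly what one of each of those two model types looks like in practice. This is a minimal sketch, assuming hypothetical column names and an invented data file – the study’s actual dataset and code were not published:

```python
# A minimal sketch of the two model types described in the quote above.
# The data file and all column names are hypothetical; the study's actual
# dataset and coding scheme are behind Economics Letters' paywall.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

calls = pd.read_csv("call_outcomes.csv")  # hypothetical file of call records

# Logistic model: probability that a call yields an appointment offer,
# controlling for month, day of week, and which caller made the call
# ("caller fixed effects" in the quote).
offer_model = smf.logit(
    "offered ~ C(patient_group) + C(month) + C(day_of_week) + C(caller_id)",
    data=calls,
).fit()

# Linear model on log-transformed days: wait-to-appointment, restricted to
# calls where an appointment was actually offered. The log transform is the
# one the quote says a Box-Cox analysis justified.
offered = calls[calls["offered"] == 1].copy()
offered["log_wait"] = np.log(offered["days_to_appointment"])
wait_model = smf.ols(
    "log_wait ~ C(patient_group) + C(month) + C(day_of_week) + C(caller_id)",
    data=offered,
).fit()

print(offer_model.summary())
print(wait_model.summary())
```

In plain English: the first model estimates whether a call gets an appointment offer at all, and the second estimates how long the offered wait is, after netting out when the call was made and which student made it.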

In answer to my rhetorical question as to how the study was able to draw conclusions about wait-times based on gender, race or ethnicity, since that isn’t part of most telephone intake protocols, I learned that the co-authors (or their student researchers) created 18 fictional patient names, selected based upon whether they were racial- or ethnic-sounding. [Apparently profiling is OK if you’re an academic armed with an NIH grant.] And to avoid having to use “racial or ethnic sounding” voices, the students’ scripts had them ostensibly calling on behalf of their aunts and uncles.

Black and white first and last names were selected based upon whether they sounded like, or were perceived as, black or white names. For Hispanic first names, they selected “…the highest ranked first names common among the 5 US cities,” in a 1998 list of the most frequent Hispanic names, and for last names they selected those “…where the highest proportion of householders self-identified as Hispanic in the 1990 census.”[4]

It strikes me that rather than drawing scientific conclusions based upon how a person’s name “sounds” or is “perceived”, the better approach might be to review actual numbers, which surely exist in the health care system today.  The PSU study implies otherwise, but Google seems to disagree: [See link here.]

Nowhere did the PSU study explain why its numbers, based upon fictional patients with or without ethnic-sounding names, were more reliable than real people and real statistics. I’m sure making up a study – excuse me – doing a simulation, is far easier and cheaper than actually accessing real records and real data for real results.

So How Did The White Guy Get To The Head Of The Line? First, let’s debunk the Business Journal’s headline: “White males have the easiest time getting doctor’s appointments” and its lead-off statement: “A white, male patient has a much greater chance of getting a doctor’s appointment than a Hispanic woman on Medicaid.” That’s not what the study said.

Actually, it said that a white male self-pay patient was able to get an appointment sooner than a Hispanic woman on Medicaid. A “self-pay” patient pays cash; they may or may not have insurance, but they choose not to use it. To me, the study’s statistic, if correct, says more about physician preferences for cash-paying patients over those using Medicaid than about whether the physician’s office was scheduling them based upon race, ethnicity or gender. Of course, rather than vetting the study, Ms. Hayes simply ran with the PSU story, but decided to sex it up by blaming “the white guy.” After all, she didn’t attach a link to the actual study, so her sensationalized headline could stand on its own, and (hopefully) weekend readers would take it at face value.

The stated goal of the study was to investigate disparities in access to health care, which the co-authors acknowledged already exist. Guess what? With a nice grant from the NIH, they found them! But just to distinguish their study from the many others that preceded it, they concluded that their results showed the disparities are much worse than previously believed.

The Fatal Flaw. When you dig deeper, the Business Journal’s article discloses that the white males with the reportedly “easiest” access were paying cash – they were not using insurance. Naturally, that made me wonder what the study found about women [with White-, Black- and Hispanic-sounding names] who tried to schedule office visits and offered to self-pay. How did they fare compared to the officious white males? Guess what? We’ll never know.

Why? Because the study did not test for them. The patient profiles consisted of the following groups: (1) All; (2) Male; (3) Female; (4) Black; (5) White; (6) Hispanic; (7) Those on Medicaid; (8) Those on Medicare; (9) Those who self-pay; (10) Those on private insurance; (11) White males who self-pay; (12) Hispanic females on private insurance; (13) Hispanic females on Medicaid; and (14) Black females on Medicaid.

Inexplicably not accounted for on the above list were Blacks and Hispanics (male or female) and White females who self-pay. Also missing were Black females on private insurance; Black females on Medicare; and Hispanic females on Medicare. What happened? Let’s hope the student researchers drew up the profiles, and perhaps the co-authors didn’t catch this. For whatever reason, important profiles were left out.
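To make the gap concrete, here is a quick sketch that enumerates the full race/gender/payment grid and flags which cells the articles describe as having been tested as distinct combination profiles (items 11 through 14 on the list above). Nothing here comes from the study itself; it is just the list above, restated:

```python
# Enumerate the full race x gender x payment grid and flag the cells that
# were reportedly tested as distinct combination profiles. The "tested" set
# reflects only items (11)-(14) from the study's profile list.
from itertools import product

races = ["White", "Black", "Hispanic"]
genders = ["Male", "Female"]
payments = ["Self-pay", "Private", "Medicaid", "Medicare"]

tested = {
    ("White", "Male", "Self-pay"),
    ("Hispanic", "Female", "Private"),
    ("Hispanic", "Female", "Medicaid"),
    ("Black", "Female", "Medicaid"),
}

for cell in product(races, genders, payments):
    mark = "tested" if cell in tested else "MISSING"
    print(f"{cell[0]:<8} {cell[1]:<6} {cell[2]:<8} -> {mark}")

# Of the 24 combinations, only 4 were tested as distinct profiles. In
# particular, no self-paying woman of any race appears anywhere in the grid.
```

Twenty cells come up MISSING, including every self-paying profile other than the white male one – which is exactly the comparison the headline needed.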

What’s wrong with the study’s approach? In my opinion, four things:

  • The authors are mixing apples, oranges, rutabagas, turnips, and almost every other fruit and vegetable in Mr. McGregor’s garden. Profiles based upon type of insurance make sense. But testing for race, ethnicity and gender at the same time skews the results; it is unclear whether a longer wait-time is due to the type of insurance, to the race, ethnicity or gender profile, or to some combination of them.
  • Moreover, as discussed below, the study’s numbers carry an assumption of bias, even though the study was not testing for bias, since the variations could simply be the result of Medicare and/or Medicaid quotas set by the randomly selected physicians.
  • Plus, it’s one thing to study wait-times based upon type of insurance; it’s another to throw in cash-paying patients, which, again, can produce results skewed by each provider’s intake criteria.
  • And why test only for “white males” who self-pay, and ignore self-paying females, White, Black and Hispanic, in their own separate profiles?[5]

To put a finer point on my criticism of Ms. Hayes’ headline-grabbing conclusion: Isn’t it possible that, had the study actually tested wait-times for self-paying White, Black and Hispanic women, we would have seen what I suspect is the real message, i.e., that health providers prefer (a) pre-paid cash patients over insurance, and (b) every other type of insurance over Medicaid – regardless of race or ethnicity? And if there was a disparity in the study’s numbers, would it have been statistically significant, or merely within the study’s margin of error?

In fact, if one disregards the wait-times, the PSU study tells us that:

Medicaid patients were less likely to be offered appointments (27%) than those with other types of insurance (Table 1). Medicare patients were offered appointments (52%) at almost twice the Medicaid rate and at about the same rate as patients with private insurance. Self-pay patients were offered appointments at the highest rate (61%) among the insurance groups.

Moreover, according to the study:

Blacks, whites and Hispanics were offered appointments, respectively, 45%, 50%, and 48% of the time. The racial/ethnic differences were not statistically significant.

Huh? To a layperson such as myself, this flies in the face of the Business Journal’s headline that “White males have the easiest time getting doctor’s appointments”. If appointment offers were essentially the same, regardless of race or ethnicity, isn’t it possible that wait-times are driven more by provider preferences about form of payment, schedules, and Medicaid quotas than by any bias about race, ethnicity or gender?
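For what it’s worth, whether a five-point gap like 45% vs. 50% is “statistically significant” depends heavily on how many calls were made – a number neither article reports. Here is a back-of-the-envelope sketch; the per-group call counts are invented purely for illustration:

```python
# Back-of-the-envelope significance check on the quoted offer rates
# (45% for blacks vs. 50% for whites). The call counts below are invented;
# neither article reports the study's actual per-group sample sizes.
from statsmodels.stats.proportion import proportions_ztest

n_black, n_white = 500, 500              # hypothetical calls per group
offers_black = round(0.45 * n_black)     # 45% offer rate
offers_white = round(0.50 * n_white)     # 50% offer rate

z_stat, p_value = proportions_ztest(
    count=[offers_black, offers_white],
    nobs=[n_black, n_white],
)
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")

# With 500 calls per group, p comes out around 0.11 -- not significant,
# consistent with the study's quoted statement. Double the samples and the
# same five-point gap becomes significant. "Not significant" says as much
# about sample size as it does about the gap itself.
```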

One doesn’t need a $435,000 grant to figure out that (a) cash is king, and (b) Medicaid reimbursement is a huge headache.[6] That’s really what the PSU study says.

Conclusion.  The Business Journal article quotes Rajiv Sharma, one of the co-authors of the study:

“What we find is a disparity in access we can attribute to the health system,” Sharma said. “It isn’t necessarily a conscious bias, but it may be things are going on that are subconscious. Simply realizing this type of bias exists may help to put in place procedures that mitigate it.”

If you’re looking for “disparity”, especially when you’re paid $435,000 to find it, you probably will. But the co-author’s statement about “bias” is gratuitous – the study was intended to identify disparity in access, not bias. In fact, nowhere did the study identify bias as a causal factor for its results. The co-author [an economist, not a social psychologist] implies that his results show the existence of a “subconscious” bias. But that conclusion came from the author alone, and was not discussed in the test results. Well, if you’re a hammer, everything looks like a nail.

The Portland Business Journal writer’s unvetted, unsourced, and unresearched weekend story about the Longitudinal Access To Physicians Study, taken without attribution from a local college newspaper, did not support what she breathlessly claimed in the headline. I would have hoped for more from the Fourth Estate. Posting a misleading headline for its sensationalism needlessly feeds the prejudices of so many who follow the mainstream media hook, line and sinker.

And yes, I made it to the gym that night after reading Ms. Hayes’ article, ruminating on how I would respond over the weekend. This post was cathartic; I feel much better now…. ~PCQ

Epilogue. After finishing this post, I just noted that Ms. Hayes, not content to post her headline-grabbing article on the Portland Business Journal, has now posted it on the PSU News site, here.  If journalistic success is measured by page hits rather than page accuracy, she should go a long way.

_______________________________

[1] As the study makes clear, the patient profiles are fictional, as are the patient names.

[2] When I was a teaching assistant at the U of O, I once spilled beer on a student’s test paper I was correcting. So much for rigorous academic excellence….

[3] A good short definition of a “longitudinal study” is at the Institute for Work and Health: “A longitudinal study, like a cross-sectional one, is observational. So, once again, researchers do not interfere with their subjects. However, in a longitudinal study, researchers conduct several observations of the same subjects over a period of time, sometimes lasting many years.” [But wait! The PSU study is a simulation! It’s not following real people, and certainly not over a period of time. I must admit, “longitudinal study” does sound very impressive – until one looks it up and realizes this isn’t one.] See also https://en.wikipedia.org/wiki/Longitudinal_study.

[4] Not sure why a 25-year-old census would be used. There have been two more since then.

[5] Having no experience in statistics, and just making a WAG, it may be that the white self-paying male profile is some sort of control group. But, if so, wouldn’t White, Black, Hispanic, and Female self-payors have to be included to get a true picture of any disparity for self-pays?

[6] For more: https://www.google.com/search?q=officious#q=Medicaid+reimbursement+headaches