I was initially excited when I found yet another recently published scholarly article on the COVID-19 pandemic gun-buying spree of 2020. I have already noted an interesting study that uses NICS data to highlight how the COVID spree differs from other spikes in gun buying, as well as a study that compares new COVID gun buyers to other categories of people who did and did not buy guns from January to May 2020.
“Public perspectives on firearm sales in the United States during the COVID-19 pandemic” was published in October in the journal Injury Prevention. The authors are public health scholars, and the data come from a survey administered through Amazon MTurk during the last week of May 2020.
Looking at the article, my excitement faded quickly, for reasons I discuss below.

First, the entire premise of the article – that public perceptions of the extent of firearm purchasing matter – is questionable. And if you bother to read section 1.2 of the article on “Importance,” the authors do not even try to justify it.
Table 3 (below) reports people’s OPINIONS about firearm sales since January 2020. Among the opinions solicited: “Number of background checks have increased” and “online sales of firearms have increased.” Who honestly cares what public opinion says about these things?

That said, information about who is buying guns during the COVID-19 pandemic is important, which leads to my second area of disappointment. The authors compare those who bought firearms during the pandemic to those who did not, but this does not distinguish enough among gun buyers. Pandemic gun buyers are a combination of existing gun owners getting +1 (or more), those who live in homes with guns but do not personally own one, and buyers who do not currently own guns or live in a home with guns. The differences among these groups are significant. Here they are lumped together, and as Table 2 (below) shows, in this MTurk sample 88% of those who bought firearms during the pandemic owned guns in 2019 or lived with someone who did.
And how do we square this with the question on firearm ownership in the same table? Given the number of respondents (n=263), it appears that these are mutually exclusive categories, so what are we to make of someone who bought a firearm during the pandemic but answered “I live with someone who owns 1 or more firearms” rather than “I own 1 or more firearms”? Are they all straw purchasers? Did they buy during the pandemic but already get rid of their guns?

As a consequence of these conflations, the demographic differences presented in Table 1 (below) don’t tell us much about the dynamics of gun purchasing during the COVID-19 pandemic, particularly in comparison to the earlier article I discussed on this same topic.
Moreover, this article – even more than the previous article I reviewed that used Amazon MTurk data – highlights the limitations of MTurk data for generalizing from the sample to the broader population. To wit: these authors find that fully two-thirds (67%) of their respondents who bought firearms during the pandemic were healthcare professionals.
Which is truly and literally unbelievable.

In an era when “personal experience/opinion” is recounted as scientific fact, very little surprises me.
JACEP Open seems to be styling itself as a scholarly publication, but I can’t believe that this was peer-reviewed. I hope you forward some of your observations to the authors and the journal.
“Lies, Damn Lies, and Statistics.” I can’t remember the full text of Disraeli’s quote, but it definitely seems to apply in this case.
I think this article does more than simply recount personal experience/opinion as scientific fact. The authors do attempt to collect systematic data on their topic of interest. I think the main problem here is that the authors don’t know enough about their topic to collect the right kind of data. This is where the lack of diversity among researchers who study guns becomes a problem.
The problem extends to the peer review process. If peer reviewers fundamentally agree with the perspective of authors, then it will be harder for them to catch certain problems. I wrote about this before in connection with Jonathan Haidt’s work: https://guncurious.wordpress.com/2020/03/01/gun-studies-peer-review-and-jonathan-haidts-the-righteous-mind/
Good reasoning is a group, not an individual, accomplishment: “We should not expect individuals to produce good, open-minded, truth-seeking reasoning, particularly when self-interest or reputational concerns are in play. But if you put individuals together in the right way, such that some individuals can use their reasoning powers to disconfirm the claims of others, and all individuals feel some common bond or shared fate that allows them to interact civilly, you can create a group that ends up producing good reasoning as an emergent property of the social system” (Haidt, Righteous Mind, p. 105).
What is the implication of this perspective for peer review in a (social) scientific field of study? Haidt concludes: “This is why it’s so important to have intellectual and ideological diversity within any group or institution whose goal is to find truth (such as an intelligence agency or a community of scientists) or to produce good public policy (such as a legislature or advisory board).”
In addition, peer review is not perfect. It is simply meant to be better than the absence of peer review. Just this year I reviewed an article on guns that I thought was very good and recommended for publication. But when I re-read the article after it was published, I saw some problems that I didn’t catch in the peer review process. That happens, and when science works well, those problems are eventually corrected by others working in the same area. That correction happens best when people with diverse views are working in that area (see above).
Two-thirds were medical professionals? Does that suggest a bias in who returned the questionnaires? It seems to me yet one more piece churned out by folks who are jumping on the “gun study” bandwagon, where peer review is nonexistent.
The peer review process in “gun violence research” is often a matter of one hand washing the other. Many of the reviewers neither understand the problem nor have a bias-free perspective.
Indeed, peer pressure counts. Back when I was a geochemist rather than the Dark Side scientist I am now, I read an article on lead concentrations and isotopes in major river sediment loads and what they could tell us about continental processes. Somewhat surprisingly, there was no correction for twentieth-century anthropogenic lead, which would put its own overprint on the natural lead found in a river basin (see the work of Clair Patterson and his students, for example). As an extreme example, when I did pollution fate-and-transport studies in Hawaii, there was barely a local lead fingerprint in the watersheds: the trace-isotope signature, we found, was dominated by Asian aerosols deposited by the major wind belts and overprinted by local anthropogenic sources (auto exhaust, lead paint, etc.).
Anyway, I wrote a comment to the journal, and the editors sent it to the two authors, a well-known Harvard scholar and his Ph.D. student. I got a call from the scholar, which I took as an attempt to intimidate me into withdrawing the letter. I talked to my mentor, who said, “add me as second author,” so I submitted it anyway; the authors replied, and both letters were published. It was a good exchange, but I thank my old mentor for backing me up, as I was a brand-new assistant professor and somewhat vulnerable to being screwed.
But that sort of exchange of ideas is how science should get done.
It was administered through Amazon MTurk, though the respondents are clearly non-random and unrepresentative.