Trying to understand the vast proliferation of ‘citizen science’ projects is a Herculean task right now, with new projects cropping up all over the place: some focused on online data analysis, like the projects that concern us here at the Zooniverse, and others on data collection and observation of the natural world, via projects like iNaturalist. As the number of projects increases, so do questions about their effectiveness, and so does our desire to keep track of the impact of all the effort put into them.
These aren’t easy questions to answer. In a recent paper published in the journal PLOS ONE, Ria Follett and Vladimir Strezov, two researchers in the Department of Environmental Sciences at Macquarie University, attempt to track the use of citizen science in the literature. They look at papers including the words ‘citizen science’, and report the surprising result that ‘online’ projects accounted for only 12% of their sample. They explain:
The missing articles discussed discoveries generated using “galaxy zoo” data, rather than acknowledging the contributions of the citizens who created this data.
This, to me, is pushing a definition to extremes. Every one of the ‘missing’ papers cited has a link to a list of volunteers who contributed; several have volunteers listed on the author list! To claim that we’re not ‘acknowledging the contributions’ of volunteers because we don’t use the shibboleth ‘citizen science’ is ridiculous. Other Zooniverse projects, such as Planet Hunters, don’t even appear in the study for much the same reason, and it’s sad that a referee didn’t dig deeper into the article’s limited methodology.
Part of the problem here is the age-old argument about the term ‘citizen science’. It’s not a description most of our volunteers would use of themselves, but rather a term imposed from the academy to describe (loosely!) the growing phenomenon of public participation in scientific research. In most of our Galaxy Zoo papers, we refer to ‘volunteers’ rather than ‘citizen scientists’ – and we believe strongly in acknowledging the contributions of everyone to a project, whatever term they choose to label themselves with.
3 thoughts on “The importance of acknowledgement”
I think most of the academic and peer-reviewed papers I look at do not have the term “Citizen Science” in them.
Reading the original article, I would imagine that projects which “are initiated and driven by the public” are not peer-reviewed – this is hinted at by the comment that they “did not generally result in scientific publications”. (I am assuming that scientific publications are peer-reviewed before publication.)
Is it possible that the term “Citizen Science” hints at amateur work and therefore the term is filtered out of academic papers?
Perhaps a good question to ask of those reviewing academic papers is whether the term “Citizen Science” has any bearing on their review.
Thought-provoking post, Chris; thanks.
Aside from whether ‘amateur’, ‘citizen scientist’, ‘volunteer’, or … is the best (or even a good) label, there’s another dimension to acknowledgement that you hint at: “several have volunteers listed on the author list!” (and I think there’s a PH paper whose lead author is an amateur).
How does it happen that some volunteers end up as named authors, but the vast majority do not?
In SpaceWarps, it’s because the PI invited some zooites to join the Science Team (some later dropped out, I think). In the Green Peas, some Voorwerpje, and some overlaps papers, it was because some zooites did a huge amount of the ‘heavy lifting’. In the Voorwerp papers, it was a recognition of the discoverer. (There are likely other types.)
And sometimes a zooite or three is explicitly named in the Acknowledgements section.
This is all highly informal and ad hoc, I suspect, not like the fairly well-established practices regarding who among one’s professional colleagues gets to be an author.
Perhaps someone will do research on this, one day soon!
Actually, “informal and ad hoc” is a pretty good description of the “well-established practices” for assigning authorship among many of the professional scientific collaborations I’ve been in. I don’t think there’s an obvious double standard; I think it’s more that the detailed rules of authorship are fuzzy much of the time and can vary greatly from project to project and even paper to paper.