Science Learning via Participation in Online Citizen Science

My name is Dr. Karen Masters, and I’m an astronomer working at the University of Portsmouth. My main involvement with the Zooniverse over the last 8 years or so has been through my research into galaxy evolution, making use of the Galaxy Zoo classifications (see the Zooniverse publication list), and as the Project Scientist for Galaxy Zoo I enjoy organizing science team telecons and research meetings. I’ve also written many blog posts about galaxy evolution for the Galaxy Zoo blog.

Being involved in Galaxy Zoo has opened many interesting doors for me. I have always had a keen interest in science communication and science education. In fact, working with Galaxy Zoo has been a real pleasure because of the way it blurred the lines between astronomical research and public engagement.

A couple of years ago I was given the opportunity to get more formally engaged in researching how Galaxy Zoo (and other Zooniverse projects) contribute to science communication and education. A colleague of mine in the Portsmouth Business School, who is an expert in the economics of volunteering, led a team (which I was part of) that successfully obtained funding for a 3-year project to study the motivations of citizen scientists, including how scientific learning contributes to those motivations. We call our project VOLCROWE.

The VOLCROWE survey, which ran in late March/early April of last year, included a science quiz that tested both general science knowledge and knowledge specific to five different projects. This meant that the data collected could be used to investigate, in a statistical sense, how much you are learning about scientific content while classifying on Zooniverse projects.

We collected complete responses to the survey from almost 2000 Zooniverse volunteers spread across Galaxy Zoo, Planet Hunters, Penguin Watch, Seafloor Explorer and Snapshot Serengeti.

The survey respondents certainly believed they were learning about science through their participation. When asked if the Zooniverse (i) lets them learn through direct hands-on experience of scientific research; (ii) allows them to gain a new perspective on scientific research; or (iii) helps them learn about science, an overwhelming majority (more than 80% in all cases) agreed or strongly agreed.

Figure: Responses to questions asking whether the volunteers agreed that the Zooniverse (i) lets them learn through hands-on experience of research, (ii) gives them a new perspective on scientific research, and (iii) helps them learn about science.

We were also able to find evidence in the survey responses that project-specific science knowledge correlated positively with measures of active engagement in the project. Put plainly, we found that people who classified more on a given project knew more about the scientific content of that project. We could use the scores from the general science quiz as a measure of unrelated scientific knowledge (which did not correlate with how much people classified) to argue that this correlation is causal – i.e. people are learning more about the science behind our projects the more time they spend classifying.
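For readers who want the mechanics, here is a minimal sketch of that kind of check in Python, assuming hypothetical per-volunteer arrays (classification counts and the two quiz scores); the statistical analysis in the actual paper is more careful than this:

```python
import numpy as np
from scipy import stats

def partial_corr(x, y, control):
    """Correlation between x and y after linearly removing the
    effect of a control variable from each."""
    def residuals(v, c):
        slope, intercept = np.polyfit(c, v, 1)
        return v - (slope * c + intercept)
    x, y, control = (np.asarray(a, dtype=float) for a in (x, y, control))
    return stats.pearsonr(residuals(x, control), residuals(y, control))

# Hypothetical per-volunteer survey columns:
# n_classifications, project_quiz, general_quiz
# r, p = partial_corr(n_classifications, project_quiz, general_quiz)
```

If the correlation between classification counts and project quiz scores survives regressing out the general quiz scores, unrelated science knowledge can’t be what is driving it.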

A different VOLCROWE publication, “How is success defined and measured in online citizen science? A case study of Zooniverse projects”, Cox et al. (2015), measured the success of Zooniverse projects using a variety of metrics. In that work we demonstrated that projects could be scientifically successful (i.e. contribute to increased scientific output) without being very successful at public engagement. However, public engagement success without good scientific output was not found in any of the Zooniverse projects studied in Cox et al. (2015). Four of the five projects in our science learning study were part of Cox et al. (2015; Penguin Watch hadn’t launched at that time), and in Masters et al. (2016) we were able to show that, in general, the better a project did on public engagement success metrics, the stronger the correlation we found between scientific knowledge and time spent classifying. This does not seem too surprising, but it’s nice to show it with data.

We concluded thus:

“Our results imply that even for citizen science projects designed primarily to meet the research goals of a science team, volunteers are learning about scientific topics while participating. Combined with previous work (Cox et al. 2015) that suggested it is difficult for projects to be successful at public engagement without being scientifically successful (but not vice versa) this has implications for future design of citizen science projects, even those primarily motivated by public engagement aims. While scientific success will not alone lead to scientific learning among the user community, we argue that these works together demonstrate scientific success is a necessary (if not a sufficient) requirement for successful and sustainable public engagement through citizen science. We conclude that the best way to use citizen science projects to provide an environment that facilitates science learning is to provide an authentic science driven project, rather than to develop projects with solely educational aims.”

As you may know, authenticity is at the heart of the Zooniverse Philosophy, so it was really nice to find this evidence which backs that up. You know you can trust Zooniverse projects to make use of your classifications to make contributions to the sum of knowledge of humankind.

I also had great fun writing this up for publication, a process which involved me learning a great deal about what is meant by “Science Learning” in the context of research into science communication.

It was published today in the Journal of Science Communication, Special Edition on Citizen Science (Part II). You can also read the paper in full in the open access archive at: https://arxiv.org/abs/1601.05973.


What is Penguin Watch 2.0?

We’re getting through the first round of Penguin Watch data, and it’s amazing: it’s doing the job we wanted, which is to revolutionise the collection and processing of penguin data from the Southern Ocean in order to disentangle the threats of climate change, fishing and direct human disturbance. The data are clearly excellent, but we’re now trying to automate processing them so that results can influence policy more rapidly.

In “PenguinWatch 2.0”, people will be able to see the results of their online efforts to monitor and conserve Antarctica’s penguin colonies. The more alert among you will notice that it’s not fully there yet, but we’re working on it!

We have loads of ideas on how to integrate this with the penguinwatch.org experience so that people are more engaged, learn more and realise what they are contributing to!


For now, we’re doing this the old-fashioned way: anyone who wants to be more engaged, such as schools, can contact us (tom.hart@zoo.ox.ac.uk) and we’ll task you with a specific colony and give you feedback on it.

Primary School Zooniverse Volunteers

Recently my class of 8-9 year old kids from ZŠ Brno, Jihomoravské náměstí (a primary school in the Czech Republic) took part in several Zooniverse projects.


First, they were just talking about their dreams – what they would like to achieve in life. Mostly, they wanted to become sports stars or music celebrities, but some actually considered becoming scientists!

Then they were introduced to the Zooniverse and citizen science. Fascinated by the idea that they could actually contribute to real science (so someone’s dream could come true), they dived into the list of projects on the Zooniverse website. All the cover images and project names were really attractive to them; sadly, only two projects are available in Czech. Anyway, the first project they started – Snapshots at Sea – was in English only. This project, focusing on marine animals, especially cetaceans, is very simple, though. The only task is to say whether there are any animals present in the picture. They learned the English question very quickly and classified over 200 images on their own. They asked various questions about those fascinating animals and looked hungry for more answers. Initially, they didn’t want to stop classifying, but when they heard the name of the following project to try – Penguin Watch – they were totally into it!


This project, available in Czech, shows wintry images of remote locations in Antarctica, usually crowded with nesting penguins. The tasks here are to mark adult penguins, chicks, or their eggs, and any predators, if present. The children took turns marking, trying to mark at least 30 penguins as quickly as possible so they could see another image. They couldn’t wait to find an egg, and after only 9 images they succeeded!

They were curious about Antarctica, as well as about penguins. They wondered why it is so cold there, and how the long polar days and nights come about. Answering that last question would have been a great lead-in to trying a space project, as many of them are available on the Zooniverse. But they decided to try another wildlife project, Chimp & See, which monitors wild animals in Africa, especially chimpanzees and their behaviour. This project wasn’t as easy for them, as they were asked to identify unfamiliar animals in short video clips (they had to learn the animals’ English names during classification) and then to describe their behaviour using a list of options. Surprisingly, they didn’t mind the language barrier much. After a short while, all of them were standing in front of the screen and everyone wanted to touch it! They seemed to be totally hooked.


The researchers from Chimp & See were kind enough to offer them the chance to choose a name for a currently unidentified juvenile chimp, captured in 4 different video sequences! The kids were really excited by such an opportunity and suggested a lot of names to choose from. In the end they voted and all agreed on a single name – Kibu!

When the lesson ended, many of them asked to create their own accounts, so they could participate on their own from home. Next time, we are going to try Plankton Portal and Floating Forests.

Zooniverse projects are really a great opportunity for kids to learn about nature; they bring them into real science, and, not to forget, they are great fun!

 

By Zuzana Macháčková, a primary school teacher in Brno and Zooniverse volunteer.

Darren (DZM) New Horizons

Dear Zooniverse community,

I have some news to break to everyone. I’ve accepted a new position at a different company, and while it’s an extremely exciting opportunity for me, it does mean that I have to step away from the Community Builder role here.

This is a bittersweet announcement for me, because as exciting as my new job is for my career, I’ve truly loved my time at the Zooniverse, helping to grow this community and our platform and getting to know so many incredible volunteers, researchers, and staff.

However, I do want to emphasize that this is definitely not goodbye! I couldn’t possibly leave completely—there are so many projects here that I enjoy doing as much as you guys do, and so many exciting developments in the pipeline that I want to see pan out. I’m not going anywhere; instead, I’m becoming one of you: a Zooniverse volunteer. I won’t be your liaison anymore, or a source for reporting your needs, but I’ll continue to be your colleague in people-powered research.

The Zooniverse is growing and changing at an incredible rate right now, and has been for much of my time here over the past 14 months. Overall, I’m blown away by what you’ve all helped us to accomplish. Projects are being launched and completed quickly, and our new research teams are more attuned to volunteers’ needs than ever before. I’ve long believed that the launch of the Project Builder would begin a process of exponentially expanding the scope of the Zoo, and we are definitely beginning to see that happening. I can’t wait to find out, along with the rest of you, what the next chapter of this story has in store for us all.

Thank you all for everything, and I’ll be seeing you all around!

Yours in people-powered research,

Darren “DZM” McRoy

Special note from the ZooTeam — Thank you Darren for all your hard work over the years! We’re so excited for you and this new opportunity. And we very much look forward to continuing to build and strengthen the relationships between our volunteers, research teams, and the Zooniverse team. Thank you all for your contributions! Onward and upward.

The importance of acknowledgement

Trying to understand the vast proliferation of ‘citizen science’ projects is a Herculean task right now, with projects cropping up all over the place, dealing both with online data analysis, like that which concerns us here at the Zooniverse, and with data collection and observation of the natural world via projects like iNaturalist. As the number of projects increases, so do questions about the effectiveness of these projects, and so does our desire to keep track of the impact all of the effort put into them is having.

These aren’t easy questions to answer, and an attempt to track the use of citizen science in the literature is made by Ria Follett and Vladimir Strezov, two researchers in the Department of Environmental Sciences at Macquarie University, in a recent paper published in the journal PLOS One. They look at papers including the words ‘citizen science’, and their analysis includes the surprising result that ‘online’ projects accounted for only 12% of their sample. They explain:

The missing articles discussed discoveries generated using “galaxy zoo” data, rather than acknowledging the contribtions of the citizens who created this data.

This, to me, is pushing a definition to extremes. Every one of the ‘missing’ papers cited has a link to a list of volunteers who contributed; several have volunteers listed on the author list! To claim that we’re not ‘acknowledging the contribtions’ of volunteers because we don’t use the shibboleth ‘citizen science’ is ridiculous. Other Zooniverse projects, such as Planet Hunters, don’t even appear in the study for much the same reason, and it’s sad that a referee didn’t dig deeper into the limited methodology used in the article.

Part of the problem here is the age-old argument about the term ‘citizen science’. It’s not a description most of our volunteers would use of themselves, but rather a term imposed from the academy to describe (loosely!) the growing phenomenon of public participation in scientific research. In most of our Galaxy Zoo papers, we refer to ‘volunteers’ rather than ‘citizen scientists’ – and we believe strongly in acknowledging the contributions of everyone to a project, whatever term they choose to label themselves with.

Chris

Lost Classifications

We’re sorry to let you know that at 16:29 BST on Wednesday last week we made a change to the Panoptes code which had the unexpected result that it failed to record classifications on six of our newest projects: Season Spotter, Wildebeest Watch, Planet Four: Terrains, Whales as Individuals, Galaxy Zoo: Bar Lengths, and Fossil Finder. The change was checked by two members of the team – unfortunately, neither of them caught the fact that it failed to post classifications back. When we did eventually catch it, we fixed it within 10 minutes. Things were back to normal by 20:13 BST on Thursday, though by that time each project had lost a day’s worth of classifications.

To prevent something like this happening in the future we are implementing new code that will monitor the incoming classifications from all projects and send us an alert if any of them go unusually quiet. We will also be putting in even more code checks that will catch any issues like this right away.
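As a sketch of what such monitoring can look like (the alerting rule, project names, and data layout below are hypothetical, not our actual implementation):

```python
import statistics

def quiet_projects(hourly_counts, threshold=0.1):
    """Flag projects whose latest hourly classification count has
    dropped below a fraction of their recent average rate.

    hourly_counts: dict of project name -> list of hourly counts,
    most recent hour last.
    """
    alerts = []
    for project, counts in hourly_counts.items():
        *recent, latest = counts
        baseline = statistics.mean(recent) if recent else 0
        if baseline > 0 and latest < threshold * baseline:
            alerts.append(project)
    return alerts

# quiet_projects({"Season Spotter": [512, 480, 530, 3]})
# -> ["Season Spotter"]
```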

It is so important to all of us at the Zooniverse that we never waste the time of any of our volunteers, and that all of your clicks contribute towards the research goals of the project. If you were one of the people whose contributions were lost we would like to say how very sorry we are, and hope that you can forgive us for making this terrible mistake. We promise to do everything we can to make sure that nothing like this happens again, and we thank you for your continued support of the Zooniverse.

Sincerely,

The Zooniverse Team

One line at a time: A new approach to transcription and art history

Today, we launch AnnoTate, an art history and transcription project made in partnership with Tate museums and archives. AnnoTate was built with the average time-pressed user in mind, by which I mean the person who does not necessarily have five or ten minutes to spare, but maybe thirty or sixty seconds.

AnnoTate takes a novel approach to crowdsourced text transcription. The task you are invited to do is not a page, not sentences, but individual lines. If the kettle boils, the dog starts yowling or the children are screaming, you can contribute your one line and then go attend to life.

The new transcription system is powered by an algorithm that will show when lines are complete, so that people don’t replicate effort unnecessarily. As in other Zooniverse projects, each task (in this case, a line) is done by several people, so you’re not solely responsible for a line, and it’s ok if your lines aren’t perfect.
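This post doesn’t spell out the algorithm, but a minimal sketch of one plausible completion rule, assuming a simple “k matching transcriptions” criterion with whitespace and case normalization (illustrative only, not AnnoTate’s actual method), might look like:

```python
from collections import Counter

def line_status(transcriptions, required=3):
    """Return the current consensus text for a line and whether
    the line can be marked complete."""
    if not transcriptions:
        return "", False
    # Normalize whitespace and case before comparing volunteers' text.
    normalized = [" ".join(t.split()).lower() for t in transcriptions]
    text, votes = Counter(normalized).most_common(1)[0]
    return text, votes >= required
```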

Of course, if you want to trace the progression of an artist’s life and work through their letters, sketchbooks, journals, diaries and other personal papers, you can transcribe whole pages and documents in sequence. Biographies of the artists are also available, and there will be experts on Talk to answer questions.

Every transcription gets us closer to the goal of making these precious documents word searchable for scholars and art enthusiasts around the world. Help us understand the making of twentieth-century British art!

Get involved now at anno.tate.org.uk

Sunspotter Citizen Science Challenge Update: Zooniverse Volunteers Are Overachievers

An apology is owed to all Zooniverse volunteers: we hugely underestimated the Zooniverse community’s ability to mobilize for the Sunspotter Citizen Science Challenge. You blew our goal of 250,000 new classifications on Sunspotter in a week out of the water! It took just 16 hours to reach 250,000 classifications. I’ll say that again: 16 hours!

Twenty hours in, you had hit 350,000 classifications. That’s an 11,000% increase over the previous day. By the end of the weekend, the total count stood at over 640,000.

Let’s up the ante, shall we? Our new goal is a cool 1,000,000 classifications by Saturday, September 5th. That would increase the total number of classifications since Sunspotter launched in February 2014 by 50%!

Thank you all for contributing!

P.S. Check out the Basics of a Solar Flare Forecast on the Sunspotter blog from science team member Dr. Sophie Murray.

Sunspotter Citizen Science Challenge: 29th August – 6th September

Calling all Zooniverse volunteers!  As we transition from the dog days of summer to the pumpkin spice latte days of fall (well, in the Northern hemisphere at least) it’s time to mobilize and do science!


Our Zooniverse community of over 1.3 million volunteers has the ability to focus efforts and get stuff done. Join us for the Sunspotter Citizen Science Challenge! From August 29th to September 5th, it’s a mad sprint to complete 250,000 classifications on Sunspotter.

Sunspotter needs your help so that we can better understand and predict how the Sun’s magnetic activity affects us on Earth. The Sunspotter science team has three primary goals:

  1. Hone a more accurate measure of sunspot group complexity
  2. Improve how well we are able to forecast solar activity
  3. Create a machine-learning algorithm based on your classifications to automate the ranking of sunspot group complexity
Classifying on Sunspotter

In order to achieve these goals, volunteers like you compare two sunspot group images taken by the Solar and Heliospheric Observatory and choose the one you think is more complex. Sunspotter is what we refer to as a “popcorn project”. This means you can jump right into the project, and each classification is quick, taking about 1-3 seconds.
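How do these quick pairwise judgements become a complexity ranking? One standard approach, sketched below, is an Elo-style rating update in which the image judged more complex “wins” the comparison; this is an illustration, not necessarily the Sunspotter team’s exact algorithm:

```python
def elo_update(winner, loser, k=32):
    """Update two complexity ratings after one comparison in which
    `winner` was judged more complex than `loser`."""
    expected = 1.0 / (1.0 + 10 ** ((loser - winner) / 400.0))
    delta = k * (1.0 - expected)
    return winner + delta, loser - delta

# Start every sunspot group at, say, 1400; fold in each volunteer
# comparison; sorting the groups by final rating then gives a
# complexity ranking that a machine-learning step could build on.
```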

Let’s all roll up our sleeves and advance our knowledge of heliophysics!

Measuring Success in Citizen Science Projects, Part 2: Results

In the previous post, I described the creation of the Zooniverse Project Success Matrix from Cox et al. (2015). In essence, we examined 17 (well, 18, but more on that below) Zooniverse projects, and for each of them combined 12 quantitative measures of performance into one plot of Public Engagement versus Contribution to Science:

Figure: Public Engagement vs Contribution to Science for 17 Zooniverse projects – the success matrix. The size (area) of each point is proportional to the total number of classifications received by the project. Each axis combines 6 different quantitative project measures.
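The exact combination scheme is described in the paper; as a rough illustration of the idea, here is a sketch that assumes each measure is rescaled to [0, 1] across projects and the rescaled measures on each axis are simply averaged (the real weighting may differ):

```python
import numpy as np

def axis_score(measures):
    """Combine quantitative measures into a single axis score.

    measures: array of shape (n_projects, n_measures), one column
    per measure. Each column is rescaled to [0, 1] across projects,
    then the columns are averaged.
    """
    m = np.asarray(measures, dtype=float)
    lo, hi = m.min(axis=0), m.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)
    return ((m - lo) / span).mean(axis=1)

# engagement = axis_score(engagement_measures)  # 6 engagement metrics
# science    = axis_score(science_measures)     # 6 science metrics
```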

The aim of this post is to answer the questions: What does it mean? And what doesn’t it mean?

Discussion of Results

The obvious implication of this plot and of the paper in general is that projects that do well in both public engagement and contribution to science should be considered “successful” citizen science projects. There’s still room to argue over which is more important, but I personally assert that you need both in order to justify having asked the public to help with your research. As a project team member (I’m on the Galaxy Zoo science team), I feel very strongly that I have a responsibility both to use the contributions of my project’s volunteers to advance scientific research and to participate in open, two-way communication with those volunteers. And as a volunteer (I’ve classified on all the projects in this study), those are the 2 key things that I personally appreciate.

It’s apparent just from looking at the success matrix that one can have some success at contributing to science even without doing much public engagement, but it’s also clear that every project that successfully engages the public also does very well at research outputs. So if you ignore your volunteers while you write up your classification-based results, you may still produce science, though that’s not guaranteed. On the other hand, engaging with your volunteers will probably result in more classifications and better/more science.

Surprises, A.K.A. Failing to Measure the Weather

Some of the projects on the matrix didn’t appear quite where we expected. I was particularly surprised by the placement of Old Weather. On this matrix it looks like it’s turning in an average or just-below-average performance, but that definitely seems wrong to me. And I’m not the only one: I think everyone on the Zooniverse team thinks of the project as a huge success. Old Weather has provided robust and highly useful data to climate modellers, in addition to uncovering unexpected data about important topics such as the outbreak and spread of disease. It has also provided publications for more “meta” topics, including the study of citizen science itself.

Additionally, Old Weather has a thriving community of dedicated volunteers who are highly invested in the project and highly skilled at their research tasks. Community members have made millions of annotations on log data spanning centuries, and the researchers keep in touch with both them and the wider public in multiple ways, including a well-written blog that gets plenty of viewers. I think it’s fair to say that Old Weather is an exceptional project that’s doing things right. So what gives?

There are multiple reasons the matrix in this study doesn’t accurately capture the success of Old Weather, and they’re worth delving into as examples of the limitations of this study. Many of them are related to the project being literally exceptional. Old Weather has crossed many disciplinary boundaries, and it’s very hard to put such a unique project into the same box as the others.

Firstly, because of the way we defined project publications, we didn’t really capture all of the outputs of Old Weather. The use of publications and citations to quantitatively measure success is a fairly controversial subject. Some people feel that refereed journal articles are the only useful measure (not all research fields use this system), while others argue that publications are an outdated and inaccurate way to measure success. For this study, we chose a fairly strict measure, trying to incorporate variations between fields of study but also requiring that publications should be refereed or in some other way “accepted”. This means that some projects with submitted (but not yet accepted) papers have lower “scores” than they otherwise might. It also ignores the direct value of the data to the team and to other researchers, which is pretty punishing for projects like Old Weather where the data itself is the main output. And much of the huge variety in other Old Weather outputs wasn’t captured by our metric. If it had been, the “Contribution to Science” score would have been higher.

Secondly, this matrix tends to favor projects that have a large and reasonably well-engaged user base. Projects with a higher number of volunteers have a higher score, and projects where the distribution of work is more evenly spread also have a higher score. This means that projects where a very large fraction of the work is done by a smaller group of loyal followers are at a bit of a disadvantage by these measurements. Choosing a sweet spot in the tradeoff between broad and deep engagement is a tricky task. Old Weather has focused on, and delivered, some of the deepest engagement of all our projects, which meant these measures didn’t do it justice.

To give a quantitative example: the distribution of work is measured by the Gini coefficient (on a scale of 0 to 1), and in our metric lower numbers, i.e. more even distributions, are better. The 3 highest Gini coefficients in the projects we examined were Old Weather (0.95), Planet Hunters (0.93), and Bat Detective (0.91); the average Gini coefficient across all projects was 0.82. It seems clear that a future version of the success matrix should incorporate a more complex use of this measure, as very successful projects can have high Gini coefficients (which is another way of saying that a loyal following is often a highly desirable component of a successful citizen science project).
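For the curious, the Gini coefficient over per-volunteer classification counts can be computed with the standard formula (the code and example numbers below are mine, for illustration):

```python
import numpy as np

def gini(counts):
    """Gini coefficient of per-volunteer classification counts:
    0 means the work is spread perfectly evenly, 1 means a single
    volunteer did essentially all of it."""
    x = np.sort(np.asarray(counts, dtype=float))  # ascending order
    n = x.size
    ranks = np.arange(1, n + 1)
    return 2.0 * np.sum(ranks * x) / (n * np.sum(x)) - (n + 1.0) / n

# 1000 casual volunteers with 1 classification each, plus 10
# devoted volunteers with 10,000 each:
print(gini([1] * 1000 + [10_000] * 10))  # ~0.98
```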

Thirdly, I mentioned in part 1 that these measures of the Old Weather classifications were from the version of the project that launched in 2012. That means that, unlike every other project studied, Old Weather’s measures don’t capture the surge of popularity it had in its initial stages. To understand why that might make a huge difference, it helps to compare it to the only eligible project that isn’t shown on the matrix above: The Andromeda Project.

In contrast to Old Weather, The Andromeda Project had a very short duration: it collected classifications for about 4 weeks total, divided over 2 project data releases. It was wildly popular, so much so that the project never had a chance to settle in for the long haul. A typical Zooniverse project has a burst of initial activity followed by a “long tail” of sustained classifications and public engagement at a much lower level than the initial phase.

The Andromeda Project is an exception to all the other projects because its measures are only from the initial surge. If we were to plot the success matrix including The Andromeda Project in the normalizations, the plot looks like this:

Figure: the success matrix with The Andromeda Project included in the normalizations, making all the other projects look like public engagement failures by comparison.
And this study was done before the project’s first paper was accepted (it has since been); if we included that, The Andromeda Project’s position would be even further to the right as well.

Because we try to control for project duration, the very short duration of the Andromeda Project means it gets a big boost. Thus it’s a bit unfair to compare all the other projects to The Andromeda Project, because the data isn’t quite the same.

However, that’s also true of Old Weather — but instead of only capturing the initial surge, our measurements for Old Weather omit it. These measurements only capture the “slow and steady” part of the classification activity, where the most faithful members contribute enormously but where our metrics aren’t necessarily optimized. That unfairly makes Old Weather look like it’s not doing as well.

In fact, comparing these 2 projects has made us realize that projects probably move around significantly in this diagram as they evolve. Old Weather’s other successes aren’t fully captured by our metrics anyway, and we should keep those imperfections and caveats in mind when we apply this or any other success measure to citizen science projects in the future; but one of the other things I’d really like to see in the future is a study of how a successful project can expect to evolve across this matrix over its life span.

Why do astronomy projects do so well?

There are multiple explanations for why astronomy projects seem to preferentially occupy the upper-right quadrant of the matrix. First, the Zooniverse was founded by astronomers and still has a high percentage of astronomers or ex-astronomers on the payroll. For many team members, astronomy is in our wheelhouse, and it’s likely this has affected decisions at every level of the Zooniverse, from project selection to project design. That’s starting to change as we diversify into other fields and recruit much-needed expertise in, for example, ecology and the humanities. We’ve also launched the new project builder, which means we no longer filter the list of potential projects: anyone can build a project on the Zooniverse platform. So I think we can expect the types of projects appearing in the top-right of the matrix to broaden considerably in the next few years.

The second reason astronomy seems to do well is just time. Galaxy Zoo 1 is the first and oldest project (in fact, it pre-dates the Zooniverse itself), and all the other Galaxy Zoo versions were more like continuations, so they hit the ground running because the science team didn’t have a steep learning curve. In part because the early Zooniverse was astronomer-dominated, many of the earliest Zooniverse projects were astronomy related, and they’ve just had more time to do more with their big datasets. More publications, more citations, more blog posts, and so on. We try to control for project age and duration in our analysis, but it’s possible there are some residual advantages to having extra years to work with a project’s results.

Moreover, those early astronomy projects might have gotten an additional boost from each other: they were more likely to be popular with the established Zooniverse community, compared to similarly early non-astronomy projects which may not have had such a clear overlap with the established Zoo volunteers’ interests.

Summary

The citizen science project success matrix presented in Cox et al. (2015) is the first time such a diverse array of project measures has been combined into a single matrix for assessing the performance of citizen science projects. We learned during this study that public engagement is well worth the effort for research teams, as projects that do well at public engagement also make better contributions to science.

It’s also true that this matrix, like any system that tries to distill such a complex issue into a single measure, is imperfect. There are several ways we can improve the matrix in the future, but for now, used mindfully (and noting clear exceptions), this is generally a useful way to assess the health of a citizen science project like those we have in the Zooniverse.

Note: Part 1 of this article is here.