Zooniverse-Based Activities for Undergraduates Are Here!

Our pilot-tested, research-validated, Zooniverse-based activities for undergraduates are here and ready for widespread use in your undergraduate science classrooms! These activities are 75-90 minutes long and are intended for introductory undergraduate courses for non-science majors (or upper-level high school courses). They have been developed for use in either in-person or online courses through Google Docs.

Geology/Biology/Environmental Science 101 with Floating Forests

In this activity, students learn about kelp forests in Tasmania, Australia in order to investigate how marine ecosystems are affected by small increases in ocean temperature. Students use data generated by fellow citizen scientists to see how climate change has affected kelp forests in Tasmania specifically. In part one, students interpret graphs to draw conclusions about the relationship between greenhouse gas emissions and temperature, and learn about long-term trends in Earth’s climate. Part two familiarizes students with the Floating Forests platform: first, students practice classifying on a curated image set with a corresponding answer key, then they classify images on the actual Floating Forests project. Part three uses data gathered by Floating Forests volunteers to introduce Tasmania as a case study of an ecosystem affected by climate change.

Astronomy 101 with Planet Hunters

This is another three-part activity where students learn about the discovery and characterization of planetary systems outside of our Solar System. 

In part one, students use a lecture-tutorial-style approach to learn about planetary transits and transit light curves. Students learn how important planetary properties, such as orbital period and size, can be approximated from specific features of a transit light curve. In the second part of this activity, students practice identifying transits (or dips) in a curated set of actual light curves, and then receive feedback on whether they identified the transits successfully. Once students have practiced, they classify on Planet Hunters TESS, the current iteration of the Planet Hunters project. Students get the opportunity to observe actual TESS light curves and help the Planet Hunters research team identify potential planetary transits in those light curves. Finally, the activity concludes with a data-driven investigation in which students are presented with the complex research question, ‘Is our Solar System unique?’, and must interpret data representations derived from the NASA Exoplanet Archive to form their own conclusions.
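For instructors curious about the arithmetic behind those estimates, here is a minimal sketch (not taken from the activity itself; the numbers and variable names are purely illustrative) of how orbital period and planet size fall out of an idealised transit light curve:

```python
# Illustrative only: the dip times and depth below are made up,
# and real TESS light curves are far noisier than this.
import numpy as np

star_radius_solar = 1.0                 # assume a Sun-like host star
transit_times_days = [2.3, 5.8, 9.3]    # times of successive dips (days)
transit_depth = 0.01                    # fractional drop in brightness during a dip

# Orbital period ~ spacing between successive transits
period_days = np.mean(np.diff(transit_times_days))

# Transit depth ~ (planet radius / star radius)^2
planet_radius_solar = star_radius_solar * np.sqrt(transit_depth)

print(f"Estimated orbital period: {period_days:.2f} days")
print(f"Estimated planet radius: {planet_radius_solar:.2f} solar radii "
      f"(~{planet_radius_solar * 109:.0f} Earth radii)")
```

In other words, a 1% dip around a Sun-like star corresponds to a roughly Jupiter-sized planet.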

A Little More About These Activities…

The Floating Forests and Planet Hunters-based classroom activities have been pilot tested with nearly 3,000 students across 14 colleges and universities. Survey data collected from participating students showed that completing either one of these two activities had statistically significant, positive impacts on students’ ability to use data and evidence to answer scientific questions, on their ability to contribute in a meaningful way to science, and on their understanding that citizen science is a valuable tool for increasing engagement in science. More than 70% of students reported that these activities inspired them to come back and classify on additional Zooniverse projects! These findings are being published in the Astronomy Education Journal (Simon et al., 2022, in review) and the Journal of Geophysics Education (Rosenthal et al., 2022, in prep).

Additional feedback from pilot instructors indicated that these activities were easy to incorporate into new or existing introductory science courses. A few of our favorite instructor comments:

  1. “Being able to see and analyze the data and help with the entire research analysis process – students were very interested in that. They appreciated that it was real data. This is a real research project.” 
  2. “Well, there’s not enough time for me to say all the good things that I could say about Zooniverse. I think the benefit to the community, just the broader public, has been enormous. So I think these activities are fantastic, and sharing them, not only with colleges, but with high school and middle school educators, I think would be really beneficial. They’re fantastic.” 

The full activities and corresponding activity-synopses are available on the Zooniverse Classrooms Page (https://classroom.zooniverse.org)! The development and assessment of these activities were part of a larger NSF-funded effort, Award #1821319, Engaging Non-Science Majors in Authentic Research through Citizen Science. A final activity based around the Zooniverse project Planet Four will be coming soon! 

Also at classroom.zooniverse.org are two additional sets of materials, created through previous efforts:

  • Wildcam Labs
    • Designed for 11-13 year olds
    • The interactive map allows you to explore, filter, and download trail camera data to carry out analyses and test hypotheses.
    • An example set of lessons based around Wildcam Labs, focused on using wildlife camera citizen science projects to engage students in academic language acquisition
    • Funded by HHMI and the San Diego Zoo
  • Astro101 with Galaxy Zoo
    • Designed for undergraduate non-major introductory astronomy courses
    • Students learn about stars and galaxies through 4 half-hour guided activities and a 15-20 hour research project experience in which they analyze real data (including a curated Galaxy Zoo dataset), test hypotheses, make plots, and summarize their findings. 
    • Funded by NSF

For both Wildcam Labs and Astro101 with Galaxy Zoo, instructors can set up private classrooms, invite students to join, curate data sets, and access guided activities and supporting educational resources. 

Science Scribbler: Key2Cat Update from Nanoparticle Picking Workflow

Hi!

This is the Science Scribbler Team with some exciting news from our latest project: Key2Cat! We have been blown away by the incredible support of this community – hundreds of you have taken part in the Key2Cat project (https://www.zooniverse.org/projects/msbrhonclif/science-scribbler-key2cat) and helped to pick out nanoparticles in our electron microscopy images of catalysts. In just one week, over 50,000 classifications were completed on 10,000 subjects, and 170,000 nanoparticles and clusters were found!

Thank you for this huge effort!

We went through the data and prepared everything for the next step: classification. Getting the central coordinates of our nanoparticles and clusters with the correct class will allow us to improve our deep learning approach. But before getting into the details of the next steps, let’s recap what has been done so far using the gold on germanium (Au/Ge) data as an example.

PICKING CATALYST PARTICLES

In the first workflow, you were asked to pick out both nanoparticles and clusters using a marking tool, which looked something like this:

As you might have realized, each of the images was only a small piece of a whole image. We tiled the images so that they wouldn’t be so overwhelming and time-consuming for an individual volunteer to work with. We also built in some overlap between the tiles so that if a nanoparticle fell on the edge in one image, it would be in the centre in another. Each tile was then shown to 5 different volunteers so that we could form a consensus on the centres of nanoparticles and clusters.
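To make the tiling idea concrete, here is a rough sketch of how a large micrograph could be cut into overlapping tiles. The actual tile size and overlap used for Key2Cat aren't given here, so the numbers below are placeholders:

```python
# A sketch of overlapped tiling; tile size and overlap are placeholder values.
import numpy as np

def tile_image(image, tile=512, overlap=128):
    """Yield (top, left, tile_array) for overlapping square tiles of a 2D image."""
    step = tile - overlap
    h, w = image.shape[:2]
    for top in range(0, max(h - overlap, 1), step):
        for left in range(0, max(w - overlap, 1), step):
            yield top, left, image[top:top + tile, left:left + tile]

# Example with a fake 2048 x 2048 "micrograph"
micrograph = np.random.rand(2048, 2048)
tiles = list(tile_image(micrograph))
print(f"{len(tiles)} overlapping tiles, each shown to several volunteers")
```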

CRUNCHING THROUGH THE DATA

With your enormous speed, the whole Au/Ge dataset (94 full size images) was classified in just a few days! We have collected all of your marks and sorted them into their corresponding tiles. If we consider just a single tile that has been looked at by 5 volunteers, this is what the output data looks like:


With some thinking and coding we can recombine all the tiles that make up a single image, including the marks placed by all volunteers that contributed to the image:

Reconstructed marked image

Wow, you all are really good at picking out the catalyst particles! Seeing how precisely all centres have been picked out in this visualisation is quite impressive. You may notice that there are more than 5 marks per nanoparticle – this is because of the overlap that we mentioned earlier. When taking the overlap into consideration, this means that each nanoparticle should be seen (at least partially!) by 20 volunteers.

The next step is to combine all of the marks to find a consensus centre point for each nanoparticle so that we have one set of coordinates to work with. There are numerous ways of doing this. One of the first that has given us good results is an unsupervised k-means algorithm [1]. This algorithm looks at all of the marks on the image and tries to find clusters of marks that are close to each other. It then joins these marks up into a single mark by finding a weighted average of their placements. You can think of it like tug-of-war where the algorithm finds the centre point because more marks are pulling it there.  
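As a simplified illustration of that consensus step, the snippet below uses scikit-learn's KMeans to merge a handful of made-up volunteer marks into cluster centres. Our actual pipeline (in particular, how the number of clusters is chosen) may differ from this toy version:

```python
# Toy consensus example using scikit-learn's k-means; the marks are invented.
import numpy as np
from sklearn.cluster import KMeans

# Each row is an (x, y) mark from one volunteer, pooled over the whole image.
marks = np.array([
    [101, 203], [99, 198], [103, 201], [100, 205],   # marks around one particle
    [402, 310], [405, 307], [399, 312],               # marks around another
])

n_particles = 2  # in practice this has to be estimated rather than hard-coded
consensus = KMeans(n_clusters=n_particles, n_init=10).fit(marks)

print("Consensus centres (x, y):")
print(consensus.cluster_centers_)
```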

Reconstructed image with centroids of marks

As you can see, the consensus based on your marks almost perfectly points at the centres of individual nanoparticles or nanoparticle clusters. We don’t yet know from this analysis whether a nanoparticle is part of a cluster or not, and in some cases we also get marks in areas which are not nanoparticles, as shown in the orange and red boxes above. Since only small parts of the overall image were shown in the marking task, the artifact in the orange box was mistaken for a nanoparticle, and in the case of the red box, there is a mark at the very edge and on a very small dot-like feature where some of you might have suspected another nanoparticle. This is expected, especially since we asked volunteers to place marks if they were unsure – we wanted to capture all possible instances of nanoparticles in this first step!

REFINING THE DATA

This is the part where the second workflow comes into play. Using the marks from the first workflow, we created a new dataset showing just a small area around each mark to collect more information. In this workflow, we ask a few questions to help identify exactly what we see at each of the marks.


With this workflow, we hope to classify all the nanoparticles and clusters of both the Au/Ge and Pd/C catalyst systems, while potential false marks can be cleaned up! Once this is accomplished, we’ll have all the required inputs to improve our deep learning approach.

We’re currently collecting classifications on the Au/Ge data and will soon switch over to the Pd/C data, so if you have a few spare minutes, we would be very happy if you left some classifications in our project! https://www.zooniverse.org/projects/msbrhonclif/science-scribbler-key2cat/classify

-Kevin & Michele


Got your interest? Do you have questions? Get in touch!

Talk: https://www.zooniverse.org/projects/msbrhonclif/science-scribbler-key2cat/talk

References:

[1]: M. Ahmed, R. Seraj, S. M. S. Islam, Electronics (2020), 9 (8), 1295.

A Sky Full of Chocolate Sauce: Citizen Science with Aurora Zoo

by Dr. Liz MacDonald and Laura Brandt

Viewing the aurora in person is a magnificent experience, but due to location (or pesky clouds) it’s not always an option. Fortunately, citizen science projects like Aurorasaurus and Zooniverse’s Aurora Zoo make it easy to take part in aurora research from any location with an internet connection. 

The Aurorasaurus Ambassadors group was excited to celebrate Citizen Science Month by inviting Dr. Daniel Whiter of Aurora Zoo to speak at our April meeting. In this post we bring you the highlights of his presentation, which is viewable in full here.

To ASK the Sky for Knowledge

Far to the north on the Norwegian island of Svalbard, three very sensitive scientific cameras gaze at a narrow patch of sky. Each camera is tuned to look for a specific wavelength of auroral light, snapping pictures at 20 or 32 frames per second. While the cameras don’t register the green or red light that aurora chasers usually photograph, the aurora dances dynamically across ASK’s images. Scientists are trying to understand more about what causes these small-scale shapes, what conditions are necessary for them to occur, and how energy is transferred from space into the Earth’s atmosphere. ASK not only sees night-time aurora, but also special “cusp aurora” that occur during the day but are only visible in extremely specific conditions (more or less from Svalbard in winter).

Still from Dr. Whiter’s presentation. The tiny blue square on the allsky image (a fisheye photo looking straight up) represents the field of view of the ASK cameras. The cameras point almost directly overhead. 

The setup, called Auroral Structure and Kinetics, or ASK, sometimes incorporates telescopes, similar to attaching binoculars to a camera. Project lead Dr. Daniel Whiter says, “The magnification of the telescopes is only 2x; the camera lenses themselves already provide a small field of view, equivalent to about a 280mm lens on a 35mm full frame camera. But the telescopes have a large aperture to capture lots of light, even with a small field of view.”

The challenge is that ASK has been watching the aurora for fifteen years and has amassed 180 terabytes of data. The team is too small to look through it all for the most interesting events, so they decided to ask for help from the general public. 

Visiting the Aurora Zoo

Using the Zooniverse platform, the Aurora Zoo team set up a project through which anyone can look at short clips of auroras to help highlight patterns to investigate further. The pictures are processed so that they are easier to look at. They start out black and white, but are given “false color” to help make them colorblind-friendly and easier for citizen scientists to work with. They are also sequenced into short video clips to highlight movement. To separate out pictures of clouds, the data is skimmed by the scientists each day and run through an algorithm.

Aurora Zoo participants are then asked to classify the shape, movement, and “fuzziness,” or diffuse quality, of the aurora. STEVE fans will be delighted by the humor in some of the options! For example, two of the more complex types are affectionately called “chocolate sauce” and “psychedelic kaleidoscope.” So far, Aurora Zoo citizen scientists have analyzed 7 months’ worth of data out of the approximately 80 months ASK has been actively observing aurora. Check out Dr. Whiter’s full presentation for a walkthrough on how to classify auroras, and try it out on their website!

Some of the categories into which Zooniverse volunteers classify auroral movement. Credit: Dr. Daniel Whiter.

What can be learned from Aurora Zoo differs from what other citizen science projects like Aurorasaurus can offer. For example, when several arc shapes are close to one another, they can look like a single arc to the naked eye or in a photo, but the tiny patch of sky viewed through ASK can reveal them to be separate features. These tiny details are also relevant to the study of STEVE and the tiny green features in its “picket fence”.

Early (Surprising!) Results

Aurora Zoo participants blew through the most recent batch of data, and fresh data is newly available. The statistics they gathered show that different shapes and movements occur at different times of day. For example, psychedelic kaleidoscopes and chocolate sauce are more common in the evening hours. The fact that the most dynamic forms show up at night rather than in the daytime cusp aurora reveals that these forms must be connected to very active aurora on the night side of the Earth. 

Aurora Zoo participants also notice other structures. Several noted tiny structures later termed “fragmented aurora-like emissions,” or FAEs. Because of the special equipment ASK uses, the team was able to figure out that the FAEs they saw weren’t caused by usual auroral processes, but by something else. They published a paper about it, co-authored with the citizen scientists who noticed the FAEs. 

Still from Dr. Whiter’s presentation, featuring FAEs and Aurora Zoo’s first publication.

What’s next? Now that Aurora Zoo has a lot of classifications, they plan to use citizen scientists’ classifications to train a machine learning program to classify more images. They also look forward to statistical studies, and to creating new activities within Aurora Zoo like tracing certain shapes of aurora. 

STEVE fans, Aurora Zoo hasn’t had a sighting yet. This makes sense, because ASK is at a higher latitude than that at which STEVE is usually seen. However, using a similar small-field technique to examine the details of STEVE has not yet been done. It might be interesting to try and could potentially yield some important insights into what causes FAEs.

Citizen Science Month, held during April of each year, encourages people to try out different projects. If you love the beautiful Northern and Southern Lights, you can help advance real aurora science by taking part in projects like Aurora Zoo and Aurorasaurus.

About the authors of this blog post: Dr. Liz MacDonald and Laura Brandt lead a citizen science project called Aurorasaurus. While not a Zooniverse project, Aurorasaurus tracks auroras around the world via real-time reports by citizen scientist aurora chasers on its website and on Twitter. Aurorasaurus also conducts outreach and education across the globe, often through partnerships with local groups of enthusiasts. Aurorasaurus is a research project that is a public-private partnership with the New Mexico Consortium, supported by the National Science Foundation and NASA. Learn more about NASA citizen science here.

Engaging Faith-based Communities in Citizen Science through Zooniverse

Engaging Faith-based Communities in Citizen Science through Zooniverse was an initiative designed to broaden participation in people-powered research (also referred to as citizen science) among religious and interfaith communities by helping them to engage with science through Zooniverse. Citizen science is a powerful way to build positive, long-term relationships across diverse communities by “putting a human face” on science and scientists. Participating in real scientific research is a great way to learn about the process of science as well as the scientists who conduct research.

The Engaging initiative provided models for how creative partnerships can be formed between scientific and religious communities that empower more people to become collaborators in the quest for knowledge. It included integrating Zooniverse projects into seminary classes as well as adult, youth, and intergenerational programs of religious communities; and promoting Zooniverse among interfaith communities concerned with environmental justice. Among other things, the project’s evaluation highlighted the need for scientists to do a better job of engaging with religious audiences in order to address racial and gender disparities in science. I encourage Zooniverse research teams to check out the series of short videos recently released by the AAAS Dialogue on Science, Ethics, and Religion to help scientists engage more effectively with communities of faith. By interacting personally with these communities and helping to “put a human face” on science, you may not only increase participation in your research projects, but help in the effort to diversify science in general.

Despite the difficulties imposed by the pandemic, I’m encouraged by what the Engaging initiative achieved, and the possibilities for expanding its impact in the future! The summary article of this project was published on March 28, 2022 by the AAAS Dialogue on Science, Ethics, and Religion.

Grace Wolf-Chase, Ph.D.

The project team thanks the Alfred P. Sloan Foundation for supporting this project. Any opinions, findings, or recommendations expressed are those of the project team and do not necessarily reflect the views of the Sloan Foundation.

Fun with IIIF

In this blog post, I’ll describe a recent prototyping project we (Jim O’Donnell, front-end developer; Sam Blickhan, project manager) carried out with our colleagues at the British Library (Mia Ridge, who I’m also collaborating with on the Collective Wisdom project) to explore IIIF compatibility for the Zooniverse Project Builder. You can read Mia’s complementary blog post here.

History & context

While Zooniverse supports projects working with a number of different data formats (aka ‘subjects’), including video and audio, far and away the most frequently used data are images. Images are easy enough to drag and drop into our simple uploader (a feature of the Project Builder for adding data to your project) to create groups of subjects, or subject sets. If you want to upload your subjects with their associated metadata, however, things become slightly more complex. A subject manifest is a data table that lists image file names alongside associated metadata. By including a manifest with the images you upload, the metadata will remain associated with those images within the Zooniverse platform.
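As a quick illustration (the column names below are invented; your own metadata fields will differ), a simple manifest can be generated with a few lines of Python:

```python
# Writing a small subject manifest; the columns here are only an example.
import csv

rows = [
    {"filename": "page_001.jpg", "volume": "Letters 1842", "page": 1},
    {"filename": "page_002.jpg", "volume": "Letters 1842", "page": 2},
]

with open("manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["filename", "volume", "page"])
    writer.writeheader()
    writer.writerows(rows)
```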

So, what happens if you already have a manifest? Can you upload any type of manifest into Zooniverse? What if you’re working with a specific set of standards? 

IIIF (pronounced “triple eye eff”) stands for International Image Interoperability Framework. It is a set of standards for image and A/V delivery across the web, from servers to different web environments. It supports viewing of images as well as interaction, and uses manifests as a major structural component. 

If you’re new to IIIF, that’s okay! To understand the work we did, you’ll need three IIIF definitions, all reproduced here from https://iiif.io/get-started/how-iiif-works/:

Manifest: the prime unit in IIIF which lists all the information that makes up a IIIF object. It communicates how to display your digital objects, and what information to display about them, including structure, to varying degrees of complexity as determined by the implementer. (For example, if the object is a book of illustrations, where each illustrated page is a canvas, and there is one specific order to the arrangement of those pages).

Canvas: the frame of reference for the display of your content, both spatial and temporal (just like a painting canvas for two-dimensional materials, or with an added time dimension for a/v content).

Annotation: a standard way to associate different types of content to whatever is on your canvas (such as a translation of a line or the name of a person in a photograph. In the IIIF model, images and other presentation content are also technically annotations onto a canvas). For more detail, see the Web Annotation Data Model.
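To give a feel for what a manifest actually contains, here is a heavily trimmed sketch of the shape of a IIIF Presentation 3.0 manifest, written as a Python dict for readability. Real manifests carry more fields, and all of the URLs below are placeholders; see https://iiif.io/api/presentation/3.0/ for the full specification:

```python
# A heavily trimmed, illustrative IIIF Presentation 3.0 manifest (placeholder URLs).
import json

manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/book1/manifest",
    "type": "Manifest",
    "label": {"en": ["Example object"]},
    "items": [  # one Canvas per view (e.g. per page)
        {
            "id": "https://example.org/iiif/book1/canvas/p1",
            "type": "Canvas",
            "height": 2000,
            "width": 1500,
            "items": [  # AnnotationPages holding the 'painting' annotations
                {
                    "id": "https://example.org/iiif/book1/page/p1/1",
                    "type": "AnnotationPage",
                    "items": [
                        {
                            "id": "https://example.org/iiif/book1/annotation/p1-image",
                            "type": "Annotation",
                            "motivation": "painting",
                            "body": {
                                "id": "https://example.org/images/page1.jpg",
                                "type": "Image",
                                "format": "image/jpeg",
                            },
                            "target": "https://example.org/iiif/book1/canvas/p1",
                        }
                    ],
                }
            ],
        }
    ],
}

print(json.dumps(manifest, indent=2))
```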

What we did

For this effort, we worked with Mia and her colleagues at the British Library on an exploratory project to see if we could create a proof of concept for Zooniverse image upload and data export which was IIIF compatible. If successful, these two prototypes could then form the basis for an expanded effort. We used the British Library In The Spotlight Zooniverse project as a testing ground.

Data upload

First, we wanted to figure out a way to create a Zooniverse subject set from a IIIF manifest. We figured the easiest approach would be to use the manifest URL, so Jim built a tool that imports IIIF manifests via a URL pasted into the Project Builder (see image below).

This is an experimental feature, so it won’t show up in your Zooniverse project builder ‘Subject Sets’ page by default. If you want to try it out, you can preview the feature by adding subject-sets/iiif?env=production to your project builder URL. For example, if your project number is #xxx, you’d use the URL https://www.zooniverse.org/lab/xxx/subject-sets/iiif?env=production

To create a new subject set, you simply copy/paste the IIIF manifest URL into the box at the top of the image and click ‘Fetch Manifest’. The Zooniverse uploader will present a list of metadata fields from the manifest. The tick box column at the far right allows you to flag certain fields as ‘Hidden’, meaning they won’t be shown to volunteers in your project’s classification interface. Once you’ve marked everything you want to be ‘Hidden’, you click ‘Create a subject set’ to generate the new subject set from the IIIF manifest. 

Export to manifest with IIIF annotations

In the second phase of this experiment, we explored how to export Zooniverse project results as IIIF annotations. This was trickier, because the Zooniverse classification model requires multiple classifications from different volunteers, which are then typically aggregated together after being downloaded from the platform.

To export Zooniverse results as IIIF annotations, therefore, we needed to include a step that runs the appropriate in-house offline aggregation code, then converts the data to the appropriate IIIF annotation format. Because the aggregation step is necessary to produce a single annotation per task, this step is project- and workflow-specific (whereas the IIIF manifest URL upload works for all project types). For this effort, we tested annotation publishing on the In The Spotlight Transcribe Dates workflow, which uses a simple free-text entry task. The other In The Spotlight workflow has a slightly more complex task structure (rectangle marking task + text entry sub-task), which we’re hoping to be able to add to the technical documentation soon.
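As a rough illustration of the target format (this isn't the exact output of our export code; the technical documentation linked below has the real details), a single aggregated transcription could be expressed as a Web Annotation pointing at a IIIF canvas like so:

```python
# Illustrative only: one aggregated transcription as a Web Annotation on a canvas.
import json

def to_web_annotation(canvas_id, annotation_id, consensus_text):
    return {
        "@context": "http://www.w3.org/ns/anno.jsonld",
        "id": annotation_id,
        "type": "Annotation",
        # 'supplementing' is the motivation IIIF 3.0 commonly uses for transcriptions
        "motivation": "supplementing",
        "body": {"type": "TextualBody", "value": consensus_text, "format": "text/plain"},
        "target": canvas_id,
    }

example = to_web_annotation(
    canvas_id="https://example.org/iiif/spotlight/canvas/42",   # placeholder
    annotation_id="https://example.org/annotations/42-date",    # placeholder
    consensus_text="14 March 1887",  # e.g. an aggregated 'Transcribe Dates' answer
)
print(json.dumps(example, indent=2))
```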

IIIF Technical Coordinator Glen Robson created a demo for viewing the In The Spotlight annotations in Mirador, which you can explore here: https://glenrobson.github.io/iiif_stuff/zooniverse/partof/ 

Full details and technical documentation are available at https://github.com/zooniverse/iiif-annotations.

Next steps & ways to get involved

Now, we need your feedback! The next steps for this work will include identifying community needs and interest – would you use these tools for your Zooniverse project? What features look useful (or less so)? Your feedback will help us determine our next steps. Mostly, we want to know who our potential audiences are, what task types they would most want to use, and what sort of comfort level they have, particularly when it comes to running the annotations code (from “This is great!” to “I don’t even know where to start!”). There are a lot of possible routes we could take from here, and we want to make sure our future work is in service of our project building community.

Try out the In The Spotlight project and help create real data for testing ingest processes.

Get in touch!

Finally, a massive “Thank you!” to the British Library for funding this experiment, and to Glen Robson and Josh Hadro at IIIF for their feedback on various stages of this experiment.

Web Developer Internship – Oxford 2022

The Zooniverse team in Oxford, UK, is looking for a web developer intern to join us in summer 2022. If you’re looking to learn how to build websites and apps with a team of friendly developers, or if you just want an opportunity to flex your extant coding skills in an environment that loves scientific curiosity, then come have some tea with us!

The team here in the Zooniverse want to welcome more folks into the world of software development, and in turn, we want to learn from the unique ideas and experiences you can share.

You can find the full job details at https://jobs.zooniverse.org/#oxford-web-developer-internship. Note that you don’t need any existing software development skills to apply, just a genuine interest in learning.

THE RESULTS ARE IN – Grant’s Great Leaving Challenge

The time has come to announce the winners of Grant’s Great Leaving Challenge! Many thanks to all who submitted classifications for our four featured projects over the past week. Your efforts have absolutely wowed us at the Zooniverse – not only did you meet the 100,000 classifications goal, you blew right through it. All in all, you submitted a whopping 293,692 classifications – nearly 3x our goal!

This classification challenge was a massive push forward for the projects involved, and the research teams are incredibly grateful. Grant himself was touched – he had this to say about the results of his namesake challenge:

“Over the last decade I’ve constantly been blown away by the amazing effort and commitment from Zooniverse volunteers, and yet again they have surpassed all expectations! I want to thank them for all they have done, both for this challenge, and over the entire lifetime of the project. THANK YOU!”

Here’s some data to back up just how successful this challenge was:

Figure 1. The x-axes show each day the challenge ran, while the y-axes mark the percent change in classifications relative to the same day the week prior. For example, for Penguin Watch there was a 100% increase in classifications on Tuesday March 22nd compared to Tuesday March 15th.

Figure 2. Here, each plot shows the date on the x-axis and the total number of classifications for that day on the y-axis. The shaded areas indicate which days were part of the challenge, and the non-shaded white areas prior are data from the preceding week. Note that the y-axes are unequal across plots because they’ve been scaled to fit their own data.
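For clarity, the week-over-week percentages in Figure 1 are simple relative changes against the same weekday the week before; here's a toy example with made-up counts:

```python
# Made-up counts, just to show how the percentages in Figure 1 are calculated.
last_tuesday = 950    # classifications on Tuesday March 15th (illustrative)
this_tuesday = 1900   # classifications on Tuesday March 22nd (illustrative)

percent_change = (this_tuesday - last_tuesday) / last_tuesday * 100
print(f"{percent_change:.0f}% increase on the same weekday")  # -> 100% increase
```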

While, in this case, I do really think the figures speak for themselves, here are some highlights:

Just two days into the challenge, daily classifications for Dingo? Bingo! more than doubled compared to one week prior. A short two days later, they reached a 300% increase from the same day the previous week. All in all, Dingo? Bingo! volunteers submitted an incredible 112,505 classifications!

Planet Hunters NGTS volunteers rode a hefty 200% increase in classifications for the first two days of the challenge. On the fifth day, they peaked at an incredible 300% increase! Overall, volunteers submitted a whopping 115,388 classifications over the course of the 6-day challenge. Remarkable!

Penguin Watch volunteers readily doubled classifications from the week prior, with a peak on the fourth day when classifications were up more than 200% from the preceding week. By the end of the challenge, volunteers had submitted a grand total of 55,787 classifications!

On day two of the challenge, Weather Rescue at Sea volunteers submitted an astonishing 350% more classifications than one week prior. On the final two days of the challenge, classifications were up by nearly 400% from the preceding week! Overall, volunteers submitted an awesome 10,012 classifications.

When pulling together this data, we were just absolutely amazed by how much effort the volunteers put into Grant’s Great Leaving Challenge. What an awesome example of the power of citizen science. From all of us at the Zooniverse and from the project teams who took part in the challenge – thank you. This has been such a fun way to send off Grant, who will be greatly missed by all!

Grant’s Great Leaving Challenge

If you subscribe to our newsletters, the name “Grant” probably sounds familiar to you. Grant (our Project Manager and basically the ‘backbone of the Zooniverse’) has been with us for nearly 9 years, and with a heavy heart we’re sad to report he’s finally moving on to his next great adventure.

To mark his departure, we’ve announced “Grant’s Great Leaving Challenge”. The goal of this challenge is to collect 100,000 new classifications for the four Featured Projects on the homepage. Starting yesterday, if you submit at least 10 classifications total for these projects, your name will automatically be entered to win one of three prizes. Importantly, you must be logged in while classifying to be eligible for the draw. The challenge will end on Sunday, March 27th at midnight (GMT), and the winners will be announced on Tuesday, March 29th.

While we aren’t divulging what the prizes are, it might tempt you to hear that they’ll be personalised by Grant himself…

Read on to learn about the four featured projects, and what you can do to help them out.

Penguin Watch
Penguins – globally loved, but under threat. Research shows that in some regions, penguin populations are in decline; but why? Begin monitoring penguins to help us answer this question. With over 100 sites to explore, we need your help now more than ever!

Planet Hunters NGTS
The Next-Generation Transit Survey has been searching for transiting exoplanets around the brightest stars in the sky. We need your help sifting through the observations flagged by the computers to search for hidden worlds that might have been missed in the NGTS team’s review. Most of the planets in the dataset have likely been found already, but you just might be the first to find a previously unknown exoplanet!

Dingo? Bingo!
The Myall Lakes Dingo Project aims to develop and test non-lethal tools for dingo management, and to further our understanding and appreciation of this iconic Australian carnivore. We have 64 camera-traps across our study site, and need your help to identify the animals they detect – including dingoes.

Weather Rescue at Sea
The aim of the Weather Rescue at Sea project is to construct an extended global surface temperature record reaching back to the 1780s, based on the air temperature observations recorded across the planet. This will be achieved by crowd-sourcing the recovery (or data rescue) of weather observations from historical ship logbooks, station records, weather journals and other sources, to produce a longer and more consistent dataset of global surface temperature.

Let’s send Grant off with a bang. Happy classifying!

Happy Year of the Tiger!

The 1st of February marked the start of Chinese New Year/Lunar New Year celebrations, so all of us on the Zooniverse team want to wish everyone a happy and prosperous Year of the Tiger!

This year, we’d like to share a fun side project one of our developers (Shaun) created for the Chinese New Year: a small video game where you try to lead a big striped cat to an exit with a laser pointer. While the Zooniverse team takes our scientific work very seriously, we also enjoy doing some really goofy stuff in our free time.

Chinese New Year 2022 - Year of the Tiger greeting card. A man, in a Chinese New Year outfit, is distracting a tiger with a laser pointer. A woman, in the back, attempts to save some vases from being broken. Links to the CNY game-card at https://shaunanoordin.com/cny2022/
Disclaimer: please don’t try to actually play laser tag with real life tigers. 🐅

🎮 Play online at https://shaunanoordin.github.io/cny2022/ or at https://shaunanoordin.com/cny2022/ on any modern web browser.

If you too enjoy programming video games, you can take a look at the source code at https://github.com/shaunanoordin/cny2022 . And hey, if you just enjoy programming in general, be sure to check out https://github.com/zooniverse/ to see what the developers are doing to create a better Zooniverse experience.

Gong Xi Fa Cai (恭喜發財) everyone, and thanks for being part of the Zooniverse! ✨

Engaging Crowds: new options for subject delivery & interaction

Since its founding, a well-known feature of the Zooniverse platform has been that volunteers see (& interact with) image, audio, or video files (known as ‘subjects’ in Zooniverse parlance) in an intentionally random order. A visit to help.zooniverse.org provides this description of the subject selection process:

[T]he process for selecting which subjects get shown to volunteers is very simple: it randomly selects an (unretired, unseen) subject from the linked subject sets for that workflow.

https://help.zooniverse.org/next-steps/subject-selection/

For some project types, this method can help to avoid bias in classification. For other project types, however, random subject delivery can make the task more difficult.

Transcription projects frequently use a single image as the subject-level unit. These images most often depict a single page of text (i.e., 1 subject = 1 image = 1 page of text). Depending on the source material being transcribed, that unit/page is often only part of a multi-page document, such as a letter or manuscript. In these cases, random subject delivery removes the subject (page) from its larger context (document). This can actually make successful transcription more difficult, as seeing additional uses of a word or letter can be helpful for deciphering a particular hand.

Decontextualized transcription can also be frustrating for volunteers who may want greater context for the document they’re working on. It’s more interesting to be able to read or transcribe an entire letter, rather than snippets of a whole.

This is why we’re exploring new approaches to subject delivery on Zooniverse as part of the Engaging Crowds project. Engaging Crowds aims to ‘investigate the practice of citizen research in the heritage sector‘ in collaboration with the UK National Archives, the Royal Botanic Garden Edinburgh, and the National Maritime Museum. The project is funded by the UK Arts & Humanities Research Council as one of eight foundational projects in the ‘Towards a National Collection: Opening UK Heritage to the World‘ program.

As part of this research project, we have designed and built a new indexing tool that allows volunteers to have more agency around which subject sets—and even which subjects—they want to work on, rather than receiving them randomly.

The indexing tool allows for a few levels of granularity. Volunteers can select what workflow they want to work on, as well as the subject set. These features are currently being used on HMS NHS: The Nautical Health Service, the first of three Engaging Crowds Zooniverse projects that will launch on the platform before the end of 2021.

Subject set selection screen, as seen in HMS NHS: The Nautical Health Service.

Sets that are 100% complete are ‘greyed’ out, and moved to the end of the list — this feature was based on feedback from early volunteers who found it too easy to accidentally select a completed set to work on.

In the most recent iteration of the indexing tool, selection happens at the subject level, too. Scarlets and Blues is the second Engaging Crowds project, featuring an indexing tool expanded from the version seen in HMS NHS. Within a subject set, volunteers can select the individual subject they want to work on based on the metadata fields available. Once they have selected a subject, they can work sequentially through the rest of the set, or return to the index and choose a new subject.

Subject selection screen as seen in Scarlets and Blues.

On all subject index pages, the Status column tells volunteers whether a subject is Available (i.e. not complete and not yet seen); Already Seen (i.e. not complete, but already classified by the volunteer viewing the list); or Finished (i.e. has received enough classifications and no longer needs additional effort).

A major new feature of the indexing tool is that completed subjects remain visible, so that volunteers can retain the context of the entire document. When transcribing sequentially through a subject set, volunteers that reach a retired subject will see a pop-up message over the classify interface that notes the subject is finished, and offers available options for how to move on with the classification task, including going directly to the next classifiable subject or returning to the index to choose a new subject to classify.

Subject information banner, as seen in Scarlets and Blues.

As noted above, sequential classification can help provide context for classifying images that are part of a group, but until now has not been a common feature of the platform. To help communicate ordered subject delivery to volunteers, we have included information about the subject set–and a given subject’s place within that set–in a banner on top of the image. This subject information banner (shown above) tells volunteers where they are within the order of a specific subject set.

Possible community use cases for the indexing tool might include volunteers searching a subject set in order to work on documents written by a particular author, written within a specific year, or that are written in a certain language. Some of the inspiration for this work came from Talk posts on the Anti-Slavery Manuscripts project, in which volunteers asked how they could find letters written by certain authors whose handwriting they had become particularly adept at transcribing. Our hope is that the indexing tool will help volunteers more quickly access the type of materials in a project that speak to their interests or needs.

If you have any questions, comments, or concerns about the indexing tool, please feel free to post a comment here, or on one of our Zooniverse-wide Talk boards. This feature will not be immediately available in the Project Builder, but project teams who are interested in using the indexing tool on a future project should email contact@zooniverse.org and use ‘Indexing Tool’ in the subject line. We’re keen to continue trying out these new tools on a range of projects, with the ultimate goal of making them freely available in the Project Builder.

Frequently Asked Questions: Indexing Tool + Sequential Classification

“Will all new Zooniverse projects use this method for subject selection and sequential classification?”

No. The indexing tool is an optional feature. Teams who feel that their projects would benefit from this feature can reach out to us for more information about including the indexing tool in their projects. Those who don’t want the indexing tool will be able to carry on with random subject delivery as before.

“Why can’t I refresh the page to get a new subject?”

Projects that use sequential classification do not support loading new subjects on page refresh. If the project is using the indexing tool, you’ll need to return to the index and choose a new page. If the project is not using the indexing tool, you’ll need to classify the image in order to move forward in the sequence. However, the third Engaging Crowds project (a collaboration with the Royal Botanic Garden Edinburgh) will include the full suite of indexing tool features, plus an additional ‘pagination’ option that will allow volunteers to move forwards and backwards through a subject set to decide what to work on (see preview image below). We’ll write a follow-up to this post once that project has launched.

A green banner with the name of the subject set and Previous and Next buttons
Subject information banner, as seen in the forthcoming Royal Botanic Garden Edinburgh project.

“How do I know if I’m getting the same page again?”

The subject information banner will give you information about where you are in a subject set. If you think you’re getting the same subject twice, first start by checking the subject information banner. If you still think you’re getting repeat subjects, send the project team a message on the appropriate Talk board. If possible, include the information from the subject information banner in your post (e.g. “I just received subject 10/30 again, but I think I already classified it!”).
