In our Who’s who in the Zoo blog series we introduce you to some of the people behind the Zooniverse.
In this edition, meet Dr Mary Westwood, a recent addition to the Zooniverse team.
– Helen
Name: Mary Westwood
Location: University of Oxford, UK
Tell us about your role within the team
I joined the Zooniverse as a postdoctoral research assistant/project manager at the end of January 2022.
What did you do in your life before the Zooniverse?
I did a BSc and MSc in Biology at Wright State University in Ohio (where I’m from), then moved to the UK to do a PhD in Evolutionary Biology at the University of Edinburgh. Mostly I’m interested in how timing affects interactions between individuals, and towards the end of my PhD I started to dabble in bioacoustics and machine learning. Those last two topics are what led me to the Zooniverse.
What does your typical working day involve?
It varies a lot, but primarily I split my time between helping research teams get their projects up and running and doing my own research. I also get to write the weekly newsletters, which is a lot of fun.
How would you describe the Zooniverse in one sentence?
The innate curiosity and goodness of people put to very good use.
Tell us about the first Zooniverse project you were involved with
When I first checked out the Zooniverse, I wanted to see how bioacoustics projects were run on the platform. I can’t remember every project I looked into, but I do remember seeing HumBug and thinking what an incredible project it is.
Of all the discoveries made possible by the Zooniverse, which for you has been the most notable?
Research from the Penguin Watch team and volunteers has led to additional protections for marine protected areas, which is a really awesome outcome from a Zooniverse project.
What’s been your most memorable Zooniverse experience?
Best memory: all of the project launches, they’re a lot of fun.
Worst memory: mistakenly thinking I’d changed the background image of the entire Zooniverse website.
What are your top three citizen science projects?
I love them all equally.
What advice would you give to a researcher considering creating a Zooniverse project?
Just go for it. Start building a project, play around with setting up workflows. Delete them, start again. Don’t be afraid to reach out to us for help.
How can someone who’s never contributed to a citizen science project get started?
Browse which projects we’re hosting to see what sparks your interest. Download apps like iNaturalist and Merlin Bird ID – both awesome platforms which get you out into nature (win) and help science (double win).
Where do you hope citizen science and the Zooniverse will be in 10 years' time?
Everywhere. Since discovering the Zooniverse, I can’t believe everyone doesn’t already know about it.
Is there anything in the Zooniverse pipeline that you’re particularly excited about?
I’m about to experience my first Zooniverse Team Meeting. Very excited to finally get together with all of the awesome people I’ve worked with remotely over the past six months.
When not at work, where are we most likely to find you?
Somewhere outdoors and with a pint, possibly also with a book or friends.
Do you have any party tricks or hidden talents?
My party trick is strong-arming any topic of conversation into a discussion about circadian rhythms.
You can check out Mary’s Zooniverse project here: The Cricket Wing
In this blog post, I’ll describe a recent prototyping project we (Jim O’Donnell: front-end developer; Sam Blickhan: Project Manager) carried out with our colleagues at the British Library (Mia Ridge, who I’m also collaborating with on the Collective Wisdom project) to explore IIIF compatibility for the Zooniverse Project Builder. You can read Mia’s complementary blog post here.
History & context
While Zooniverse supports projects working with a number of different data formats (data items are known as ‘subjects’), including video and audio, far and away the most frequently used data are images. Images are easy enough to drag and drop into our simple uploader (a feature of the Project Builder for adding data to your project) to create groups of subjects, or subject sets. If you want to upload your subjects with their associated metadata, however, things become slightly more complex. A subject manifest is a data table that lists image file names alongside associated metadata. By including a manifest with your image upload, the metadata will remain associated with those images within the Zooniverse platform.
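To make that concrete, here’s a minimal sketch of building a subject manifest in Python. The `filename` column is the essential one; `date` and `archive_ref` are made-up metadata fields for illustration.

```python
# A minimal, illustrative subject manifest: one row per image, with
# whatever metadata you want to stay attached to that image.
import csv

rows = [
    {"filename": "page_001.jpg", "date": "1851-03-02", "archive_ref": "MS/101/1"},
    {"filename": "page_002.jpg", "date": "1851-03-02", "archive_ref": "MS/101/2"},
]

with open("manifest.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["filename", "date", "archive_ref"])
    writer.writeheader()
    writer.writerows(rows)
```

Uploading `manifest.csv` alongside `page_001.jpg` and `page_002.jpg` keeps the date and archive reference attached to each subject on the platform.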
So, what happens if you already have a manifest? Can you upload any type of manifest into Zooniverse? What if you’re working with a specific set of standards?
IIIF (pronounced “triple eye eff”) stands for International Image Interoperability Framework. It is a set of standards for image and A/V delivery across the web, from servers to different web environments. It supports viewing of images as well as interaction, and uses manifests as a major structural component.
If you’re new to IIIF, that’s okay! To understand the work we did, you’ll need three IIIF definitions, all reproduced here from https://iiif.io/get-started/how-iiif-works/:
Manifest: the prime unit in IIIF which lists all the information that makes up a IIIF object. It communicates how to display your digital objects, and what information to display about them, including structure, to varying degrees of complexity as determined by the implementer. (For example, if the object is a book of illustrations, where each illustrated page is a canvas, and there is one specific order to the arrangement of those pages).
Canvas: the frame of reference for the display of your content, both spatial and temporal (just like a painting canvas for two-dimensional materials, or with an added time dimension for a/v content).
Annotation: a standard way to associate different types of content to whatever is on your canvas (such as a translation of a line or the name of a person in a photograph. In the IIIF model, images and other presentation content are also technically annotations onto a canvas). For more detail, see the Web Annotation Data Model.
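Putting those three definitions together, a skeletal Presentation 3.0 manifest looks roughly like the sketch below. It is trimmed heavily (real manifests carry far more detail), and every URL in it is a placeholder.

```python
# Skeleton of a IIIF Presentation 3.0 manifest: a Manifest contains
# Canvases in reading order; each Canvas gets its image via a
# "painting" Annotation. All identifiers below are placeholders.
manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/book1/manifest",
    "type": "Manifest",
    "label": {"en": ["Example book"]},
    "items": [{                                   # one Canvas per page
        "id": "https://example.org/iiif/book1/canvas/p1",
        "type": "Canvas",
        "height": 1800,
        "width": 1200,
        "items": [{                               # an AnnotationPage
            "id": "https://example.org/iiif/book1/page/p1/1",
            "type": "AnnotationPage",
            "items": [{                           # the painting Annotation
                "id": "https://example.org/iiif/book1/annotation/p1-image",
                "type": "Annotation",
                "motivation": "painting",
                "body": {
                    "id": "https://example.org/images/p1.jpg",
                    "type": "Image",
                    "format": "image/jpeg",
                },
                "target": "https://example.org/iiif/book1/canvas/p1",
            }],
        }],
    }],
}
```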
What we did
For this effort, we worked with Mia and her colleagues at the British Library on an exploratory project to see if we could create a proof of concept for Zooniverse image upload and data export which was IIIF compatible. If successful, these two prototypes could then form the basis for an expanded effort. We used the British Library In The Spotlight Zooniverse project as a testing ground.
Data upload
First, we wanted to figure out a way to create a Zooniverse subject set from a IIIF manifest. We figured the easiest approach would be to use the manifest URL, so Jim built a tool that imports IIIF manifests via a URL pasted into the Project Builder (see image below).
This is an experimental feature, so it won’t show up in your Zooniverse project builder ‘Subject Sets’ page by default. If you want to try it out, you can preview the feature by adding subject-sets/iiif?env=production to your project builder URL. For example, if your project number is #xxx, you’d use the URL https://www.zooniverse.org/lab/xxx/subject-sets/iiif?env=production
To create a new subject set, you simply copy/paste the IIIF manifest URL into the box at the top of the image and click ‘Fetch Manifest’. The Zooniverse uploader will present a list of metadata fields from the manifest. The tick box column at the far right allows you to flag certain fields as ‘Hidden’, meaning they won’t be shown to volunteers in your project’s classification interface. Once you’ve marked everything you want to be ‘Hidden’, you click ‘Create a subject set’ to generate the new subject set from the IIIF manifest.
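Conceptually, what the importer has to do is walk the manifest’s canvases and pull out an image URL plus metadata for each one. A rough sketch of that extraction (not the actual tool’s code, and assuming a Presentation 3.0 manifest with English-language metadata labels):

```python
# Sketch only: fetch a IIIF manifest and derive one subject per canvas.
import requests

manifest_url = "https://example.org/iiif/book1/manifest"  # placeholder URL
manifest = requests.get(manifest_url).json()

subjects = []
for canvas in manifest.get("items", []):
    painting = canvas["items"][0]["items"][0]      # first painting annotation
    metadata = {m["label"]["en"][0]: m["value"]["en"][0]
                for m in canvas.get("metadata", [])}
    subjects.append({"image_url": painting["body"]["id"], "metadata": metadata})

print(f"Parsed {len(subjects)} candidate subjects from the manifest")
```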
Export to manifest with IIIF annotations
In the second phase of this experiment, we explored how to export Zooniverse project results as IIIF annotations. This was trickier, because the Zooniverse classification model requires multiple classifications from different volunteers, which are then typically aggregated together after being downloaded from the platform.
To export Zooniverse results as IIIF annotations, therefore, we needed to include a step that runs the appropriate in-house offline aggregation code, then converts the data to the appropriate IIIF annotation format. Because the aggregation step is necessary to produce a single annotation per task, this step is project- and workflow-specific (whereas the IIIF manifest URL upload works for all project types). For this effort, we tested annotation publishing on the In The Spotlight Transcribe Dates workflow, which uses a simple free-text entry task. The other In The Spotlight workflow has a slightly more complex task structure (rectangle marking task + text entry sub-task), which we’re hoping to be able to add to the technical documentation soon.
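The final conversion step is comparatively simple once aggregation has produced a single consensus answer per subject: each answer becomes one annotation targeting the canvas its subject came from. A hedged sketch following the W3C Web Annotation model (the aggregation itself is omitted, and all identifiers are placeholders):

```python
# Sketch only: wrap one aggregated transcription as a Web Annotation.
consensus_text = "3rd March 1851"   # output of the offline aggregation step
canvas_id = "https://example.org/iiif/book1/canvas/p1"  # placeholder target

annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/1",       # placeholder
    "type": "Annotation",
    "motivation": "supplementing",  # IIIF's convention for transcribed text
    "body": {"type": "TextualBody", "value": consensus_text, "language": "en"},
    "target": canvas_id,
}
```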
Now, we need your feedback! The next steps for this work will include identifying community needs and interest – would you use these tools for your Zooniverse project? What features look useful (or less so)? Your feedback will help us determine our next steps. Mostly, we want to know who our potential audiences are, what task types they would most want to use, and what sort of comfort level they have, particularly when it comes to running the annotations code (from “This is great!” to “I don’t even know where to start!”). There are a lot of possible routes we could take from here, and we want to make sure our future work is in service of our project building community.
Try out the In The Spotlight project and help create real data for testing ingest processes.
Mia and I are also part of the IIIF Slack community, so feel free to ping us there.
Finally, a massive “Thank you!” to the British Library for funding this experiment, and to Glen Robson and Josh Hadro at IIIF for their feedback on various stages of this experiment.
The Zooniverse team in Oxford, UK, is looking for a web developer intern to join us in summer 2022. If you’re looking to learn how to build websites and apps with a team of friendly developers, or if you just want an opportunity to flex your extant coding skills in an environment that loves scientific curiosity, then come have some tea with us!
The team here in the Zooniverse want to welcome more folks into the world of software development, and in turn, we want to learn from the unique ideas and experiences you can share.
The time has come to announce the winners of Grant’s Great Leaving Challenge! Many thanks to all who submitted classifications for our four featured projects over the past week. Your efforts have absolutely wowed us at the Zooniverse – not only did you meet the 100,000 classifications goal, you blew right through it. All in all, you submitted a whopping 293,692 classifications – nearly 3x our goal!
This classification challenge was a massive push forward for the projects involved, and the research teams are incredibly grateful. Grant himself was touched – he had this to say about the results of his namesake challenge:
“Over the last decade I’ve constantly been blown away by the amazing effort and commitment from Zooniverse volunteers, and yet again they have surpassed all expectations! I want to thank them for all they have done, both for this challenge, and over the entire lifetime of the project. THANK YOU!”
Here’s some data to back up just how successful this challenge was:
Figure 1. The x-axes show each day the challenge ran, while the y-axes mark the percent change in classifications from the week prior. For example, for Penguin Watch there was a 100% increase in classifications on Tuesday March 22nd compared to Tuesday March 15th.
Figure 2. Here, each plot shows the date on the x-axis and the total number of classifications for that day on the y-axis. The shaded areas indicate which days were part of the challenge, and the non-shaded white areas prior are data from the preceding week. Note that the y-axes are unequal across plots because they’ve been scaled to fit their own data.
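If you’d like to check the arithmetic behind those captions, the percent change is measured against the same weekday one week earlier; a two-line illustration with made-up counts:

```python
# Percent change relative to the same weekday one week earlier.
challenge_day, week_prior = 2400, 1200   # illustrative counts
pct_change = (challenge_day / week_prior - 1) * 100
print(f"{pct_change:+.0f}%")             # -> +100%, i.e. classifications doubled
```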
While, in this case, I do really think the figures speak for themselves, here are some highlights:
Just two days into the challenge, daily classifications for Dingo? Bingo! more than doubled compared to one week prior. A short two days later, they reached a 300% increase from the same day the previous week. All in all, Dingo? Bingo! volunteers submitted an incredible 112,505 classifications!
Planet Hunters NGTS volunteers rode a hefty 200% increase in classifications for the first two days of the challenge. On the fifth day, they peaked at an incredible 300% increase! Overall, volunteers submitted a whopping 115,388 classifications over the course of the 6-day challenge. Remarkable!
Penguin Watch volunteers readily doubled classifications from the week prior, with a peak on the fourth day when classifications were up more than 200% from the preceding week. By the end of the challenge, volunteers had submitted a grand total of 55,787 classifications!
On day two of the challenge, Weather Rescue at Sea volunteers submitted an astonishing 350% more classifications than one week prior. On the final two days of the challenge, classifications were up by nearly 400% from the preceding week! Overall, volunteers submitted an awesome 10,012 classifications.
When pulling together this data, we were just absolutely amazed by how much effort the volunteers put into Grant’s Great Leaving Challenge. What an awesome example of the power of citizen science. From all of us at the Zooniverse and from the project teams who took part in the challenge – thank you. This has been such a fun way to send off Grant, who will be greatly missed by all!
If you subscribe to our newsletters, the name “Grant” probably sounds familiar to you. Grant (our Project Manager and basically the ‘backbone of the Zooniverse’) has been with us for nearly 9 years, and with a heavy heart we’re sad to report he’s finally moving on to his next great adventure.
To mark his departure, we’ve announced “Grant’s Great Leaving Challenge”. The goal of this challenge is to collect 100,000 new classifications for the four Featured Projects on the homepage. Starting yesterday, if you submit at least 10 classifications total for these projects, your name will automatically be entered to win one of three prizes. Importantly, you must be logged in while classifying to be eligible for the draw. The challenge will end on Sunday, March 27th at midnight (GMT), and the winners will be announced on Tuesday, March 29th.
While we aren’t divulging what the prizes are, it might tempt you to hear that they’ll be personalised by Grant himself…
Read on to learn about the four featured projects, and what you can do to help them out.
Penguin Watch: Penguins – globally loved, but under threat. Research shows that in some regions, penguin populations are in decline; but why? Begin monitoring penguins to help us answer this question. With over 100 sites to explore, we need your help now more than ever!
Planet Hunters NGTS: The Next-Generation Transit Survey has been searching for transiting exoplanets around the brightest stars in the sky. We need your help sifting through the observations flagged by the computers to search for hidden worlds that might have been missed in the NGTS team’s review. Most of the planets in the dataset have likely been found already, but you just might be the first to find a previously unknown exoplanet!
Dingo? Bingo! The Myall Lakes Dingo Project aims to develop and test non-lethal tools for dingo management, and to further our understanding and appreciation of this iconic Australian carnivore. We have 64 camera-traps across our study site, and need your help to identify the animals they detect – including dingoes.
Weather Rescue at Sea: The aim of the Weather Rescue At Sea project is to construct an extended global surface temperature record back to the 1780s, based on air temperature observations recorded across the planet. This will be achieved by crowd-sourcing the recovery (or data rescue) of weather observations from historical ship logbooks, station records, weather journals and other sources, to produce a longer and more consistent dataset of global surface temperature.
Let’s send Grant off with a bang. Happy classifying!
Since its founding, a well-known feature of the Zooniverse platform has been that volunteers see (& interact with) image, audio, or video files (known as ‘subjects’ in Zooniverse parlance) in an intentionally random order. A visit to help.zooniverse.org provides this description of the subject selection process:
[T]he process for selecting which subjects get shown to volunteers is very simple: it randomly selects an (unretired, unseen) subject from the linked subject sets for that workflow.
For some project types, this method can help to avoid bias in classification. For other project types, however, random subject delivery can make the task more difficult.
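As a toy illustration of the default behaviour quoted above (assumed data shapes; this is not the real back-end code):

```python
import random

def next_random_subject(workflow_subject_sets, seen_ids):
    """Pick a random unretired, unseen subject from the linked sets."""
    candidates = [subject
                  for subject_set in workflow_subject_sets
                  for subject in subject_set
                  if not subject["retired"] and subject["id"] not in seen_ids]
    return random.choice(candidates) if candidates else None
```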
Transcription projects frequently use a single image as the subject-level unit. These images most often depict a single page of text (i.e., 1 subject = 1 image = 1 page of text). Depending on the source material being transcribed, that unit/page is often only part of a multi-page document, such as a letter or manuscript. In these cases, random subject delivery removes the subject (page) from its larger context (document). This can actually make successful transcription more difficult, as seeing additional uses of a word or letter can be helpful for deciphering a particular hand.
Decontextualized transcription can also be frustrating for volunteers who may want greater context for the document they’re working on. It’s more interesting to be able to read or transcribe an entire letter, rather than snippets of a whole.
As part of this research project, we have designed and built a new indexing tool that allows volunteers to have more agency around which subject sets—and even which subjects—they want to work on, rather than receiving them randomly.
The indexing tool allows for a few levels of granularity. Volunteers can select what workflow they want to work on, as well as the subject set. These features are currently being used on HMS NHS: The Nautical Health Service, the first of three Engaging Crowds Zooniverse projects that will launch on the platform before the end of 2021.
Subject set selection screen, as seen in HMS NHS: The Nautical Health Service.
Sets that are 100% complete are ‘greyed’ out, and moved to the end of the list — this feature was based on feedback from early volunteers who found it too easy to accidentally select a completed set to work on.
In the most recent iteration of the indexing tool, selection happens at the subject level, too. Scarlets and Blues is the second Engaging Crowds project, featuring an expanded version of the indexing tool seen in HMS NHS. Within a subject set, volunteers can select the individual subject they want to work on based on the metadata fields available. Once they have selected a subject, they can work sequentially through the rest of the set, or return to the index and choose a new subject.
Subject selection screen as seen in Scarlets and Blues.
On all subject index pages, the Status column tells volunteers whether a subject is Available (i.e. not complete and not yet seen); Already Seen (i.e. not complete, but already classified by the volunteer viewing the list); or Finished (i.e. has received enough classifications and no longer needs additional effort).
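In sketch form, the Status column logic reads roughly like this (field names are assumptions, not the platform’s actual implementation):

```python
def subject_status(subject, seen_by_volunteer):
    """Sketch of the Status column logic; field names are assumed."""
    if subject["retired"]:              # enough classifications received
        return "Finished"
    if subject["id"] in seen_by_volunteer:
        return "Already Seen"           # not complete, but this volunteer saw it
    return "Available"                  # not complete and not yet seen
```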
A major new feature of the indexing tool is that completed subjects remain visible, so that volunteers can retain the context of the entire document. When transcribing sequentially through a subject set, volunteers that reach a retired subject will see a pop-up message over the classify interface that notes the subject is finished, and offers available options for how to move on with the classification task, including going directly to the next classifiable subject or returning to the index to choose a new subject to classify.
Subject information banner, as seen in Scarlets and Blues.
As noted above, sequential classification can help provide context for classifying images that are part of a group, but until now has not been a common feature of the platform. To help communicate ordered subject delivery to volunteers, we have included information about the subject set – and a given subject’s place within that set – in a banner on top of the image. This subject information banner (shown above) tells volunteers where they are within the order of a specific subject set.
Possible community use cases for the indexing tool might include volunteers searching a subject set in order to work on documents written by a particular author, written within a specific year, or that are written in a certain language. Some of the inspiration for this work came from Talk posts on the Anti-Slavery Manuscripts project, in which volunteers asked how they could find letters written by certain authors whose handwriting they had become particularly adept at transcribing. Our hope is that the indexing tool will help volunteers more quickly access the type of materials in a project that speak to their interests or needs.
If you have any questions, comments, or concerns about the indexing tool, please feel free to post a comment here, or on one of our Zooniverse-wide Talk boards. This feature will not be immediately available in the Project Builder, but project teams who are interested in using the indexing tool on a future project should email contact@zooniverse.org and use ‘Indexing Tool’ in the subject line. We’re keen to continue trying out these new tools on a range of projects, with the ultimate goal of making them freely available in the Project Builder.
“Will all new Zooniverse projects use this method for subject selection and sequential classification?”
No. The indexing tool is an optional feature. Teams who feel that their projects would benefit from this feature can reach out to us for more information about including the indexing tool in their projects. Those who don’t want the indexing tool will be able to carry on with random subject delivery as before.
“Why can’t I refresh the page to get a new subject?”
Projects that use sequential classification do not support loading new subjects on page refresh. If the project is using the indexing tool, you’ll need to return to the index and choose a new page. If the project is not using the indexing tool, you’ll need to classify the image in order to move forward in the sequence. However, the third Engaging Crowds project (a collaboration with the Royal Botanic Garden Edinburgh) will include the full suite of indexing tool features, plus an additional ‘pagination’ option that will allow volunteers to move forwards and backwards through a subject set to decide what to work on (see preview image below). We’ll write a follow-up to this post once that project has launched.
Subject information banner, as seen in the forthcoming Royal Botanic Garden Edinburgh project.
“How do I know if I’m getting the same page again?”
The subject information banner will give you information about where you are in a subject set. If you think you’re getting the same subject twice, first start by checking the subject information banner. If you still think you’re getting repeat subjects, send the project team a message on the appropriate Talk board. If possible, include the information from the subject information banner in your post (e.g. “I just received subject 10/30 again, but I think I already classified it!”).
What’s interesting? Or rather, what’s most interesting? This most fundamental of questions isn’t one we often directly address when thinking about scientific data, where we’re usually concerned with classification or deriving some global property of the data. But interestingness is important – in my own work with large surveys of the Universe, how interesting a new object is (an exploding star, or a strange galaxy) may determine whether we point telescopes at it, or whether it will languish, unobserved, in a catalogue for decades.
Hanny’s Voorwerp – a light echo lit up by activity in a now-faded quasar – was found early in the Galaxy Zoo project, providing a timely reminder of the importance of finding the unusual things in large datasets!
We’ve learnt how important serendipitous discoveries can be from previous astronomical Zooniverse projects, ranging from Galaxy Zoo’s Green Peas to Boyajian’s Star, ‘the most interesting star in the Milky Way’ (even if it turns out not to host an alien megastructure). With new projects such as the Vera Rubin Observatory’s LSST survey nearly ready to provide an unprecedented flood of information, astronomers around the world are honing their techniques for getting the most out of such large datasets – but the problem of preparing for surprise has been neglected.
That’s in part because it turns out it’s hard to get funding for a search for the unusual, where by definition I can’t say in advance what it is that we’ll find. I’m therefore very pleased the team have received a new grant from the Alfred P. Sloan Foundation to build on the Zooniverse to provide tools designed for serendipity. My hunch is that, as we’ve learnt from so many Zooniverse projects before, a combination of human and machine intelligence is needed for the task; while modern machine learning is good at finding the unusual, working out which unusual things are actually interesting is best left to human intuition and intelligence.
If we think about being ‘unusual’ and being ‘interesting’ as different axes, an interesting space on which to plot our data appears. Modern machine learning is best suited to finding the unusual – but most unusual things are boring artefacts.
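To make the division of labour concrete, here’s a minimal sketch of the machine half using scikit-learn’s IsolationForest, a common off-the-shelf anomaly detector (a stand-in for illustration, not necessarily what this project will use); the human half is judging which of the flagged objects are actually interesting.

```python
# Rank objects by unusualness, then queue the most unusual for people.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
features = rng.normal(size=(10_000, 8))    # stand-in for catalogue features

detector = IsolationForest(random_state=42).fit(features)
scores = detector.score_samples(features)  # lower score = more unusual

review_queue = np.argsort(scores)[:100]    # 100 most unusual -> human review
print(f"{review_queue.size} objects queued for volunteers to judge")
```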
The project won’t stop at astronomy. In combination with Prof Kate Jones’ team at UCL and elsewhere, we’ll look for surprises in audio recordings from ecological monitoring projects, testing whether identifying rare events – such as gunshots – might contribute to assessments of the health of an ecosystem. (You might remember Kate – she ran the Bat Detective project on the Zooniverse.) And with the Science Scribbler team (particularly Michele Darrow and Mark Basham) based at the Rosalind Franklin Institute, we’ll apply these techniques to the latest high-resolution imaging to spot structures in cells.
PS If you have a PhD in a relevant scientific discipline, or in computer science, then we’re advertising a postdoc – see here for details, or get in touch via Twitter or email to discuss.
The volunteers on our Planet Hunters TESS project have helped discover another planetary system! The new system, HD 152843, consists of two planets that are similar in size to Neptune and Saturn in our own solar system, orbiting around a bright star that is similar to our own Sun. This exciting discovery follows on from our validation of the long-period planet around an evolved (old) star, TOI-813, and from our recent paper outlining the discovery of 90 Planet Hunters TESS planet candidates, which gives us encouragement that there are a lot more exciting systems to be found with your help!
Figure: The data obtained by NASA’s Transiting Exoplanet Survey Satellite which shows two transiting planets. The plot shows the brightness of the star HD 152843 over a period of about a month. The dips appear where the planets passed in front of the star and blocked some of its light from getting to Earth.
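If you’d like to look at the light curve yourself, the community-maintained lightkurve package can fetch the same TESS data (a sketch, not part of the Planet Hunters TESS pipeline; sector availability may vary):

```python
# Sketch: download and plot a TESS light curve for HD 152843.
import lightkurve as lk

search = lk.search_lightcurve("HD 152843", mission="TESS")
lc = search.download()        # first available light curve
lc.normalize().plot()         # the transits appear as dips in brightness
```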
Multi-planet systems, like this one, are particularly interesting as they allow us to study how planets form and evolve. This is because the two planets in this system must have formed out of the same material at the same time, yet evolved in different ways, resulting in the different planet properties that we now observe.
Even though there are already hundreds of confirmed multi-planet systems, the number of multi-planet systems around stars bright enough for us to study them using ground-based telescopes remains very small. However, the brightness of this new citizen-science-discovered system, HD 152843, makes it an ideal target for follow-up observations, allowing us to measure the planet masses and possibly even probe their atmospheric composition.
This discovery was made possible with the help of tens of thousands of citizen scientists who helped to visually inspect data obtained by NASA’s Transiting Exoplanet Survey Satellite in the search for distant worlds. We thank all of the citizen scientists taking part in the project who continue to help with the discovery of exciting new planet systems, and in particular Safaa Alhassan, Elisabeth M. L. Baeten, Stewart J. Bean, David M. Bundy, Vitaly Efremov, Richard Ferstenou, Brian L. Goodwin, Michelle Hof, Tony Hoffman, Alexander Hubert, Lily Lau, Sam Lee, David Maetschke, Klaus Peltsch, Cesar Rubio-Alfaro, and Gary M. Wilson, the citizen scientists who directly helped with this discovery and who have become co-authors of the discovery paper.
The paper has been published by the Monthly Notices of the Royal Astronomical Society (MNRAS) journal and you can find a version of it on arXiv at: https://arxiv.org/pdf/2106.04603.pdf.
Over the years a growing number of companies have included Zooniverse in their digital engagement and volunteer efforts, connecting their employee network with real research projects that need their help.
It’s been lovely hearing the feedback from employees:
“This was an awesome networking event where we met different team members and also participated in a wonderful volunteer experience. I had so much fun!”
“This activity is perfectly fitted to provide remote/virtual support. You can easily review photos from anywhere. Let’s do this again!”
“Spotting the animals was fun; a nice stress reliever!”
The impact of these partnerships on employees and on Zooniverse has been tremendous. For example, in 2020 alone, 10,000+ Verizon employees contributed over a million classifications across dozens of Zooniverse projects. With companies small to large incorporating Zooniverse into their volunteer efforts, this new stream of classifications has been a tremendous boon for helping propel Zooniverse projects towards completion and into the analysis and dissemination phases of their efforts. And the feedback from employees has been wonderful — participants across the board express their appreciation for having a meaningful way to engage in real research through their company’s volunteer efforts.
A few general practices that have helped set corporate volunteering experiences up for success:
Focus and choice: Provide a relatively short list of recommended Zooniverse projects that align with your company’s goals/objectives (e.g., topic-specific, location-specific, etc.), but also leave room for choice. We have found that staff appreciate when a company provides 3-6 specific project suggestions (so they can dive quickly into a project), as well as having the option to choose from the full list of 70+ projects at zooniverse.org/projects.
Recommend at least 3 projects: This is essential in case there happens to be a media boost for a given project before your event and the project runs out of active data*. Always good to have multiple projects to choose from.
Team building: Participation in Zooniverse can be a tremendous team building activity. While it can work well to just have people participate individually, at their own convenience, it also can be quite powerful to participate as a group. We have created a few different models for 1-hour, 3-hour, and 6-hour team building experiences. The general idea is that you start the session as a group to learn about Zooniverse and the specific project you’ll be participating in. You then set a Classification Challenge for the hour (e.g., as a group of 10, we think we can contribute 500 classifications by the end of the hour). You play music in the background while you classify and touch base halfway through to see how you’re doing towards your goal (by checking your personal stats at zooniverse.org) and to share interesting, funny, and/or unusual images you’ve classified. At the end of the session, you celebrate reaching your group’s Classification Challenge goal and talk through a few reflection questions about the experience and other citizen science opportunities you might explore in the future.
Gathering stats: Impact reports have been key in helping a company tell the story of the impact of their corporate volunteering efforts, both internally to their employee network and externally to their board and other stakeholders.
Some smaller companies (or subgroups within a larger company) manually gather stats about their group’s participation in Zooniverse. They do this by taking advantage of the personal stats displayed within the Zooniverse.org page (e.g., number of classifications you’ve contributed). They request that their staff register and login to Zooniverse before participating and send a screenshot of their Zooniverse.org page at the end of each session. The team lead then adds up all the classifications and records the hours spent as a group participating in Zooniverse.
If manual stats collection is not feasible for your company, don’t hesitate to reach out to us at contact@zooniverse.org to explore possibilities together.
We’ve also created a variety of bespoke experiences for companies who are interested in directly supporting the Zooniverse. Please email contact@zooniverse.org if you’re interested in exploring further and/or have any questions.
*Zooniverse project datasets range in size; everything from a project’s dataset being fully completed within a couple weeks (e.g., The Andromeda Project) to projects like Galaxy Zoo and Snapshot Serengeti that have run and will continue to run for many years. But even for projects that have data that will last many months or years, standard practice is to upload data in batches, lasting ~2-4 months. When a given dataset is completed, this provides an opportunity for the researchers to share updates about the project, interim results, etc. and encourage participation in the next cycle of active data.
What are “Yellowballs?” Shortly after the Milky Way Project (MWP) was launched in December 2010, volunteers began using the discussion board to inquire about small, roundish “yellow” features they identified in infrared images acquired by the Spitzer Space Telescope. These images use a blue-green-red color scheme to represent light at three infrared wavelengths that are invisible to our eyes. The (unanticipated) distinctive appearance of these objects comes from their similar brightness and extent at two of these wavelengths: 8 microns, displayed in green, and 24 microns, displayed in red. The yellow color is produced where green and red overlap in these digital images. Our early research to answer the volunteers’ question, “What are these ‘yellow balls’?” suggested that they are produced by young stars as they heat the surrounding gas and dust from which they were born. The figure below shows the appearance of a typical yellowball (or YB) in a MWP image. In 2016, the MWP was relaunched with a new interface that included a tool that let users identify and measure the sizes of YBs. Since YBs were first discovered, over 20,000 volunteers have contributed to their identification, and by 2017, volunteers had catalogued more than 6,000 YBs across roughly 300 square degrees of the Milky Way.
New star-forming regions. We’ve conducted a pilot study of 516 of these YBs that lie in a 20-square-degree region of the Milky Way, which we chose for its overlap with other large surveys and catalogs. Our pilot study has shown that the majority of YBs are associated with protoclusters – clusters of very young stars that are about a light-year in extent (less than the average distance between mature stars). Stars in protoclusters are still in the process of growing by gravitationally accumulating gas from their birth environments. YBs that represent new detections of star-forming regions in a 6-square-degree subset of our pilot region are circled in the two-color (8 microns: green, 24 microns: red) image shown below. YBs present a “snapshot” of developing protoclusters across a wide range of stellar masses and brightness. Our pilot study results indicate a majority of YBs are associated with protoclusters that will form stars less than ten times the mass of the Sun.
YBs show unique “color” trends. The ratio of an object’s brightness at different wavelengths (or what astronomers call an object’s “color”) can tell us a lot about the object’s physical properties. We developed a semi-automated tool that enabled us to conduct photometry (measure the brightness) of YBs at different wavelengths. One interesting feature of the new YBs is that their infrared colors tend to be different from the infrared colors of YBs that have counterparts in catalogs of massive star formation (including stars more than ten times as massive as the Sun). If this preliminary result holds up for the full YB catalog, it could give us direct insight into differences between environments that do and don’t produce massive stars. We would like to understand these differences because massive stars eventually explode as supernovae that seed their environments with heavy elements. There’s a lot of evidence that our Solar System formed in the company of massive stars.
The figure below shows a “color-color plot” taken from our forthcoming publication. This figure plots the ratios of total brightness at different wavelengths (24 to 8 microns vs. 70 to 24 microns) using a logarithmic scale. Astronomers use these color-color plots to explore how stars’ colors separate based on their physical properties. This color-color plot shows that some of our YBs are associated with massive stars; these YBs are indicated in red. However, a large population of our YBs, indicated in black, are not associated with any previously studied object. These objects are generally in the lower right part of our color-color plot, indicating that they are less massive and cooler than the objects in the upper left. This implies there is a large number of previously unstudied star-forming regions that have been discovered by MWP volunteers. Expanding our pilot region to the full catalog of more than 6,000 YBs will allow us to better determine the physical properties of these new star-forming regions.
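For readers who want the flavour of that figure, here’s a sketch with simulated photometry (the real plot uses measured YB fluxes; every value below is made up for illustration):

```python
# Illustrative color-color plot: log-scaled flux ratios between bands.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
f8  = rng.lognormal(0.0, 0.5, 500)   # simulated 8 micron brightness
f24 = rng.lognormal(0.5, 0.5, 500)   # simulated 24 micron brightness
f70 = rng.lognormal(1.0, 0.5, 500)   # simulated 70 micron brightness

plt.scatter(f70 / f24, f24 / f8, s=5)
plt.xscale("log")
plt.yscale("log")
plt.xlabel("70 μm / 24 μm")
plt.ylabel("24 μm / 8 μm")
plt.title("Color-color plot (simulated data)")
plt.show()
```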
Volunteers did a great job measuring YB sizes! MWP volunteers used a circular tool to measure the sizes of YBs. To assess how closely user measurements reflect the actual extent of the infrared emission from the YBs, we compared the user measurements to a 2D model that enabled us to quantify the sizes of YBs. The figure below compares the sizes measured by users to the results of the model for YBs that best fit the model. It indicates a very good correlation between these two measurements. The vertical green lines show the deviations in individual measurements from the average. This illustrates the “power of the crowd” – on average, volunteers did a great job measuring YB sizes!
Stay tuned… Our next step is to extend our analysis to the entire YB catalog, which contains more than 6,000 YBs spanning the Milky Way. To do this, we are in the process of adapting our photometry tool to make it more user-friendly and allow astronomy students and possibly even citizen scientists to help us rapidly complete photometry on the entire dataset.
Our pilot study was recently accepted for publication in the Astrophysical Journal. Our early results on YBs were also presented in the Astrophysical Journal, and in an article in Frontiers for Young Minds, a journal for children and teens.