For the second year in a row, we’re honoring the hundreds of thousands of contributors, research teams, educators, Talk moderators, and more who make Zooniverse possible. This second edition of Into the Zooniverse highlights another 40 of the many projects that were active on the website and app in the 2019 – 20 academic year.
In that year, the Zooniverse launched 65 projects, volunteers submitted more than 85 million classifications, research teams published 35 papers, and hundreds of thousands of people from around the world took part in real research. Wow!
To get your copy of Into the Zooniverse: Vol II, download a free PDF here or order a hard copy on Blurb.com. Note that the cost of the book covers production and shipping; Zooniverse does not receive profit through sales. According to the printer, printing and binding take 4-5 business days, then your order ships. To ensure that you receive your book before the December holidays, you can use this tool to calculate shipping times.
On 9 November 2020, a security researcher notified us of a cross-site scripting (XSS) vulnerability on our zoomapper application. This service hosts tile sets that are used to render maps for a small number of other Zooniverse applications, but is not connected to any critical Zooniverse infrastructure. The vulnerability could have allowed an attacker to execute malicious scripts in the browsers of users visiting the zoomapper application.
We were able to remediate the vulnerability within hours of the report by disabling the browser GUI for zoomapper (see PR #6). The GUI had been turned on by default for the zoomapper app, but is not necessary to fulfill the app’s intended role.
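The mitigation pattern here is a general one: a tile service only needs to answer tile and metadata requests, so any bundled viewer pages can simply be refused. As a minimal illustration (our own sketch with hypothetical paths, not the actual zoomapper or Tileserver-GL code), a request router might allow only tile-shaped URLs:

```python
# Sketch: allow only tile/metadata requests; refuse the bundled GUI.
# The path patterns are hypothetical, not the real zoomapper routes.
import re

TILE_PATTERN = re.compile(r"^/data/[\w-]+/\d+/\d+/\d+\.(png|pbf)$")
METADATA_PATTERN = re.compile(r"^/data/[\w-]+\.json$")

def route(path: str) -> int:
    """Return an HTTP status code for a request path."""
    if TILE_PATTERN.match(path) or METADATA_PATTERN.match(path):
        return 200  # serve the tile or its metadata
    return 404      # GUI pages, viewers, everything else: refused
```

With routing like this, any script injection that relied on the viewer pages being served has nothing to attach to, while the tile endpoints the other applications depend on keep working.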
Additional notes on the incident:
The vulnerability had existed since the app was first deployed on September 15th, 2020.
The vulnerability was located in the underlying Tileserver-GL dependency.
No Zooniverse user or project data was exposed by this vulnerability.
We’d like to thank Rachit Verma (@b43kd00r) for bringing this issue to our attention and for following responsible disclosure by reporting it to us in private, as requested on our security page.
Just over three years ago we launched the first Etch A Cell project (https://www.zooniverse.org/projects/h-spiers/etch-a-cell). The project was the first of its kind on the Zooniverse: never before had we asked volunteers to help draw around the small structures inside cells (also known as ‘manual segmentation of organelles’) visualised with very high-powered electron microscopes. We even had to develop a new tool type on the Zooniverse to do this – a drawing tool for annotating images.
In this first Etch A Cell project, the organelle we asked Zooniverse volunteers to help examine was the nuclear envelope (as you can see shown in green in the image below). The nuclear envelope is a large membrane found within cells. It surrounds the nucleus, which is the part of the cell that contains the genetic material. It’s an important structure to study as it’s known to be involved in a number of diseases, including cancer, and it’s often the first structure research teams inspect in a new data set.
1. Zooniverse volunteers dedicated a huge amount of effort! Zooniverse volunteers submitted more than 100,000 segmentations across the 4000 images analysed in this first Etch A Cell project. Through this effort, the nuclear envelopes of 18 cells were segmented (shown below in green) from our original data block.
2. Volunteers were very good at segmenting the nuclear envelope. As you can see in the gif and images below, most classifications submitted for each image were really good! Manual segmentation isn’t an easy task to do, even for experts, so we were really impressed!
3. There’s power in a crowd! The image below shows an overlay of every single segmentation for one of the nuclei studied in Etch A Cell. As you can see, through the collective effort of Zooniverse volunteers, something beautiful emerges – by overlaying everyone’s effort like this, you can see the shape of the nuclear envelope begin to appear!
To make sense of all of this data, we developed an analysis approach that took all of these lines and averaged them to form a ‘consensus segmentation’ for each nuclear envelope. This consensus segmentation, produced through the collective effort of volunteers, was incredibly similar to that produced by an expert microscopist. You can see this in the image below: on the left (in yellow) you can see the expert segmentation of the nuclear envelope of one cell compared to the volunteer segmentation (in green). The top image shows a single slice from the cell, the bottom image shows the 3D reconstruction of the whole nuclear envelope.
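In spirit, averaging many drawn lines into a consensus can be reduced to a per-pixel vote. The following is our own simplified sketch of that idea, not the team's actual analysis pipeline: each volunteer segmentation is treated as a binary mask, and a pixel joins the consensus when at least half of the volunteers marked it.

```python
# Sketch: per-pixel majority vote over volunteer segmentation masks.
# Each mask is a same-sized 2D grid of 0/1 values; a pixel joins the
# consensus when at least `threshold` of the volunteers marked it.

def consensus(masks, threshold=0.5):
    """Combine binary masks into a single consensus mask."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [
        [1 if sum(m[r][c] for m in masks) / n >= threshold else 0
         for c in range(cols)]
        for r in range(rows)
    ]
```

Real consensus pipelines are more sophisticated (for example, averaging distances to the drawn contour rather than voting on raw pixels), but the majority-vote picture captures why stray individual lines wash out while the common shape survives.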
4. Volunteer segmentations can be used to train powerful new algorithms capable of segmenting the nuclear envelope. We found that volunteer data alone, with no expert data at all, could be used to train computer algorithms to perform the task of nuclear envelope segmentation to a very high standard. In the gif below you can see the computer predicted nuclear envelope segmentation for each of the cells in pink.
5. Our algorithm works surprisingly well on other data sets. We ran this new algorithm on other datasets that had been produced under slightly different experimental conditions. Because of these differences, we didn’t expect the algorithm to perform very well, however, as you can see in the images below, it did a very good job at identifying the location of the nuclear envelope. Because of this transferability, members of our research team have already begun using this algorithm to aid their new research projects.
We’re so excited to share these results with you, our volunteer community, and the research communities we collaborate with, and we’re looking forward to building on these findings in the future. The algorithms we’ve been able to produce from this effort are already being used by research teams at the Crick, and we’ve already launched multiple new projects asking for your help to look at other organelles – The Etchiverse is expanding!
Now it’s even easier to contribute to science from your phone!
On any crowded public bus (before the pandemic), people sat next to each other, eyes fixed on their phones, smiling, swiping.
What were they all doing? Using a dating app, maybe. Or maybe they were separating wildcam footage of empty desert from beautiful birds. Maybe they were spotting spiral arms on faraway galaxies.
Maybe one of them was you!
We’ve loved seeing the participation in the Zooniverse through the mobile app (available for iOS and Android) over the past two years. So we made it even easier for you to do that wherever you swipe these days—a park bench, or maybe your home. (Please don’t swipe and drive).
Right now, you can go into the app and contribute to Galaxy Zoo Mobile, Catalina Outer Solar System Survey, Disk Detective, Mapping Historic Skies, Nest Quest Go, or Planet Four: Ridges. And we have more projects on the way!
What’s new in the app
When you update to version 2.8.2, you’ll notice a slick new look. At the very top, there’s now an “All Projects” category. This will show you everything available for mobile—with the projects that need your help the most sorted at the very top! You can also still choose a specific discipline, of course.
That’s it for the brand-new features, but this version also fixes a lot of existing ones. No more crashing when you tap on browser projects. Far fewer project-related crashes. And animated GIFs, which previously worked only on iOS, now also work on Android—so researchers can show you an image that changes over time.
What’s more—and you’ll never see this, but it’s important to us, the developers—we’ve made a lot of changes that help us keep improving the app. We have better crash reporting mechanisms and more complete automated testing. We also updated all of our documentation so that developers from outside our team can contribute to the app, too! We’d love to be a go-to open source project for people who are learning, or working in, React Native (the platform on which our app is built).
The full list of functionalities now includes:
Swipe (binary question [A or B] response)
Single-answer question (A, B, or C)
Multi-answer question (any combination of A, B, and C)
Rectangle drawing task (drawing a rectangle around a feature within a subject)
Multi-image subjects (e.g. uploading 2+ images as a single subject; users swipe up/down to display the different images)
Animated gifs as subjects
Subject auto-linking (automatically linking subjects retired from one workflow into another workflow of interest on the same project)
Push notifications (sending messages/alerts about new data, new workflows, etc., via the app)
Preview (an owner or collaborator on a project in development being able to preview a workflow in the ‘Preview’ section of the mobile app)
Beta Review (mobile enabled workflows are accessible through the ‘Beta Review’ section of the app for a project in the Beta Review process; includes an in-app feedback form)
Ability to see a list of all available projects, as well as filter by discipline (with active mobile app workflows listed at the top)
We also carried out a number of infrastructure improvements, including:
Upgrades to the React Native libraries we use
A staging environment to test changes before they are deployed to full production
Additional test coverage
Bug reporting and tracking
Complete documentation, so open source contributors can get the app running from our public code repository
And a myriad of additional improvements: missing icons no longer crash the app, the rectangle drawing task works better, and more.
Note: we will continue developing the app; this is just the end of this phase of effort and a great time to share the results.
If you’re leading a Zooniverse project and have any questions about where in the Project Editor ‘workflow’ interface to ‘enable on mobile’, don’t hesitate to email firstname.lastname@example.org. And if you’re a volunteer wondering whether a workflow on a given project could be enabled on mobile, please post in that project’s Talk to start the conversation with the research team and us. The more, the merrier!
Looking forward to having more projects on the mobile app!
A Few Stats of Interest:
Since Jan 1, 2020:
6.2 million classifications submitted via the app (that’s 7% of 86.7 million classifications total through Zooniverse projects)
18,000 installations on iOS + 17,000 on Android
Current Active Users (people who have used the app in the last 30 days):
1,800 on iOS + 7,700 on Android
Previous Blog Posts about the Zooniverse Mobile App:
When we initially launched our project on Zooniverse on VE Day 2018, our goal was to have all 65,000 pages of commentaries on war and military service, written by soldiers in their own hands, transcribed and annotated within a 2-year window – in triplicate, for quality-control purposes. We not only hit that milestone in May 2020, but last week we completed an additional, fourth round.
Attracting 3,000-plus new contributors, this extension of the transcription drive took only six months. Beyond allowing more people to engage with these unique and revealing wartime documents, the added round is improving our final project output. Within the next week or so, our top Zooniverse transcribers will begin final, manual verification of these transcriptions and annotations, which have been cleaned algorithmically. If you are a consistent project contributor and interested in helping with final validation, please do let us know by signing up here.
As we move forward with the project, we have created a Farewell Talk board. Since we have had so many incredible contributors to The American Soldier, we would love to hear any parting words our volunteers would like to share with the team and with fellow contributors about your experiences or most memorable transcriptions.
We are so incredibly grateful for the international team of researchers, data and computer scientists, designers, educators, and volunteers who have gotten the project to where it is today, in spite of the year’s great upheaval. Thanks to their hard work and dedication, the project’s open-access website remains on track for a spring 2021 launch.
We look forward to sharing more news with you soon. Until then, be well and safe.
We’re very happy to announce a new partnership between NASA and our Zooniverse teams at the Adler Planetarium and the University of Minnesota. This new partnership advances and deepens our existing relationship and efforts with NASA. Our team will work together with NASA to create new opportunities for the Zooniverse volunteer community to engage and participate in projects that span the wide range of NASA’s science divisions: astrophysics, heliophysics, planetary science, and earth science.
This new NASA grant will enable new projects as well as provide support for our developers to maintain our research-enabling platform. This support is very welcome, and will help us share our platform with a growing number of scientists who want to unlock data from NASA’s missions, centers, and projects. We’re really looking forward to building and launching these new projects, but don’t worry — nothing else will change. The platform will still be a welcome home to a wide range of research and projects.
It’s been more than a decade now since the Zooniverse launched, and it’s exciting to have reached the point where the Zooniverse platform, research teams, and AMAZING community of volunteers are consistently recognized as valuable contributors and collaborators in research. The Zooniverse team is excited for this partnership and for the future ahead — here’s to lots more adventures to come!
Over the past several months, we’ve welcomed thousands of new volunteers and dozens of new teams into our community.
This is wonderful.
Because there are new people arriving every day, we want to take this opportunity to (re)introduce ourselves, provide an overview of how Zooniverse works, and give you some insight on the folks who maintain the platform and help guide research teams through the process of building and running projects.
Who are we?
The core Zooniverse team is based across three institutions:
Oxford University, Oxford UK
The Adler Planetarium, Chicago IL
The University of Minnesota-Twin Cities, Minneapolis MN
We also have collaborators at many other institutions worldwide. Our team is made up of web developers, research leads, data scientists, and a designer.
How we build projects
Research teams can build Zooniverse projects in two ways.
First, teams can use the Project Builder to create their very own Zooniverse project from scratch, for free. In order to launch publicly and be featured on zooniverse.org/projects, teams must go through beta review, wherein a team of Zooniverse volunteer beta testers give feedback on the project and answer a series of questions that tell us whether the project is 1) appropriate for the platform; and 2) ready to be launched. Anyone can be a beta tester! To sign up, visit https://www.zooniverse.org/settings/email. Note: the timeline from requesting beta review to getting scheduled in the queue to receiving beta feedback is a few weeks. It can then take a few weeks to a few months (depending on the level of changes needed) to improve your project based on beta feedback and be ready to apply for full launch. For more details and best practices around using the Project Builder, see https://help.zooniverse.org/getting-started/.
The second option is for cases where the tools available in the Project Builder aren’t quite right for the research goals of a particular team. In these cases, they can work with us to create new, custom tools. We (the Zooniverse team) work with these external teams to apply for funding to support design, development, project management, and research.
Those of you who have applied for grant funding before will know that this process can take a long time. Once we’ve applied for a grant, it can take 6 months or more to hear back about whether or not our efforts were successful. Funded projects usually require at least 6 months to design, build, and test, depending on the complexity of the features being created. Once new features are created, we then need additional time to generalize (and often revise) them for inclusion in the Project Builder toolkit.
Option 1: Project Builder
Have to work with what’s available (no customization of tools or interface design)
Option 2: Custom Project
Can take a longer time
Get the features you need!
Supports future teams who may also benefit from the creation of these new tools!
We hope this helps you to decide which path is best for you and your research goals.
In the beginning of April 2020, we were notified that subjects from one Zooniverse project were appearing in the subject set of a separate project where they did not belong. In our investigation of the issue, our team determined that this behavior was being caused by a Caesar configuration mistake that used an incorrect Subject Set ID. Project owners using Caesar were able to create Subject Rule Effects that added subjects to collections or subject sets, even without proper subject set editing permissions. We have rectified the issue surrounding Subject Rule Effects and eliminated this vulnerability, and would like to share the details for anyone who is interested.
The issue was raised by project lead James Perry (@JamesPerry), who reported that subjects that didn’t belong to his project were appearing in his subject sets. Due to a mistyped subject set ID in a Caesar `add_to_subject_set` effect for an unrelated project, that Subject Rule Effect was sending subjects from that project to one of James’s subject sets instead of the correct target.
Our immediate course of action was to fix the project impacted by the vulnerability, and push out a temporary code fix to prevent the vulnerability from being exploited.
To fix the affected project, we corrected the mistyped subject set ID in the Caesar effect that was sending subjects to the wrong project, and removed the unwanted subjects from the set.
On April 3rd we deployed a temporary code fix to disable Subject Rule Effect creation and modification for all but admin users (see PR #1109). We communicated this change to the teams most impacted by it, as well as to teams that reached out after seeing our notification banner or encountering a Caesar interface error.
On May 15th we pushed out a permanent fix that checks that the user has permission to send data to the target subject set or collection. Specifically, the updated validation code checks that the user has update permissions on the project that the subject set or collection is linked to (see PRs #1115, #1129 and #1131).
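Conceptually, the validation is a straightforward ownership check. The sketch below is our own Python illustration of the idea, with hypothetical names; the real fix lives in Caesar's Rails validation code, not in an API like this one.

```python
# Sketch of the validation idea: before saving an add_to_subject_set
# effect, verify that the user can update the project that owns the
# target subject set. All names here are hypothetical illustrations,
# not Caesar's actual API.

def can_create_effect(user, subject_set, projects_editable_by):
    """Allow the effect only if the user may edit the owning project.

    `projects_editable_by(user)` is assumed to return the set of
    project IDs the user has update permissions on.
    """
    owning_project = subject_set["project_id"]
    return owning_project in projects_editable_by(user)
```

The key point is that the check is anchored to the *target* subject set's owning project, so a mistyped ID pointing at someone else's subject set now fails validation instead of silently injecting subjects.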
For anyone running their own hosted copy of Caesar, we recommend pulling these changes as soon as you’re able.
The following is an update from the SuperWASP Variable Stars research team. Enjoy!
Welcome to the Spring 2020 update! In this blog, we will be sharing some updates and discoveries from the SuperWASP Variable Stars project.
What are we aiming to do?
We are trying to discover the weirdest variable stars!
Stars are the building blocks of the Universe, and finding out more about them is a cornerstone of astrophysics. Variable stars (stars which change in brightness) are incredibly important to learning more about the Universe, because their periodic changes allow us to probe the underlying physics of the stars themselves.
We have asked citizen scientists to classify variable stars based on their photometric light curves (the amount of light over time), which helps us to determine what type of variable star we’re observing. Classifying these stars serves two purposes: firstly to create large catalogues of stars of a similar type which allows us to determine characteristics of the population; and secondly, to identify rare objects displaying unusual behaviour, which can offer unique insights into stellar structure and evolution.
We have 1.6 million variable stars detected by the SuperWASP telescope to classify, and we need your help! By getting involved, we can build up a better idea of what types of stars are in the night sky.
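A light curve only reveals its repeating shape once it is folded on the star's period. As a toy sketch (ours, for illustration; the project's actual pipeline is more involved), folding observation timestamps into phase looks like this:

```python
# Sketch: fold observation times (in days) at a known period so that
# repeating brightness variations line up in phase [0, 1).

def phase_fold(times, period, epoch=0.0):
    """Map each timestamp to its phase within the period."""
    return [((t - epoch) % period) / period for t in times]
```

Plotting brightness against these phase values stacks every cycle on top of the others, which is what turns a noisy scatter of points into the characteristic shapes (eclipses, pulsations, and so on) that volunteers classify.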
What have we discovered so far?
We’ve done some initial analysis on the first 300,000 classifications to get a breakdown of how many of each type of star is in our dataset.
So far it looks like there are a lot of junk light curves in the dataset, which we expected. The programme written to detect periods in variable stars often picks up exactly a day or a lunar month and mistakes it for a real period. Importantly though, you’ve classified a huge number of real and exciting light curves!
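That failure mode is easy to screen for in principle. As a rough sketch (our own illustration, not the actual SuperWASP pipeline), one can flag candidate periods that sit suspiciously close to one day, a harmonic of it, or the lunar synodic month:

```python
# Sketch: flag candidate periods (in days) that are likely aliases of
# the 1-day observing cadence or the ~29.53-day lunar synodic month.

ALIAS_PERIODS = [1.0, 0.5, 2.0, 29.53]  # day, harmonics, lunar month

def is_likely_alias(period_days, tolerance=0.01):
    """True if the period is within a fractional tolerance of an alias."""
    return any(
        abs(period_days - alias) / alias < tolerance
        for alias in ALIAS_PERIODS
    )
```

A filter like this can down-weight suspect periods, but it cannot rescue a genuinely spurious detection, which is why human classification of the folded light curves remains so valuable.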
We’re especially excited to do some digging into what the “unknown” light curves are… are there new discoveries hidden in there? Once we’ve completed the next batch of classifications, we’ll do some more analysis to see whether the breakdown of types of stars changes.
An exciting discovery…
In late 2018, while building this Zooniverse project, we came across an unusual star. This Northern hemisphere object, TYC-3251-903-1, is a relatively bright object (V=11.3) which has previously not been identified as a binary system. Although the light curve is characteristic of an eclipsing contact binary star, the period is ~42 days, notably longer than the characteristic contact binary period of less than 1 day.
Spurred on by this discovery, we identified a further 16 candidate near-contact red giant eclipsing binaries through searches of archival data. We were excited to find that citizen scientists had also discovered 10 more candidates through this project!
Of the 10 candidate binaries discovered by citizen scientists, we were happy to be able to take spectroscopic observations for 8 whilst in South Africa, and we have confirmed that at least 2 are, in fact, binaries! Thank you citizen scientists!
Why is this discovery important?
The majority of contact or near-contact binaries consist of small (K/M dwarf) stars in close orbits with periods of less than 1 day. But for the stars in a contact binary to have such long periods, both of them must be giants. This is a previously unknown configuration…
Interestingly, a newly identified type of stellar explosion, known as a red nova, is thought to be caused by the merger of a giant binary system, just like the ones we’ve discovered.
Red novae are characterised by a red colour, a slow expansion rate, and a lower luminosity than supernovae. Very little is known about red novae, and only one has been observed pre-nova, V1309 Sco, and that was only discovered through archival data. A famous example of a possible red nova is the 2002 outburst in V838 Mon. Astronomers believe that this was likely to have been a red nova caused by a binary star merger, forming the largest known star for a short period of time after the explosion.
So, by studying these near-contact red giant eclipsing binaries, we have an unrivalled opportunity to identify and understand binary star mergers before the merger event itself, and advance our understanding of red novae.
What changes have we made?
Since the SuperWASP Variable Stars Zooniverse project started, we’ve made a few changes to make the project more enjoyable. We’ve reduced the number of classifications needed to retire a target, with a further reduction for “junk” light curves. This means you should see more interesting, real light curves.
We’ve also started a Twitter account, where we’ll be sharing updates about the project, the weird and wacky light curves you find, and getting involved in citizen science and astronomy communities. You can follow us here: www.twitter.com/SuperWASP_stars
We still have thousands of stars to classify, so we need your help!
Once we have more classifications, we will begin turning the results into a publicly available, searchable website, a bit like the ASAS-SN Catalogue of Variable Stars (https://asas-sn.osu.edu/variables). Work on this is likely to begin towards the end of 2020, but we’ll keep you updated.
We’re also working on a paper on the near-contact red giant binary stars, which will include some of the discoveries by citizen scientists. Expect that towards the end of 2020, too.
Otherwise, watch this space for more discoveries and updates!
We would like to thank the thousands of citizen scientists who have put time into this Zooniverse project. If you ever have any questions or suggestions, please get in touch.
If you, like many of us here at Zooniverse, have found yourself on more Zoom calls than ever these days, you may be looking for suitable images to use as Virtual Backgrounds. Look no further! We’ve compiled some of our favorite images from across the Zooniverse in a Zooniverse Collection.
How to do it
On Zooniverse: During classification, if you’ve come across a subject image (or video!) that you’d like to use as your Zoom background, finish your classification and choose Done & Talk. You can also add it to your Favorites or a Collection. You cannot save an image directly from the classification interface; images may only be saved from a subject’s Talk page (e.g. https://www.zooniverse.org/projects/michiganzoomin/michigan-zoomin/talk/subjects/9185490), so make sure you’re in the right place. Once there, you can right-click or control-click and choose Save Image As.
On Zoom: Sign in to your account and open Settings. Click ‘Virtual Background’ in the list on the left side. There, you can upload your own images from your computer via the plus icon on the right side of the dialog box. Unless you actually have a green screen, leave the ‘I have a green screen’ tickbox unticked. Here’s a document from Zoom in case you’re having trouble.