Navigating the Future: Zooniverse’s Frontend Codebase Migration and Design Evolution

Dear Zooniverse Community,

We’re pleased to update you on an important development as we undergo a migration to a new frontend codebase over the course of 2024-2025. This transition brings a fresh and improved experience to our platform.

From a participant’s perspective, the primary changes involve project layout and styling, resulting in a more user-friendly interface. Importantly, these updates don’t affect your stats (e.g., classification count), Collections, Favorites, etc.

To offer you a sneak peek, check out the updated design and layout on projects that have already migrated, such as:

If a project has a design similar to the examples above, it has migrated. Conversely, if it resembles the old design, like the Milky Way Project, it hasn’t migrated yet.

We value your feedback! If you encounter any difficulties or have suggestions as you’re participating in a project, please share them in the respective project’s Talk or within this general Announcements Talk thread and mention @support.

Wondering about the motivation behind this change? We built the new frontend codebase to ensure the robustness and stability of the Zooniverse platform, with key updates enhancing code maintenance, accessibility, and overall sustainability.

Here’s a breakdown of some of the improvements:

  • Breaking up the Code: We’ve modularized our code into independent, reusable libraries to enhance maintenance and overall sustainability.
  • Next.js for Server Side Rendering: By utilizing Next.js, we’re improving accessibility for participants worldwide, particularly those with lower internet speeds and bandwidth.
  • Classify Page Code Updates: We’ve refined elements such as workflows and the subject viewer to ensure improved robustness and sustainability of our codebase.
  • Authentication Library Updates: Keeping up with the latest standards, we’ve updated our authentication libraries to enhance security and user experience.
  • Integrated Code Testing: To maintain the long-term health of our technical products, we’ve integrated code testing throughout our development process. Following standard practice, this reduces the risk of updates introducing bugs or other issues into the codebase.

Thank you for being part of the Zooniverse community! Looking forward to many more groundbreaking discoveries and advances in research. Your classifications and participation in Talk make all of this possible. Thank you! 

Warm regards,

Laura Trouille, Zooniverse PI

Zooniverse Wins White House Open Science Award

I’m thrilled to share some exciting news with you all! Zooniverse, our beloved platform for people-powered research, has been honored by the White House Office of Science and Technology Policy (OSTP) as a champion of open science. 

The OSTP Year of Open Science Recognition Challenge named five projects, including Zooniverse, ‘Champions of Open Science’ for their work promoting open science to tackle unique problems. To check out the full announcement and see the other winners, click here.

We’re deeply honored by this recognition. It underscores our commitment to Open Science through people-powered research, valuing the public’s diverse expertise and driving innovation beyond traditional boundaries. By democratizing access to scientific spaces and discovery, Zooniverse not only advances research, but also builds trust in science and fosters meaningful engagement within our global community. 

What makes Zooniverse truly special is the community that drives it forward. The Zooniverse team of devs and data scientists building the platform, the hundreds of researchers leading projects, and every single one of you who dedicates your time and expertise to advancing knowledge. Whether you’re classifying galaxies, transcribing historical documents, tagging penguins, or marking the structure of cells for cancer research, your contributions have made a tangible impact on research across a range of disciplines. And the results speak for themselves – over 400 peer-reviewed scientific publications, groundbreaking discoveries, and critical policy impacts have all stemmed from your collective efforts.

But it’s not just about the data or the publications – it’s about the connections we’ve forged along the way. Through Zooniverse, we’ve built a global community of curious minds, united by a shared passion for exploration and discovery. Together, we’ve championed the principles of open science, breaking down barriers to knowledge and fostering a spirit of collaboration that transcends borders and disciplines. And in doing so, we’ve not only advanced scientific research but also strengthened trust in science and empowered individuals to pursue their interests and passions.

So here’s to you, our incredible community of participants and researchers. Thank you for being a part of this extraordinary journey. Together, we’ll continue to champion the principles of openness, collaboration, and innovation in research, one classification and one Talk post at a time.

Laura

Zooniverse PI, VP Science Engagement, Adler Planetarium in Chicago

Snapshot Wisconsin Celebrates 50th Zooniverse Season!

Snapshot WI 50th Logo

What is Snapshot Wisconsin?

Snapshot Wisconsin is a community science project where the Wisconsin Department of Natural Resources (Wisconsin DNR) partners with volunteers to monitor wildlife using a statewide network of trail cameras. Volunteers host trail cameras, which are triggered by heat and movement, capturing pictures of passing animals. Located in the Great Lakes region of the United States, Wisconsin hosts a variety of habitats, from coniferous forests to prairies. Wisconsin is home to 65 species of native mammals, hundreds of other vertebrate species, and thousands of invertebrate and native plant species.

Snapshot and Zooniverse

Snapshot Wisconsin has collected over 85 million photos since its genesis in 2015. What started as a pilot project in a few Wisconsin counties has now grown to over 2,000 statewide camera hosts. Online, Snapshot Wisconsin has enlisted the help of Zooniverse volunteers, who have made nearly 9.3 million classifications over the project’s first 49 seasons on Zooniverse.

Snapshot Across the Globe

The first Snapshot Wisconsin Zooniverse season was launched on May 17th, 2016. Since then, Snapshot Wisconsin has continually brought in thousands of classifiers. From China to Mexico, from Russia to Brazil, online volunteers have donated almost 36,000 hours so far. The map below highlights the countries that Snapshot classifiers call home!

World map with countries highlighted signifying global range of Snapshot WI volunteers

#Supersnaps

One way Snapshot moderators and experts encourage those thousands of hours of engagement is by promoting the use of #supersnaps! Zooniverse volunteers come across some fantastic images while classifying photos. Using the tag #supersnap, volunteers can nominate their favorite photos for consideration as the best photo of the month. Snapshot Wisconsin will also be celebrating on the Wisconsin Department of Natural Resources’ Instagram (@wi_dnr) by hosting a tournament to decide which of a group of amazing Snapshot trail camera images deserves the title of SuperSnap! Vote daily January 22-26 in the @wi_dnr Instagram Story.

Here are a few examples of past #supersnaps:

A black bear mother and her cub
A black bear mother and her cub.
Whitetail deer selfie
Whitetail deer selfie.
The “Badger State” namesake
The “Badger State” namesake.

Get a group involved by hosting a Snap-a-thon!

Take the fun of Zooniverse to an even larger group of participants by hosting a Snap-a-thon! Snapshot Wisconsin Snap-a-thons are friendly competitions where a group of people tag animal photos on our crowdsourcing website, Zooniverse, to gather as many points as possible. Who can participate? Anyone familiar with Wisconsin wildlife and with operating a computer can participate. No need to be a wildlife expert: Zooniverse has a built-in field guide to help with more difficult classifications. You can find Snap-a-thon instructions under the ‘activities’ tab here!

Snapshot Wisconsin Scientific Products

One of Snapshot Wisconsin’s goals is to provide data needed to make wildlife management decisions. Thanks to thousands of online volunteers, the program’s millions of trail camera images are transformed into usable data. This data has been used for wildlife research and wildlife decision support by Wisconsin DNR scientists and interested university students and faculty.

The Snapshot Publication webpage has publications organized by topic, ranging from the temporal and spatial behavior of deer to predator-prey relationships. The valuable information gathered from these research projects helps build our understanding of local wildlife and support wildlife management decisions.

Snapshot Wisconsin Blog

Snapshot Wisconsin has its own project blog at blog.snapshotwisconsin.org where the team shares #supersnaps, project updates, team outreach, scientific findings, ecological tid-bits and more! For more information about the project, please visit our main project page, or get started classifying photos at our Zooniverse crowdsourcing site.

Thank You for 50 Great Seasons

Snapshot Wisconsin would like to thank their camera hosts and Zooniverse volunteers for the tremendous amount of work they do for Wisconsin’s natural resources. 50 Zooniverse seasons have certainly flown by for the team, but it is nonetheless a remarkable success that wouldn’t be possible without the dedication and passion of the project’s Zooniverse volunteers.

Happy New Year & YouTube livestream this Thursday

Happy New Year Everyone! We can’t thank you enough for making Zooniverse possible. Thank you, thank you, thank you!!!!

We have so much to celebrate from 2023. 

  • We welcomed our 2.5 millionth registered participant!
    • To date: 2.6 million registered participants from 190 countries
    • Top countries in 2023: US, UK, Germany, India, Canada, Australia
  • 400 Zooniverse projects publicly launched
    • 40 new projects in 2023 alone; ~90 active projects at any given time
    • Each led by a different research team. Zooniverse partners with hundreds of universities, research institutes, museums, libraries, zoos, NGOs, and more
  • 400 peer-reviewed publications (30 in 2023 alone)
  • 780 million classifications (65 million classifications in 2023 alone)
  • 5 million posts in the Zooniverse ‘Talk’ discussion forums (680K in 2023 alone)
  • 19.5 million hours of participation
    • 1.6 million hours in 2023 alone; equivalent to 780 FTEs

We welcome you to join us this Thursday for a YouTube LiveStream from 2:15pm-3:15pm CST (8:15pm GMT; Friday 1:15am in India) celebrating Zooniverse 2023 Milestones as part of a Press Conference for the American Astronomical Society Meeting happening this week in New Orleans.

Bonus: the Press Conference will include a slew of other astronomy related discoveries, mysteries, and intrigues. Connect via https://www.youtube.com/@AASPressOffice/streams (open to the public). Also, throughout the week we’ll post on https://twitter.com/the_zooniverse (with the hashtag #aas243) about our experiences at the conference. 

Milestones are great to celebrate, but we all know a deep magic is in the everyday moments – catching a penguin chick in the midst of a funny dance on Penguin Watch, hearing a coo that reminds you of your own little loves in Maturity of Baby Sounds, uncovering a lost genealogical clue in Civil War Bluejackets, connecting with someone from the other side of the globe who shares your interests in chimps and their fascinating behaviors through the Talk discussion forums, and more, and more. Wonderful if you’d like to share one of your everyday Zooniverse moments with us by tagging @the_zooniverse on X (formerly Twitter) or sharing via email at contact@zooniverse.org. Hearing your moments helps us better understand how the Zooniverse community creates meaning and impact from these experiences (and what we can do to nurture those moments). 

Wishing you a joyful and gentle 2024. Cheers to new beginnings and continued adventures together. 

Laura
Zooniverse PI, VP Science Engagement, Adler Planetarium in Chicago

‘Etch A Cell – Fat Checker’ – Project Update!

We are excited to share with you results from two Zooniverse projects, ‘Etch A Cell – Fat Checker’ and ‘Etch A Cell – Fat Checker Round 2’. Over the course of these two projects, more than 2,000 Zooniverse volunteers contributed over 75,000 annotations!

One of the core aims of these two projects was to enable the design and implementation of machine learning approaches that could automate the annotation of fat droplets in novel data sets, to provide a starting point for other research teams attempting to perform similar tasks.

With this in mind, we have developed multiple machine learning algorithms that can be applied to both 2D and 3D fat droplet data. We describe these models in the blog post below.

  • PatchGAN (2D data) – Publication to date: https://ceur-ws.org/Vol-3318/short15.pdf – Other outputs: https://github.com/ramanakumars/patchGAN and https://pypi.org/project/patchGAN/
  • TCuP-GAN (3D data) – Publication to date: “What to show the volunteers: Selecting Poorly Generalized Data using the TCuPGAN”; Sankar et al., accepted in ECMLPKDD Workshop proceedings – Other outputs: https://github.com/ramanakumars/TCuPGAN/
  • UNet/UNet3+/nnUNet (2D data) – Other outputs: https://huggingface.co/spaces/umn-msi/fatchecker
An overview of the machine learning algorithms produced from the Etch A Cell – Fat Checker projects (described in this post).

Machine learning models for the segmentation of fat droplets in 2D data

Patch Generative Adversarial Network (PatchGAN)
Generative Adversarial Networks (GANs) were introduced in 2014 for the realistic learning of image-level features and have since been used in a variety of computer vision applications. We implemented a pixel-to-pixel translator model called PatchGAN, which learns to convert (or “translate”) an input image into another image form. For example, such a framework can learn to convert a gray-scale image to a colored version.

The “Patch” in PatchGAN signifies its capability to learn image features in different sub-portions of an image (rather than just across an entire image as a whole). In the context of the Etch A Cell – Fat Checker project data, predicting the annotation regions of fat droplets is analogous to PatchGAN’s image-to-image translation task.
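The “patch” idea can be illustrated with a toy sketch. This is not the actual PatchGAN discriminator (which is a learned convolutional network); here the per-patch “score” is simply the mean intensity, to show how judgments are made per sub-portion rather than per image:

```python
import numpy as np

def patch_scores(img, patch=8):
    # A PatchGAN-style discriminator outputs one judgment per NxN patch of the
    # image rather than a single judgment for the whole image. Here we use the
    # mean intensity as a stand-in for each patch's score.
    h, w = img.shape
    trimmed = img[:h - h % patch, :w - w % patch]  # drop any ragged border
    grid = trimmed.reshape(h // patch, patch, w // patch, patch)
    return grid.mean(axis=(1, 3))  # a grid of per-patch scores

img = np.random.default_rng(1).random((32, 32))
print(patch_scores(img).shape)  # (4, 4): one score per 8x8 patch
```

Because each patch is judged separately, the model is pushed to get local image structure right everywhere, not just on average across the whole image.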

We trained the PatchGAN model framework on the ~50K annotations generated by volunteers in Etch A Cell – Fat Checker. Below we show two example images from Etch A Cell – Fat Checker (left column), the aggregated annotations provided by the volunteers (middle column), and the corresponding annotations predicted by the 2D machine learning model, PatchGAN (right column).

We found that PatchGAN typically performed well in learning to map subject images to fat-droplet annotation predictions. However, we noticed that the model highlighted some regions potentially missed by the volunteers, as well as instances where it underestimated some regions that the volunteers had annotated (usually intermediate- to small-sized droplets).

We have made this work, our generalized PatchGAN framework, available as an open-source repository at https://github.com/ramanakumars/patchGAN and via https://pypi.org/project/patchGAN/. This allows anyone to easily train the model on a set of images and corresponding masks, or to use the pre-trained model to infer fat droplet annotations on their own images.

UNet, UNet3+, and nnUNet
In addition to the above-mentioned PatchGAN network, we have also trained three additional frameworks for the task of fat droplet identification – UNet, UNet3+, and nnUNet.

UNet is a popular deep-learning method used for semantic segmentation within images (e.g., identifying cars/traffic in an image) and has been shown to capture intricate image details and precise object delineation. Its architecture is U-shaped with two parts – an encoder that learns to reduce the input image down to a compressed “fingerprint” and a decoder that learns to predict the target image (e.g., fat droplets in the image) based on that compressed fingerprint. Fine-grained image information is shared between the encoder and decoder parts using so-called “skip connections”. UNet3+ is an upgraded framework built upon the foundational UNet that has been shown to capture both local and global features within medical images.
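The encoder/decoder/skip-connection idea can be sketched in a few lines of NumPy. This is a deliberately minimal illustration, using pooling and nearest-neighbour upsampling in place of the learned convolutions of a real UNet:

```python
import numpy as np

def downsample(x):
    # Encoder step: 2x2 average pooling halves each spatial dimension,
    # squeezing the image toward a compressed "fingerprint"
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    # Decoder step: nearest-neighbour upsampling doubles each dimension
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet_pass(img):
    skip = img                   # fine-grained detail saved for the skip connection
    coarse = downsample(img)     # encoder compresses
    restored = upsample(coarse)  # decoder expands back
    # The skip connection concatenates fine detail with the decoded features,
    # giving the next layer both local detail and global context
    return np.stack([restored, skip], axis=0)

out = toy_unet_pass(np.arange(16, dtype=float).reshape(4, 4))
print(out.shape)  # (2, 4, 4): decoded path plus skip path, channel-stacked
```

Without the skip path, the decoder would only see the blurred, compressed version; the concatenation is what lets UNet draw sharp object boundaries.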

nnUNet is a user-friendly, efficient, and state-of-the-art deep learning platform for training and fine-tuning models for diverse medical imaging tasks. It employs a UNet-based architecture and comes with image pre-processing and post-processing techniques.

We trained these three networks on the data from the Fat Checker 1 project. Below, we show three example subject images along with their corresponding volunteer annotations and different model predictions. Between the three models, nnUNet demonstrated superior performance.

Screenshot 2023-11-16 at 12.37.02.png

Machine learning models for the segmentation of fat droplets in 3D data

Temporal Cubic PatchGAN (TCuP-GAN)
Motivated by the 3D volumetric nature of the Etch A Cell – Fat Checker project data, we also developed a new 3D deep learning method that learns to predict the direct 3D regions corresponding to the fat droplets. To develop this, we built on our PatchGAN framework and merged it with another computer vision concept called “Long Short-Term Memory networks (LSTMs)”. Briefly, in recent years, LSTMs have seen tremendous success in learning sequential data (e.g., words and their relationships within a sentence), and they have been used to learn relationships among sequences of images (e.g., the movement of a dog in a video).
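A drastically simplified sketch of this sequential idea (not an actual LSTM, which has learned gates; here a fixed blending factor plays the role of the forget gate) shows how a hidden state can carry context from one 2D slice to the next through the image stack:

```python
import numpy as np

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def run_memory_pass(slices, forget=0.7):
    # Each slice updates a hidden state that carries context along the stack;
    # this per-slice memory is the core ingredient LSTMs contribute to
    # making 3D-consistent predictions from a sequence of 2D slices
    hidden = np.zeros_like(slices[0])
    outputs = []
    for s in slices:
        hidden = forget * hidden + (1 - forget) * s  # blend old context with new slice
        outputs.append(sigmoid(hidden - 0.5))        # per-slice prediction from the state
    return outputs

# A toy droplet appears in slices 2 and 3 of a 3-slice stack
stack = [np.full((4, 4), v) for v in (0.0, 1.0, 1.0)]
preds = run_memory_pass(stack)
print(len(preds), preds[0].shape)  # 3 (4, 4)
```

Because the state accumulates evidence across slices, the prediction for a pixel grows stronger the longer a structure persists through the stack, which is exactly the behaviour wanted for volumetric droplets.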

We have successfully implemented and trained our new TCuP-GAN model on the Etch A Cell – Fat Checker data. Below is an example image cube – containing a collection of 2D image slices that you viewed and annotated – along with the fat droplet annotation prediction of our 3D model. For visual guidance, we show the middle panel where we reduced the transparency of the image cube shown in the left panel, highlighting the fat droplet structures that lie within.

Screenshot 2023-11-16 at 12.37.15.png

We found that our TCuP-GAN model successfully predicts the 3D fat droplet structures. In doing so, our model also learns realistic (and informative) signatures of lipid droplets within the image. Leveraging this, we are able to ask the model which 2D image slices contain the most confusion between lipid droplets and surrounding regions of the cells when it comes to annotating the fat droplets. Below, we show two example slices where the model was confident about the prediction (i.e., less confusion; top panel) and where the model was significantly confused (red regions in fourth column of the bottom panel). As such, we demonstrated that our model can help find those images in the data set that preferentially require information from the volunteers. This can serve as a potential efficiency step for future research teams to prioritize certain images that require attention from the volunteers.
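A minimal sketch of how model confusion can prioritize slices for volunteers, assuming the model outputs a per-pixel probability of “fat droplet” (the actual TCuP-GAN selection criterion may differ):

```python
import numpy as np

def slice_confusion(prob_map, eps=1e-8):
    # Mean per-pixel binary entropy: largest where the model sits on the
    # fence (p close to 0.5), i.e. where it is most confused
    p = np.clip(prob_map, eps, 1 - eps)
    entropy = -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
    return entropy.mean()

def rank_slices_for_volunteers(prob_maps):
    # Most-confused slices first: these most need human eyes
    scores = [slice_confusion(m) for m in prob_maps]
    return np.argsort(scores)[::-1]

confident = np.full((8, 8), 0.95)  # model is nearly sure everywhere
confused = np.full((8, 8), 0.55)   # model is on the fence
order = rank_slices_for_volunteers([confident, confused])
print(order[0])  # 1: the confused slice is sent to volunteers first
```

Ranking slices this way means volunteer effort is concentrated where the model gains the most from human input, rather than spread uniformly over the data set.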

Screenshot 2023-11-16 at 12.37.31.png

Integrating Machine Learning Strategies with Citizen Science

The several thousand annotations collected from the Etch A Cell – Fat Checker projects, and their use in training various machine learning frameworks, have opened up possibilities that can enhance the efficiency of annotation gathering and help accelerate the scientific outcomes of future projects.

While the models we described here all performed reasonably well in learning to predict the fat droplets, there were a substantial number of subjects on which they were inaccurate or confused. Emerging “human-in-the-loop” strategies are becoming increasingly useful in these cases, where citizen scientists can provide critical information on the subjects that require the most attention. Furthermore, an imperfect machine learning model can provide an initial guess that citizen scientists can use as a starting point and then edit, greatly reducing the effort required of individual citizen scientists.

For our next steps, using the data from the Etch A Cell – Fat Checker projects, we are working towards building new infrastructure tools that will enable future projects to leverage both citizen science and machine learning towards solving critical research problems.

Enhancements to the freehand drawing tool

First, we have made upgrades to the existing freehand line drawing tool on Zooniverse. Specifically, users will now be able to edit a drawn shape, undo or redo during any stage of their drawing process, automatically close open shapes, and delete any drawings. Below is a visualization of an example drawing where the middle panel illustrates the editing state (indicated by the open dashed line) and re-drawn/edited shape. The tool box with undo, redo, auto-close, and delete functions is also shown.

Screenshot 2023-11-16 at 11.56.40.png

A new “correct a machine” framework

We have built new infrastructure that enables researchers to upload machine learning (or other automated) outlines in a format compatible with the freehand line tool, so that each subject, when viewed by a volunteer on Zooniverse, is shown the pre-loaded machine outlines, which the volunteer can edit using the newly added functionality described above. Once volunteers provide their corrected/edited annotations, their responses will be recorded and used by the research teams for their downstream analyses. The figure below shows an example visualization of what a volunteer would see with the new correct-a-machine workflow. Note that the green outlines shown on top of the subject image are loaded directly from a machine model prediction, and volunteers will be able to interact with them.
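As a rough illustration of the idea (the field names below are hypothetical, not the actual Zooniverse subject schema), a machine-predicted outline might be serialized alongside a subject like this:

```python
import json

# Hypothetical payload: machine-predicted freehand outlines attached to a
# subject as lists of (x, y) points, which the volunteer can then edit.
# The subject_id and field names are illustrative only.
machine_outline = {
    "subject_id": 12345,
    "tool": "freehand_line",
    "paths": [
        {"points": [[10, 12], [11, 13], [12, 15], [10, 12]], "closed": True}
    ],
}

payload = json.dumps(machine_outline)
print(json.loads(payload)["paths"][0]["closed"])  # True
```

The key design point is that the machine's guess arrives in the same representation the drawing tool already edits, so a volunteer can adjust a few points instead of drawing every outline from scratch.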

Screenshot 2023-11-16 at 12.37.50.png

From Fat Droplets to Floating Forests

Finally, the volunteer annotations submitted to these two projects will have an impact far beyond the fat droplets identification in biomedical imaging. Inspired by the flexibility of the PatchGAN model, we also carried out a “Transfer Learning” experiment, where we tested if a model trained to identify fat droplets can be used for a different task of identifying kelp beds in satellite imaging. For this, we used the data from another Zooniverse project called Floating Forests.

Through this work, we found that our PatchGAN framework readily works to predict the kelp regions. More interestingly, we found that when our model pre-trained to detect fat droplets in the Fat Checker project data was used as a starting point for predicting kelp regions, the resultant model achieved very good accuracy with only a small number of training images (~10-25% of the overall data set size). Below is an example subject along with the volunteer-annotated kelp regions and the corresponding PatchGAN prediction. The bar chart illustrates how the Etch A Cell – Fat Checker annotations can help reduce the number of annotations required to achieve good accuracy.
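The transfer-learning recipe itself is simple: reuse the weights learned on the source task as the starting point for the target task. A minimal logistic-regression stand-in (not the PatchGAN model; synthetic data throughout) looks like:

```python
import numpy as np

rng = np.random.default_rng(0)

def train(X, y, w_init, steps=200, lr=0.5):
    # Plain gradient descent on the logistic loss; w_init sets where learning starts
    w = w_init.copy()
    for _ in range(steps):
        p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# "Source" task with plenty of labels (stand-in for the Fat Checker annotations)
Xs = rng.normal(size=(500, 5))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 1.0])
ys = (Xs @ w_true > 0).astype(float)
w_pretrained = train(Xs, ys, np.zeros(5))

# Related "target" task with only a handful of labels (stand-in for kelp)
Xt = rng.normal(size=(30, 5))
yt = (Xt @ (w_true + 0.2) > 0).astype(float)

w_scratch = train(Xt, yt, np.zeros(5), steps=20)    # cold start
w_transfer = train(Xt, yt, w_pretrained, steps=20)  # warm start from source task

def accuracy(w):
    return float(((Xt @ w > 0) == (yt > 0.5)).mean())

print(accuracy(w_scratch), accuracy(w_transfer))
```

With a warm start, far fewer target-task labels and training steps are typically needed to reach good accuracy, which mirrors the ~10-25% figure above.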

Screenshot 2023-11-16 at 12.37.59.png

In summary: THANK YOU!

In summary, with the help of your participation in the Etch A Cell – Fat Checker and Etch A Cell – Fat Checker Round 2 projects, we have made great strides in processing the data and training several successful machine learning frameworks. We have also made a lot of progress in updating the annotation tools and building new infrastructure toward forging the best possible partnership between humans and machines for science. We are looking forward to launching new projects that use this new infrastructure!

This project is part of the Etch A Cell organisation

‘Etch A Cell – Fat Checker’ and ‘Etch A Cell – Fat Checker round 2’ are part of The Etchiverse: a collection of multiple projects to explore different aspects of cell biology. If you’d like to get involved in some of our other projects and learn more about The Etchiverse, you can find the other Etch A Cell projects on our organisation page here.

Thanks for contributing to Etch A Cell – Fat Checker!


‘Etch A Cell – Demolition Squad’ – First Results!

Introducing Etch a Cell – Demolition Squad

Earlier this year, the team behind the Etch A Cell series of citizen science projects launched their latest project; ‘Etch A Cell – Demolition Squad’. Through this project, researchers based at the Francis Crick Institute (London, UK) have teamed up with Zooniverse volunteers to study ‘lysosomes’.

What are lysosomes?

Lysosomes are balloon-shaped organelles that contain a range of enzymes capable of breaking down biological substances including proteins, sugars, and fats. These enzymes allow lysosomes to function as the cell’s digestive system; lysosomes digest and degrade substances from both inside and outside of cells. This versatile functionality allows lysosomes to contribute to a range of biological tasks, including the breakdown of aged cellular components, the destruction of viruses, and the support of digestion during times of hunger. Lysosomes even play a role in the life cycle of cells by facilitating the removal of cells that have reached their expiry date. Etch A Cell – Demolition Squad aims to improve our ability to study lysosomes, enabling us to further understand how they contribute to the world of cellular dynamics.

An image from the first data set analysed in Etch A Cell – Demolition Squad.

What’s the aim of the project?

The team behind Etch A Cell work with a variety of research teams to study different aspects of biology using cutting-edge Electron Microscopes. With their remarkable magnification and resolution, these microscopes allow researchers to capture intricate images of tissues, cells, and molecules. These images can be used to provide us with a richer understanding of biology, which can help us understand the biological changes associated with health and disease. Recent developments in electron microscope technology have enabled automatic image collection, leading to a deluge of data. This influx of information is helping propel research forward, however, it has caused a bottleneck in data analysis pipelines – which is why Etch A Cell needs the help of Zooniverse volunteers!

How are Zooniverse volunteers helping?

To study the images generated by our microscopes, we typically analyse them by ‘segmenting’ the features we’re interested in, which means drawing around the bits of the cell that we want to examine. In Etch A Cell projects, Zooniverse volunteers help with this segmentation task. In Etch A Cell – Demolition Squad, volunteers were asked to help segment lysosomes. These structures can be very difficult to spot, so an imaging technique was used that selectively labels the lysosomes with a marker, making them easier to identify. You can see an example image below – the marker is shown in pink, so volunteers were asked to draw around the grey blobs labelled with pink. In the image below you can also see some green lines, which show how one expert segmenter drew around the lysosomes in this image.

In Etch A Cell – Demolition Squad, volunteers were asked to draw around lysosomes. To make the lysosomes easier to spot they were labelled with a marker, that is shown in the image above as pink. This image also shows lysosome segmentations done by an expert in green.

What are the results so far?

Since Etch A Cell – Demolition Squad was launched in May 2023, this project has received more than 12,000 classifications from hundreds of Zooniverse volunteers! This has allowed all 792 images in the first project data set to be retired. We’ve now started taking a look at the data, and we’re really impressed with the segmentations submitted! Well done to everyone who has contributed!

Here’s a sneak peek at some of the data you’ve produced through your collective efforts:

Briefly, you can see from these images that Zooniverse volunteers have done a fantastic job at this challenging task: the collective volunteer segmentations (shown in green in image F) correspond really well to the location of lysosomes in the image (shown in pink in image B). With thanks to Francis Crick Institute internship student Fatihat Ajayi for generating these images.

The images above show the raw data (A) and this data overlaid with the pink marker that highlights the lysosomes (B). Image B is the one Zooniverse volunteers would have seen in the project interface. All the volunteer segmentations submitted for this image are shown overlaid in image C. Image D also shows the volunteer segmentations together, but in this image the line drawings have been ‘filled in’ and stacked on top of each other to make it easier to see where the volunteers agree there is a lysosome (the more volunteers who annotated a region, the brighter white it shows). Image E shows where most of the volunteers agreed there were lysosomes in this image, and image F shows how this corresponds to the raw data image (A). From these images you can see how collectively, Zooniverse volunteers did a great job at drawing around the lysosomes.
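The stacking-and-thresholding step behind images D and E can be sketched in a few lines, assuming each volunteer's line drawing has already been filled into a binary mask (a minimal sketch, not the project's actual aggregation pipeline):

```python
import numpy as np

def agreement_map(filled_masks):
    # Stack the filled-in drawings: each pixel counts how many volunteers
    # marked it (the brighter regions of image D)
    return np.sum(filled_masks, axis=0)

def consensus_mask(filled_masks, min_volunteers):
    # Keep only the pixels where at least `min_volunteers` agree (image E)
    return agreement_map(filled_masks) >= min_volunteers

# Three toy 4x4 "volunteer segmentations" of the same lysosome
v1 = np.zeros((4, 4), int); v1[1:3, 1:3] = 1
v2 = np.zeros((4, 4), int); v2[1:3, 1:4] = 1
v3 = np.zeros((4, 4), int); v3[0:3, 1:3] = 1

mask = consensus_mask([v1, v2, v3], min_volunteers=2)
print(int(mask.sum()))  # 4: the pixels a majority of volunteers agreed on
```

Requiring agreement from several volunteers filters out one-off slips while preserving the regions that the group collectively identified as lysosomes.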

What’s next for Etch A Cell – Demolition Squad?

In the near future, we will be uploading new batches of data that we will need your help to analyse. These new data sets may look slightly different, but the task of segmenting the lysosomes will remain the same.

This project is part of the Etch A Cell organisation

‘Etch A Cell – Demolition Squad’ is one of multiple projects produced by the Etch A Cell team and their collaborators to explore different aspects of cell biology. If you’d like to get involved in some of our other projects, you can find the other Etch A Cell projects on our organisation page here.

Thanks for contributing to Etch A Cell!

Who’s who in the Zoo – Ramana Sankar

Name: Ramana Sankar

Location: University of California, Berkeley

Tell us about your role within the team:

I’ve been with the Zooniverse team for two years now as a postdoc working with Lucy Fortson at the University of Minnesota. My main role has been to help build human-machine interfaces for project teams on Zooniverse (particularly in the avenues of speeding up project completion rates and improving serendipitous discovery), and to provide data science assistance to projects.

What did you do in your life before the Zooniverse?

I did my Ph.D. at Florida Institute of Technology. My main focus was on studying the formation of thunderstorms on Jupiter. I was (and still am) interested in understanding more about the dynamics of the jovian atmosphere and how the fluid dynamics and chemical processes shape the cloud structures we see when we look at Jupiter. As part of this work, I also tried out a few deep learning approaches to reduce data from the JunoCam instrument and gained some machine learning and data science experience, which drew me to apply for the postdoctoral scholar position with the Zooniverse team at UMN.

What does your typical working day involve?

Most of my day is spent with code. I break my tasks into astrophysical research (focusing on Jupiter’s atmosphere), machine/deep learning (developing new models for the project teams that I am assisting) and working on aggregation utilities for project teams.

How would you describe the Zooniverse in one sentence?

A platform that reduces the barrier of entry for scientific pursuits

Tell us about the first Zooniverse project you were involved with

As part of my introduction to the team, a fellow postdoc at UMN and I were asked to create a test Zooniverse project to get the hang of using the Zooniverse project builder. We built the “Chi Square Kitties” project, where we asked people to rate the fits of cats inside various containers. We scraped the web for cat images (there seemed to be a limitless supply) and asked a simple question: was the cat inside the container an underfit, an overfit, or a purr-fect fit?

Of all the discoveries made possible by the Zooniverse, which for you has been the most notable? (and why?)

I love the fact that the Gravity Spy team found a glitch class called “Air compressor,” which turned out to be noise from a nearby A/C unit. This is the kind of discovery that is incredibly difficult (if not impossible) to make without human eyes on the data, and it is exactly the kind of discovery that is fueled by Zooniverse!

What’s been your most memorable Zooniverse experience?

On the Jovian Vortex Hunter project, the volunteers came up with very creative names for some of the types of clouds that were observed. One of my favourites is “red-compact-nursery,” for subjects containing very small red cyclones that seem to be forming between the folded filamentary regions. These creative names are (in my view) actually much more useful than very descriptive scientific jargon, since they create very interpretable labels based on what a feature looks like rather than on the underlying physics.

What are your top three citizen science projects? 

The best thing about citizen science is that it caters to a very wide audience. Some projects are highly accessible and can be done by basically anyone, while others require very technical expertise. There are a ton of amazing projects on Zooniverse that span this range, but I will pick some non-Zooniverse projects to highlight some of the reasons why I really love the citsci methodology:

1. JunoCam: the Juno spacecraft almost did not have a camera, since one did not fit within the payload mass and budget constraints. Fortunately, a last-minute decision was made to include a simple 4-channel camera (previously flight tested on the MSL) without a dedicated science team for the instrument. Instead, JunoCam relies on thousands of artists and hobbyists to download and process the raw footage. JunoCam is a remarkable story for the way it engages people, particularly artists, in scientific communication, and there are now several papers on using data processed by citizen scientists for new science!

2. PVOL (Planetary Virtual Observatory and Laboratory): This is a website where amateur observers post their observations of giant planets to a centralized server, which researchers can use for contextual information in related studies. PVOL has helped fuel a lot of research into giant-planet atmospheres, especially long-cadence studies (the longevity of features, the periodicity of instabilities) as well as very short-term events (e.g., meteor impacts). It is a great way to put a very skilled hobby (astrophotography) to an extremely useful cause.

3. RadioJOVE: A project that lets interested parties build and operate a simple radio telescope. It started as a way to observe Jupiter (and now other sources) at radio wavelengths. What is amazing about this project is that it teaches a very niche skill (assembling and using a radio telescope) to non-scientists. Through the citsci method, this project has made great strides in science communication and in inspiring kids to pursue STEM degrees.

What advice would you give to a researcher considering creating a Zooniverse project?

Volunteers are not simply labelers. They do not replace a complicated algorithm; instead, they provide value far beyond a simple classification. The greatest strength of Zooniverse is that volunteers want to be engaged deeply in science, and that needs to be baked into the ethos of any Zooniverse project.

How can someone who’s never contributed to a citizen science project get started?

Find your passion in science and search for local resources. For example, if you’re interested in ecology, there are probably conservation groups that could use your help taking photos of local fauna and/or flora. If you’re an astronomy enthusiast, you can talk to your local planetarium, which can provide you with resources and ways to contribute. Finally, you can see if there are public seminars at your local university in departments related to your interests. These can help answer many of the questions you might have about the field.

Where do you hope citizen science and the Zooniverse will be in 10 years time?

I hope that the gap between science and citizen science narrows further than it is now. I hope that more research teams use citizen science not only as a method to analyze and process data, but also as a way to drive science engagement and reduce the barrier of entry to research. I also hope that future spacecraft missions follow the path laid by JunoCam and have a strong citsci component.

Is there anything in the Zooniverse pipeline that you’re particularly excited about?

I’m really interested in how machine learning will play a role in Zooniverse. As the world heads deeper into the use of AI and automation, I am interested in seeing how Zooniverse will adopt these tools while maintaining efficiency, and more importantly, accuracy. Several projects on Zooniverse already use a variety of AI-based tools, so I am looking forward to seeing many more projects make use of these resources.

When not at work, where are we most likely to find you?

Either at home playing video games (catching up on my huge backlog which built up over grad school) or out hiking. I know these are polar opposites, but if I can get over the barrier of pulling away from my PC, I’m always ready to do a 5 mile hike!

Do you have any party tricks or hidden talents?

I learnt to sing Carnatic music, which is a classical South Indian music tradition, and I like to think I can hold a tune (although others can be the judge of that)!

A note from Chris Lintott

For more than a decade, I’ve been ending talks by looking forward to the rich data that will soon begin flowing from the Vera Rubin Observatory. The observatory’s magnificent new telescope, nearing its long-awaited first light on a mountaintop in Chile, will conduct a ten-year survey of the sky, producing 30 TB of images and maybe ten million alerts a night, covering everything from asteroids to distant galaxies.

A starry sky illuminates a hillside with a long building on the top. At one end is a square-ish telescope dome, which houses the telescope of the Vera Rubin Observatory.
The Vera Rubin Observatory Building, ready for action.

The Observatory has – from really early on – seen citizen science, and in particular working with the Zooniverse, as an important part of how they intend to make the most of this firehose of data. We’ve been working hard on things like making data easy to transfer across, designing tools to help scientists build good projects, and more.

All of which is to say I’m very excited – as a Zooniverse person, and as an astronomer – about the opportunities ahead. To allow time for me to concentrate on making the most of it – and in particular, in learning how to find odd stuff alongside Zooniverse volunteers – I’ve decided it’s time for me to step down as Zooniverse Principal Investigator, a position I’ve held since…well, since before the Zooniverse existed. 

Chris Lintott, wearing a collared shirt, jumper and jeans, poses on a giant tardigrade throne.
Giving up the Principal Investigator’s Official Throne will be a hardship, but it’s time.

After 15 or so years, it’s well past time for other people to lead. Laura Trouille from the Adler Planetarium has been my co-PI since 2015, and will be taking over as Principal Investigator. Laura is brilliant, and has been the driving force behind much of what we’ve been doing for a long time. I’m confident in her, and along with her fabulous team, I’m excited to see where she leads this marvellous platform and project. With a growing partnership with NASA, new ideas in how to use Zooniverse in education and reaching new communities, and – of course – new projects launching each week, I genuinely think that we’re in a strong place to keep allowing everyone to participate in science. 

In terms of stepping down from this role, I’ll be more Joe Root than Stuart Broad (Note to Americans: This is a cricket reference) – there is a long-standing ‘joke’ that no-one actually leaves the Zooniverse – sticking around the team to serve as Senior Scientist, giving advice where needed and continuing to lead specific projects and efforts. The team in Oxford will work more closely than ever with the rest of the collaboration, and our plans for Rubin and the Zooniverse will be unaffected.

It is, however, strange to be contemplating a change after all this time. I’m immensely proud of the fact that once we stumbled onto the success of the original Galaxy Zoo (a story told in a very reasonably priced book…), we had the imagination not just to build another Galaxy Zoo (which we did – classifications still needed!) but a platform that covers such a wide variety of fields, and through which volunteers have contributed three-quarters of a BILLION classifications. Thanks to the support of all manner of organizations, I’m also very proud that we haven’t had to charge researchers for access to the Zooniverse; the only criterion that has ever mattered is whether a project will contribute to science and be welcomed by our volunteers.

I’m so grateful to those who have lent their effort, energy and expertise to making a Zooniverse which is grander than I would ever have dared on my own. In particular: Arfon Smith, who created the original Zooniverse with me and took over from me as Director of Citizen Science at Adler, and Lucy Fortson, who has been the best grant-writer and collaborator I could wish for as well as putting up with my stresses more than anyone should have to. I’ll stop here, else I’ll end up listing all of the developers, researchers and volunteers who have made this unique project what it is.

Onwards! Let us make, and set the weather fair.

See you in the Zooniverse.

Chris Lintott

September 2023

Adventures of a Junior Designer

Guest post written by Keanu Glover, Junior Designer with the Zooniverse team at the Adler Planetarium in Chicago from June-August 2023. Prior to joining the Zooniverse, Keanu was an i.c.stars intern, an intensive 6-month program supporting pathways to economic mobility through IT and software engineering workforce skills training, job experience, and leadership development. After his role within the Zooniverse, Keanu will join the United Airlines Apprenticeship Program.

Pivoting into a new field of work brings forth so many different thoughts, such as “Can I do the task?”, “How will I fit in?”, “Do I belong?” For me, transitioning from basketball and athletics to tech and design had me nervous at first, but once I was immersed in the tech space, I realized the skills I developed playing sports transferred seamlessly. When I got the opportunity to work for the Zooniverse at the Adler Planetarium as a Junior Designer, I realized I was being given the chance to work with great people and contribute to something great.

Before this opportunity I had never been to the Adler or even thought about what it was. Most of the time these types of opportunities are intimidating. Luckily, it didn’t have that effect on me. I had prepared myself for whatever opportunities came my way!

I was really excited to have the opportunity to learn something new, be a part of something bigger than myself, contribute to a real project that I could add to my portfolio, and build new relationships with experienced professionals.

During my interview with Laura Trouille (Zooniverse Co-PI and Adler VP of Science Engagement), while asking what interested me in applying for the position, she asked, “Do you have past connections with the Adler or astronomy?” This reminded me of a story about moving countless times when I was young, driving across state lines. When the sun would set and the stars would show themselves, my mother used to listen to Janet Jackson and I used to look out the backseat window at the stars. While I looked out I would wonder if our new destination would be our last.

My first week working with the Zooniverse was different, for sure! I had just got out of the i.c.stars program where I was working 12 hour days for 4 months, building a data collection application with a small group of 5 people. All hands on deck as they say.

When I first met Sean Miller, the Zooniverse designer and my mentor for the summer, I learned that work doesn’t have to be as intense. For example, on my first day we sat down and he told me we would have a meeting in the morning and another in the afternoon. I asked him, “What time, like 8am?” He looked at me and said, “Whoa, no! We are going to meet at 10am.” I was caught by surprise, but was so happy he said that. I quickly understood that every company has a different culture. (The Zooniverse strives to schedule meetings between 10am and 4pm to accommodate different work schedules, commute times, and work-life balance.)

For me, I felt an energy working with the Zooniverse team. Everyone knows what they are supposed to do, and how every element of their roles is interconnected. Coming in as a summer intern, I didn’t want to disappoint my job placement program (i.c.stars), my family, or Sean, who hand-picked me to join his team.

Sean gave me all the tools and time I needed to get the most out of my time here at Zooniverse. He never made me feel less than for not knowing something; he was always patient, understanding, and an outstanding communicator. I think it’s safe to say that he enjoyed my company for the three month period. Sean is an amazing teacher with so many resources that helped prepare me for the project and the type of work I wanted to get into after the Zooniverse. The rest of the Zooniverse team were beyond accepting of me! They were always willing to explain and help me with things about the Zooniverse site and the Adler as a whole and when it wasn’t about work they always invited me to the beach for a walk or to have lunch.

I had never participated in Zooniverse before I joined the team. When I was tasked to research the platform my first thought was “this is really cool”. I always wondered if there was a platform that allowed people to feel like a true researcher. The platform reminded me of when I would have days off from school and I would watch the Travel Channel, History channel, and Animal Planet. When I was younger I would think it was the coolest thing to be a part of those experiences.

I picked up the concept of the site pretty quickly. I made over 80 different classifications, commented on other projects, and tried to build my own project using zooniverse.org/lab just so I could put myself in the volunteers’ shoes. The site was easy to use and was not complicated at all to navigate. The more I used it, the more I started to think about features I could add and how I could help the Zooniverse platform evolve and grow. For example, Sean and I realized that on the ‘Recents’ page (where you can see your recent classifications for any given Zooniverse project) the participants’ recent classifications maxed out at 20. Now, if I were a participant who had classified a lot, I might want to see more than 20 recent classifications. So one of the new features we decided to add was pagination, letting you view and page through up to 100 recent classifications.

My main objective was to experience a professional design sprint for the Zooniverse ‘Recents’ page, breaking the project down into four main phases. The Discovery phase is where we took a deep dive into UI/UX research and gained a better understanding of why design is so important. The Definition phase is where I set a foundation for my redesign: keeping things in scope and remembering to build what the page needs and how it fits into the bigger picture of page designs for Zooniverse, which was a challenge. During the discovery and definition phases, I found that researching websites similar to Zooniverse was a big help and allowed me to build on ideas based on the functionality of those other websites and platforms. The Development phase is where we started to build the project. During this phase I learned how critical it is to always have the volunteer in mind when designing new functionality, while also making sure to keep features that already exist on the Zooniverse platform. Finally, the Retrospective phase allowed me to look back at my progress and the knowledge I gained this summer.

I hope my redesign of the ‘Recents’ page is well received, as I redesigned it with the volunteer in mind. I wanted to keep things familiar but give the page an upgrade with new features and a little more versatility in finding classified projects. Giving volunteers what they need before they ask for it is something I wanted to do. This redesign will be great for super-users as well as new participants. Stay tuned for more info and announcements about the new Recents Page.

I have come a long way. This time working for Zooniverse is an experience that I will never forget. This experience was about me developing new skills and embracing new challenges. It was also about creating an opportunity for future interns to follow in my footsteps. Leaving a great impression was very important to me: being professional, showing a great work ethic, and being a leader. For anyone who has the opportunity to work for Zooniverse, I think it is a great place to be, and it provides the opportunity to learn and be a part of a community that will support your learning process. I loved my time here at Zooniverse and I wish I could stay!

Museums, whales, and citizen science

Guest post from Eilidh O’Brien, Staff Scientist, Whales of Iceland Museum

Appropriately located in Reykjavík’s harbour district, Whales of Iceland is the largest museum dedicated to cetaceans in Europe. Much of the space inside is dedicated to life-sized models of the 23 species of whales, dolphins and porpoises that have been sighted in the waters around Iceland throughout history, some very common while others are very rare. When the museum was founded these models were the main focus of the exhibition: a chance for visitors to experience the true size of these gigantic marine animals, and to learn a little about each of the species on display. However, this focus is now evolving and Zooniverse is set to play an important part.

Iceland is a hotspot for cetaceans – and so, also for cetacean researchers! Some remarkable discoveries have been made here in the last decade, from the first recordings of humpback whales singing in their feeding grounds over winter, to the unusual antagonistic interactions between killer whales and long-finned pilot whales. We want to highlight this at Whales of Iceland so that our museum is not just a place to learn about cetaceans themselves, but also how scientists study these fascinating and complex animals, what this research has uncovered, and all the things that we still do not know!

In addition to learning about the research happening here in Iceland, we want to give visitors the opportunity to take part in some real scientific projects. So, thanks to Zooniverse, our newest exhibit will include a citizen science station where anyone can have a shot at being a scientist! We will feature a range of Zooniverse projects for visitors to choose from, giving them a variety of different marine mammal species and different aspects of wildlife ecology to learn about.

Our aim is to make Whales of Iceland a more interactive and thought-provoking experience. We hope that our museum will continue to offer visitors the chance to marvel at the size and beauty of these wonderful creatures, but also to engage with the natural world in ways they may not have before, and to feel that they have not just learned, but discovered.

This collaboration is still in its early stages. With the green light from Zooniverse Co-PI Dr. Laura Trouille, we have already launched a scaled-down version of what we hope the final exhibit will be, and it has been a really promising success! Museums provide a perfect platform for citizen science; we are a relatively small museum, but our footfall in peak season can be more than 400 people in a day. That’s a lot of potential citizen scientists! In ecology, we would call this a mutualistic symbiosis – or, in other words, everyone wins! Our museum guests can provide valuable contributions to scientific projects all over the world, while at the same time gaining first-hand insight into the life of a whale researcher.

We are so excited to develop and expand our collaboration with Zooniverse, as well as other citizen science initiatives. Our finished research exhibit will be unveiled very soon – watch this space!

The Zooniverse Blog. We're the world's largest and most successful citizen science platform and a collaboration between the University of Oxford, The Adler Planetarium, and friends