Launch News: Community-Building Pages

At the Zooniverse, we strive to foster a vibrant community of software engineers, researchers, and everyday participants. Each week the Zooniverse volunteer community contributes over 1 million classifications across ~80 active projects. This collective effort has contributed to hundreds of publications. Many of you have experienced first-hand or heard about serendipitous discoveries through Talk or by reading a project’s results page. Your contributions make a real difference in advancing scientific research and discovery worldwide. 

To further encourage and support this sense of collective effort leading to discovery, we’re exploring additional pathways for people to connect. With newly implemented Group Engagement features, like-minded participants can connect, collaborate on projects, and work together toward shared goals.

We can’t wait to see the creative ways our community will make the most of this new Groups feature!

Introducing the new Zooniverse Groups community-building pages.

WHAT IS IT?

With the new Groups feature, you will be able to track collaborative achievements with friends and family, fellow science enthusiasts, educational groups, and more within the Zooniverse community. Track your stats and see which projects trend within your group.

HOW DOES IT WORK?

Creating a Group

Once you’re logged in to Zooniverse, you can select ‘Create New Group’ from the ‘My Groups’ panel on the zooniverse.org homepage. Whoever creates a group becomes that group’s admin.

Creating a new Zooniverse Group.

First, name your group. You can use any combination of characters, including special characters and emojis. Next, select your group permissions: the private setting allows only group members to access and view your group’s stats page, while the public setting makes the stats page viewable by anyone. Then, choose the visibility of your members’ individual stats. You can choose to never show each contributor’s stats, always show them, or show them only to members of the group; a group admin can always see these individual stats. Finally, click “Create New Group”. You will be brought to your new group’s stats page, where you can copy the join link and invite members. More on joining a group below.
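
For those who like to see the options laid out, here is a minimal sketch of those three choices as a data structure. The field names and values are illustrative assumptions for this post, not the actual Zooniverse API schema.

```python
from dataclasses import dataclass
from typing import Literal

# Hypothetical model of the three choices made when creating a group;
# field names are illustrative, not the real Zooniverse schema.
@dataclass
class GroupSettings:
    display_name: str                         # any characters, emoji included
    visibility: Literal["private", "public"]  # who can view the group stats page
    stats_visibility: Literal[
        "never",         # individual stats hidden from everyone but admins
        "always",        # individual stats visible to anyone who can see the page
        "members_only",  # individual stats visible to group members only
    ]

settings = GroupSettings(
    display_name="Penguin Pals 🐧",
    visibility="private",
    stats_visibility="members_only",
)
print(settings)
```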

Using the Bar Chart

From the homepage, click any of your groups to view that group’s stats page. The bar chart defaults to showing stats for all contributors across all projects from the last 7 days.

Using the Zooniverse Groups bar chart.

To change the time range or projects, use the dropdown menus above the chart. Note that changing the dates also changes which projects are selectable, based on your group’s activity in that time period. The Hours tab shows a summary of the time your group has spent classifying subjects.

Top Projects

These are your most classified projects for the selected time period. If you change the time frame, you can expect your top projects to update as well.

Top Contributors

Next is a list of top contributors (the group members with the most classifications during the specified time period). For a more detailed view, click ‘See all contributors and detailed stats’; this brings you to a full list of contributors and their all-time stats. Clicking ‘Export all stats’ generates a .csv file. A future update will add the ability to filter this detailed stats page to specific time periods.

Showing all Zooniverse Groups participants' stats.
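
Until that in-page time filtering arrives, the exported .csv can be filtered offline. Here is a hedged sketch using pandas; the column names (‘login’, ‘classifications’, ‘last_classified_at’) are assumptions about the export format, so check the header row of your own file and adjust.

```python
import pandas as pd

# Load the file produced by 'Export all stats'. The column names used here
# are assumptions about the export format -- adjust to match your file.
stats = pd.read_csv("group_stats_export.csv", parse_dates=["last_classified_at"])

# Keep only members who last classified during October 2024.
october = stats[
    (stats["last_classified_at"] >= "2024-10-01")
    & (stats["last_classified_at"] < "2024-11-01")
]

# Show the ten most active of those members.
print(october.sort_values("classifications", ascending=False).head(10))
```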

Managing a Group

From the homepage, click any of your groups to view its stats page. If you’re the admin of a group, you’ll have a ‘Manage Group’ option at the top of the group’s stats page. Clicking ‘Manage Group’ shows the same settings as when you first created the group; you can change these admin settings at any time. You can also manage the members of your group: navigate to a member’s row and click the 3-dot options menu to give admin access, remove admin access (if previously given), or remove the member. Note: as long as someone has a ‘Join Link’, they can rejoin the group at any time. Press “Save changes” to return to your group.

Managing your Zooniverse Group.

Clicking ‘Deactivate Group’ removes the group and hides its stats, making the group unsearchable and unjoinable. Note: this does not delete the group from our internal Zooniverse database.

Joining a Group

To join a group, the group admin or a group member must share that group’s ‘Join Link’ with you. The ‘Join Link’ is at the top of the group’s stats page.

Using the Zooniverse Groups join link.

Once you have the link, simply click it to be added to the group. Note: you must be logged in to join a group. Once you’ve joined, you’ll immediately be able to view the group’s stats page.

At any time, you can view all of your groups by clicking ‘See all’ within the ‘My Groups’ panel on your zooniverse.org homepage.

You may notice a few existing groups with alphanumeric names (e.g., 597C5881-3808-4DF7-B91A-D29E58E19FFC) in your groups list. These groups were created via our classroom.zooniverse.org portal for curricula such as Wildcam Labs or Galaxy Zoo 101. If you’re the group admin (indicated by the ‘admin’ label), you can click ‘Manage Group’ to give your group a more descriptive name. If you’re a group member, you can either click ‘Leave Group’ (if the class experience is complete) or ask your instructor (the admin) to rename the group. In future updates, we’ll enable naming groups directly within the classroom.zooniverse.org experiences.

Leaving a Group

From the homepage click on any of your groups. At the top of the group’s stats page, click ‘Leave group’. Note: you can rejoin a group at any time as long as you still have access to the unique Join Link.

Sharing a Group

If the admin of your group has set your group visibility settings to ‘public’, you’ll have the ‘Share Group’ option at the top of your group’s stats page. Clicking ‘Share group’ will copy a link to the public-facing view of your group’s stats page. This is different from a ‘Join Link’. Anyone with the ‘Share Group’ link will simply be able to view the group’s stats, but will not be added as a member of the group. 

UPDATE: Example of a Group

In November 2024 we interviewed members of PSR J0737-3039 – a Zooniverse group focused on space projects – to learn why and how Zooniverse contributors use this feature. Read the full interview.

JOIN THE CONVERSATION

We value your feedback! We’re keen to hear about your experiences with the new Groups feature. Please share in this Talk thread and mention @support if you are experiencing any issues.

Freshening up the Zooniverse Homepage

The Zooniverse has come a long way since beginning our journey together in 2009 – from the launch of the Project Builder to supporting diverse task types across the disciplines, including transcription, tagging, and marking. This fall, we’re continuing our frontend codebase migration and design evolution with a fresh, modern redesign to some of our main pages – this update focuses on freshening up our homepage.

What’s New?

  • Your Stats: Now, you can more easily track your progress and goals. See all your classification stats on one page and filter by project or time frame.
  • Volunteer Recognition: We heard you! Create personalized volunteer certificates right on the homepage. Perfect for students needing proof of volunteer hours!
  • Group Engagement: Create your group, set up goals and see the impact you’re making together. Great for families, teams, classrooms, or friends working on projects together.
  • Easy Navigation: Click the Zooniverse logo in the upper-left corner of any page to return to your homepage easily.

Read on for more details.

Zooniverse Redesigned Homepage

The zooniverse.org homepage serves a broad audience of new and returning volunteers, educators, and researchers. We believe the homepage should be a central hub where these different audiences can find the tools they need to make their Zooniverse experience satisfying and worthwhile. Now you’ll be able to pick up where you left off classifying, see your stats at a glance, and follow up on your last classifications to add them to a collection, favorite, or comment.

A common request over the years has been better tools for capturing individual and group impact. Thanks to support from NASA, we’ve been working hard to implement improved personal stats and new features that allow you to see the collective impact of your groups – whether you’re a family, a corporate team, a classroom, or simply a group of friends passionate about participating in projects together. We’ve made significant strides in bringing these functionalities to life.

Key features of your new homepage:

Personalized Statistics: We’re making it a little easier to keep track of your progress and goals. Now all of your real-time classification stats can be found on one page and you can filter by project or by a specific time frame. Access detailed information about your contributions, including the number of classifications, projects you’ve worked on, and your impact over time. 

Zooniverse Personal Statistics

A foundational step in this effort was a complete overhaul of our stats infrastructure to ensure greater reliability and stability. Moving forward, zooniverse.org personal stats will pull data exclusively from our updated stats server, reflecting contributions from 2007 onwards.
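
Under the hood, stats like these are served over HTTP. As a purely illustrative sketch, here is how one might fetch yearly classification counts in Python; the host, path, and parameter names are assumptions modeled on Zooniverse’s public stats service, not a documented API.

```python
import requests

# Hypothetical query against the updated Zooniverse stats server. The host,
# path, and parameter names are assumptions for illustration -- consult the
# actual service documentation before relying on them.
BASE = "https://eras.zooniverse.org"

def yearly_classifications(user_id: int) -> dict:
    resp = requests.get(
        f"{BASE}/classifications/users/{user_id}",
        params={"period": "year"},  # bucket counts by year, back to 2007
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# Example (hypothetical user id):
# print(yearly_classifications(12345))
```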

Volunteer Recognition: Generate personalized volunteer certificates right from your Zooniverse homepage! Certificates are customizable to specific time periods and projects – an often-requested feature for students fulfilling volunteer service hour requirements.

Zooniverse Volunteer Certificate

Group Engagement: A new way to create and share group goals and tell the story of your collective impact. Read this blog post for more details. 

Zooniverse Group Engagement Statistics

Streamlined Navigation: Enjoy an easier flow by clicking the Zooniverse logo in the upper-left on any page to return to your homepage.

We value your feedback! We launched the new homepage in September of 2024. If you encounter any difficulties or have questions as you’re using the new homepage, please share them in this Talk thread and mention @support.

Introducing: the Community Catalog

The Community Catalog (https://community-catalog.zooniverse.org) is a custom tool to offer Zooniverse project participants the opportunity to explore a project dataset, and to allow our team to experiment with creating new pathways into classifying.

We wanted to create a digital space that would facilitate not only sharing, but also discovery of participants’ contributions alongside institutional information (i.e. metadata) about the subjects being classified. The result was a data exploration app connected to specific Zooniverse crowdsourcing projects (How Did We Get Here? and Stereovision) that allows users to search and explore each project’s photo dataset based on participant-generated hashtags as well as the institutional metadata provided by project teams. 

The Home Page of the Stereovision project in the Community Catalog.

The app includes a home page (shown above) with search/browsing capabilities, as well as an individual page for each photograph included in the project. The subject page (shown below) displays any available institutional metadata, participant-generated hashtags, and Talk comments. A ‘Classify this subject’ button allows users exploring the data to go directly to the Zooniverse project and participate in whatever type of data collection is taking place (transcription, labeling, generating descriptive text, etc.).

The Subject Page of the Community Catalog, displaying a subject with multiple Talk comments and community-generated hashtags.
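
To make the search model concrete, here is a small sketch of how a catalog might index subjects by participant-generated hashtags. The record fields and tags are invented for illustration; this is not the Community Catalog’s actual schema or code.

```python
from collections import defaultdict

# Illustrative subject records: institutional metadata plus
# participant-generated hashtags (not the Catalog's real schema).
subjects = [
    {"id": "s1", "metadata": {"date": "1910", "place": "Bradford"},
     "hashtags": {"#mill", "#workers"}},
    {"id": "s2", "metadata": {"date": "1935", "place": "Edinburgh"},
     "hashtags": {"#tram", "#workers"}},
]

# Build an inverted index from hashtag to subject ids -- the usual
# structure behind tag-based browsing.
index: dict[str, set[str]] = defaultdict(set)
for subject in subjects:
    for tag in subject["hashtags"]:
        index[tag].add(subject["id"])

print(index["#workers"])  # {'s1', 's2'}
```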

Combined with the Talk (and QuickTalk) features, we hope this tool will encourage participants to share their experiences, memories, questions, and thoughts about the project photos, the historical events depicted, and the importance of the collection. The Community Catalog offers an approach where a participant’s interest in a specific item can lead them to take part in a classification task, rather than the path from classification to Talk being a one-way street.

How Did We Get Here? was the pilot project for the Community Catalog, and is now complete. We have just launched the second project to use the Catalog, Stereovision, which you can participate in either via the Community Catalog site, or by visiting the Zooniverse project here: Stereovision.

The Community Catalog is not available for re-use by other projects in this exact form (i.e. as a standalone app), but we’re planning to incorporate some of its features into the Talk section of the Zooniverse platform in 2025. If you have any questions or would like to share your thoughts about this app, please feel free to reply to this post, or email us at contact@zooniverse.org.

The Community Catalog was developed as part of the AHRC-funded project Communities and Crowds. This project is run in collaboration with volunteer researchers and staff at the National Science and Media Museum in Bradford, United Kingdom, and National Museums Scotland, as well as with the Zooniverse teams at Oxford University and the Adler Planetarium in Chicago.

It all involves U and I, Monteé’s Experience from Junior Designer to Full Stack Developer

Guest post written by Monteé Ellis, Junior Designer with the Zooniverse team at the Adler Planetarium in Chicago from May – July 2024. Before joining the Zooniverse team, Monteé embarked on an incredible journey as an intern with i.c.stars. This rigorous 4-month program wasn’t just about learning IT and software engineering skills; it was a comprehensive experience designed to open doors to economic mobility. Monteé gained invaluable job experience and honed his leadership abilities, setting the stage for his future success. After his role within the Zooniverse, Monteé will join the United Airlines Apprenticeship Program.

Starting something new is always exciting. The same was true when I got the opportunity to work for the Zooniverse at Adler Planetarium as a Junior Designer. I had some background in graphic design, having previously owned a clothing brand, but UI/UX design was an entirely different ballgame. I found myself using the same creative muscles but with a fresh focus on putting myself in the user’s shoes. This experience has truly broadened my perspective and deepened my passion for creating meaningful, user-centered designs.

Growing up, I was fascinated by space. My frequent visits to the Adler Planetarium as a kid fueled this passion. So, when the opportunity arose to contribute to something as significant as Zooniverse while working at one of my favorite childhood museums, it truly felt like a dream come true. I am forever grateful for this incredible experience and the chance to give back to a place that inspired my love for the cosmos.

The Zooniverse team created an incredible environment where I could learn, grow, and truly be myself. Working on this project has not only enhanced my portfolio but also provided me with invaluable knowledge and experience. This opportunity has set me up for future success and opened doors that I never thought possible.

This is the second time the Zooniverse team has accepted interns from the i.c.stars organization, and this year, they welcomed two of us instead of just one. This decision significantly impacted the success of our project. Having my teammate and friend, Yumi Sato, by my side allowed us to bounce ideas off each other, enhancing our overall creativity. Our shared experience and countless hours spent together at i.c.stars made the transition to the Zooniverse team seamless. Our strong rapport and collaborative spirit were instrumental in the smooth execution of this project.

The transition from i.c.stars to the Zooniverse team was different but incredibly refreshing. Shifting from the front-end and back-end coding of our Pfizer project to focusing on UI/UX design for Adler was exactly what I needed for my career. This change provided a new perspective and helped me grow in ways I hadn’t anticipated.

As I embarked on this UI/UX design journey with Zooniverse, my thought process was deeply rooted in empathy and user experience. My goal was to create intuitive and engaging interfaces that not only meet the users’ needs but also enhance their interaction with the platform. I immersed myself in user research, gathering insights and feedback to understand the challenges and preferences of our audience. This informed my design decisions and allowed me to craft solutions that are both functional and aesthetically pleasing. Collaborating with Yumi, I learned the importance of iterative design and the value of constructive feedback, which were crucial in refining our project and delivering a polished final product.

I want to extend my heartfelt thanks to Laura Trouille, Zooniverse PI and Adler VP of Science Engagement, and Sean Miller, Senior Zooniverse Designer, for their incredible support throughout my time on the team. Laura created a fantastic environment and assembled an outstanding team that made every day a joy. Sean, with his exceptional mentorship, took us step by step through the intricacies of UI/UX design, guiding us through each aspect of the project and helping our ideas come to life. Their combined efforts made this experience truly unforgettable.

From Pixels to Purpose, Reflections from My Zooniverse Internship

Guest post written by Yumi Sato, Junior Designer with the Zooniverse team at the Adler Planetarium in Chicago from May-July 2024. Prior to joining the Zooniverse, Yumi was an i.c.stars intern, an intensive 4-month program supporting pathways to economic mobility through IT and software engineering workforce skills training, job experience, and leadership development. After her role within the Zooniverse, Yumi will join the United Airlines Apprenticeship Program.

I’m coming close to the end of an incredible journey as a Junior Designer intern at the Adler museum, working alongside the amazing Zooniverse team, and I’m super excited to share my experience with you!

After completing a rigorous 16-week, 12-hour/day IT program called i.c.stars, where I discovered my passion for front-end development and design, landing this internship felt like a dream. I was especially thrilled to tackle the challenge of redesigning the Collections and Favorites pages of the Zooniverse website. Plus, the opportunity to work at the Adler museum, a place I hadn’t visited since I was a child, added an extra layer of nostalgia and excitement.

I remember my first day, feeling the nerves creeping in. Thoughts raced through my mind about whether I was prepared enough and if I could truly make meaningful contributions. Fortunately, Sean, Laura, and the entire team immediately put my worries to rest. Their warm welcome and genuine enthusiasm for the work made all the difference. Sean, in particular, played a huge part as our mentor throughout my internship, always encouraging us to explore our creativity and providing invaluable feedback.

We quickly dove into the project to revamp the Collections and Favorites pages where I learned so many new concepts and theories along the way. Starting from scratch, we looked into user stories and feedback analysis to grasp what makes a great user experience. 

One of the most eye-opening aspects of this internship was realizing the depth of thought and effort that goes into every detail of a website. It’s not just about making things look pretty; it’s about understanding user behavior, anticipating needs, and crafting experiences that are seamless and engaging. 

Looking back, I am immensely grateful for the opportunity to learn and grow in such a supportive environment. Beyond technical skills, I gained invaluable insights into teamwork and effective communication in a professional setting. 

This experience has deepened my passion for design and UX. It has also heightened my appreciation for well-designed websites and apps that effortlessly blend aesthetics with functionality.

As I move forward in my career, I am eager to apply everything I’ve learned at the Adler. To anyone embarking on a similar journey, I encourage you to embrace challenges and never stop learning – the rewards are immeasurable. 

Here’s to new beginnings and the exciting road ahead!

Navigating the Future: Zooniverse’s Frontend Codebase Migration and Design Evolution

Dear Zooniverse Community,

We’re pleased to update you on an important development as we undergo a migration to a new frontend codebase over the course of 2024-2025. This transition brings a fresh and improved experience to our platform.

From a participant’s perspective, the primary changes involve project layout and styling, resulting in a more user-friendly interface. Importantly, these updates don’t impact your stats (e.g., classification count), Collections, Favorites, etc.

To offer you a sneak peek, check out the updated design and layout on projects that have already migrated, such as:

If a project has a design similar to the examples above, it has migrated. Conversely, if it resembles the old design, like the Milky Way Project, it hasn’t migrated yet.

We value your feedback! If you encounter any difficulties or have suggestions as you’re participating in a project, please share them in the respective project’s Talk or within this general Announcements Talk thread and mention @support.

Wondering about the motivation behind this change? We built the new frontend codebase in order to ensure the robustness and stability of the Zooniverse platform, with key updates enhancing code maintenance, accessibility, and overall sustainability.

Here’s a breakdown of some of the improvements:

  • Breaking up the Code: We’ve modularized our code into independent, reusable libraries to enhance maintenance and overall sustainability.
  • Next.js for Server Side Rendering: By utilizing Next.js, we’re improving accessibility for participants worldwide, particularly those with lower internet speeds and bandwidth.
  • Classify Page Code Updates: We’ve refined elements such as workflows and the subject viewer to ensure improved robustness and sustainability of our codebase.
  • Authentication Library Updates: Keeping up with the latest standards, we’ve updated our authentication libraries to enhance security and user experience.
  • Integrated Code Testing: To maintain the long-term health of our technical products, we’ve integrated code testing throughout our development process. This mitigates against updates introducing bugs or other issues into the codebase, adhering to standard practices.

Thank you for being part of the Zooniverse community! Looking forward to many more groundbreaking discoveries and advances in research. Your classifications and participation in Talk make all of this possible. Thank you! 

Warm regards,

Laura Trouille, Zooniverse PI

Zooniverse Wins White House Open Science Award

I’m thrilled to share some exciting news with you all! Zooniverse, our beloved platform for people-powered research, has been honored by the White House Office of Science and Technology Policy (OSTP) as a champion of open science. 

The OSTP Year of Open Science Recognition Challenge recognized five projects, including Zooniverse, as ‘Champions of Open Science’ for our work promoting open science to tackle unique problems. To check out the full announcement and see the other winners, click here.

We’re deeply honored by this recognition. It underscores our commitment to Open Science through people-powered research, valuing the public’s diverse expertise and driving innovation beyond traditional boundaries. By democratizing access to scientific spaces and discovery, Zooniverse not only advances research, but also builds trust in science and fosters meaningful engagement within our global community. 

What makes Zooniverse truly special is the community that drives it forward. The Zooniverse team of devs and data scientists building the platform, the hundreds of researchers leading projects, and every single one of you who dedicates your time and expertise to advancing knowledge. Whether you’re classifying galaxies, transcribing historical documents, tagging penguins, or marking the structure of cells for cancer research, your contributions have made a tangible impact on research across a range of disciplines. And the results speak for themselves – over 400 peer-reviewed scientific publications, groundbreaking discoveries, and critical policy impacts have all stemmed from your collective efforts.

But it’s not just about the data or the publications – it’s about the connections we’ve forged along the way. Through Zooniverse, we’ve built a global community of curious minds, united by a shared passion for exploration and discovery. Together, we’ve championed the principles of open science, breaking down barriers to knowledge and fostering a spirit of collaboration that transcends borders and disciplines. And in doing so, we’ve not only advanced scientific research but also strengthened trust in science and empowered individuals to pursue their interests and passions.

So here’s to you, our incredible community of participants and researchers. Thank you for being a part of this extraordinary journey. Together, we’ll continue to champion the principles of openness, collaboration, and innovation in research, one classification and one Talk post at a time.

Laura

Zooniverse PI, VP Science Engagement, Adler Planetarium in Chicago

Snapshot Wisconsin Celebrates 50th Zooniverse Season!

Snapshot WI 50th Logo

What is Snapshot Wisconsin?

Snapshot Wisconsin is a community science project in which the Wisconsin Department of Natural Resources (Wisconsin DNR) partners with volunteers to monitor wildlife using a statewide network of trail cameras. Volunteers host trail cameras, which are triggered by heat and movement, capturing pictures of passing animals. Located in the Great Lakes region of the United States, Wisconsin hosts a variety of habitats, from coniferous forests to prairies. Wisconsin is home to 65 species of native mammals, hundreds of other vertebrate species, and thousands of invertebrate and native plant species.

Snapshot and Zooniverse

Snapshot Wisconsin has collected over 85 million photos since its genesis in 2015. What started as a pilot project in a few Wisconsin counties has now grown to over 2,000 statewide camera hosts. Online, Snapshot Wisconsin has enlisted the help of Zooniverse volunteers, who have made nearly 9.3 million classifications in the project’s first 49 seasons on Zooniverse.

Snapshot Across the Globe

The first Snapshot Wisconsin Zooniverse season launched on May 17th, 2016. Since then, Snapshot Wisconsin has continually brought in thousands of classifiers. From China to Mexico, from Russia to Brazil, online volunteers have donated almost 36,000 hours so far. The map below highlights the countries that Snapshot classifiers call home!

World map with countries highlighted signifying global range of Snapshot WI volunteers

#Supersnaps

One way Snapshot moderators and experts encourage those thousands of hours of engagement is by promoting the use of #supersnaps! Zooniverse volunteers come across some fantastic images while classifying photos. Using the tag #supersnap, volunteers can nominate their favorite photos for consideration as the best photo of the month. Snapshot Wisconsin will also be celebrating on the Wisconsin Department of Natural Resources’ Instagram (@wi_dnr) by hosting a tournament to decide which of a group of amazing Snapshot trail camera images deserves the title of SuperSnap! Vote daily January 22-26 in the @wi_dnr Instagram Story.

Here are a few examples of past #supersnaps:

A black bear mother and her cub
A black bear mother and her cub.
Whitetail deer selfie
Whitetail deer selfie.
The “Badger State” namesake
The “Badger State” namesake.

Get a group involved by hosting a Snap-a-thon!

Take the fun of Zooniverse to an even larger group of participants by hosting a Snap-a-thon! Snapshot Wisconsin Snap-a-thons are friendly competitions where a group of people tag animal photos on our crowdsourcing website, Zooniverse, to gather as many points as possible. Who can participate? Anyone familiar with Wisconsin wildlife and with operating a computer can participate. No need to be a wildlife expert: Zooniverse has a built-in field guide to help with more difficult classifications. You can find Snap-a-thon instructions under the ‘activities’ tab here!

Snapshot Wisconsin Scientific Products

One of Snapshot Wisconsin’s goals is to provide data needed to make wildlife management decisions. Thanks to thousands of online volunteers, the program’s millions of trail camera images are transformed into usable data. This data has been used for wildlife research and wildlife decision support by Wisconsin DNR scientists and interested university students and faculty.

The Snapshot Publication webpage has publications organized by topic, ranging from the temporal and spatial behavior of deer to predator-prey relationships. The valuable information gathered from these research projects helps build our understanding of local wildlife and support wildlife management decisions.

Snapshot Wisconsin Blog

Snapshot Wisconsin has its own project blog at blog.snapshotwisconsin.org where the team shares #supersnaps, project updates, team outreach, scientific findings, ecological tidbits, and more! For more information about the project, please visit our main project page, or get started classifying photos at our Zooniverse crowdsourcing site.

Thank You for 50 Great Seasons

Snapshot Wisconsin would like to thank its camera hosts and Zooniverse volunteers for the tremendous amount of work they do for Wisconsin’s natural resources. 50 Zooniverse seasons have certainly flown by for the team, but the project is nonetheless a remarkable success that wouldn’t be possible without the dedication and passion of its Zooniverse volunteers.

Happy New Year & YouTube livestream this Thursday

Happy New Year Everyone! We can’t thank you enough for making Zooniverse possible. Thank you, thank you, thank you!!!!

We have so much to celebrate from 2023. 

  • We welcomed our 2.5 millionth registered participant!
    • To date: 2.6 million registered participants from 190 countries
    • Top countries in 2023: US, UK, Germany, India, Canada, Australia
  • 400 Zooniverse projects publicly launched
    • 40 new projects in 2023 alone; ~90 active projects at any given time
    • Each led by a different research team. Zooniverse partners with hundreds of universities, research institutes, museums, libraries, zoos, NGOs, and more
  • 400 peer-reviewed publications (30 in 2023 alone)
  • 780 million classifications (65 million classifications in 2023 alone)
  • 5 million posts in the Zooniverse ‘Talk’ discussion forums (680K in 2023 alone)
  • 19.5 million hours of participation
    • 1.6 million hours in 2023 alone; equivalent to 780 FTEs (at roughly 2,000 working hours per year)

We welcome you to join us this Thursday for a YouTube LiveStream from 2:15pm-3:15pm CST (8:15pm GMT; Friday 1:15am in India) celebrating Zooniverse 2023 Milestones as part of a Press Conference for the American Astronomical Society Meeting happening this week in New Orleans.

Bonus: the Press Conference will include a slew of other astronomy-related discoveries, mysteries, and intrigues. Connect via https://www.youtube.com/@AASPressOffice/streams (open to the public). Also, throughout the week we’ll post on https://twitter.com/the_zooniverse (with the hashtag #aas243) about our experiences at the conference.

Milestones are great to celebrate, but we all know a deep magic is in the everyday moments – catching a penguin chick in the midst of a funny dance on Penguin Watch, hearing a coo that reminds you of your own little loves in Maturity of Baby Sounds, uncovering a lost genealogical clue in Civil War Bluejackets, connecting with someone from the other side of the globe who shares your interests in chimps and their fascinating behaviors through the Talk discussion forums, and more, and more. Wonderful if you’d like to share one of your everyday Zooniverse moments with us by tagging @the_zooniverse on X (formerly Twitter) or sharing via email at contact@zooniverse.org. Hearing your moments helps us better understand how the Zooniverse community creates meaning and impact from these experiences (and what we can do to nurture those moments). 

Wishing you a joyful and gentle 2024. Cheers to new beginnings and continued adventures together. 

Laura
Zooniverse PI, VP Science Engagement, Adler Planetarium in Chicago

‘Etch A Cell – Fat Checker’ – Project Update!

We are excited to share with you results from two Zooniverse projects, ‘Etch A Cell – Fat Checker’ and ‘Etch A Cell – Fat Checker Round 2’. Over the course of these two projects, more than 2,000 Zooniverse volunteers contributed over 75,000 annotations!

One of the core aims of these two projects was to enable the design and implementation of machine learning approaches that could automate the annotation of fat droplets in novel data sets, to provide a starting point for other research teams attempting to perform similar tasks.

With this in mind, we have developed multiple machine learning algorithms that can be applied to both 2D and 3D fat droplet data. We describe these models below.

Machine learning model | 2D or 3D data | Publications to date | Other outputs
PatchGAN | 2D | https://ceur-ws.org/Vol-3318/short15.pdf | https://github.com/ramanakumars/patchGAN and https://pypi.org/project/patchGAN/
TCuP-GAN | 3D | “What to show the volunteers: Selecting Poorly Generalized Data using the TCuPGAN”; Sankar et al., accepted in ECML-PKDD Workshop proceedings | https://github.com/ramanakumars/TCuPGAN/
UNet/UNet3+/nnUNet | 2D | – | https://huggingface.co/spaces/umn-msi/fatchecker
An overview of the machine learning algorithms produced from the Etch A Cell – Fat Checker projects (described in this post).

Machine learning models for the segmentation of fat droplets in 2D data

Patch Generative Adversarial Network (PatchGAN)
Generative Adversarial Networks (GANs) were introduced in 2014 for the purpose of realistically learning image-level features and have since been used in various computer vision applications. We implemented a pixel-to-pixel translator model called PatchGAN, which learns to convert (or “translate”) an input image into another image. For example, such a framework can learn to convert a gray-scale image into a colored version.

The “Patch” in PatchGAN signifies its capability to learn image features in different sub-portions of an image (rather than just across an entire image as a whole). In the context of the Etch A Cell – Fat Checker project data, predicting the annotation regions of fat droplets is analogous to PatchGAN’s image-to-image translation task.

We trained the PatchGAN model framework on the ~50K annotations generated by volunteers in Etch A Cell – Fat Checker. Below we show two example images from Etch A Cell – Fat Checker (left column), the aggregated annotations provided by the volunteers (middle column), and the corresponding annotations predicted by the 2D machine learning model, PatchGAN (right column).

We found that the PatchGAN typically performed well in learning to predict fat-droplet annotations from subject images. However, we noticed that the model highlighted some regions potentially missed by the volunteers, as well as instances where it underestimated some regions that the volunteers had annotated (usually intermediate to small-sized droplets).

We have made this work, our generalized PatchGAN framework, available via an open-source repository at https://github.com/ramanakumars/patchGAN and https://pypi.org/project/patchGAN/. This allows anyone to easily train the model on a set of images and corresponding masks, or to use the pre-trained model to infer fat droplet annotations on images they have at hand.
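
To make the “patch” idea concrete, here is a minimal sketch of a PatchGAN-style discriminator in PyTorch, in the spirit of Isola et al.’s pix2pix. It is illustrative only – not the exact architecture in the ramanakumars/patchGAN package – and the channel counts are assumptions.

```python
import torch
import torch.nn as nn

# A minimal PatchGAN-style discriminator sketch. Instead of one real/fake
# score per image, it outputs a grid of scores, one per overlapping patch.
class PatchDiscriminator(nn.Module):
    def __init__(self, in_channels: int = 2):  # assumed: image + mask, stacked
        super().__init__()
        def block(c_in, c_out, stride):
            return nn.Sequential(
                nn.Conv2d(c_in, c_out, kernel_size=4, stride=stride, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )
        self.net = nn.Sequential(
            block(in_channels, 64, 2),
            block(64, 128, 2),
            block(128, 256, 2),
            nn.Conv2d(256, 1, kernel_size=4, stride=1, padding=1),  # per-patch scores
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Each cell of the output grid judges whether the corresponding patch of
# (image, mask) looks like a real volunteer annotation.
d = PatchDiscriminator()
scores = d(torch.randn(1, 2, 256, 256))
print(scores.shape)  # torch.Size([1, 1, 31, 31])
```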
 

UNet, UNet3+, and nnUNet
In addition to the above-mentioned PatchGAN network, we have also trained three additional frameworks for the task of fat droplet identification – UNet, UNet3+, and nnUNet.

UNet is a popular deep-learning method used for semantic segmentation within images (e.g., identifying cars/traffic in an image) and has been shown to capture intricate image details and delineate objects precisely. Its architecture is U-shaped, with two parts: an encoder that learns to reduce the input image down to a compressed “fingerprint”, and a decoder that learns to predict the target image (e.g., the fat droplets in the image) from that compressed fingerprint. Fine-grained image information is shared between the encoder and decoder through so-called “skip connections”. UNet3+ is an upgraded framework built upon the foundational UNet that has been shown to capture both local and global features within medical images.
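
As a rough illustration of that encoder/decoder-with-skip-connection idea, here is a deliberately tiny UNet sketch in PyTorch; real UNets (and UNet3+/nnUNet) are much deeper and add considerably more machinery.

```python
import torch
import torch.nn as nn

# A deliberately tiny UNet sketch: one encoder stage, one decoder stage,
# and a single skip connection. Illustrative only.
class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
        self.down = nn.MaxPool2d(2)                        # compress toward the "fingerprint"
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)  # decode back up
        self.dec = nn.Sequential(
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),    # 32 = 16 decoded + 16 skipped
            nn.Conv2d(16, 1, 1),                           # per-pixel droplet logit
        )

    def forward(self, x):
        e = self.enc(x)               # fine-grained features
        m = self.mid(self.down(e))    # compressed representation
        u = self.up(m)
        u = torch.cat([u, e], dim=1)  # the skip connection
        return self.dec(u)

mask_logits = TinyUNet()(torch.randn(1, 1, 64, 64))
print(mask_logits.shape)  # torch.Size([1, 1, 64, 64])
```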

nnUNet is a user-friendly, efficient, state-of-the-art deep learning platform for training and fine-tuning models on diverse medical imaging tasks. It employs a UNet-based architecture and comes with built-in image pre-processing and post-processing techniques.

We trained these three networks on the data from the Fat Checker 1 project. Below, we show three example subject images along with their corresponding volunteer annotations and the different models’ predictions. Of the three models, nnUNet demonstrated superior performance.

Example subject images with volunteer annotations and the corresponding UNet, UNet3+, and nnUNet predictions.

Machine learning models for the segmentation of fat droplets in 3D data

Temporal Cubic PatchGAN (TCuP-GAN)
Motivated by the 3D volumetric nature of the Etch A Cell – Fat Checker project data, we also developed a new 3D deep learning method that learns to predict the 3D regions corresponding to the fat droplets directly. To develop this, we built on our PatchGAN framework and merged it with another computer vision concept called Long Short-Term Memory networks (LSTMs). In recent years, LSTMs have seen tremendous success in learning sequential data (e.g., words and their relationships within a sentence), and they have been used to learn relationships among sequences of images (e.g., the movement of a dog in a video).
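
To give a flavour of how sequence modelling meets convolution here, below is a bare-bones convolutional LSTM cell in PyTorch that treats the stack of 2D slices as a sequence, carrying hidden state from one slice to the next. This is a sketch of the general idea, not the published TCuP-GAN architecture.

```python
import torch
import torch.nn as nn

# A bare-bones convolutional LSTM cell, for illustration only.
class ConvLSTMCell(nn.Module):
    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        # One convolution computes all four LSTM gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, kernel_size=3, padding=1)

    def forward(self, x, h, c):
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c

# Run the cell along the slice axis of a (slices, channels, H, W) image cube.
cell = ConvLSTMCell(in_ch=1, hid_ch=8)
cube = torch.randn(16, 1, 64, 64)
h = torch.zeros(1, 8, 64, 64)
c = torch.zeros(1, 8, 64, 64)
for z in range(cube.shape[0]):
    h, c = cell(cube[z:z + 1], h, c)  # hidden state links neighbouring slices
print(h.shape)  # torch.Size([1, 8, 64, 64])
```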

We have successfully implemented and trained our new TCuP-GAN model on the Etch A Cell – Fat Checker data. Below is an example image cube – a collection of the 2D image slices that you viewed and annotated – along with the fat droplet annotation prediction of our 3D model. For visual guidance, the middle panel shows the image cube from the left panel with reduced transparency, highlighting the fat droplet structures that lie within.

An example image cube alongside the 3D fat droplet prediction of the TCuP-GAN model.

We found that our TCuP-GAN model successfully predicts the 3D fat droplet structures. In doing so, our model also learns realistic (and informative) signatures of lipid droplets within the image. Leveraging this, we are able to ask the model which 2D image slices contain the most confusion between lipid droplets and surrounding regions of the cells when it comes to annotating the fat droplets. Below, we show two example slices where the model was confident about the prediction (i.e., less confusion; top panel) and where the model was significantly confused (red regions in fourth column of the bottom panel). As such, we demonstrated that our model can help find those images in the data set that preferentially require information from the volunteers. This can serve as a potential efficiency step for future research teams to prioritize certain images that require attention from the volunteers.

Example slices where the model was confident in its prediction (top panel) versus significantly confused (red regions in the fourth column of the bottom panel).

Integrating Machine Learning Strategies with Citizen Science

The several thousand annotations collected from the Etch A Cell – Fat Checker projects, and their use in training various machine learning frameworks, have opened up possibilities to enhance the efficiency of annotation gathering and help accelerate the scientific outcomes of future projects.

While the models we described here all performed reasonably well in learning to predict the fat droplets, there were a substantial number of subjects where they were inaccurate or confused. Emerging “human-in-the-loop” strategies are becoming increasingly useful in these cases, with citizen scientists providing critical information on those subjects that require the most attention. Furthermore, an imperfect machine learning model can provide an initial guess that citizen scientists can use as a starting point and edit, which greatly reduces the amount of effort needed from each individual.
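
As a sketch of what such a human-in-the-loop selection step could look like, the snippet below ranks subjects by how “confused” a segmentation model is (per-pixel probabilities near 0.5 carry maximal binary entropy) and queues the most uncertain ones for volunteers first. The scoring function and names are illustrative, not the team’s actual pipeline.

```python
import numpy as np

def confusion_score(prob_map: np.ndarray) -> float:
    # Binary entropy peaks at p = 0.5, i.e. maximum model uncertainty.
    p = np.clip(prob_map, 1e-6, 1 - 1e-6)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    return float(entropy.mean())

# Fake model outputs for three subjects: confident, mixed, confused.
predictions = {
    "subject_a": np.full((64, 64), 0.95),
    "subject_b": np.random.uniform(0.0, 1.0, (64, 64)),
    "subject_c": np.full((64, 64), 0.5),
}

# Most confusing subjects go to volunteers first.
queue = sorted(predictions, key=lambda s: confusion_score(predictions[s]), reverse=True)
print(queue)  # ['subject_c', 'subject_b', 'subject_a']
```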

For our next steps, using the data from the Etch A Cell – Fat Checker projects, we are working towards building new infrastructure tools that will enable future projects to leverage both citizen science and machine learning towards solving critical research problems.

Enhancements to the freehand drawing tool

First, we have made upgrades to the existing freehand line drawing tool on Zooniverse. Specifically, users are now able to edit a drawn shape, undo or redo at any stage of the drawing process, automatically close open shapes, and delete any drawings. Below is a visualization of an example drawing, where the middle panel illustrates the editing state (indicated by the open dashed line) and the re-drawn/edited shape. The toolbox with undo, redo, auto-close, and delete functions is also shown.

The upgraded freehand drawing tool, showing an edit in progress and the undo, redo, auto-close, and delete functions.
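
For the curious, undo/redo in editors of this kind is classically built on two stacks. Here is a small Python sketch of that pattern; the actual Zooniverse tool is JavaScript, and this is not its implementation.

```python
# Sketch of the two-stack undo/redo pattern behind editing tools like the
# upgraded freehand line tool. The stroke representation is illustrative.
class DrawingHistory:
    def __init__(self):
        self.strokes: list[list[tuple[float, float]]] = []
        self._undone: list[list[tuple[float, float]]] = []

    def add_stroke(self, points):
        self.strokes.append(points)
        self._undone.clear()  # a new edit invalidates the redo stack

    def undo(self):
        if self.strokes:
            self._undone.append(self.strokes.pop())

    def redo(self):
        if self._undone:
            self.strokes.append(self._undone.pop())

    def auto_close(self):
        # Close the last open shape by repeating its first point at the end.
        if self.strokes and self.strokes[-1][0] != self.strokes[-1][-1]:
            self.strokes[-1] = self.strokes[-1] + [self.strokes[-1][0]]

history = DrawingHistory()
history.add_stroke([(0.0, 0.0), (4.0, 1.0), (3.0, 5.0)])
history.auto_close()
history.undo(); history.redo()  # round-trip leaves the closed stroke intact
print(history.strokes)
```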

A new “correct a machine” framework

We have built new infrastructure that enables researchers to upload outlines produced by machine learning (or any other automated method) in a format compatible with the freehand line tool, so that when a volunteer views a subject on Zooniverse, they are shown the pre-loaded machine outlines, which they can edit using the newly added functionality described above. Once volunteers provide their corrected/edited annotations, their responses are recorded and used by the research teams for their downstream analyses. The figure below shows an example of what a volunteer would see with the new correct-a-machine workflow. Note that the green outlines shown on top of the subject image are loaded directly from a machine model prediction, and volunteers are able to interact with them.

An example of the correct-a-machine workflow, with machine-predicted outlines (green) overlaid on the subject image.
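
To picture what “a compatible format” might involve, here is a guess at a payload pairing a subject with machine-proposed freehand outlines. Every key name here is hypothetical; the post does not document the actual format.

```python
import json

# Hypothetical "correct a machine" subject payload: machine-proposed
# freehand outlines stored as point lists alongside the subject. The key
# names are invented for illustration, not the real Zooniverse format.
subject_annotations = {
    "subject_id": 90210,  # hypothetical subject
    "machine_outlines": [
        {
            "tool": "freehandLine",
            "points": [[102.4, 88.1], [110.9, 91.3], [115.2, 99.8]],
            "closed": True,           # auto-closed shape
            "source": "patchGAN-v1",  # which model proposed it
        }
    ],
}

print(json.dumps(subject_annotations, indent=2))
```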

From Fat Droplets to Floating Forests

Finally, the volunteer annotations submitted to these two projects will have an impact far beyond fat droplet identification in biomedical imaging. Inspired by the flexibility of the PatchGAN model, we also carried out a “transfer learning” experiment, testing whether a model trained to identify fat droplets could be reused for the different task of identifying kelp beds in satellite imagery. For this, we used the data from another Zooniverse project called Floating Forests.

Through this work, we found that our PatchGAN framework readily predicts the kelp regions. More interestingly, we found that when the model pre-trained to detect fat droplets on Fat Checker project data was used as a starting point for predicting kelp regions, the resulting model achieved very good accuracy with only a small number of training images (~10–25% of the overall data set). Below is an example subject along with the volunteer-annotated kelp regions and the corresponding PatchGAN prediction. The bar chart illustrates how the Etch A Cell – Fat Checker annotations help reduce the number of annotations required to achieve good accuracy.

An example kelp subject with volunteer annotations and the corresponding PatchGAN prediction, alongside a bar chart of accuracy versus training set size.
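
The transfer-learning recipe itself is simple to sketch: load the droplet-trained weights, then fine-tune on a small kelp subset. The model, checkpoint file, and data loader below are placeholders, not the team’s actual code.

```python
import torch
import torch.nn as nn

# Stand-in segmentation network; the real experiment used PatchGAN.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
# Start from weights trained on fat droplets (hypothetical checkpoint file).
model.load_state_dict(torch.load("fat_droplet_pretrained.pt"))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # small LR for fine-tuning
loss_fn = nn.BCEWithLogitsLoss()

def fine_tune(kelp_subset):
    # kelp_subset would hold ~10-25% of the kelp training data, yielding
    # (image, mask) batches of shape (N, 3, H, W) and (N, 1, H, W).
    for image, mask in kelp_subset:
        optimizer.zero_grad()
        loss = loss_fn(model(image), mask)
        loss.backward()
        optimizer.step()
```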

In summary: THANK YOU!

In summary, with the help of your participation in the Etch A Cell – Fat Checker and Etch A Cell – Fat Checker Round 2 projects, we have made great strides in processing the data and training a variety of successful machine learning frameworks. We have also made a lot of progress in updating the annotation tools and building new infrastructure to make the most of the partnership between humans and machines for science. We look forward to launching new projects that use this new infrastructure!

This project is part of the Etch A Cell organisation

‘Etch A Cell – Fat Checker’ and ‘Etch A Cell – Fat Checker round 2’ are part of The Etchiverse: a collection of multiple projects to explore different aspects of cell biology. If you’d like to get involved in some of our other projects and learn more about The Etchiverse, you can find the other Etch A Cell projects on our organisation page here.

Thanks for contributing to Etch A Cell – Fat Checker!
