‘Etch A Cell – Fat Checker’ – Project Update!

We are excited to share with you results from two Zooniverse projects, ‘Etch A Cell – Fat Checker’ and ‘Etch A Cell – Fat Checker Round 2’. Over the course of these two projects, more than 2,000 Zooniverse volunteers contributed over 75,000 annotations!

One of the core aims of these two projects was to enable the design and implementation of machine learning approaches that could automate the annotation of fat droplets in novel data sets, to provide a starting point for other research teams attempting to perform similar tasks.

With this in mind, we have developed multiple machine learning algorithms that can be applied to both 2D and 3D fat droplet data. We describe these models in the blog post below.

- PatchGAN (2D data). Publications to date: https://ceur-ws.org/Vol-3318/short15.pdf. Other outputs: https://github.com/ramanakumars/patchGAN and https://pypi.org/project/patchGAN/
- TCuP-GAN (3D data). Publications to date: “What to show the volunteers: Selecting Poorly Generalized Data using the TCuPGAN”; Sankar et al., accepted in ECML-PKDD Workshop proceedings. Other outputs: https://github.com/ramanakumars/TCuPGAN/
- UNet/UNet3+/nnUNet (2D data). Other outputs: https://huggingface.co/spaces/umn-msi/fatchecker

An overview of the machine learning algorithms produced from the Etch A Cell – Fat Checker projects (described in this post).

Machine learning models for the segmentation of fat droplets in 2D data

Patch Generative Adversarial Network (PatchGAN)
Generative Adversarial Networks (GANs) were introduced in 2014 for learning realistic image-level features and have been used for various computer vision applications. We implemented a pixel-to-pixel translator model called PatchGAN, which learns to convert (or “translate”) an input image into another image form. For example, such a framework can learn to convert a gray-scale image to a colored version.

The “Patch” in PatchGAN signifies its capability to learn image features in different sub-portions of an image (rather than just across an entire image as a whole). In the context of the Etch A Cell – Fat Checker project data, predicting the annotation regions of fat droplets is analogous to PatchGAN’s image-to-image translation task.
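The patch-wise idea can be illustrated with a small hypothetical sketch (not the project’s actual code): an image is tiled into sub-regions so that each patch can be judged on its own, rather than the image as a whole.

```python
import numpy as np

def split_into_patches(image, patch_size):
    """Split a 2D image into non-overlapping square patches.

    A PatchGAN-style model judges realism per patch rather than per
    whole image; this sketch only illustrates the tiling step.
    """
    h, w = image.shape
    ph = h // patch_size
    pw = w // patch_size
    # Trim any remainder so the image tiles evenly, then reshape so
    # axis 0 indexes patches and the last two axes index pixels.
    trimmed = image[:ph * patch_size, :pw * patch_size]
    patches = (trimmed
               .reshape(ph, patch_size, pw, patch_size)
               .swapaxes(1, 2)
               .reshape(ph * pw, patch_size, patch_size))
    return patches

# A 64x64 "image" split into 16x16 patches -> 16 patches.
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
patches = split_into_patches(img, 16)
print(patches.shape)  # (16, 16, 16)
```

The first patch is exactly the top-left 16×16 corner of the image; a patch-wise discriminator would score each of the 16 patches independently.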

We trained the PatchGAN model framework on the ~50K annotations generated by volunteers in Etch A Cell – Fat Checker. Below we show two example images from Etch A Cell – Fat Checker (left column), the aggregated annotations provided by the volunteers (middle column), and the corresponding annotations predicted by the 2D machine learning model, PatchGAN (right column).

We found that PatchGAN typically performed well at learning to predict fat-droplet annotations from subject images. However, we noticed that the model highlighted some regions potentially missed by the volunteers, as well as instances where it underestimated some regions that the volunteers had annotated (usually small- to intermediate-sized droplets).

We have made this work, our generalized PatchGAN framework, available via an open-source repository at https://github.com/ramanakumars/patchGAN and https://pypi.org/project/patchGAN/. This allows anyone to easily train the model on a set of images and corresponding masks, or to use the pre-trained model to infer fat droplet annotations on images they have at hand.
 

UNet, UNet3+, and nnUNet
In addition to the above-mentioned PatchGAN network, we have also trained three additional frameworks for the task of fat droplet identification – UNet, UNet3+, and nnUNet.

UNet is a popular deep-learning method used for semantic segmentation within images (e.g., identifying cars/traffic in an image) and has been shown to capture intricate image details and deliver precise object delineation. Its architecture is U-shaped with two parts – an encoder that learns to reduce the input image down to a compressed “fingerprint” and a decoder that learns to predict the target image (e.g., fat droplets in the image) based on that compressed fingerprint. Fine-grained image information is shared between the encoder and decoder using so-called “skip connections”. UNet3+ is an upgraded framework built upon the foundational UNet that has been shown to capture both local and global features within medical images.
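The U-shape can be sketched structurally (a toy illustration with no learned weights – a real UNet uses convolutions and channel concatenation, while here simple pooling, nearest-neighbour upsampling, and addition stand in):

```python
import numpy as np

def downsample(x):
    """2x2 average pooling (encoder step): halves spatial size."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(x):
    """Nearest-neighbour upsampling (decoder step): doubles size."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_unet_pass(image):
    """Structural sketch of a UNet forward pass: encode down two
    levels to a compressed "fingerprint", then decode back up,
    re-injecting the matching encoder output at each level via a
    skip connection."""
    e1 = image                        # encoder level 1 (full res)
    e2 = downsample(e1)               # encoder level 2 (half res)
    bottleneck = downsample(e2)       # compressed fingerprint
    d2 = upsample(bottleneck) + e2    # skip connection from e2
    d1 = upsample(d2) + e1            # skip connection from e1
    return d1

img = np.random.rand(32, 32)
out = toy_unet_pass(img)
print(out.shape)  # (32, 32)
```

The skip connections are what lets fine-grained detail from the encoder reach the decoder, which would otherwise only see the heavily compressed bottleneck.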

nnUNet is a user-friendly, efficient, and state-of-the-art deep learning platform for training and fine-tuning models for diverse medical imaging tasks. It employs a UNet-based architecture and comes with built-in image pre-processing and post-processing techniques.

We trained these three networks on the data from the first Fat Checker project. Below, we show three example subject images along with their corresponding volunteer annotations and the different model predictions. Among the three models, nnUNet demonstrated superior performance.


Machine learning models for the segmentation of fat droplets in 3D data

Temporal Cubic PatchGAN (TCuP-GAN)
Motivated by the 3D volumetric nature of the Etch A Cell – Fat Checker project data, we also developed a new 3D deep learning method that learns to predict the 3D regions corresponding to the fat droplets directly. To develop this, we built on our PatchGAN framework and merged it with another computer vision concept called “Long Short-Term Memory (LSTM) networks”. Briefly, in recent years, LSTMs have seen tremendous success in learning sequential data (e.g., words and their relationships within a sentence) and have been used to learn relationships among sequences of images (e.g., the movement of a dog in a video).
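The sequential memory that makes LSTMs suitable for stacks of image slices can be shown with a single LSTM cell step (a generic textbook sketch with random weights, not the TCuP-GAN implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One step of a standard LSTM cell: gates decide what to add to
    and forget from the memory cell `c`, which is what lets the
    network carry information across a sequence of inputs (e.g.
    consecutive image slices)."""
    n = h_prev.size
    z = W @ x + U @ h_prev + b      # stacked gate pre-activations
    i = sigmoid(z[:n])              # input gate
    f = sigmoid(z[n:2 * n])         # forget gate
    o = sigmoid(z[2 * n:3 * n])     # output gate
    g = np.tanh(z[3 * n:])          # candidate cell state
    c = f * c_prev + i * g          # updated memory cell
    h = o * np.tanh(c)              # updated hidden state
    return h, c

rng = np.random.default_rng(0)
n_in, n_hid = 8, 4
W = rng.normal(size=(4 * n_hid, n_in))
U = rng.normal(size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = np.zeros(n_hid)
c = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # a "sequence" of 5 inputs
    h, c = lstm_step(x, h, c, W, U, b)
print(h.shape)  # (4,)
```

In a convolutional LSTM (the kind merged with PatchGAN here), the same gating logic applies, but the matrix products are replaced by convolutions over feature maps.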

We have successfully implemented and trained our new TCuP-GAN model on the Etch A Cell – Fat Checker data. Below is an example image cube – containing a collection of 2D image slices that you viewed and annotated – along with the fat droplet annotation prediction of our 3D model. For visual guidance, in the middle panel we show the image cube from the left panel with increased transparency, highlighting the fat droplet structures that lie within.


We found that our TCuP-GAN model successfully predicts the 3D fat droplet structures. In doing so, our model also learns realistic (and informative) signatures of lipid droplets within the image. Leveraging this, we are able to ask the model which 2D image slices contain the most confusion between lipid droplets and surrounding regions of the cells when it comes to annotating the fat droplets. Below, we show two example slices where the model was confident about the prediction (i.e., less confusion; top panel) and where the model was significantly confused (red regions in fourth column of the bottom panel). As such, we demonstrated that our model can help find those images in the data set that preferentially require information from the volunteers. This can serve as a potential efficiency step for future research teams to prioritize certain images that require attention from the volunteers.
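One simple way to quantify this kind of per-slice confusion – a hypothetical stand-in for the selection criterion described above, assuming the model outputs per-pixel probabilities – is to rank slices by predictive entropy:

```python
import numpy as np

def slice_confusion(prob_maps):
    """Rank 2D slices by mean per-pixel predictive entropy.

    `prob_maps` holds per-pixel fat-droplet probabilities for each
    slice; pixels near 0.5 are the most ambiguous, so slices with
    high mean entropy are the best candidates to show volunteers.
    """
    p = np.clip(prob_maps, 1e-7, 1 - 1e-7)   # avoid log(0)
    entropy = -(p * np.log(p) + (1 - p) * np.log(1 - p))
    per_slice = entropy.mean(axis=(1, 2))
    return np.argsort(per_slice)[::-1]        # most confused first

# Three toy slices: confident (near 1), ambiguous (near 0.5), mixed.
confident = np.full((16, 16), 0.99)
ambiguous = np.full((16, 16), 0.5)
mixed = np.full((16, 16), 0.55)
mixed[:8] = 0.95
order = slice_confusion(np.stack([confident, ambiguous, mixed]))
print(order)  # [1 2 0]
```

The ambiguous slice ranks first and the confident one last, which is exactly the ordering a team would want when deciding which images to send to volunteers.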


Integrating Machine Learning Strategies with Citizen Science

The many thousands of annotations collected from the Etch A Cell – Fat Checker projects, and their use in training various machine learning frameworks, have opened up possibilities that can enhance the efficiency of annotation gathering and help accelerate scientific outcomes for future projects.

While the models we described here all performed reasonably well in learning to predict the fat droplets, there were a substantial number of subjects where they were inaccurate or confused. Emerging “human-in-the-loop” strategies are becoming increasingly useful in these cases – citizen scientists can help by providing critical information on those subjects that require the most attention. Furthermore, an imperfect machine learning model can provide an initial guess, which citizen scientists can use as a starting point and edit, greatly reducing the amount of effort needed by individual volunteers.

For our next steps, using the data from the Etch A Cell – Fat Checker projects, we are working towards building new infrastructure tools that will enable future projects to leverage both citizen science and machine learning towards solving critical research problems.

Enhancements to the freehand drawing tool

First, we have made upgrades to the existing freehand line drawing tool on Zooniverse. Specifically, users will now be able to edit a drawn shape, undo or redo during any stage of their drawing process, automatically close open shapes, and delete any drawings. Below is a visualization of an example drawing where the middle panel illustrates the editing state (indicated by the open dashed line) and re-drawn/edited shape. The tool box with undo, redo, auto-close, and delete functions is also shown.
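The undo/redo behaviour of such a tool follows a classic two-stack pattern. Here is a generic sketch of that data structure (not the actual Zooniverse implementation):

```python
class DrawingHistory:
    """Minimal undo/redo history for an annotation tool.

    Each edit is pushed onto an undo stack; undoing moves it to a
    redo stack, and any new edit clears the redo stack (you cannot
    redo into a timeline that no longer exists)."""

    def __init__(self):
        self._undo = []
        self._redo = []

    def do(self, edit):
        self._undo.append(edit)
        self._redo.clear()        # a new edit invalidates redo history

    def undo(self):
        if self._undo:
            self._redo.append(self._undo.pop())

    def redo(self):
        if self._redo:
            self._undo.append(self._redo.pop())

    @property
    def state(self):
        return list(self._undo)   # edits currently applied

h = DrawingHistory()
h.do("draw outline")
h.do("close shape")
h.undo()
print(h.state)  # ['draw outline']
h.redo()
print(h.state)  # ['draw outline', 'close shape']
```

The same pattern extends naturally to shape editing: each vertex move or auto-close is just another `edit` pushed onto the history.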


A new “correct a machine” framework

We have built new infrastructure that enables researchers to upload machine-generated outlines (from machine learning or any other automated method) in a format compatible with the freehand line tool. When a volunteer views a subject on Zooniverse, these pre-loaded machine outlines are shown, and the volunteer can edit them using the newly added functionality described above. Once volunteers submit their corrected/edited annotations, their responses are recorded and used by the research teams for their downstream analyses. The figure below shows an example of what a volunteer would see in the new correct-a-machine workflow. Note that the green outlines shown on top of the subject image are loaded directly from a machine model prediction, and volunteers are able to interact with them.


From Fat Droplets to Floating Forests

Finally, the volunteer annotations submitted to these two projects will have an impact far beyond fat droplet identification in biomedical imaging. Inspired by the flexibility of the PatchGAN model, we also carried out a “transfer learning” experiment, where we tested whether a model trained to identify fat droplets can be used for the different task of identifying kelp beds in satellite imaging. For this, we used the data from another Zooniverse project called Floating Forests.

Through this work, we found that our PatchGAN framework readily works to predict the kelp regions. More interestingly, we found that when our model pre-trained to detect fat droplets within the Fat Checker project data was used as a starting point for predicting kelp regions, the resultant model achieved high accuracy with only a small number of training images (~10–25% of the overall data set size). Below is an example subject along with the volunteer-annotated kelp regions and the corresponding PatchGAN prediction. The bar chart illustrates how the Etch A Cell – Fat Checker annotations can help reduce the number of annotations required to achieve good accuracy.
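The mechanics of transfer learning boil down to initialization: instead of starting training from scratch, the new model starts from weights learned on a related task. A toy sketch with a linear model and plain gradient descent (purely illustrative, not the PatchGAN training code):

```python
import numpy as np

def train(X, y, w_init, steps=5, lr=0.1):
    """Gradient descent on mean-squared error for a linear model.
    "Transfer learning" here just means choosing w_init from a model
    pre-trained on a related task instead of starting from zeros."""
    w = w_init.copy()
    n = len(y)
    for _ in range(steps):
        grad = 2.0 / n * X.T @ (X @ w - y)
        w -= lr * grad
    return w

rng = np.random.default_rng(1)
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(20, 3))   # only 20 "annotated" examples
y = X @ w_true

w_scratch = train(X, y, np.zeros(3))       # random/zero initialization
w_warm = train(X, y, w_true + 0.1)         # nearby "pretrained" weights

def dist(w):
    return float(np.linalg.norm(w - w_true))
print(dist(w_scratch), dist(w_warm))
```

After the same small number of update steps on the same small data set, the warm-started model sits much closer to the true solution – the same effect, writ small, as the ~10–25% data requirement reported above.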


In summary: THANK YOU!

In summary, with the help of your participation in the Etch A Cell – Fat Checker and Etch A Cell – Fat Checker Round 2 projects, we have made great strides in processing the data and training various successful machine learning frameworks. We have also made a lot of progress in updating the annotation tools and building new infrastructure towards creating the best possible partnership between humans and machines for science. We are looking forward to launching new projects that use this new infrastructure!

This project is part of the Etch A Cell organisation

‘Etch A Cell – Fat Checker’ and ‘Etch A Cell – Fat Checker round 2’ are part of The Etchiverse: a collection of multiple projects to explore different aspects of cell biology. If you’d like to get involved in some of our other projects and learn more about The Etchiverse, you can find the other Etch A Cell projects on our organisation page here.

Thanks for contributing to Etch A Cell – Fat Checker!


‘Etch A Cell – Demolition Squad’ – First Results!

Introducing Etch a Cell – Demolition Squad

Earlier this year, the team behind the Etch A Cell series of citizen science projects launched their latest project; ‘Etch A Cell – Demolition Squad’. Through this project, researchers based at the Francis Crick Institute (London, UK) have teamed up with Zooniverse volunteers to study ‘lysosomes’.

What are lysosomes?

Lysosomes are balloon-shaped organelles that contain a range of enzymes that can break down biological substances including proteins, sugars, and fats. These enzymes allow lysosomes to function as the cell’s digestive system; lysosomes digest and degrade substances from both inside and outside of cells. This versatile functionality allows lysosomes to contribute to a range of biological tasks, including the breakdown of aged cellular components, the destruction of viruses, and the support of digestion during times of hunger. Lysosomes even play a role in the life cycle of cells by facilitating the removal of cells that have reached their expiry date. Etch A Cell – Demolition Squad has the aim of improving our ability to study lysosomes, to enable us to further understand how they contribute to the world of cellular dynamics.

An image from the first data set analysed in Etch A Cell – Demolition Squad.

What’s the aim of the project?

The team behind Etch A Cell work with a variety of research teams to study different aspects of biology using cutting-edge Electron Microscopes. With their remarkable magnification and resolution, these microscopes allow researchers to capture intricate images of tissues, cells, and molecules. These images can be used to provide us with a richer understanding of biology, which can help us understand the biological changes associated with health and disease. Recent developments in electron microscope technology have enabled automatic image collection, leading to a deluge of data. This influx of information is helping propel research forward, however, it has caused a bottleneck in data analysis pipelines – which is why Etch A Cell needs the help of Zooniverse volunteers!

How are Zooniverse volunteers helping?

To study the images generated by our microscopes, we typically analyse them by ‘segmenting’ the features we’re interested in, which means drawing around the bits of the cell that we want to examine. In Etch A Cell projects, Zooniverse volunteers help with this segmentation task. In Etch A Cell – Demolition Squad, volunteers were asked to help with segmenting lysosomes. These structures can be very difficult to spot, so an imaging technique was used that selectively labels the lysosomes with a marker, making them easier to identify. You can see an example image below – the marker is shown in the images as pink, so volunteers were asked to draw around the grey blobs labelled with pink. In the image below you can also see some green lines, which show how one expert segmenter drew around the lysosomes in this image.

In Etch A Cell – Demolition Squad, volunteers were asked to draw around lysosomes. To make the lysosomes easier to spot, they were labelled with a marker, which is shown in the image above as pink. This image also shows lysosome segmentations done by an expert in green.

What are the results so far?

Since Etch A Cell – Demolition Squad was launched in May 2023, this project has received more than 12,000 classifications from hundreds of Zooniverse volunteers! This has allowed all 792 images in the first project data set to be retired. We’ve now started taking a look at the data, and we’re really impressed with the segmentations submitted! Well done to everyone who has contributed!

Here’s a sneak peek at some of the data you’ve produced through your collective efforts:

Briefly, you can see from these images that Zooniverse volunteers have done a fantastic job at this challenging task: the collective volunteer segmentations (shown in green in panel F) correspond really well to the location of lysosomes in the image (shown in pink in image B). With thanks to Francis Crick Institute internship student Fatihat Ajayi for generating these images.

The images above show the raw data (A) and this data overlaid with the pink marker that highlights the lysosomes (B). Image B is the one Zooniverse volunteers would have seen in the project interface. All the volunteer segmentations submitted for this image are shown overlaid in image C. Image D also shows the volunteer segmentations together, but in this image the line drawings have been ‘filled in’ and stacked on top of each other to make it easier to see where the volunteers agree there is a lysosome (the more volunteers who annotated a region, the brighter white it shows). Image E shows where most of the volunteers agreed there were lysosomes in this image, and image F shows how this corresponds to the raw data image (A). From these images you can see how collectively, Zooniverse volunteers did a great job at drawing around the lysosomes.
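The stacking-and-agreement step described above can be sketched in a few lines (a generic majority-vote sketch, not the team’s actual aggregation pipeline):

```python
import numpy as np

def aggregate_masks(masks, min_fraction=0.5):
    """Combine volunteers' filled-in segmentations by majority vote.

    `masks` is a stack of binary arrays (one per volunteer). Averaging
    them gives an agreement map (brighter = more volunteers marked the
    pixel), and thresholding keeps the regions where at least
    `min_fraction` of volunteers agree there is a lysosome.
    """
    agreement = masks.mean(axis=0)           # fraction of volunteers per pixel
    consensus = agreement >= min_fraction    # keep majority-agreed pixels
    return agreement, consensus

# Three toy volunteer masks on a 4x4 image, overlapping imperfectly.
m1 = np.zeros((4, 4), dtype=int)
m1[1:3, 1:3] = 1
m2 = np.zeros((4, 4), dtype=int)
m2[1:3, 1:4] = 1
m3 = np.zeros((4, 4), dtype=int)
m3[0:3, 1:3] = 1
agreement, consensus = aggregate_masks(np.stack([m1, m2, m3]))
print(consensus.astype(int))
```

Only the central 2×2 block, which all three volunteers marked, survives the majority threshold; the pixels marked by a single volunteer are discarded.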

What’s next for Etch A Cell – Demolition Squad?

In the near future, we will be uploading new batches of data that we will need your help to analyse. These new data sets may look slightly different, but the task of segmenting the lysosomes will remain the same.

This project is part of the Etch A Cell organisation

‘Etch A Cell – Demolition Squad’ is one of multiple projects produced by the Etch A Cell team and their collaborators to explore different aspects of cell biology. If you’d like to get involved in some of our other projects, you can find the other Etch A Cell projects on our organisation page here.

Thanks for contributing to Etch A Cell!

Who’s who in the Zoo – Ramana Sankar

Name: Ramana Sankar

Location: University of California, Berkeley

Tell us about your role within the team:

I’ve been with the Zooniverse team for two years now as a postdoc working with Lucy Fortson at the University of Minnesota. My main role was to help with building human-machine interfaces for project teams on Zooniverse (particularly for speeding up project completion rates and improving avenues for serendipitous discovery), and also to provide data science assistance to projects.

What did you do in your life before the Zooniverse?

I did my Ph.D. at Florida Institute of Technology. My main focus was on studying the formation of thunderstorms on Jupiter. I was (and still am) interested in understanding more about the dynamics of the jovian atmosphere and how the fluid dynamics and chemical processes shape the cloud structures we see when we look at Jupiter. As part of this work, I also tried out a few deep learning approaches to reduce data from the JunoCam instrument and gained some machine learning and data science experience, which drew me to apply for the postdoctoral scholar position with the Zooniverse team at UMN.

What does your typical working day involve?

Most of my day is spent with code. I break my tasks into astrophysical research (focusing on Jupiter’s atmosphere), machine/deep learning (developing new models for the project teams that I am assisting) and working on aggregation utilities for project teams.

How would you describe the Zooniverse in one sentence?

A platform that reduces the barrier of entry for scientific pursuits

Tell us about the first Zooniverse project you were involved with

As part of my introduction to the team, a fellow postdoc at UMN and I were asked to create a test Zooniverse project to get the hang of using the Zooniverse project builder. We built the “Chi Square Kitties” project, where we asked people to rate the fits of cats inside various containers. We scraped the web for cat images (there seemed to be a limitless supply) and asked a simple question of whether the cat inside the container was an underfit, overfit or a purr-fect fit!

Of all the discoveries made possible by the Zooniverse, which for you has been the most notable? (and why?)

I love the fact that the Gravity Spy team found a glitch class called “Air compressor” which was noise from a nearby A/C unit. This is the kind of discovery that is incredibly difficult (if not impossible) without having human eyes on the data. This is the kind of discovery which is fueled by Zooniverse!

What’s been your most memorable Zooniverse experience?

On the Jovian Vortex Hunter project, the volunteers came up with very creative names for some of the types of clouds that were observed. One of my favourites is “red-compact-nursery” for subjects which contain very small red cyclones that seem to be forming in between the folded filamentary regions. These creative names (in my view) are actually much more useful than very descriptive scientific jargon, since they create very interpretable labels based on what the feature looks like, rather than the underlying physics.

What are your top three citizen science projects? 

One of the best things about citizen science is that it caters to a very wide audience. Some projects are highly accessible and can be done by basically anyone, while others require very technical expertise. There are a ton of amazing projects on Zooniverse which span this experience range, but I will pick some non-Zooniverse projects to highlight some of the reasons why I really love the citsci methodology:

1. JunoCam: the Juno spacecraft almost did not have a camera, since one did not fit within the payload mass and budget constraints. Fortunately, a last-minute decision was made to include a simple 4-channel camera (previously flight-tested on MSL) without a dedicated science team for the instrument. Instead, JunoCam relies on thousands of artists and hobbyists to download and process the raw footage. JunoCam is a story to be told for the way it engages people, particularly artists, in scientific communication, and also for how there are now several papers written on using the data processed by citizen scientists for new science!

2. PVOL (Planetary Virtual Observatory and Laboratory): This is a website where amateur observers post their observations of giant planets to a centralized server, which researchers can use to provide contextual information for any related studies. PVOL has helped fuel a lot of research into giant planet atmospheres, especially in the context of long-cadence studies (longevity of features, periodicity of instabilities) and also on very short timescales (e.g., meteor impacts). This is a very useful way to use a very skilled hobby (astrophotography) for an extremely useful cause.

3. RadioJOVE: A project for interested parties to build and operate a simple multi-wavelength radio telescope. It was started as a way to observe Jupiter (and now other sources) at radio wavelengths. What is amazing about this project is that it teaches a very niche skill (putting together and using a radio telescope) to non-scientists. Through the citsci method, this project has made great strides in science communication and in inspiring kids to pursue STEM degrees.

What advice would you give to a researcher considering creating a Zooniverse project?

Volunteers are not simply labelers. They do not replace a complicated algorithm, but instead provide value far beyond a simple classification. The greatest strength of Zooniverse is the fact that volunteers want to be engaged deeply in science, and that needs to be baked into the ethos of any Zooniverse project.

How can someone who’s never contributed to a citizen science project get started?

Find your passion in science and search for local resources. For example, if you’re interested in ecology, there are probably conservation groups that could use your help with taking photos of local fauna and/or flora. If you’re an astronomy enthusiast, you can talk to your local planetarium, who can provide you with resources and ways to contribute. Finally, you can see if there are public seminars at your local university in departments related to your interests. These can help answer many of the questions you might have about the field.

Where do you hope citizen science and the Zooniverse will be in 10 years time?

I hope that the gap between professional science and citizen science narrows further than it currently is. I hope that more research teams can use citizen science not only as a method to analyze and process data, but also as a way to drive science engagement and reduce the barrier of entry to research. I also hope that future spacecraft missions follow the path laid by JunoCam and have a strong citsci component.

Is there anything in the Zooniverse pipeline that you’re particularly excited about?

I’m really interested in how machine learning will play a role in Zooniverse. As the world heads deeper into the use of AI and automation, I am interested in seeing how Zooniverse will adopt these tools while maintaining efficiency, and more importantly, accuracy. Several projects on Zooniverse already use a variety of AI-based tools, so I am looking forward to seeing many more projects make use of these resources.

When not at work, where are we most likely to find you?

Either at home playing video games (catching up on my huge backlog which built up over grad school) or out hiking. I know these are polar opposites, but if I can get over the barrier of pulling away from my PC, I’m always ready to do a 5 mile hike!

Do you have any party tricks or hidden talents?

I learnt to sing Carnatic music, which is a classical South Indian music tradition, and I like to think I can hold a tune (although others can be the judge of that)!

A note from Chris Lintott

For more than a decade, I’ve been ending talks by looking forward to the rich data that will soon begin flowing from the Vera Rubin Observatory. The observatory’s magnificent new telescope, nearing its long-awaited first light on a mountaintop in Chile, will conduct a ten year survey of the sky, producing 30 TB of images and maybe ten million alerts a night, covering everything from asteroids to distant galaxies. 

A starry sky illuminates a hillside with a long building on the top. At one end is a square-ish telescope dome, which houses the telescope of the Vera Rubin Observatory.
The Vera Rubin Observatory Building, ready for action.

The Observatory has – from really early on – seen citizen science, and in particular working with the Zooniverse, as an important part of how they intend to make the most of this firehose of data. We’ve been working hard on things like making data easy to transfer across, designing tools to help scientists build good projects, and more.

All of which is to say I’m very excited – as a Zooniverse person, and as an astronomer – about the opportunities ahead. To allow time for me to concentrate on making the most of it – and in particular, in learning how to find odd stuff alongside Zooniverse volunteers – I’ve decided it’s time for me to step down as Zooniverse Principal Investigator, a position I’ve held since…well, since before the Zooniverse existed. 

Chris Lintott, wearing a collared shirt, jumper and jeans, poses on a giant tardigrade throne.
Giving up the Principal Investigator’s Official Throne will be a hardship, but it’s time.

After 15 or so years, it’s well past time for other people to lead. Laura Trouille from the Adler Planetarium has been my co-PI since 2015, and will be taking over as Principal Investigator. Laura is brilliant, and has been the driving force behind much of what we’ve been doing for a long time. I’m confident in her, and along with her fabulous team, I’m excited to see where she leads this marvellous platform and project. With a growing partnership with NASA, new ideas in how to use Zooniverse in education and reaching new communities, and – of course – new projects launching each week, I genuinely think that we’re in a strong place to keep allowing everyone to participate in science. 

In terms of stepping down from this role, I’ll be more Joe Root than Stuart Broad (Note to Americans: This is a cricket reference) – there is a long-standing ‘joke’ that no-one actually leaves the Zooniverse – sticking around the team to serve as Senior Scientist, giving advice where needed and continuing to lead specific projects and efforts. The team in Oxford will work more closely than ever with the rest of the collaboration, and our plans for Rubin and the Zooniverse will be unaffected.

It is, however, strange to be contemplating a change after all this time. I’m immensely proud of the fact that once we stumbled onto the success of the original Galaxy Zoo (a story told in a very reasonably priced book…), we had the imagination not just to build another Galaxy Zoo (which we did – classifications still needed!) but a platform that covers such a wide variety of fields, and through which volunteers have contributed three-quarters of a BILLION classifications. Thanks to the support of all manner of organizations, I’m also very proud that we haven’t had to charge researchers for access to the Zooniverse; the only criterion that has ever mattered is whether a project will contribute to science and be welcomed by our volunteers.

I’m so grateful to those who have lent their effort, energy and expertise to making a Zooniverse which is grander than I would ever have dared on my own. In particular: Arfon Smith, who created the original Zooniverse with me and took over from me as Director of Citizen Science at Adler, and Lucy Fortson, who has been the best grant-writer and collaborator I could wish for as well as putting up with my stresses more than anyone should have to. I’ll stop here, else I’ll end up listing all of the developers, researchers and volunteers who have made this unique project what it is.

Onwards! Let us make, and set the weather fair.

See you in the Zooniverse.

Chris Lintott

September 2023

Adventures of a Junior Designer

Guest post written by Keanu Glover, Junior Designer with the Zooniverse team at the Adler Planetarium in Chicago from June-August 2023. Prior to joining the Zooniverse, Keanu was an i.c.stars intern, an intensive 6-month program supporting pathways to economic mobility through IT and software engineering workforce skills training, job experience, and leadership development. After his role within the Zooniverse, Keanu will join the United Airlines Apprenticeship Program.

Pivoting into a new field of work brings forth so many different thoughts, such as “Can I do the task?”, “How will I fit in?”, “Do I belong?”. For me, transitioning from basketball and athletics to tech and design had me nervous at first, but once I was immersed in the tech space, I realized the skills I developed playing sports transferred seamlessly. When I got the opportunity to work for the Zooniverse at the Adler Planetarium as a Junior Designer, I realized I was given the opportunity to work with great people and contribute to something great.

Before this opportunity I had never been to the Adler or even thought about what it was. Most of the time these types of opportunities are intimidating. Lucky for me, it didn’t have that effect on me. I had prepared myself for whatever opportunities came my way!

I was really excited, having the opportunity to learn something new, be a part of something bigger than myself, contributing to a real project that I can add to my portfolio, and also having the opportunity to build new relationships with experienced professionals.

During my interview with Laura Trouille (Zooniverse Co-PI and Adler VP of Science Engagement) as part of asking about what interested me in applying for the position, she asked, “Do you have past connections with the Adler or astronomy?” This reminded me of a story about moving countless times when I was young, driving across state lines. When the sun would set and the stars would show themselves, my mother used to listen to Janet Jackson and I used to look out the backseat window at the stars. While I looked out I would wonder if our new destination would be our last.

My first week working with the Zooniverse was different, for sure! I had just come out of the i.c.stars program, where I was working 12-hour days for 4 months, building a data collection application with a small group of 5 people. All hands on deck, as they say.

When I first met Sean Miller, the Zooniverse designer and my mentor for the summer, I learned that work doesn’t have to be as intense. For example, on my first day we sat down and he told me we would have a meeting in the morning and another in the afternoon. I asked him, “What time, like 8am?” He looked at me and said, “Whoa, no! We are going to meet at 10am.” I was caught by surprise, but so happy he said that. I quickly understood that every company has a different culture. (The Zooniverse strives to schedule meetings between 10am and 4pm to accommodate different work schedules, commute times, and work-life balance.)

For me, I felt an energy working with the Zooniverse team. Everyone knows what they are supposed to do, and how every element of their role is interconnected. Coming in as a summer intern, I didn’t want to disappoint my job placement program, i.c.stars; my family; or Sean, who hand-picked me to join his team.

Sean gave me all the tools and time I needed to get the most out of my time here at the Zooniverse. He never made me feel less than for not knowing something; he was always patient, understanding, and an outstanding communicator. I think it’s safe to say that he enjoyed my company over the three-month period. Sean is an amazing teacher with so many resources that helped prepare me for the project and the type of work I wanted to get into after the Zooniverse. The rest of the Zooniverse team were beyond accepting of me! They were always willing to explain and help me with things about the Zooniverse site and the Adler as a whole, and when it wasn’t about work they always invited me to the beach for a walk or to have lunch.

I had never participated in Zooniverse before I joined the team. When I was tasked to research the platform my first thought was “this is really cool”. I always wondered if there was a platform that allowed people to feel like a true researcher. The platform reminded me of when I would have days off from school and I would watch the Travel Channel, History Channel, and Animal Planet. When I was younger I would think it was the coolest thing to be a part of those experiences.

I picked up the concept of the site pretty quickly. I made over 80 different classifications, commented on other projects, and tried to build my own project using zooniverse.org/lab just so I could put myself in the volunteers’ shoes. The site was easy to use and was not complicated at all to navigate. The more I used it, the more I started to think about features I could add and how I could help the Zooniverse platform evolve and grow. For example, Sean and I realized that on the ‘Recents’ page (where you can see your recent classifications for any given Zooniverse project) the participants’ recent classifications maxed out at 20. Now, if I was a participant that had classified a lot, I might want to see more than 20 recent classifications. So one of the new features we decided to add was pagination, letting you view and page through up to 100 recent classifications.

My main objective was to experience a professional design sprint for the Zooniverse ‘Recents’ page, broken down into four main phases. The Discovery phase is where we deep-dived into UI/UX research and gained a better understanding of why design is so important. The Definition phase is when I set a foundation for my redesign: keeping things in scope and remembering to build what the page needs and how it fits into the bigger picture of page designs for Zooniverse, which was a challenge. During the discovery and definition phases I found that researching other websites similar to Zooniverse was a big help, and allowed me to build on ideas based on the functionality of those other websites and platforms. The Development phase is where we started to build the project. During this phase I learned how critical it is to always have the volunteer in mind when designing new functionality, while also making sure to keep features that already exist on the Zooniverse platform. Finally, the Retrospective phase allowed me to look back at my progress and the knowledge I gained this summer. I hope my redesign of the ‘Recents’ page is well received, as I redesigned it with the volunteer in mind. I wanted to keep things familiar but give it an upgrade with new features and a little more versatility in finding classified projects. Giving volunteers what they need before they ask for it is something I wanted to do. This redesign will be great for super-users as well as new participants. Stay tuned for more info and announcements about the new ‘Recents’ page.

I have come a long way. This time working for the Zooniverse is an experience that I will never forget. This experience was about developing new skills and embracing new challenges. It was also about creating an opportunity for future interns to follow in my footsteps. Leaving a great impression was very important to me: being professional, showing a great work ethic, and being a leader. For anyone who has the opportunity to work for the Zooniverse, I think it is a great place to be, and it provides the opportunity to learn and be part of a community who will support your learning process. I loved my time here at the Zooniverse and I wish I could stay!

Museums, whales, and citizen science

Guest post from Eilidh O’Brien, Staff Scientist, Whales of Iceland Museum

Appropriately located in Reykjavík’s harbour district, Whales of Iceland is the largest museum dedicated to cetaceans in Europe. Much of the space inside is dedicated to life-sized models of the 23 species of whales, dolphins and porpoises that have been sighted in the waters around Iceland throughout history, some very common while others are very rare. When the museum was founded these models were the main focus of the exhibition: a chance for visitors to experience the true size of these gigantic marine animals, and to learn a little about each of the species on display. However, this focus is now evolving and Zooniverse is set to play an important part.

Iceland is a hotspot for cetaceans – and so, also for cetacean researchers! Some remarkable discoveries have been made here in the last decade, from the first recordings of humpback whales singing in their feeding grounds over winter, to the unusual antagonistic interactions between killer whales and long-finned pilot whales. We want to highlight this at Whales of Iceland so that our museum is not just a place to learn about cetaceans themselves, but also how scientists study these fascinating and complex animals, what this research has uncovered, and all the things that we still do not know!

In addition to learning about the research happening here in Iceland, we want to give visitors the opportunity to take part in some real scientific projects. So, thanks to Zooniverse, our newest exhibit will include a citizen science station where anyone can have a shot at being a scientist! We will feature a range of Zooniverse projects for visitors to choose from, giving them a variety of different marine mammal species and different aspects of wildlife ecology to learn about.

Our aim is to make Whales of Iceland a more interactive and thought-provoking experience. We hope that our museum will continue to offer visitors the chance to marvel at the size and beauty of these wonderful creatures, but also to engage with the natural world in ways they may not have before, and to feel that they have not just learned, but discovered.

This collaboration is still in its early stages. With the green light from Zooniverse Co-PI Dr. Laura Trouille, we have already launched a scaled-down version of what we hope the final exhibit will be, and it has been a really promising success! Museums provide a perfect platform for citizen science; we are a small museum relatively speaking, but our footfall in peak season can be more than 400 people in a day. That’s a lot of potential citizen scientists! In ecology, we would call this a mutualistic symbiosis – or, in other words, everyone wins! Our museum guests can provide valuable contributions to scientific projects all over the world, while at the same time gaining first-hand insight into the life of a whale researcher.

We are so excited to develop and expand our collaboration with Zooniverse, as well as other citizen science initiatives. Our finished research exhibit will be unveiled very soon – watch this space!

Who’s who in the Zoo – Shaun A. Noordin

In this edition of our Who’s who in the Zoo series meet Shaun, a frontend developer here at the Zooniverse. Grab a cuppa and have a read 🙂

– Helen



Name: Shaun A. Noordin

Location: Oxford, UK

Tell us about your role within the team:

I’ve been a frontend web developer since the ancient era of 2017, and I’ve been responsible for building and updating (and occasionally accidentally breaking) various features on the website. If it’s something visible on a Zooniverse webpage, I’ve probably tinkered with it.

What did you do in your life before the Zooniverse?

In between playing Super Mario and Zelda, I somehow managed to earn a Bachelor of Computer Science degree and landed a job as a web developer at a Malaysian news company. I then decided to move to the UK, partially to earn my MBA and find a new job, and partially because nobody else in my home country appreciates tea as much as I do.

What does your typical working day involve?

I start the day by saying hello to my awesome colleagues on Slack and find out what’s happening. Cat GIFs and emojis 🤪 are used to properly convey important technical information. Most of the day will be spent in front of a computer screen, looking at issues on Github https://github.com/zooniverse/ , debugging problems on the website, and submitting updates to our shared code base. Tea will be had throughout the day, because there is always time for tea.

How would you describe the Zooniverse in one sentence?

It’s people working with people to work on science to make the world a slightly better place, and it is awesome.

Tell us about the first Zooniverse project you were involved with

I built WildCam Gorongosa Lab, which was the supplementary educational website to the (then standalone) WildCam Gorongosa project. WildCam Gorongosa Lab eventually became a template for other camera trap-based educational programs, which are now available on Zooniverse Classroom: https://classroom.zooniverse.org/

Of all the discoveries made possible by the Zooniverse, which for you has been the most notable? (and why?)

I’m particularly impressed by the discovery of exoplanets, as with Planet Hunters TESS. Now, I’m sure the amazing astronomy nerds I work with might find the knowledge and technique pretty standard, but I was personally amazed to learn that, by observing the light of distant stars over a period of time and finding out when their brightness regularly dips, we can discover planets orbiting those distant stars.
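The transit idea described above can be sketched in a few lines of Python. This is only a toy illustration with made-up brightness values and a made-up threshold, nothing like the real Planet Hunters TESS pipeline: it simply flags the moments where a star's measured brightness dips well below its typical level.

```python
# Toy transit detection: flag samples where brightness falls well below
# the star's typical level. Values and the 10% threshold are invented
# for illustration only.

brightness = [1.0, 1.0, 0.99, 0.8, 1.0, 1.0, 1.01, 0.79, 1.0, 1.0, 1.0, 0.81]

# Use the median as the star's "normal" brightness.
baseline = sorted(brightness)[len(brightness) // 2]

# Indices where the star is more than 10% dimmer than normal.
dips = [i for i, b in enumerate(brightness) if b < 0.9 * baseline]

print(dips)  # the flagged samples
```

The flagged dips land at evenly spaced intervals, which is the periodic signature a transiting planet would leave in a real light curve.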

What’s been your most memorable Zooniverse experience?

The semi-regular Zooniverse Team Meetings are always a highlight. I work regularly with my friends & colleagues from across the UK and USA, but I don’t always get to meet them… until the ZTM brings us all together in one place. It’s a great time to discuss project plans, plan long-term technical solutions together, swap jokes, and debate what’s the best kind of tea. (Japanese green tea with slices of strawberries, for a blend of light freshness and sweet fruitiness.)

What are your top three citizen science projects? 

WildCam Gorongosa has a special place in my heart as it’s one of the earliest projects I’ve been involved in. Creating WCG Lab was a great experience in understanding how the Zooniverse team worked, and the independence and initiative I’m given as a developer. As for the other top two, that’d be Planet Hunters TESS (because I built its unique light curve viewer component) and Anti-Slavery Manuscripts (because I helped build our first transcription project and learnt a lot of things along the way).

What advice would you give to a researcher considering creating a Zooniverse project?

When building a project, I’d share the same advice as writing code: start small and create a proof of concept first. Afterwards, keep iterating by making small and discrete changes until you reach your goal – and always test what you’ve built at each major step. Also, take consistent tea breaks.

How can someone who’s never contributed to a citizen science project get started?

As a complete introvert who’s always wary of social interaction, believe me when I say: just jump in. Or with more detail: if you’re interested in something, jump in, read all the instructions, be nice to the people you meet, and don’t be too shy to ask questions. If you’re afraid of making mistakes or not knowing what to do, don’t be – skills and knowledge can be gained with patience, but that enthusiasm you have to learn and contribute is something special that should be nurtured.

Where do you hope citizen science and the Zooniverse will be in 10 years time?

In space. I can’t wait to work in a zero-G office orbiting our planet, though admittedly the cost of the commute would be through the stratosphere. (But seriously: I hope the Zooniverse becomes more open and accessible to more people throughout the world. We’re doing a lot of work on adding language support and improving accessibility, but there’s still more to do.)

Is there anything in the Zooniverse pipeline that you’re particularly excited about?

We always have a few fun experiments from our monthly hackdays, but not all of them end up ready to enter the full production pipeline, alas. My favourite was an unimplemented “voice command mode” where you could classify images by screaming at your computer, “MONKEYS!” “FIVE!” “DANCING!” “DONE & SUBMIT!”. It would have been a great way to allow for hands-free volunteer classifications as well as make your neighbours wonder what is happening in your home.

When not at work, where are we most likely to find you?

When I’m not in front of a screen doing work, I’m in front of a screen playing video games. I also draw comics (on a tablet PC), write technical articles (on a PC), and make my own games (again, on a PC). 90% of my waking hours are spent staring at glowing rectangles, and if you know which The Onion article that joke references, I think we’ll get along just fine.

Do you have any party tricks or hidden talents?

I studied Japanese so I could better understand my favourite video games, but then I learnt that Mario is actually Italian. Argh! Boku wa baka desu!!

Is there anything else you’d like to share?

I have no links to share, but I wanted to say thanks for reading all the way to the end. I know that not everyone understands my odd sense of humour, but this is why I love working here at the Zooniverse. My friends and colleagues are great people who encourage me to embrace my weirdness and eccentricities, because they trust that I always have everyone’s best interests at heart. This, in turn, motivates me to work hard to help build a better website for everyone. On that note, I hope you too find yourself welcome here in the Zooniverse!

Fixed Cross-Site Scripting vulnerability on hosted media domains

We recently fixed a security vulnerability whereby an attacker could upload executable content to our media storage domains.

On 13th November 2022, a security researcher notified us of a cross-site scripting (XSS) vulnerability affecting our media storage domains. This XSS vulnerability made it possible for attackers to upload content to our storage domains that could then be shared as links for use in ‘phishing’ or other attacks.

We fixed the vulnerability on the morning of 15th November 2022 by blocking script access to the API from the impacted domains, ensuring any malicious code failed to gain access to authenticated private data. This remedial action was followed by another fix on 16th November that deployed block rules on our Content Distribution Network (CDN) provider to prevent malicious resource links being served to users. In addition, on 8th December we deployed a change to the API to only allow non-malicious file types to be uploaded to these storage domains.
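To give a sense of what the upload restriction in that last step might look like, here is a minimal sketch of an allowlist check. The function name, extension list, and content-type list are illustrative assumptions for this post, not Zooniverse's actual implementation: the idea is simply to refuse any file a browser could execute as script.

```python
# Hypothetical upload allowlist: accept only media types that browsers
# will render but never execute as script (note: no .html, no .svg).

import os

SAFE_EXTENSIONS = {".png", ".jpg", ".jpeg", ".gif", ".mp4", ".mp3", ".csv", ".txt"}
SAFE_CONTENT_TYPES = {
    "image/png", "image/jpeg", "image/gif",
    "video/mp4", "audio/mpeg", "text/csv", "text/plain",
}

def is_upload_allowed(filename: str, content_type: str) -> bool:
    """Allow an upload only if both its extension and its declared
    content type appear on the allowlist."""
    ext = os.path.splitext(filename)[1].lower()
    return ext in SAFE_EXTENSIONS and content_type.lower() in SAFE_CONTENT_TYPES

print(is_upload_allowed("evil.html", "text/html"))   # False
print(is_upload_allowed("photo.png", "image/png"))   # True
```

Rejecting executable types such as `text/html` (and script-capable formats like SVG) at upload time stops an attacker from hosting script-bearing pages on a trusted domain in the first place, which is the root cause of this class of XSS.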

The mitigation and fix steps described above allowed us time to research the problem and audit our storage systems for any live exploits. After this audit we determined that this vulnerability had not been exploited for any malicious purpose; no data was leaked and no users were exposed to injected code.

We’d like to thank Michal Biesiada (https://github.com/mbiesiad) for bringing this issue to our attention and for following responsible disclosure by reporting it to us in private, as requested on our security page.

Bursts from Space

This is a guest post by summer intern Anastasia Unitt.

The study of celestial objects creates a huge amount of data. So much data, that astronomers struggle to make use of it all. The solution? Citizen scientists, who lend their brainpower to analyse and catalogue vast swathes of information. Alex Andersson, a DPhil student at the University of Oxford, has been applying this approach to his field: radio astronomy, through the Zooniverse. I met with him via Zoom to learn about his project detecting rare, potentially explosive events happening far out in space.

Alex’s research uses data collected by a radio telescope located thousands of miles away in South Africa, named MeerKAT. The enormous dishes of the telescope detect radio waves, captured from patches of sky about twice the size of the full Moon. This data is then converted into images, which show the source of the waves, and into light curves, a kind of scatter plot depicting how the brightness of these objects has changed over time. This information was initially collected for a different project, so Alex is exploiting the remaining information in the background, or, as he calls it, “squeezing science out of the rest of the picture.” The goal is to identify transient sources in the images: things that are changing, disappearing, and appearing.

Historically, relatively few of these transients have been identified, but the many extra pairs of eyes contributed by citizen scientists have changed the game. The volume of data analysed can be much larger, the process far faster. Alex is clearly both proud of and extremely grateful to his flock of amateur astronomers. “My scientists are able to find things that using traditional methods we just wouldn’t have been able to find, [things] we would have missed.” The project is ongoing, but his favourite finding so far took the form of a “blip” his citizen scientists noticed in just two of the images (out of thousands). Alex explains: “We followed it up and it turns out it’s this star that’s 10 times further away than our nearest stellar neighbor, and it’s flaring. No one’s ever seen it with a radio telescope before.” His excitement is obvious, and justified. This is just one of many findings that may be previously unidentified stars, or even other kinds of celestial objects such as black holes. There’s still so much to find out; the possibilities are almost endless.

A range of light curve shapes spotted by Zooniverse citizen scientists performing classifications for Bursts from Space: MeerKAT

Unfortunately, research comes with its fair share of frustrating moments along with the successes. For Alex, it’s the process of preparing the data for analysis which has proved the most irksome. “Sometimes there’s bits in the process that take a long time, particularly messing with code. There can be so much effort that went into this one little bit, that even if you did put it in a paper is only one sentence.” These behind-the-scenes struggles are essential to make the data presentable to the citizen scientists in the first place, as well as to deal with the thousands of responses which come out the other side. He assures me it’s all worth it in the end.

As to where this research is headed next, Alex says the prospects are very exciting. Now they have a large bank of images that have been analysed by the citizen scientists, he can apply this information to train machine learning algorithms to perform similar detection of interesting transient sources. This next step will allow him to see “how we can harness these new techniques to apply them to radio astronomy – which again, is a completely novel thing.”

Alex is clearly looking forward to these further leaps into the unknown. “The PhD has been a real journey into lots of things that I don’t know, which is exciting. That’s really fun in and of itself.” However, when I ask him what his favourite part of this research has been so far, it isn’t the science. It’s the citizen scientists. He interacts with them directly through chat boards on the Zooniverse site, discussing findings and answering questions. Alex describes their enthusiasm as infectious – “We’re all excited about this unknown frontier together, and that has been really, really lovely.” He’s already busy preparing more data for the volunteers to examine, and who knows what they might find; they still have plenty of sky to explore.

Adler Zooniverse Summer Intern Experience: Tasnova & Colored Conventions

By Tasnova, Guest Writer and Adler Zooniverse Summer ’22 Teen Intern

This summer, I worked as an intern for the Adler Planetarium in Chicago, alongside Lola Fash and Dylan. As a group, we carried out Zooniverse projects and interviews with the researchers leading them. In this blog post, I will share my experience with the main project that I took part in: Transcribe Colored Conventions.

In July 2022 I interviewed Dr. Jim Casey and Justin Smith, two of the research leads for the Colored Conventions project with Zooniverse. Dr. Casey is an assistant research professor of African American Studies at Penn State University, managing director of the Center for Black Digital Research, and co-founder for the Colored Conventions project. Justin Smith is a Ph.D. candidate in English and African American studies at Penn State and a member of the Douglass Day team.

Before I dig into what the Colored Conventions were, I’d like to share my own experience while working on these projects. I chose to focus on Transcribe Colored Conventions because I am a huge history lover. I want to learn everything; learning feeds my curiosity. I was really excited to learn about the Colored Conventions since they are often neglected in textbooks; my school never taught me about them, so this was my first time learning anything about the Conventions. I was so excited to get to interview the amazing people leading the Zooniverse project to transcribe documents related to the Colored Conventions.

The Colored Conventions were events that took place during the nineteenth century and spread across 34 states. In these Conventions, the participants talked about how they could get access to voting rights, education, labor, and business.

Artist’s rendition of the Colored Convention meetings. Credit: https://www.nytimes.com/2016/08/05/arts/design/colored-conventions-a-rallying-point-for-black-americans-before-the-civil-war.html

However, despite how important they were, no one really talks about the Colored Conventions today. It is incredibly sad for me to see this important part of our history being neglected.

Another interesting aspect of the Colored Conventions that I learned about through interviewing the team is that the documents related to the Conventions were very male-dominated: while men’s efforts were well documented in the Conventions’ archive, women’s efforts were not. For example, of the names initially identified and highlighted in the documents, 98% belong to men.

An early researcher who recognized women’s contributions to the Colored Conventions is Dr. Psyche Williams-Forson, a University of Maryland professor who wrote the essay “What Did They Eat? Where Did They Stay?” In the essay she talked about how women organized restaurants and boarding houses for the people who traveled from other states to join the Convention meetings, and also supported them financially. The essay was eye-opening for other researchers, and prompted them to read the Conventions’ documents more carefully to find references to women that might have been overlooked. As a result of these efforts, they found more references to women in the Convention documents.

Zooniverse volunteers also helped transcribe the Colored Convention documents, further unlocking the data for the researchers. The researchers were thrilled to see so many people participating in transcribing the documents and caring deeply about the project. The volunteers’ transcription efforts also uncovered additional references to women’s efforts in the Colored Convention documents. In my own journey learning about this project, I was happily surprised to see that so many people participated in transcribing the documents and cared about this piece of history that was neglected for so long.

Here are some clips from the full recording of my interview with Dr. Jim Casey and Justin Smith.


A few final thoughts: When I was interviewing the researchers, I loved seeing how passionate they were. It feels rare to talk with people who are passionate about their work. Seeing someone who is really passionate about their work and the effort they put in is incredibly motivating. I hope to feel the same in my career.

Colored Convention Project team helping the volunteers during the Transcribe-a-Thon. Credit: Dr. Jim Casey

During my interview, I was nervous in the beginning because this was my first time interviewing a researcher, or anyone. My hands and feet were cold. I tried to calm myself down so I wouldn’t stutter. I think I did a good job interviewing them. My mentor, Sean (who is the Zooniverse designer at Adler), helped me a lot in preparing for the interview. He helped me see that the pressure is not on me as an interviewer; instead, the pressure is on the interviewees because they need to answer the questions. I think that really helped me to calm down because I kept saying to myself that “the pressure is on them, not me.” And my interviewees were such nice people too! I was proud of myself for how I carried out the interview.

Last, but not least, thank you to my teammates Dylan and Lola Fash for helping me out with my summary, video editing, and my blog. 

These are my Zooniverse intern colleagues. They helped me with every single challenge in my internship. Photo credit: Tasnova
