Category Archives: News

Researchers working to improve participant learning through Zooniverse

Our research group at Syracuse University spends a lot of time trying to understand how participants master tasks given the constraints they face. We conducted two studies as part of a U.S. National Science Foundation grant to build Gravity Spy, one of the most advanced citizen science projects to date (see www.gravityspy.org). We started with two questions: 1) How best can we guide participants through learning many classes? 2) What types of interactions lead to enhanced learning? Our goal was to improve participants' experiences on the project. Like most internet sites, Zooniverse periodically tries different versions of the site or task and monitors how participants do.

We conducted two Gravity Spy experiments (the results were published via open access: article 1 and article 2). As in other Zooniverse projects, Gravity Spy participants supply judgments about an image subject, noting which class the subject belongs to. Participants also have access to learning resources such as the field guide, about pages, and ‘Talk’ discussion forums. In Gravity Spy, we ask participants to review spectrograms to determine whether a glitch (i.e., noise) is present. The participant classifications are supplied to astrophysicists who are searching for gravitational waves. The classifications help isolate glitches from valid gravitational-wave signals.

Gravity Spy combines human and machine learning components to help astrophysicists search for gravitational waves. Gravity Spy uses machine learning algorithms to estimate the likelihood of a glitch belonging to a particular glitch class (currently, 22 known glitch classes appear in the data stream); the output is a percentage likelihood of membership in each category.

Figure 1. The classification interface for a high level in Gravity Spy

Gradual introduction to tasks increases accuracy and retention. 

The literature on human learning is unclear about how many classes people can learn at once. Showing too many glitch class options might discourage participants since the task may seem too daunting, so we wanted training that builds skill gradually while still letting participants make useful contributions. We decided to implement and test leveling, where participants gradually learn to identify glitch classes across different workflows. In Level 1, participants see only two glitch class options; in Level 2, they see 6; in Level 3, they see 10; and in Level 4, all 22. We also used the machine learning results to route more straightforward glitches to lower levels and more ambiguous subjects to higher workflows. So participants in Level 1 only saw subjects that the algorithm was confident a participant could categorize accurately. When the percentage likelihood was low (meaning the classification task became more difficult), we routed those subjects to higher workflows.
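The routing idea above can be sketched in a few lines. This is an illustrative sketch only, not the actual Gravity Spy pipeline; the confidence thresholds and class names below are invented for the example.

```python
# Illustrative sketch (not Gravity Spy's actual code): route a subject to a
# workflow level based on the machine learning model's confidence in its
# most likely glitch class. Thresholds are hypothetical.

def route_to_level(class_probabilities):
    """Given {glitch_class: probability}, pick a workflow level.

    High-confidence (easy) subjects go to low levels for newer participants;
    ambiguous subjects go to higher levels for experienced participants.
    """
    top_probability = max(class_probabilities.values())
    if top_probability >= 0.90:
        return 1   # very confident: suitable for beginners (2 class options)
    elif top_probability >= 0.70:
        return 2   # fairly confident (6 class options)
    elif top_probability >= 0.50:
        return 3   # ambiguous (10 class options)
    return 4       # most ambiguous: all 22 class options

print(route_to_level({"Blip": 0.97, "Koi_Fish": 0.02, "Whistle": 0.01}))  # 1
print(route_to_level({"Blip": 0.40, "Koi_Fish": 0.35, "Whistle": 0.25}))  # 4
```

The key design choice is that the machine's uncertainty, not a human curator, decides which subjects are "easy enough" for each training level.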

We experimented to determine what this gradual introduction to the classification task meant for participants. One group of participants was funneled through the training described above (we called it machine learning guided training, or MLGT); another group was given all 22 classes at once. Here’s what we found:

  • Participants who completed MLGT were more accurate than participants who did not receive the MLGT (90% vs. 54%).  
  • Participants who completed MLGT executed more classifications than participants who did not receive the MLGT (228 vs. 121 classifications).
  • Participants who completed MLGT had more sessions than participants who did not receive the MLGT (2.5 vs. 2 sessions). 

The usefulness of resources changes as tasks become more challenging

Anecdotally, we know that participants contribute valuable information on the discussion boards, which is beneficial for learning. We were curious about how participants navigated all the information resources on the site and whether those information resources improved people’s classification accuracy. Our goal was to (1) identify learning engagements, and (2) determine if those learning engagements led to increased accuracy. We turned on analytics data and mined these data to determine which types of interactions (e.g., posting comments, opening the field guide, creating collections) improved accuracy. We conducted a quasi-experiment at each workflow, isolating the gold standard data (i.e., the subjects with a known glitch class). We looked at each occasion a participant classified a gold standard subject incorrectly and determined what types of actions a participant made between that classification and the next classification of the same glitch class. We mined the analytics data to see what activities existed between Classification A and Classification B. We did some statistical analysis, and the results were astounding and cool. Here’s what we found:  

  • In Level 1, no learning actions were significant. We suspect this is because the tutorial and other materials created by the science team are comprehensive, and most people are accurate in Level 1 (~97%).
  • In Levels 2 and 3, collections, favoriting subjects, and the search function were most valuable for improving accuracy. Here, participants’ agency seems to aid learning. Anecdotally, we know people collect and learn from ambiguous subjects.
  • In Level 4, we found that actions such as posting comments and viewing the collections created by other participants were most valuable for improving accuracy. Since the most challenging glitches are administered in Level 4, participants seek feedback from others.
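The pairing procedure behind the quasi-experiment can be sketched as follows. The event structure and field names are invented for illustration; they are not the project's actual analytics schema.

```python
# Hypothetical sketch of the pairing logic: for each incorrect gold-standard
# classification, find the participant's next classification of the same
# glitch class and collect the learning actions logged in between.

def learning_actions_between(events):
    """events: a time-ordered list of dicts, each either a classification
    {"type": "classification", "glitch_class": ..., "correct": bool}
    or a learning action {"type": "action", "name": ...}."""
    pairs = []
    for i, ev in enumerate(events):
        if ev["type"] == "classification" and not ev["correct"]:
            actions = []
            for later in events[i + 1:]:
                if later["type"] == "action":
                    actions.append(later["name"])
                elif (later["type"] == "classification"
                      and later["glitch_class"] == ev["glitch_class"]):
                    # Record whether the retry improved, and what the
                    # participant did in between.
                    pairs.append({"improved": later["correct"],
                                  "actions": actions})
                    break
    return pairs

events = [
    {"type": "classification", "glitch_class": "Blip", "correct": False},
    {"type": "action", "name": "open_field_guide"},
    {"type": "action", "name": "post_comment"},
    {"type": "classification", "glitch_class": "Blip", "correct": True},
]
print(learning_actions_between(events))
```

Aggregating these pairs per action type is what lets a statistical test ask whether, say, opening the field guide is associated with a correct second attempt.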

The one-line summary of this experiment is that when tasks are more straightforward, learning resources created by the science teams are most valuable; however, as tasks become more challenging, learning is better supported by the community of participants through the discussion boards and collections. Our next challenge is making these types of learning engagements visible to participants.

Note: We would like to thank the thousands of Gravity Spy participants without whom this research would not be possible. This work was supported by U.S. National Science Foundation grants No. 1713424 and 1547880. Check out Citizen Science Research at Syracuse for more about our work.

Supernova Hunters and Nine Lessons for Curious People

At the weekend, a bunch of us had fun with a timely challenge – trying to find and follow up supernovae with Supernova Hunters as part of the Nine Lessons and Carols for Curious People 24-hour science/music/comedy show organised by Robin Ince and the Cosmic Shambles Network in support of various good causes. Robin and Brian Cox normally run a huge show at the Hammersmith Apollo theatre at this time of year, but this socially distant marathon show was a suitable replacement.

Robin and musician Steve Pretty somewhere in the middle of the 24 and a bit hour long show – they were on stage throughout! Credit: Cosmicshambles.com

In the run-up to the show there was some concern that poor weather in Hawai’i – where the PanSTARRS telescope that provides data for Supernova Hunters is located – might prevent us getting enough data, but in the event skies were clear. Very clear. That caused its own problem, as the extra data took a while to get to the servers at Queen’s University Belfast and from there to us, but thanks to heroic efforts from the Supernova Hunters team I was able to zoom into the show early on and point viewers to the supernovahunters.org site, and classifications started to flow in.

Supernova hunting is a competitive sport these days, and though the early results from volunteers were encouraging, most of what we found was either too faint to make follow-up easy with the telescopes we had on standby or was an object already identified by other surveys (including the Zooniverse’s friends at ZTF). A brief reappearance on the Nine Lessons big screen (and an email to existing volunteers asking for help) later, and we finally had a set of good candidates.

Liverpool Telescope in the Canary Islands, which was responsible for our first follow-up observations. Credit: Liverpool Telescope.

The team – especially Ken Smith and Darryl Wright – worked overnight to arrange follow-up. When I emerged from a few hours’ sleep, observers at the Liverpool Telescope had checked out our most promising candidate – but it turned out to be not a supernova but a less extreme cosmic explosion known as a cataclysmic variable. I marvelled at the fact Robin was still awake – and was coherently interviewing cosmologists, brain scientists and the odd astronaut – and gave an update.

Just after I finished, Belfast’s Ken Smith popped up with the news that observers in Hawai’i using the SNIFS instrument had followed up other targets – and one of them was a real supernova! Better, it was a type Ia – the kind of supernova that can be used to measure the expansion rate of the Universe. Admittedly it was a 91bg-like type Ia, a rarer subtype which is fainter than a normal type Ia, but still useful, and this gave us a payoff for the show.

Spectrum confirming our candidate is a 91bg-like type Ia supernova associated with a galaxy at redshift z=0.061 – light from an explosion that happened nearly a billion years ago.

Using only that supernova, a bit of maths on the back of an envelope and a few fairly shaky assumptions, we calculated that the Universe was 12.8 billion years old, about a billion short of the commonly accepted value. I wouldn’t throw out the careful systematic analysis of populations of supernovae for this simple calculation – but we did get to announce to a bleary-eyed comedian that the Universe might be (a little bit) younger than expected.
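For the curious, the back-of-the-envelope arithmetic runs roughly like this. The redshift z = 0.061 is from the post; the distance below is a value I've chosen so the arithmetic lands near the quoted answer (the actual standard-candle distance isn't given in the post), so treat the numbers as illustrative only.

```python
# Crude cosmology sketch: recession velocity from redshift, a standard-candle
# distance, Hubble's law, and age ~ 1/H0. Illustrative numbers only.

C_KM_S = 299_792.458          # speed of light, km/s
KM_PER_MPC = 3.0857e19        # kilometres in a megaparsec
SEC_PER_GYR = 3.1557e16       # seconds in a gigayear

z = 0.061                     # measured redshift of the host galaxy
distance_mpc = 239.0          # assumed type Ia standard-candle distance

velocity = C_KM_S * z                                # recession velocity, km/s
H0 = velocity / distance_mpc                         # Hubble constant, km/s/Mpc
hubble_time_gyr = (KM_PER_MPC / H0) / SEC_PER_GYR    # crude age estimate, 1/H0

print(f"H0 ~ {H0:.1f} km/s/Mpc, age ~ {hubble_time_gyr:.1f} Gyr")
```

The "shaky assumptions" are real: a single supernova, no correction for the 91bg-like dimness, and the 1/H0 approximation ignores the Universe's changing expansion rate, which is why the careful population analyses win.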

Just as I went on air, a message from Mark Huber, the observer providing data from Hawai’i, confirmed a second supernova – this one a type II, an exploding massive star. It might even be of the same type as the famous 1987A, which was spotted in a satellite galaxy of the Milky Way, the Large Magellanic Cloud. Trying to take this in and convey what was happening quickly was a bit much for my sleep-deprived brain, but hopefully people realised we had confirmed a second supernova!

More importantly, we’ve recorded the results of all of our discoveries in an AstroNote published on the Transient Name Server website (the worldwide clearing house for such discoveries). You can read the result of a Supernova Hunters weekend here – and rejoice in the fact that Robin Ince and some of the Cosmic Shambles team are now coauthors on a scientific publication!

I’ll post links to clips from the show when they’re available too, and if you fancy supernova hunting yourself there will be more data on the supernovahunters.org site soon!

Chris

PS Thanks a million to the Supernova Hunters volunteers, and to the team that made it happen – Brooke Simmons (Lancaster), Ken Smith (Belfast), Darryl Wright (Mayo Clinic), Coleman Krawczyk (Portsmouth) and Grant Miller and Belinda Nicholson (Oxford). Michael Fulton and Shubham Srivastav from QUB took the Liverpool Telescope observations, and Michael also led the publication of our AstroNote.

PPS This gives Robin Ince an Erdős number of, I think, no higher than 5. His Bacon number (according to the Infinite Monkey Cage) is no higher than 3, so his Erdős–Bacon number is no more than 8! More importantly, as he’s performed music on stage, he must have a Sabbath number, though finding out what it is requires further work – making him one of the rare group of individuals with an EBS number. A suitable reward for 24 hours of effort.

Into the Zooniverse: Vol II now available!

For the second year in a row, we’re honoring the hundreds of thousands of contributors, research teams, educators, Talk moderators, and more who make Zooniverse possible. This second edition of Into the Zooniverse highlights another 40 of the many projects that were active on the website and app in the 2019–20 academic year.

Image of Into the Zooniverse book

In that year, the Zooniverse has launched 65 projects, volunteers have submitted more than 85 million classifications, research teams have published 35 papers, and hundreds of thousands of people from around the world have taken part in real research. Wow!

To get your copy of Into the Zooniverse: Vol II, download a free pdf here or order a hard copy on Blurb.com. Note that the cost of the book covers production and shipping; Zooniverse does not profit from sales. According to the printer, printing and binding take 4–5 business days, then your order ships. To ensure that you receive your book before the December holidays, you can use this tool to calculate shipping times.

Read more at zooniverse.org/about/highlights.

News from the Etchiverse – our first results!

Just over three years ago we launched the first Etch A Cell project (https://www.zooniverse.org/projects/h-spiers/etch-a-cell). The project was the first of its kind on the Zooniverse: never before had we asked volunteers to help draw around the small structures inside cells (also known as ‘manual segmentation of organelles’) visualised with very high-powered electron microscopes. We even had to develop a new tool type on the Zooniverse to do this – a drawing tool for annotating images.

In this first Etch A Cell project, the organelle we asked Zooniverse volunteers to help examine was the nuclear envelope (as you can see shown in green in the image below). The nuclear envelope is a large membrane found within cells. It surrounds the nucleus, which is the part of the cell that contains the genetic material. It’s an important structure to study as it’s known to be involved in a number of diseases, including cancer, and it’s often the first structure research teams inspect in a new data set.

This gif shows an image of a cell taken with an electron microscope. This particular cell is a HeLa cell, a type of cancer cell that is widely used in scientific research. The segmented nuclear envelope is shown in green.

The results…

Earlier this year, we published the first set of results from this project. I’ve summarised some of our most exciting findings below, but if you’d like to take a look at the original paper, you can access it here (https://www.biorxiv.org/content/10.1101/2020.07.28.223024v1.full).

1. Zooniverse volunteers dedicated a huge amount of effort! Zooniverse volunteers submitted more than 100,000 segmentations across the 4000 images analysed in this first Etch A Cell project. Through this effort, the nuclear envelopes of 18 cells were segmented (shown in green below) from our original data block.

2. Volunteers were very good at segmenting the nuclear envelope. As you can see in the gif and images below, most classifications submitted for each image were really good! Manual segmentation isn’t an easy task to do, even for experts, so we were really impressed!

An unannotated image is shown on the left. The image on the right shows an overlay of all the volunteer segmentations received for this image. As you can see, most volunteers did a great job at segmenting the nuclear envelope.

3. There’s power in a crowd! The image below shows an overlay of every single segmentation for one of the nuclei studied in Etch A Cell. As you can see, through the collective effort of Zooniverse volunteers, something beautiful emerges – by overlaying everyone’s effort like this, you can see the shape of the nuclear envelope begin to appear!

To make sense of all of this data, we developed an analysis approach that took all of these lines and averaged them to form a ‘consensus segmentation’ for each nuclear envelope. This consensus segmentation, produced through the collective effort of volunteers, was incredibly similar to that produced by an expert microscopist. You can see this in the image below: on the left (in yellow) you can see the expert segmentation of the nuclear envelope of one cell compared to the volunteer segmentation (in green). The top image shows a single slice from the cell, the bottom image shows the 3D reconstruction of the whole nuclear envelope.
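The consensus idea can be illustrated with a toy example. The published analysis works with the drawn contour lines themselves; the simplified sketch below instead represents each volunteer's segmentation as a binary mask on a shared pixel grid and takes a majority vote per pixel, which captures the same "power in a crowd" intuition.

```python
# Simplified consensus sketch: average many volunteer masks and keep the
# pixels a majority agreed on. Pure Python, no external libraries.

def consensus_mask(masks, threshold=0.5):
    """masks: list of equal-sized 2D lists of 0/1. Returns the majority mask."""
    n = len(masks)
    rows, cols = len(masks[0]), len(masks[0][0])
    return [[1 if sum(m[r][c] for m in masks) / n > threshold else 0
             for c in range(cols)]
            for r in range(rows)]

volunteer_masks = [
    [[0, 1, 1], [0, 1, 0]],   # volunteer 1
    [[0, 1, 1], [1, 1, 0]],   # volunteer 2 (one stray pixel)
    [[0, 1, 0], [0, 1, 0]],   # volunteer 3 (missed one pixel)
]
print(consensus_mask(volunteer_masks))
```

Individual mistakes (a stray pixel here, a missed pixel there) are outvoted, which is why the aggregate tracks the expert segmentation so closely.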

4. Volunteer segmentations can be used to train powerful new algorithms capable of segmenting the nuclear envelope. We found that volunteer data alone, with no expert data at all, could be used to train computer algorithms to perform the task of nuclear envelope segmentation to a very high standard. In the gif below you can see the computer-predicted nuclear envelope segmentation for each of the cells in pink.

5. Our algorithm works surprisingly well on other data sets. We ran this new algorithm on other datasets that had been produced under slightly different experimental conditions. Because of these differences, we didn’t expect the algorithm to perform very well; however, as you can see in the images below, it did a very good job of identifying the location of the nuclear envelope. Because of this transferability, members of our research team have already begun using this algorithm to aid their new research projects.

The future…

We’re so excited to share these results with you, our volunteer community, and the research communities we collaborate with, and we’re looking forward to building on these findings in the future. The algorithms we’ve been able to produce from this effort are already being used by research teams at the Crick, and we’ve already launched multiple new projects asking for your help to look at other organelles – The Etchiverse is expanding!

You can access all our current Etch A Cell projects through the Etch A Cell Organisation page.

Zooniverse Mobile App Release v2.8.2!

Now it’s even easier to contribute to science from your phone!

On any crowded public bus (before the pandemic), people sat next to each other, eyes fixed on their phones, smiling, swiping. 

What were they all doing? Using a dating app, maybe. Or maybe they were separating wildcam footage of empty desert from beautiful birds. Maybe they were spotting spiral arms on faraway galaxies.

Maybe one of them was you!  

We’ve loved seeing the participation in the Zooniverse through the mobile app (available for iOS and Android) over the past two years. So we made it even easier for you to do that wherever you swipe these days—a park bench, or maybe your home. (Please don’t swipe and drive). 

Right now, you can go into the app and contribute to Galaxy Zoo Mobile, Catalina Outer Solar System Survey, Disk Detective, Mapping Historic Skies, Nest Quest Go, or Planet Four: Ridges. And we have more projects on the way!

What’s new in the app

When you update to version 2.8.2, you’ll notice a slick new look. At the very top, there’s now an “All Projects” category. This will show you everything available for mobile—with the projects that need your help the most sorted at the very top! You can also still choose a specific discipline, of course.

That’s it for brand-new features, but this version also fixes a lot. No more crashing when you tap on browser projects. Far fewer project-related crashes. Animated gifs, which previously worked only on iOS, now also work on Android—so researchers can show you an image that changes over time.

What’s more—and you’ll never see this, but it’s important to us, the developers—we’ve made a lot of changes that help us keep improving the app. We have better crash reporting mechanisms and more complete automated testing. We also updated all of our documentation so that developers from outside our team can contribute to the app, too! We’d love to be a go-to open source project for people who are learning, or working in, React Native (the platform on which our app is built).

Aggregate Functionality

The full list of functionalities now includes:

  • Swipe (binary question: A or B)
  • Single-answer question (A, B, or C)
  • Multi-answer question (any combination of A, B, and C)
  • Rectangle drawing task (drawing a rectangle around a feature within a subject)
  • Single-image subjects
  • Multi-image subjects (e.g. uploading 2+ images as a single subject; users swipe up/down to display the different images)
  • Animated gifs as subjects
  • Subject auto-linking (automatically linking subjects retired from one workflow into another workflow of interest on the same project)
  • Push notifications (sending messages/alerts about new data, new workflows, etc., via the app)
  • Preview (an owner or collaborator on a project in development being able to preview a workflow in the ‘Preview’ section of the mobile app)
  • Beta Review (mobile enabled workflows are accessible through the ‘Beta Review’ section of the app for a project in the Beta Review process; includes an in-app feedback form)
  • Ability to see a list of all available projects, as well as filter by discipline (with active mobile app workflows listed at the top)

We also carried out a number of infrastructure improvements, including: 

  • Upgrades to the React Native libraries we use
  • Created a staging environment to test changes before they are implemented in full production
  • Additional test coverage
  • Implemented bug reporting and tracking
  • Complete documentation, so open source contributors can get the app running from our public code repository
  • And a myriad of additional improvements, like missing icons no longer crashing the app and refinements to the rectangle drawing task

Note: we will continue developing the app; this is just the end of this phase of effort and a great time to share the results.

If you’re leading a Zooniverse project and have questions about where in the Project Editor ‘workflow’ interface to ‘enable on mobile’, don’t hesitate to email contact@zooniverse.org. And if you’re a volunteer and wonder whether a workflow on a given project could be enabled on mobile, please post on that project’s Talk to start the conversation with the research team and us. The more, the merrier!

Looking forward to having more projects on the mobile app!

A Few Stats of Interest:

  • Since Jan 1, 2020: 
    • 6.2 million classifications submitted via the app (that’s 7% of 86.7 million classifications total through Zooniverse projects)
    • 18,000 installations on iOS + 17,000 on Android
  • Current Active Users (people who have used the app in the last 30 days):
    • 1,800 on iOS + 7,700 on Android


Project completed: The American Soldier in WWII

This is a guest post from the research team behind The American Soldier in WWII.

As challenges press upon all of us in the midst of the pandemic, the team behind The American Soldier in World War II has some good news to share. 

When we initially launched our project on Zooniverse on VE Day 2018, our goal was to have all 65,000 pages of commentaries on war and military service, written by soldiers in their own hand, transcribed and annotated within a two-year window – in triplicate, for quality-control purposes. We not only hit that milestone in May 2020, but last week we completed an additional, fourth round.

Attracting 3,000-plus new contributors, this extension of the transcription drive took only six months. Beyond allowing more people to engage with these unique and revealing wartime documents, the added round is improving our final project output. Within the next week or so, our top Zooniverse transcribers will begin final, manual verification of these transcriptions and annotations, which have been cleaned algorithmically. If you are a consistent project contributor and interested in helping with final validation, please do let us know by signing up here.

As we move forward with the project, we have created a Farewell Talk board. Since we have had so many incredible contributors to The American Soldier, we would love to hear any parting words our volunteers would like to share with the team and with fellow contributors about your experiences or most memorable transcriptions. 

We are so incredibly grateful for the international team of researchers, data and computer scientists, designers, educators, and volunteers who have gotten the project to where it is, in spite of the great upheaval. Thanks to their hard work and dedication, the project’s open-access website remains on track for a spring 2021 launch.

We look forward to sharing more news with you soon. Until then, be well and safe. 

The American Soldier in WWII Team

NASA and Zooniverse Announce Partnership

We’re very happy to announce a new partnership between NASA and our Zooniverse teams at the Adler Planetarium and the University of Minnesota. This new partnership advances and deepens our existing relationship and efforts with NASA. Our team will work together with NASA to create new opportunities for the Zooniverse volunteer community to engage and participate in projects that span the wide range of NASA’s science divisions: astrophysics, heliophysics, planetary science, and earth science.

This new NASA grant will enable new projects as well as provide support for our developers to maintain our research-enabling platform. This support is very welcome, and will help us share our platform with a growing number of scientists who want to unlock data from NASA’s missions, centers, and projects. We’re really looking forward to building and launching these new projects, but don’t worry — nothing else will change. The platform will still be a welcome home to a wide range of research and projects.

It’s been more than a decade now since the Zooniverse launched, and it’s exciting to have reached the point where the Zooniverse platform, research teams, and AMAZING community of volunteers are consistently recognized as valuable contributors and collaborators in research.  The Zooniverse team is excited for this partnership and for the future ahead — here’s to lots more adventures to come!

The Zooniverse: A Quick Starter Guide for Research Teams

Over the past several months, we’ve welcomed thousands of new volunteers and dozens of new teams into our community.

This is wonderful.

Because there are new people arriving every day, we want to take this opportunity to (re)introduce ourselves, provide an overview of how Zooniverse works, and give you some insight on the folks who maintain the platform and help guide research teams through the process of building and running projects.

Who are we?

The core Zooniverse team is based across three institutions:

  • Oxford University, Oxford UK
  • The Adler Planetarium, Chicago IL
  • The University of Minnesota-Twin Cities, Minneapolis MN

We also have collaborators at many other institutions worldwide. Our team is made up of web developers, research leads, data scientists, and a designer.

How we build projects

Research teams can build Zooniverse projects in two ways.

First, teams can use the Project Builder to create their very own Zooniverse project from scratch, for free. In order to launch publicly and be featured on zooniverse.org/projects, teams must go through beta review, wherein a team of Zooniverse volunteer beta testers give feedback on the project and answer a series of questions that tell us whether the project is 1) appropriate for the platform; and 2) ready to be launched. Anyone can be a beta tester! To sign up, visit https://www.zooniverse.org/settings/email. Note: the timeline from requesting beta review to getting scheduled in the queue to receiving beta feedback is a few weeks. It can then take a few weeks to a few months (depending on the level of changes needed) to improve your project based on beta feedback and be ready to apply for full launch. For more details and best practices around using the Project Builder, see https://help.zooniverse.org/getting-started/.

The second option is for cases where the tools available in the Project Builder aren’t quite right for the research goals of a particular team. In these cases, they can work with us to create new, custom tools. We (the Zooniverse team) work with these external teams to apply for funding to support design, development, project management, and research.

Those of you who have applied for grant funding before will know that this process can take a long time. Once we’ve applied for a grant, it can take 6 months or more to hear back about whether or not our efforts were successful. Funded projects usually require at least 6 months to design, build, and test, depending on the complexity of the features being created. Once new features are created, we then need additional time to generalize (and often revise) them for inclusion in the Project Builder toolkit.

To summarize:

Option 1: Project Builder

  • Free!
  • Quick!
  • Have to work with what’s available (no customization of tools or interface design)

Option 2: Custom Project

  • Funding required
  • Can take a longer time
  • Get the features you need!
  • Supports future teams who may also benefit from the creation of these new tools!

We hope this helps you to decide which path is best for you and your research goals.

SuperWASP Variable Stars – Update

The following is an update from the SuperWASP Variable Stars research team. Enjoy!

Welcome to the Spring 2020 update! In this blog, we will be sharing some updates and discoveries from the SuperWASP Variable Stars project.

What are we aiming to do?

We are trying to discover the weirdest variable stars!

Stars are the building blocks of the Universe, and finding out more about them is a cornerstone of astrophysics. Variable stars (stars which change in brightness) are incredibly important to learning more about the Universe, because their periodic changes allow us to probe the underlying physics of the stars themselves.

We have asked citizen scientists to classify variable stars based on their photometric light curves (the amount of light over time), which helps us to determine what type of variable star we’re observing. Classifying these stars serves two purposes: firstly to create large catalogues of stars of a similar type which allows us to determine characteristics of the population; and secondly, to identify rare objects displaying unusual behaviour, which can offer unique insights into stellar structure and evolution.

We have 1.6 million variable stars detected by the SuperWASP telescope to classify, and we need your help! By getting involved, we can build up a better idea of what types of stars are in the night sky.

What have we discovered so far?

We’ve done some initial analysis on the first 300,000 classifications to get a breakdown of how many of each type of star is in our dataset.

So far it looks like there are a lot of junk light curves in the dataset, which we expected. The program written to detect periods in variable stars often picks up a period of exactly one day or one lunar month, which it mistakes for a real signal. Importantly though, you’ve classified a huge number of real and exciting light curves!
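Those spurious periods arise because ground-based observing is itself periodic (nightly sampling and moonlight), so the period-finder locks onto the sampling pattern instead of the star. A simple alias filter might look like the sketch below; the tolerance and alias list are my own choices for illustration, not the team's actual pipeline.

```python
# Illustrative alias filter: flag candidate periods suspiciously close to a
# known sampling alias (sidereal/solar day, lunar month) or a simple harmonic.

SIDEREAL_DAY = 0.99727   # days
LUNAR_MONTH = 29.53      # days

def is_likely_alias(period_days, tolerance=0.01):
    """Return True if the period sits within `tolerance` (fractionally) of a
    known sampling alias or its 1/2x, 1x, or 2x harmonic."""
    for alias in (SIDEREAL_DAY, 1.0, LUNAR_MONTH):
        for harmonic in (0.5, 1.0, 2.0):
            target = alias * harmonic
            if abs(period_days - target) / target < tolerance:
                return True
    return False

print(is_likely_alias(1.0003))   # a suspicious ~1-day period
print(is_likely_alias(42.0))     # a plausible real period
```

Automated filters like this can only flag suspects, which is exactly why human classifiers are still needed to separate genuine variables from junk.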

We’re especially excited to do some digging into what the “unknown” light curves are… are there new discoveries hidden in there? Once we’ve completed the next batch of classifications, we’ll do some more analysis to see whether the breakdown of types of stars changes.

An exciting discovery…

In late 2018, while building this Zooniverse project, we came across an unusual star. This Northern hemisphere object, TYC-3251-903-1, is a relatively bright object (V=11.3) which had not previously been identified as a binary system. Although the light curve is characteristic of an eclipsing contact binary star, the period is ~42 days, notably longer than the characteristic contact binary period of less than 1 day.

Spurred on by this discovery, we identified a further 16 candidate near-contact red giant eclipsing binaries through searches of archival data. We were excited to find that citizen scientists had also discovered 10 more candidates through this project!

Figure 1: Artist’s impression of a contact binary star [Mark A. Garlick]

Over the past 18 months, we’ve carried out an observing campaign of these 27 candidate binaries using telescopes from across the world. We have taken multi-colour photometry using The Open University’s own PIRATE telescope and the Las Cumbres Observatory robotic telescopes, and spectroscopy of Northern candidates with the Liverpool Telescope and of Southern candidates using SALT. We’ve also spent two weeks in South Africa on the 74-inch telescope to take further spectroscopy.

Of the 10 candidate binaries discovered by citizen scientists, we were happy to be able to take spectroscopic observations for 8 whilst in South Africa, and we have confirmed that at least 2 are, in fact, binaries! Thank you, citizen scientists!

Why is this discovery important?

Figure 2: V838 Mon and its light echo [ESA/NASA]

The majority of contact or near-contact binaries consist of small (K/M dwarf) stars in close orbits with periods of less than 1 day. But for the stars in a contact binary to have such long periods, both of them must be giants. This is a previously unknown configuration…

Interestingly, a newly identified type of stellar explosion, known as a red nova, is thought to be caused by the merger of a giant binary system, just like the ones we’ve discovered.

Red novae are characterised by a red colour, a slow expansion rate, and a lower luminosity than supernovae. Very little is known about them: only one, V1309 Sco, has been observed before its outburst, and even that was only identified through archival data. A famous example of a possible red nova is the 2002 outburst of V838 Mon. Astronomers believe this was likely a red nova caused by a binary star merger, briefly forming the largest known star in the aftermath of the explosion.

So, by studying these near-contact red giant eclipsing binaries, we have an unrivalled opportunity to identify and understand binary star mergers before the merger event itself, and advance our understanding of red novae.

What changes have we made?

Since the SuperWASP Variable Stars Zooniverse project started, we’ve made a few changes to make it more enjoyable. We’ve reduced the number of classifications needed to retire a target, and fewer “junk” classifications are now needed to retire a junk light curve. This means you should see more interesting, real light curves.

We’ve also started a Twitter account, where we’ll be sharing updates about the project, the weird and wacky light curves you find, and getting involved in citizen science and astronomy communities. You can follow us here: www.twitter.com/SuperWASP_stars

What’s next?

We still have thousands of stars to classify, so we need your help!

Once we have more classifications, we will begin turning the results into a publicly available, searchable website, a bit like the ASAS-SN Catalogue of Variable Stars (https://asas-sn.osu.edu/variables). Work on this is likely to begin towards the end of 2020, but we’ll keep you updated.

We’re also working on a paper on the near-contact red giant binary stars, which will include some of the discoveries by citizen scientists. Expect that towards the end of 2020, too.

Otherwise, watch this space for more discoveries and updates!

We would like to thank the thousands of citizen scientists who have put time into this Zooniverse project. If you ever have any questions or suggestions, please get in touch.

Heidi & the SuperWASP Variable Stars team.

We Are Still Here

These are strange times we live in. With many people ill or worried, and communities all over the world in lockdown or cutting out social contact in order to try and control the spread of the novel coronavirus, it’s hard to work out what the future holds.

The Zooniverse team, including our teams in Oxford and Chicago, are all working from home, and we’re getting to grips with how to communicate and work in this odd situation. So far we’ve encountered all sorts of weird glitches while trying to keep in touch.

Zoom backgrounds can be weird and terrifying, as demonstrated here by Sam.
Why am I the only one with a profile picture?

But we are still here! Since we know lots of you are turning to the Zooniverse for a distraction while your lives are disrupted, we’ve asked our research teams to pay particular attention to their projects and to be even more present online during this time. We’ll try to bring you more news from them over the next few weeks.

Anyway, if any of you would like to distract yourselves by contributing to one of our projects, we’ve made it easier to find a new project to dive into. The top of our projects page now highlights selected projects; these will change frequently, and might be topical, timely, particularly in need of your help, or just our favourites!

Zooniverse projects succeed because they’re the collective work of many thousands of you who come together to collaborate with our research teams – and a little bit of collective action in the world right now feels pretty good.

Look after yourselves, and see you in the Zooniverse.

Chris