Category Archives: CitizenScience

Exoplanet Explorers Discoveries – A Small Planet in the Habitable Zone

This post is by Adina Feinstein. Adina is a graduate student at the University of Chicago. Her work focuses on detecting and characterizing exoplanets. Adina became involved with the Exoplanet Explorers project through her mentor, Joshua Schlieder, at NASA Goddard through their summer research program.

Let me tell you about the newly discovered system – K2-288 – uncovered by volunteers on Exoplanet Explorers.

K2-288 has two low-mass M dwarf stars: a primary (K2-288A) which is roughly half the size of the Sun and a secondary (K2-288B) which is roughly one-third the size of the Sun. In the planet-naming convention, a capital letter denotes a star. Already this system is shaping up to be pretty cool. The one planet in the system, K2-288Bb, orbits the smaller, secondary star. K2-288Bb has a 31.3-day orbital period, short compared to Earth's year, but this period places the planet in the habitable zone of its host star. The habitable zone is defined as the region where liquid water could exist on a planet's surface. K2-288Bb has an equilibrium temperature of -47°C, colder than the equilibrium temperature of Earth. It is approximately 1.9 times the radius of Earth, which places it in a region of planet-radius space where we believe planets transition to volatile-rich sub-Neptunes rather than potentially habitable super-Earths. Planets of this size are rare, with only about a handful known to date.
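As a rough illustration of where an equilibrium temperature like this comes from, here is a back-of-the-envelope sketch. The stellar temperature, radius, and orbital distance below are approximate illustrative values chosen for this example, not the paper's fitted parameters:

```python
# Blackbody equilibrium temperature of a planet, assuming zero albedo
# and full heat redistribution: T_eq = T_star * sqrt(R_star / (2 * a)).
import math

R_SUN_M = 6.957e8  # solar radius in metres
AU_M = 1.496e11    # astronomical unit in metres

def equilibrium_temperature(t_star_k, r_star_rsun, a_au, albedo=0.0):
    """Equilibrium temperature in kelvin for a planet at distance a."""
    r_star = r_star_rsun * R_SUN_M
    a = a_au * AU_M
    return t_star_k * math.sqrt(r_star / (2 * a)) * (1 - albedo) ** 0.25

# Rough, illustrative parameters for the cool secondary star K2-288B
# and a 31.3-day orbit around it:
t_eq = equilibrium_temperature(t_star_k=3341, r_star_rsun=0.32, a_au=0.164)
print(f"{t_eq - 273.15:.0f} C")  # lands in the neighbourhood of -47 C
```

Plugging in a hotter star or a tighter orbit quickly pushes the result above the habitable range, which is why both the stellar properties and the period matter here.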

Artist’s rendering of the K2-288 system.

The story of the discovery of this system is an interesting one. When two of the reaction wheels on the Kepler spacecraft failed, the mission team re-oriented the spacecraft so that observations could continue. The re-orientation caused slight variations in the shape of the telescope and the temperature of the instruments on board. As a consequence, the beginning of each observing campaign suffered extreme systematic errors, so initially, when searching for exoplanet transits, we “threw out”, or ignored, the first days of observing. Then, when we searched the data by eye for new planet candidates, we came across this system but saw only two transits. Follow-up observations require a minimum of three transits, so we put the system on the back-burner. The light curve (the amount of light we see from a star over time) with the transits is shown below.

Later, we learned how to model and correct for the systematic errors at the beginning of each observing run and re-processed all of the data. Instead of searching it all by eye again, as we had done initially, we outsourced the search to Exoplanet Explorers, whose citizen scientists identified this system with three transit signals. The volunteers started a discussion thread about the planet because, given the initial stellar parameters, it would be around the same size and temperature as Earth. This caught our attention. As it turns out, there was an additional transit at the beginning of the observing run that we had missed when we threw out that data! Makennah Bristow, a fellow intern of mine at NASA Goddard, identified the system again independently. Now with three transits and a relatively long orbital period of 31.3 days, we pushed to begin the observational follow-up needed to confirm the planet was real.

First, we obtained spectra (a unique chemical fingerprint) of the star. These allowed us to place better constraints on the star's parameters, such as mass, radius, temperature, and brightness. While obtaining spectra at the Keck Observatory, we noticed a potential companion star. We conducted adaptive optics observations to determine whether the companion was bound to the star or a background source. Most stars in the Milky Way are born in pairs, so it was not too surprising that this system was no different. After identifying a fainter companion, we made extra sure the transit signal was due to a real planet and not the companion, and we convinced ourselves this was the case.

Finally, we had to determine which star the planet orbits. We obtained an additional transit using the Spitzer spacecraft. Using both the Kepler and Spitzer transits, we derived planet parameters for the cases where the planet orbits the primary and where it orbits the secondary. The planet radius derived from the two light curves was most consistent when the host star was the secondary. Additionally, the stellar density we derived from the observed transit better matched the smaller secondary star. To round it all off, we calculated the probability of the signal being a false positive (i.e. not a planet signal) if the planet orbits the secondary, and got a false positive probability of roughly 10^-9, indicating the signal is almost certainly real.
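The stellar-density trick in that step can be sketched with Kepler's third law: for a circular orbit with a planet much less massive than its star, the transit yields the star's mean density directly. The helper function below is an illustration of that relation, not the team's actual pipeline:

```python
# Mean stellar density from a transit, via Kepler's third law:
#   rho_star = (3 * pi / (G * P**2)) * (a / R_star)**3
# where P is the orbital period and a/R_star (the scaled semi-major
# axis) is measured from the transit shape.
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def transit_stellar_density(period_days, a_over_rstar):
    """Stellar mean density in g/cm^3 from period and a/R_star."""
    p = period_days * 86400.0  # period in seconds
    rho_si = (3 * math.pi / (G * p**2)) * a_over_rstar**3  # kg/m^3
    return rho_si / 1000.0  # convert to g/cm^3

# Sanity check with the Earth-Sun system (a / R_sun is about 215):
print(f"{transit_stellar_density(365.25, 215):.2f} g/cm^3")  # ~1.41, the Sun's mean density
```

Because a small M dwarf is far denser than a Sun-like star, comparing the transit-derived density with each star's measured density points to the true host.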

The role of citizen scientists in this discovery was critical, which is why some of the key Zooniverse volunteers are included as co-authors on the publication. K2-288 was observed in K2 Campaign 4, back in 2015. We scientists initially missed this system, and even though we learned how to better model and remove spacecraft systematics, it likely would have taken years for us to go back into older data and find it. Citizen scientists have shown us that even with so much new data coming out, especially with the launch of the Transiting Exoplanet Survey Satellite, the older data is still a treasure trove of new discoveries. Thank you to all of the Exoplanet Explorers volunteers who made this discovery possible, and keep up the great work!

The paper written by the team is available here. It should be open to all very shortly.

Exoplanet Explorers Discoveries – A Sixth Planet in the K2-138 System

This is the first of two guest posts from the Exoplanet Explorers research team announcing two new planets discovered by their Zooniverse volunteers. This post was written by Jessie Christiansen.

Hello citizen scientists! We are here at the 233rd meeting of the American Astronomical Society, the biggest astronomy meeting of the year in the US (around 3,000 astronomers, depending on how many attendees are ultimately affected by the government shutdown). I’m excited to share that on Monday morning, we are making a couple of new exoplanet announcements as a result of your work here on Zooniverse, using the Exoplanet Explorers project!

Last year at the same meeting, we announced the discovery of K2-138, a system of five small planets around a K star (an orange dwarf star). The planets all have very short orbital periods (from 2.5 to 12.8 days; recall that in our solar system the shortest-period planet is Mercury, with a period of ~88 days) that form an unbroken chain of near-resonances. These resonances offer tantalizing clues as to how the system formed, a question we are still trying to answer for exoplanet systems in general. They also raise the question: how far could the chain continue? This was the longest unbroken chain of near first-order resonances yet found (by anyone, let alone citizen scientists!).
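To see what "near first-order resonances" means in practice, here is a quick check of the successive period ratios for the five original K2-138 planets, using approximate published periods (the values below are illustrative):

```python
# Approximate orbital periods (days) of K2-138 b, c, d, e, f.
periods = [2.35, 3.56, 5.40, 8.26, 12.76]

# Ratio of each planet's period to that of its inner neighbour.
ratios = [outer / inner for inner, outer in zip(periods, periods[1:])]
for r in ratios:
    print(f"{r:.3f}")  # each ratio sits just above 1.5, the 3:2 resonance
```

Every adjacent pair orbits in very nearly a 3:2 ratio, which is exactly the kind of unbroken first-order chain described above.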

At the time, we had hints of a sixth planet in the system. In the original data analysed by citizen scientists, there were two anomalous events that could not be accounted for by the five known planets – events that must have been caused by at least one, if not more, additional planets. If they were both due to a single additional planet, then we could predict when the next event caused by that planet would happen – and we did. We were awarded time on the NASA Spitzer Space Telescope at the predicted time, and BOOM. There it was. A third event, shown below, confirming that the two previous events were indeed caused by the same planet, a planet for which we now knew the size and period.

So, without further ado, I’d like to introduce K2-138 g! It is a planet just a little bit smaller than Neptune (which makes it slightly larger than the other five planets in the system, all of which are between Earth and Neptune in size). It has a period of about 42 days, which means it’s pretty warm (about 400 K) and therefore not habitable. Also, very interestingly, it is not on the resonant chain: it’s significantly further out than the next planet in the chain would be. In fact, it’s far enough out that there is a noticeable gap, one big enough to hide more planets on the chain. If those planets exist, they don’t seem to be transiting, but that doesn’t mean they couldn’t be detected in other ways, including by measuring the effect of their presence on the planets that do transit. The planet is being published in a forthcoming paper led by Dr Kevin Hardegree-Ullman, a postdoctoral research fellow at Caltech/IPAC.

In the meantime, astronomers are still studying the previously identified planets, in particular to try to measure their masses. Having tightly packed systems that are near resonance like K2-138 provides a fantastic test-bed for examining all sorts of planet formation and migration theories, so we are excited to see what will come from this amazing system discovered by citizen scientists on Zooniverse in years to come!

We are also announcing a second new exoplanet system discovered by Exoplanet Explorers, but I will let Adina Feinstein, the lead author of that paper, introduce you to that exciting discovery.

Adler Members’ Night recap

We had a blast hanging out with Chicago-area volunteers and Adler Members at last month’s Adler Members’ Night! Visitors were able to try out potential new Zooniverse projects and Adler exhibits, including a constellation-themed project in collaboration with the Adler’s collections department, as well as U!Scientist, our NSF-supported touch table installation which features Galaxy Zoo.

Northwestern University researchers shook things up, demonstrating why earthquakes behave in different ways based on plate friction, registering jumps on a seismograph, and quizzing guests on seismograms from jumping second graders, storms, and different earthquakes. Their Zooniverse project, Earthquake Detective, is currently in beta and is set to launch soon.

And we were delighted to watch volunteer @GlamasaurusRex complete her 15,000th classification LIVE IN PERSON. She made the classification on Higgs Hunters. Check out the video here: https://drive.google.com/open?id=1jttO1w1OfPY9LEaS5SjmEzy36PiGnY4U

Beta for Mobile

One of the big efforts for the mobile app right now is making the project-building experience for mobile feel about the same as it does for the web. For the most part, the two experiences are very similar; in fact, they are almost identical apart from the limits we place on which workflows mobile projects can have. There was, however, one very large limiting factor for mobile project builders: there was no formal path from creating a project to getting that project released on the app.

Introducing Beta Mode for mobile!

Now project builders who want their workflows to be enabled on mobile can have them reviewed on mobile as well. Here’s how it works:

When a project that has a mobile workflow is approved to go to beta, it will appear in the “Beta Review” section on the main page of the app.

From there, users will be able to view and test all of the beta projects that are currently live.

We are launching this feature with (of course) Galaxy Zoo Mobile. It is available now for all our users, so go ahead and check it out!

Like beta review on web-based projects, we will collect feedback from volunteer testers and give that back to project owners. This new process will lead to better, clearer mobile workflows in the future.

Stay tuned for more notes about upcoming mobile features!

Download our iOS and Android apps!

Zooniverse Data Aggregation

Hi all, I am Coleman Krawczyk and for the past year I have been working on tools to help Zooniverse research teams work with their data exports.  The current version of the code (v1.3.0) supports data aggregation for nearly all the project builder task types, and support will be added for the remaining task types in the coming months.

What does this code do?

This code provides tools that allow research teams to process and aggregate classifications made on their project; in other words, it calculates the consensus answer for a given subject based on the volunteer classifications.

The code is written in Python, but it can be run entirely through three command-line scripts (no Python knowledge needed) and a project’s data exports.

Configuration

The first script uses a project’s workflow data export to auto-configure which extractors and reducers (see below) should be run for each task in the workflow.  It produces a series of `yaml` configuration files with reasonable default values selected.

Extraction

Next the extraction script takes the classification data export and flattens it into a series of `csv` files, one for each unique task type, that only contain the data needed for the reduction process.  Although the code tries its best to produce completely “flat” data tables, this is not always possible, so more complex tasks (e.g. drawing tasks) have structured data for some columns.

Reduction

The final script takes the results of the data extraction and combines them into a single consensus result for each subject and task (e.g. vote counts, clustered shapes, etc.).  For more complex tasks (e.g. drawing tasks), the reducer’s configuration file accepts parameters to help tune the aggregation algorithms to the data at hand.
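The extract-then-reduce flow can be sketched for a simple question task. This toy example shows the idea of a majority-vote reducer over a flattened `csv`; it is an illustration only, not the actual aggregation package's API:

```python
# Toy majority-vote reducer: collapse extracted per-classification rows
# into a consensus answer per subject.
import csv
import io
from collections import Counter, defaultdict

# Stand-in for an extracted csv of (subject_id, answer) rows.
extracted_csv = """subject_id,answer
101,spiral
101,spiral
101,elliptical
102,elliptical
102,elliptical
"""

# Tally every answer per subject.
votes = defaultdict(Counter)
for row in csv.DictReader(io.StringIO(extracted_csv)):
    votes[row["subject_id"]][row["answer"]] += 1

# Consensus = the most common answer (and its vote count) per subject.
consensus = {subject: counts.most_common(1)[0] for subject, counts in votes.items()}
print(consensus)  # {'101': ('spiral', 2), '102': ('elliptical', 2)}
```

Real reducers for drawing or survey tasks are more involved (clustering shapes, handling ties), which is why the configuration files expose tuning parameters.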

A full example using these scripts can be found in the documentation.

Future for this code

At the moment this code is provided in its “offline” form, but we are testing ways for this aggregation to be run “live” on a Zooniverse project.  When that system is finished, research teams will be able to enter their configuration parameters directly in the project builder, a server will run the aggregation code, and the extracted or reduced `csv` files will be made available for download.

Experiments on the Zooniverse

Occasionally we run studies in collaboration with external researchers in order to better understand our community and improve our platform. These can involve methods such as A/B splits, where we show a slightly different version of the site to one group of volunteers and measure how it affects their participation, e.g. does it influence how many classifications they make or their likelihood of returning to the project for subsequent sessions?
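An A/B split is typically implemented by assigning each volunteer deterministically to a cohort, so the same person always sees the same version of the site. This generic sketch (not Zooniverse's actual code) shows one common approach using a hash of the user id:

```python
# Deterministic 50/50 cohort assignment for an A/B experiment.
import hashlib

def assign_cohort(user_id: str, experiment: str) -> str:
    """Stable assignment of a user to cohort 'A' or 'B' for one experiment."""
    # Hashing the experiment name together with the user id means the
    # same person can land in different cohorts across experiments.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same volunteer always gets the same cohort for a given experiment:
print(assign_cohort("volunteer_42", "messaging_test"))
```

Stable assignment matters for measuring things like return rates, since a volunteer who saw version A in one session must still see version A in the next.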

One example of such a study was the messaging experiment we ran on Galaxy Zoo.  We worked with researchers from Ben-Gurion University and Microsoft Research to test whether the specific content and timing of messages presented in the classification interface could help alleviate the issue of volunteers disengaging from the project. You can read more about that experiment and its results in this Galaxy Zoo blog post: https://blog.galaxyzoo.org/2018/07/12/galaxy-zoo-messaging-experiment-results/.

As the Zooniverse has teams based at different institutions in the UK and the USA, the procedures for ethics approval differ depending on who is leading the study. After recent discussions with staff at the University of Oxford ethics board, to check that our procedure was up to date, our Oxford-based team will be changing the way in which we gain approval for, and report the completion of, these types of studies. All future study designs in which Oxford staff take part in the analysis will be submitted to CUREC, something we’ve been doing for the last few years. From now on, once the data-gathering stage of a study has been run, we will provide all volunteers involved with a debrief message.

The debrief will explain to our volunteers that they have been involved in a study, along with providing information about the exact set-up of the study and what the research goals were. The most significant change is that, before the data analysis is conducted, we will contact all volunteers involved in the study and allow a period of time for them to state that they would like to withdraw their consent to the use of their data. We will then remove all data associated with any volunteer who does not want to be involved before the data is analysed and the findings are presented. The debrief will also contain contact details for the researchers in the event of any concerns or complaints. You can see an example of such a debrief in our original post about the Galaxy Zoo messaging experiment here https://blog.galaxyzoo.org/2015/08/10/messaging-test/.

As always, our primary focus is the research being enabled by our volunteer community on our individual projects. We run experiments like these in order to better understand how to create a more efficient and productive platform that benefits both our volunteers and the researchers we support. All clicks that are made by our volunteers are used in the science outcomes from our projects no matter whether they are part of an A/B split experiment or not. We still strive never to waste any volunteer time or effort.

We thank you for all that you do, and for helping us learn how to build a better Zooniverse.

What’s going on with the classify interface? Part three

Part three in a multi-part series exploring the visual and UX changes to the Zooniverse classify interface

Coming soon!

Today we’ll be going over a couple of visual changes to familiar elements of the classify interface, plus new additions we’re excited to premiere. These updates haven’t been implemented yet, so nothing is set in stone. Please use this survey to send me feedback about these or any of the other updates to the Zooniverse.

Keyboard shortcut modal

New modals

Many respondents to my 2017 design survey requested that they be able to use the keyboard to make classifications rather than having to click so many buttons. One volunteer actually called the classifier “a carpal-tunnel torturing device”. As a designer, that’s hard to hear – it’s never the goal to actively injure our volunteers.

We actually do support keyboard shortcuts! This survey helped us realize that we need to be better at sharing some of the tools our developers have built. The image above shows a newly designed Keyboard Shortcut information modal. This modal (or “popup”) is a great example of a few of the modals we’re building – you can leave it open and drag it around the interface while you work, so you’ll be able to quickly refer to it whenever you need.

This behavior will be mirrored in a few of the modals that are currently available to you:

  • Add to Favorites
  • Add to Collection / Create a New Collection
  • Subject Metadata
  • “Need Help?”

It will also be applied to a few new ones, including…

Field Guide

New field guide layout

Another major finding from the design survey was that users did not have a clear idea where to go when they needed help with a task (see chart below).

Survey results show a mix of responses

We know research teams often put a lot of effort into their help texts, and we wanted to be sure that work was reaching the largest possible audience. Hence, we moved the Field Guide from a small button on the right-hand side of the screen (a spot that can be obscured by the browser’s scrollbar) to a larger, more prominent button in the updated toolbar:

By placing the Field Guide button in a more prominent position and allowing the modal to stay open during classifications, we hope this tool will be used more than it currently is.

The layout is the result of the audit of every live project that I conducted in spring 2017:

Field Guide
  • Item count: mode 5, min 2, max 45
  • Label word count: mode 2, min 2, max 765

Using the mode gave me the basis on which to design; however, there’s quite a disparity between min and max amounts. Because of this disparity, we’ll be giving project owners with currently active projects a lot of warning before switching to the new layout, and they’ll have the option to continue to use the current Field Guide design if they’d prefer.

Tutorial

Another major resource Zooniverse offers its research teams and volunteers is the Tutorial. Often used to explain project goals, welcome new volunteers to the project, and point out what to look for in an image, the current tutorial is often a challenge because of its absolute positioning on top of the subject image.

No more!

In this iteration of the classify interface, the tutorial opens once as a modal, just as it does now, and then lives in a tab in the task area where it’s much more easily accessible. You’ll be able to switch to the Tutorial tab in order to compare the example images and information with the subject image you’re looking at, rather than opening and closing the tutorial box many times.

A brand-new statistics section

Another major comment from the survey was that volunteers wanted more ways to interact with the Zooniverse. Thus, you’ll be able to scroll down to find a brand-new section! Features we’re adding will include:

  • Your previous classifications with Add to Favorites or Add to Collection buttons
  • Interesting stats, like the number of classifications you’ve done and the number of classifications your community has done
  • Links to similar projects you might be interested in
  • Links to the project’s blog and social media to help you feel more connected to the research team
  • Links to the project’s Talk boards, for a similar purpose
  • Possibly: A way to indicate that you’re finished for the day, giving you the option to share your experience on social media or find another project you’re interested in.

The statistics we chose were directly related to the responses from the survey:

Survey results

Respondents were able to choose more than one response; when asked to rank them in order of importance, project-wide statistics were chosen hands-down:

Project-wide statistics are the most important

We also heard that volunteers sometimes felt disconnected from research teams and the project’s accomplishments:

“In general there is too less information about the achievement of completed projects. Even simple facts could cause a bit of a success-feeling… how many pictures in this project over all have been classified? How much time did it take? How many hours were invested by all participating citizens? Were there any surprising things for the scientists? Things like that could be reported long before the task of a project is completely fullfilled.”

Research teams often spend hours engaged in dialog with volunteers on Talk, but not everyone who volunteers on Zooniverse is aware of or active on Talk. Adding a module to the classify page showing recent Talk posts will bring more awareness to this amazing resource and hopefully encourage more engagement from volunteers.

Templates for different image sizes and dimensions

When the project builder was created, we couldn’t have predicted the variety of disparate topics that would become Zooniverse projects. Originally, the subject viewer was designed for one common image size, roughly 2×3, and other sizes have since been shoehorned in to fit as well as they can.

Now, we’d like to make it easier for subjects with extreme dimensions, multimedia subjects, and multi-image subjects to fit better within the project builder. By specifically designing templates and allowing project owners to choose the one that best fits their subjects, volunteers and project owners alike will have a better experience.

Very wide subjects will see their toolbar moved to the bottom of the image rather than on the right, to give the image as much horizontal space as possible. Tall subjects will be about the same width as they have been, but the task/tutorial box will stay fixed on the screen as the image scrolls, eliminating the need to scroll up and down as often when looking at the bottom of the subject.

Wide and tall subjects

Let’s get started!

I’m so excited for the opportunity to share a preview of these changes with you. Zooniverse is a collaborative project, so if there’s anything you’d like us to address as we implement this update, please use this survey to share your thoughts and suggestions. Since we’re rolling these out in pieces, it will be much easier for us to be able to iterate, test, and make changes.

We estimate that the updates will be mostly in place by early 2019, so there’s plenty of time to make sure we’re creating the best possible experience for everyone.

Thank you so much for your patience and understanding as we move forward. In the future, we’ll be as open and transparent as possible about this process.

What’s going on with the classify interface? Part two

Part two in a multi-part series exploring the visual and UX changes to the Zooniverse classify interface

The breakdown

Today and in the next post, we’ll take a look at the reasoning behind specific changes to the classifier that we’ve already started to roll out over the past few months. We’ve had good discussions on Talk about many of the updates, but I wanted to reiterate those conversations here so there’s just one source of information to refer back to in the future.

In case you missed it, the first blog post in this series previews the complete new classify layout.

As a reminder, if you have feedback about these changes or anything else on the site you’d like to see addressed, please use this survey link.

Navigation bar

Updated navigation bar

We started with a rethinking of each project’s navigation bar. The new design features cleaner typography, a more prominent project title, and a clear visual separation from the sitewide navigation. It also incorporates the project’s home-page background image, giving each project a distinct identity while keeping the classify interface itself clean and legible. It’s also responsive: on shorter screens, the navigation bar’s height adjusts accordingly.

The most important problem we solved in making this change was separating the project navigation from the site navigation. During my initial site research and in talking to colleagues and volunteers, many found it difficult to distinguish between the two navigation bars. Adding a background and a distinct font style, and moving the options to the right side of the page, accomplishes this goal.

Neutral backgrounds

Classify interface with neutral background

In conjunction with adding the background image to the navigation bar, the background image was removed from the main classify interface. It was replaced with a cool light grey, followed quickly by the dark grey of the Dark Theme.

Legibility is one of the main goals of any web designer, and it was the focus of this update. By moving to clean greys, all of the focus is now on the subject and task. There are some really striking subject images on Zooniverse, from images of the surface of Mars to zebras in their natural habitat. We want to make sure these images are front and center rather than getting lost within the background image.

The Dark Theme was a suggestion from a Zooniverse researcher – they pointed out that some subject images are similar in tone to the light grey, so a darker theme was added to make sure contrast would be enough to make the image “pop”. We love suggestions like this! While the team strives to be familiar with every Zooniverse project, the task is sometimes beyond us, so we rely on our researchers and volunteers to point out anomalies like this. If you find something like this, you can use this survey to bring it to my attention.

Another great suggestion from a Zooniverse volunteer was the addition of the project name on the left side of the screen. This hasn’t been implemented yet, but it’s a great way to help with wayfinding when the page is scrolled below the navigation bar.

Updated task section

New task section

By enclosing the task and its responses in a box rather than leaving it floating in space, the interface gives a volunteer an obvious place to look for the task across every project. Adjusting the typography elevates the interface and helps it feel more professional.

One of the most frequent comments we heard in the 2017 survey was that the interface had far too much scrolling – either the subject image or the task area was too tall. The subject image height will be addressed at a later date, but this new task area was designed specifically with scrolling in mind.

I used the averages I found in my initial project audit and the average screen height (643 px) based on Google Analytics data from the same time period to design a task area that would comfortably fit on screen without scrolling. It’s important to note that there are always outliers in large-scale sites like Zooniverse. While using averages is the best way to design for most projects, we know we can’t provide the most optimal experience for every use case.

You’ll also notice the secondary “Tutorial” tab to the right of the “Task” label. This is a feature that’s yet to be implemented, and I’ll talk more about it in the next post.

And more to come

The next installments in this series will address the additional updates we have planned, like updated modals and a whole new stats section.

Check back soon!

What’s going on with the classify interface? Part One

Part one in a multi-part series exploring the visual and UX changes to the Zooniverse classify interface

First, an introduction.

Zooniverse began in 2007, with a galaxy-classifying project called Galaxy Zoo. The project was wildly successful, and one of the lead researchers, Chris Lintott, saw an opportunity to help other researchers accomplish similar goals. He assembled a team of developers and set to work building custom projects just like Galaxy Zoo for researchers around the world.

And things were good.

But the team started to wonder: How can we improve the process to empower researchers to build their own Zooniverse projects, rather than relying on the team’s limited resources to build their projects for them?

Thus, the project builder (zooniverse.org/lab) was born.

In the first year after its launch, the number of projects available to citizen scientist volunteers nearly doubled. Popularity spread, the team grew, and things seemed to be going well.

That’s where I come in. * Record scratch *

Three years after the project builder’s debut, I was hired as the Zooniverse designer. With eight years’ experience in a variety of design roles from newspaper page design to user experience for mobile apps to web design, I approached the new project builder-built projects with fresh eyes, taking a hard look at what was working and what areas could be improved.

Over the next week, I’ll be breaking down my findings and observations, and talking through the design changes we’re making, shedding more light on the aims and intentions behind these changes and how they will affect your experience on the Zooniverse platform.

If you take one thing away from this series, it’s that this design update, in keeping with the ethos of Zooniverse, is an iterative, collaborative process. These posts represent where we are now, in June 2018, but the final product, after testing and hearing your input, may be different. We’re learning as we go, and your input is hugely beneficial as we move forward.

Here’s a link to an open survey in case you’d like to share thoughts, experiences, or opinions at any point.

Let’s dive in.

Part one: Research

My first few weeks on the job were spent exploring Zooniverse, learning about the amazing world of citizen science, and examining projects with similar task types from across the internet.

I did a large-scale analysis of the site in general, going through every page in each section and identifying areas with inconsistent visual styles or confusing user experiences.

Current site map, March 2017
Analysis of current template types

After my initial site analysis, I created a list of potential pages or sections that were good candidates for a redesign. The classify interface stood out as the best place to start, so I got to work.

Visual design research

First, I identified areas of the interface that could use visual updates. My main concerns were legibility, accessibility, and varying screen sizes. With an audience reaching into the tens of thousands of volunteers per week, the demographic diversity makes for an interesting design challenge.

Next, I conducted a comprehensive audit of every project that existed on the Zooniverse in March 2017 (79 in total, including custom projects like Galaxy Zoo), counting question/task word count, the maximum number of answers, subject image dimensions, field guide content, and a host of other data points. That way, I could accurately design for the medians rather than choosing arbitrarily. When working at this scale, it’s important to use data like these to ensure the design serves the largest possible group.

Here are some selected data:

Task type: Drawing (20 projects)
Possible answers per task: average 2, min 1, max 7, median 1
Answer max word count: average 4.5, min 2, max 10, median 1
Projects with thumbnail images: 1

Task type: Question (9 projects)
Possible answers per task: average 6, min 2, max 9, median 3.5
Answer max word count: average 6, min 1, max 18, median 4
Projects with thumbnail images: 3

Task type: Survey (9 projects)
Possible answers per task: average 31, min 6, max 60, median 29
Answer max word count: average 4, min 3, max 7, median 4
Projects with thumbnail images: 9

Even More Research

Next, I focused on usability. To ensure that I understood issues from as many perspectives as possible, I sent a design survey to our beta testers mailing list, comprising about 100,000 volunteers (if you’re not already on the list, you can opt in via your Zooniverse email settings). Almost 1,200 people responded, and those responses informed the decisions I made and helped prioritize areas of improvement.

Here are the major findings from that survey:

  • No consensus on where to go when unsure how to complete a task.
  • Many different destinations after finishing a task.
  • Too much scrolling and mouse movement.
  • Lack of keyboard shortcuts.
  • Desire to view previous classifications.
  • Desire for translations into more languages.
  • Need for feedback while classifying.
  • Difficulty finding new projects of interest.
  • Desire for larger images.

In the next few blog posts, I’ll be breaking down specific features of the update and showing how these survey findings help inform the creation of many of the new features.

Without further ado

Basic classify template

Some of these updates will look familiar, as we’ve already started to implement style and layout adjustments. I’ll go into more detail in subsequent posts, but at a high level, these changes seek to improve your overall experience classifying on the site no matter where you are, what browser you’re using, or what type of project you’re working on.  

Visually, the site is cleaner and more professional, a reflection of Zooniverse’s standing in the citizen science community and of the real scientific research that’s being done. Studies have shown that good, thoughtful design influences a visitor’s perceptions of a website or product, sometimes obviously, sometimes at a subliminal level. By making thoughtful choices in the design of our site, we can seek to positively affect audience perceptions about Zooniverse, giving volunteers and researchers even more of a reason to feel proud of the projects they’re passionate about.

It’s important to note that this image is a reflection of our current thought, in June 2018, but as we continue to test and get feedback on the updates, the final design may change. One benefit to rolling updates out in pieces is the ability to quickly iterate ideas until the best solution is found.

The timeline

We estimate that the updates will be mostly in place by early 2019.

This is due in part to the size of our team. At most, there are about three people working on these updates while also maintaining our commitments to other grant-funded projects and additional internal projects. The simple truth is that we just don’t have the resources to be able to devote anyone full-time to this update.

The timeline is also influenced in a large part by the other half of this update: A complete overhaul of the infrastructure of the classifier. These changes aren’t as visible, but you’ll notice an improvement in speed and functionality that is just as important as the “facelift” portion of the update.

Stay tuned!

We’ve seen your feedback on Talk, via email, and on Github, and we’re happy to keep a dialog going about subsequent updates. To streamline everything and make sure your comments don’t get missed, please only use this survey link to post thoughts moving forward.

We took it offline and you can too! A night of Zooniverse fun at the Adler Planetarium

Our inaugural Chicago-area meetup was great fun! Zooniverse volunteers came to the Adler Planetarium, home base for our Chicago team members, to meet some of the Adler Zooniverse web development team and talk to Chicago-area researchers about their Zooniverse projects.

Laura Trouille, co-I for Zooniverse and Senior Director for Citizen Science at the Adler Planetarium

Presenters:

  • Zooniverse Highlights and Thank You! (Laura Trouille, co-I for Zooniverse and Senior Director for Citizen Science at the Adler Planetarium)
  • Chicago Wildlife Watch (Liza Lehrer, Assistant Director, Urban Wildlife Institute, Lincoln Park Zoo)
  • Gravity Spy (Sarah Allen, Zooniverse developer, supporting the Northwestern University LIGO team)
  • Microplants (Matt Von Konrat, Head of Botanical Collections, Field Museum)
  • Steelpan Vibrations (Andrew Morrison, Physics Professor, Joliet Junior College)
  • Wikipedia Gender Bias (Emily Temple-Wood, medical student, Wikipedia editor, Zooniverse volunteer)
  • In-Person Zooniverse Volunteer Opportunities at the Adler Planetarium (Becky Rother, Zooniverse designer)

Researchers spoke briefly about their projects and how they use the data and ideas generated by our amazing Zooniverse volunteers in their work. Emily spoke of her efforts addressing gender bias in Wikipedia. We then took questions from the audience and folks chatted in small groups afterwards.

The event coincided with Adler Planetarium’s biennial Member’s Night, so Zooniverse volunteers were able to take advantage of the museum’s “Spooky Space” themed activities at the same time, which included exploring the Adler’s spookiest collection pieces, making your own spooky space music, and other fun. A few of the Zooniverse project leads also led activities: playing Andrew’s steel pan drum, interacting with the Chicago Wildlife Watch’s camera traps and other materials, and engaging guests in classifying across the many Zooniverse projects. There was also a scavenger hunt that led Zooniverse members and Adler guests through the museum, playing on themes within the exhibit spaces relating to projects within the Zooniverse mobile app (iOS and Android).

We really enjoyed meeting our volunteers and seeing the conversation flow between volunteers and researchers. We feel so lucky to be part of this community and supporting the efforts of such passionate, interesting people who are trying to do good in the world. Thank you!

Have you hosted a Zooniverse meetup in your town? Would you like to? Let us know!