Why the Zooniverse is easy to use

A blog post from Adam Stevens appeared in my Twitter stream today, containing some discussion (and criticism) of the Zooniverse in general and Planet 4 in particular.

All debate is useful, so I wanted to respond to a few of the points made. Whether the main Planet 4 interface is any good scientifically should, I think, be settled by seeing whether the team publish a paper with the results – our track record (in need of updating!) is here.

The meat of the post draws a distinction between ‘real science’ – by which I assume the author means analysis, paper writing and so on – and what the main Zooniverse interfaces do, which is described as ‘data analysis’. We’ve been here before, and part of the answer is the same one I gave then: data analysis and classification are as much a part of science as solving an equation, and while there may be scientists who do nothing but think grand analytic thoughts, I’ve never met any.

However, there’s another part to the answer. Zooniverse projects are explicitly designed so that even a brief interaction with the site produces meaningful results. This is partly pragmatic (as this post from our Old Weather project shows, as a rule of thumb half of all contributions come from people who only do a few), but it is also because we truly believe in the transformational nature of having someone do something real. Those visiting the Zooniverse for the first time are typically not scientists; often they are not yet even fans of science. We know from anecdote and from our own research that, for many of these people, doing something simple that makes a contribution to our understanding of the Universe is very fulfilling, often unexpectedly so.

More than that, these projects act as engines of motivation. Once people have found their feet in the main interface, once they have got used to the idea that science is now an activity they can participate in, and once they are excited to investigate further the interesting images and objects that are now theirs, wonderful things happen.

There are great examples from many projects, but on Twitter I pointed to our recent Planet Hunters paper, which reported one new confirmed planet and 42 new planet candidates (each with a greater than 90% chance of being real) discovered by the community active on our Talk discussion tool.

Many of these volunteers (including Kian Jek, who was just awarded the AAS’s Chambliss prize for achievement) are doing far, far more than just using the Planet Hunters interface. But they’re there because they were drawn in by the proposition of the initial site. For many, the motivation to learn about classes of variable stars and the minutiae of transits came only after they’d found something special, and for many the confidence to attack these more detailed questions comes from that initial, guided experience.

As technical supremo Arfon put it on Twitter, the Zooniverse is a set of analysis tasks where scientists need help, and where they will analyze the results and report back – but if you’ll come with us, there’s a whole world of conversation and discovery that can happen. Drawing a distinction between the two misses the point – without the former, participation in the latter (the ‘real science’, if you must) is limited to those who already have the confidence to take part.

Chris

PS Adam did suggest a specific change: that, as one of the main science goals of Planet 4 is to measure wind speed, we should add an arrow allowing people to indicate the wind speed and direction. This seems to me misguided; we’re already getting that information from the task the volunteers are doing in marking the shape, size and direction of the fans. You could add further pedagogical material early on, but this would likely reduce the number of people who make it to the ‘aha! I’m doing science!’ moment: we know that it’s very easy to trigger an adverse reaction, in the form of a loss of confidence, when we ask slightly more abstract questions in the initial phase of engagement with a project. In any case, inference follows measurement – and we’re still at the measurement stage in this strange and fascinating region of Mars.

PPS In the main post, I’ve ignored comments about the relationship between the BBC’s Stargazing Live program and Planet 4. It’s important to realize that the driving force behind the Planet 4 project is Candy Hansen and her team of Martian scientists – ironically, we’d discussed a version of the idea while I was interviewing her for the Sky at Night about 18 months ago. That was before Planet Hunters was on TV, so it’s dead wrong to say that Planet 4 was cooked up in response to a desire to have something else to do on telly. If there were inaccuracies on camera, I can only plead that live television is tricky and that the real test is whether the project produces papers – which will, as any real scientist knows, take time! Stargazing’s commitment to real engagement instead of ‘educational experiments’ is, I think, a huge strength of the series: here’s the latest news on the planet candidate identified in the 2012 series.

Updated privacy policy

This is just a quick note to let you know that we’ve updated the Zooniverse privacy policy, and that you can see the new version here. In truth, I don’t think there’s anything that surprising in there, but as we continue to grow we thought it was good to be much more explicit about what data we collect, and what we do with it.

We’re also now required to explicitly tell you that we’re using cookies for some features of the site, and you’ll see pop-ups informing you of this fact appear over the next few days. Once acknowledged, they should go away.

If you have any concerns, you can get in touch with the team by emailing support AT zooniverse.org.

I, for one, welcome our new machine collaborators

This post, from Chris Lintott, is one of three marking the end of this phase of the Galaxy Zoo: Supernova project. You can hear from project lead Mark Sullivan here and from machine learning expert and statistician Joey Richards here.

Today’s a bittersweet day for us, as the Galaxy Zoo: Supernova project moves off into (perhaps temporary) retirement. You can read about the reasons for this change over on the Galaxy Zoo blog, but the short answer is that the team have used the thousands of classifications submitted by volunteers to train a machine that can now outperform the humans. Time to wheel out this graphic again, last posted when we started looking at teaching machines with Galaxy Zoo data.
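For anyone curious what ‘training a machine on volunteer classifications’ means in practice, here’s a minimal sketch of the general recipe. To be clear, this is not the team’s actual pipeline: the features, labels and scikit-learn model below are all stand-ins, purely for illustration. The point is simply that the volunteers’ answers become the training labels for a supervised classifier.

```python
# Illustrative only: random numbers stand in for image-derived features,
# and a synthetic rule stands in for the volunteers' consensus labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))            # stand-in for per-candidate features
y = (X[:, 0] + rng.normal(size=5000)) > 0  # stand-in for volunteer labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The crowd's classifications act as ground truth for the machine.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Once trained, the machine can classify new candidates far faster than people can – which is exactly the situation the supernova project now finds itself in.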

That’s all very well, but what of those of us who enjoyed the thrill of hunting for supernovae? I think there are two reasons to believe that the supernova project, or something very like it, will be back someday soon. Firstly, the machine learning solution is now very good at finding supernovae in images from just one survey, the Palomar Transient Factory (PTF). I suspect other surveys, with their own quirks, may each require a training set as large as the one used for PTF, so I expect we’ll see a pattern develop in which the early months or years of a survey require volunteer classification, with the volunteers then able to relax until the next challenge comes along. We’re hoping to test this idea sometime soon.

The second way in which I think human classification will return is more subtle – we need to make friends with, and collaborate with, the robots themselves. At the moment, mostly for practical reasons, we treat this as a choice between humans and machines, but the Zooniverse team and more than a few friends have started building a more sophisticated system that combines the two approaches.

One piece of that system is already in place, and owes a lot to the supernova project. Edwin Simpson and colleagues from Oxford’s Robotics Research Group and the Zooniverse have built a mathematical model that’s capable of combining results from many different classifiers, measuring their performance and deciding who to listen to, and when. It was developed and tested using the supernova project data, and has also been running live, keeping track of what’s happening as classifications come in. This should lead to an improvement in classification accuracy, but there’s more. The same sort of method could be used to combine human and machine classification, and we’re beginning to work on a system that can decide when it’s worth asking humans for help. That gives us the best of both worlds – we get to take advantage of machines for routine tasks, but allow them to call for our help when they get stuck. The result should be a more interesting project to participate in, a greater scientific return, and the certainty that we’re not wasting your time. That all sounds pretty good to me.
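The published model is rather more sophisticated than anything that fits in a blog post, but the flavour of the idea – weight each classifier by its estimated reliability, and escalate to humans when the combined answer is too uncertain – can be sketched in a few lines. Everything below (the names, the reliabilities, the thresholds) is hypothetical and purely illustrative, not the actual system:

```python
# Toy sketch of reliability-weighted classifier combination with an
# 'ask the humans' escalation step. Not the real Zooniverse model.
from dataclasses import dataclass

@dataclass
class Classifier:
    name: str
    reliability: float  # estimated probability its answers are correct

def combined_score(votes: list[tuple[Classifier, bool]]) -> float:
    """Weighted vote: each classifier's reliability acts as its weight.

    Returns the weighted fraction of 'supernova' votes, between 0 and 1.
    """
    total = sum(c.reliability for c, _ in votes)
    positive = sum(c.reliability for c, vote in votes if vote)
    return positive / total if total else 0.5

def decide(votes, accept=0.8, reject=0.2):
    """Accept or reject confidently; otherwise route the object to humans."""
    score = combined_score(votes)
    if score >= accept:
        return "supernova"
    if score <= reject:
        return "not a supernova"
    return "ask the volunteers"  # the classifiers disagree: call for help

# Hypothetical classifiers, for illustration only:
machine = Classifier("image pipeline", 0.95)
crowd = Classifier("volunteer consensus", 0.85)
print(decide([(machine, True), (crowd, False)]))  # -> ask the volunteers
```

In the real system the reliabilities aren’t fixed by hand like this – they’re inferred from the classification data itself, which is what lets the model measure each classifier’s performance and decide who to listen to, and when.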