It’s an exciting time at Zooniverse HQ, with results flooding in from our existing projects – I’ve just been taking a quick look at the Moon Zoo data – and the programming team preparing new projects and some surprises, too. It’s been fantastic to see the other projects coming into their own. Don’t tell the Galaxy Zoo team but it’s particularly great that Moon Zoo has been our busiest project for the last few weeks.
That’s one sign that whatever magic powered the enormous and unexpected wave of enthusiasm for Galaxy Zoo can be replicated. Our task, then, should be simple – all we have to do is launch projects with the right mix of ingredients, then sit back and watch the science roll in.
Unfortunately, writing down the recipe isn’t that simple. Although the education team are working hard to understand what makes a good project, it will never be an exact science. There will, I suspect, always be an element of hit and miss in whether a project attracts an audience, but what we do know is that many of you contribute because you’re enthused by the opportunity to make a difference – to actually add something to what we understand about the Universe.
That means that we have one absolutely unbreakable rule when selecting projects – they must be constructed in such a way that we know that clicks or contributions will add up to something meaningful.
In the original Galaxy Zoo, for example, we would never have predicted that we’d find the Voorwerp or the Peas, and a random search for things that might look interesting wouldn’t have let us guarantee that Zooites’ contributions would be useful.
Instead, we found a set of questions with defined answers that we knew would be interesting. For example, we know that producing a catalogue of clumpy galaxies will be interesting, and so there’s an ‘Is this clumpy?’ question in Galaxy Zoo’s latest incarnation.
This golden rule has implications for the design of the projects as well. It’s very tempting to rely on description – rather than forcing people to sort galaxies into categories that don’t always apply, why don’t we just allow people to ‘say what they see’, just as people on Flickr tag and comment on photos?
If producing science is the goal, though, this doesn’t work. There isn’t an easy way to average comments, and there’s no way we can read every tag or post on the Forum (even if Alice and the other moderators do a fairly good job of that!). To guarantee results we need quantifiable data – and then we can rely on the Forum to do the wonderful, surprising job of serendipitous discovery.
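To see why defined answers average so easily where free-text tags don’t, here’s a minimal illustrative sketch – not the actual Zooniverse pipeline, and all names are hypothetical – of turning many volunteers’ yes/no answers to a question like ‘Is this clumpy?’ into a vote fraction per galaxy:

```python
from collections import defaultdict

def aggregate_votes(classifications):
    """Combine per-subject yes/no answers into vote fractions.

    classifications: iterable of (subject_id, answer) pairs,
    where answer is True for 'yes' and False for 'no'.
    Returns {subject_id: fraction of 'yes' votes}.
    """
    yes = defaultdict(int)
    total = defaultdict(int)
    for subject_id, answer in classifications:
        total[subject_id] += 1
        if answer:
            yes[subject_id] += 1
    return {s: yes[s] / total[s] for s in total}

# Three volunteers look at galaxy-1, two at galaxy-2 (made-up IDs):
votes = [("galaxy-1", True), ("galaxy-1", True), ("galaxy-1", False),
         ("galaxy-2", False), ("galaxy-2", False)]
fractions = aggregate_votes(votes)
```

A free-form comment like “lovely blue smudge, maybe merging?” has no equivalent operation – which is exactly why the structured questions carry the science and the Forum carries the serendipity.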