Category Archives: News

So Long, and Thanks for All the Fish!

It seems like only a couple of weeks ago I announced that I’d be heading off soon to pastures new and yet somehow that time has already come – today is my last day working with the Zooniverse.

It’s pretty much impossible for me to describe how much fun I’ve had over the past five years. Playing a part in shaping the Zooniverse from the early days of Galaxy Zoo (2) when we were a tiny team in Oxford through to where we are today has been a blast. In a coincidence of timing my son Caio has been around for almost exactly the same amount of time as I’ve been involved with the Zooniverse, and to be honest I’m not really sure I remember life before either. I checked the commit logs of the Galaxy Zoo 2 codebase and the first code was saved on 25th October 2008 – just over a month before Caio came into the world. Significantly this was more than two months before Chris began paying me but that’s just a testament to what a remarkably persuasive individual he is 🙂

These last few years have been filled with so many significant moments it’s hard to pick out highlights, but if I had to then the launch of Galaxy Zoo 2 and furiously coding as people around me were sipping champagne is pretty memorable. Taking what felt like a massive leap into the unknown with Planet Hunters and then going on to find exoplanets is definitely up there too. And announcing Old Weather (still my favourite Zooniverse project) to the world and seeing how people responded to the Zooniverse doing something ‘other’ than astro was very special.

I’m not going to try and thank every individual I’ve been working with because I’m bound to forget important people. Suffice it to say, I love you all dearly and I’m going to miss working with you day to day immensely.

So farewell and stay in touch!

ArfX

New Project: Plankton Portal

It’s always great to launch a new project! Plankton Portal allows you to explore the open ocean from the comfort of your own home. You can dive hundreds of feet deep, and observe the unperturbed ocean and the myriad animals that inhabit the earth’s last frontier.

Plankton Portal Screenshot

The goal of the site is to classify underwater images in order to study plankton. We’ve teamed up with researchers at the University of Miami and Oregon State University who want to understand the distribution and behaviour of plankton in the open ocean.

The site shows you one of millions of plankton images taken by the In Situ Ichthyoplankton Imaging System (ISIIS), a unique underwater robot engineered at the University of Miami. ISIIS operates as an ocean scanner that casts the shadow of tiny and transparent oceanic creatures onto a very high resolution digital sensor at very high frequency. So far, ISIIS has been used in several oceans around the world to detect the presence of larval fish, small crustaceans and jellyfish in ways never before possible. This new technology can help answer important questions, ranging from how plankton disperse, interact and survive in the marine environment to which physical and biological factors could influence the plankton community.

The dataset used for Plankton Portal comes from a period of just three days in Fall 2010. In three days, they collected so much data that it would take more than three years to analyze it themselves. That’s why they need your help! A computer can probably tell the difference between major classes of organisms, such as a shrimp versus a jellyfish, but distinguishing different species within an order or family is still best done by the human eye.

If you want to help, you can visit http://www.planktonportal.org. A field guide is provided, and there is a simple tutorial. The science team will be on Plankton Portal Talk to answer any questions, and the project is also on Twitter, Facebook and Google+.

Zooniverse: Live

Yesterday we pushed Zooniverse Live to be… er… live. Zooniverse Live is a constantly updated screen, showing live updates from most of our projects. You’ll see a map displaying the locations of recent Zooniverse volunteers’ classifications and a fast-moving list of recently classified images. Zooniverse Live is on display in our Chicago and Oxford offices, but we thought it would be cool to share it with everyone.

At the time this screenshot was taken, the USA was very active and Snapshot Serengeti was busy.

The Zooniverse is a very busy place these days and we’ve been looking for ways to visualize activity across all the projects. Zooniverse Live is a fairly simple web application. Its backend is written in Clojure (pronounced Closure) and the front end is written in JavaScript using a library for data visualization called D3. The Zooniverse Live server listens to a stream of classification information provided by the Zooniverse projects – via a database technology called Redis. Zooniverse Live then updates its own internal database of classifications on the backend, with the front end periodically asking for updates.
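The real service is written in Clojure and consumes its events from Redis, but the aggregation idea is simple enough to sketch in a few lines of Ruby (the event field names here are illustrative, not the actual payload format):

```ruby
# Illustrative sketch only – the real Zooniverse Live backend is Clojure
# reading from Redis. The idea is the same: fold a stream of
# classification events into per-project counts that the front end
# polls periodically to redraw the display.
require 'json'

class LiveTally
  def initialize
    @counts = Hash.new(0)
  end

  # Each event is a JSON payload pushed by a project; the 'project'
  # field name is assumed for this example.
  def ingest(json_event)
    event = JSON.parse(json_event)
    @counts[event['project']] += 1
  end

  # A snapshot for the front end to render from.
  def snapshot
    @counts.dup
  end
end

tally = LiveTally.new
[
  '{"project":"galaxy_zoo"}',
  '{"project":"snapshot_serengeti"}',
  '{"project":"galaxy_zoo"}'
].each { |event| tally.ingest(event) }

tally.snapshot # => {"galaxy_zoo"=>2, "snapshot_serengeti"=>1}
```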

The secret sauce is figuring out where users are classifying from. Zooniverse Live does that using IP Addresses. Everyone connected to the internet is assigned an IP Address by their Internet Service Provider (ISP). While the IP address assigned may change each time a computer connects to the internet, each address is unique and can be tied to a rough geographical area. When Zooniverse projects send their classifications to Zooniverse Live, they include the IP Address the user was classifying from, letting Zooniverse Live do a lookup for the user’s location to plot on the map. The locations obtained in this way are approximate, and in most cases represent your local Internet exchange.
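As a rough illustration of how a prefix-based lookup works (real geolocation uses a maintained database such as MaxMind’s; the prefixes and locations below are invented and drawn from documentation-reserved IP ranges):

```ruby
# Toy prefix-based geolocation. Everything in this table is made up for
# the example; a real lookup consults a regularly updated database.
GEO_TABLE = {
  '203.0.113'  => { city: 'Chicago', lat: 41.88, lon: -87.63 },
  '198.51.100' => { city: 'Oxford',  lat: 51.75, lon: -1.26 }
}.freeze

# Match on the first three octets (a /24), returning nil when unknown –
# mirroring how real lookups are approximate at best.
def rough_location(ip)
  GEO_TABLE[ip.split('.').first(3).join('.')]
end

rough_location('203.0.113.7') # => {:city=>"Chicago", :lat=>41.88, :lon=>-87.63}
```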

Hopefully you’ll enjoy having a look at Zooniverse Live, and we’d love to hear ideas for other Zooniverse data visualizations you’d like to see.

Our Elusive Milky Way

In the coming months the Zooniverse Education Blog will feature guest posts from participants in the Zooniverse Teacher Ambassadors Workshop. Today’s guest blogger William H. Waller is author of The Milky Way — An Insider’s Guide and co-editor of The Galactic Inquirer — an e-journal and forum on the topics of galactic and extragalactic astronomy, cosmochemistry and astrobiology, and interstellar communications.  Bill’s day job involves teaching courses in physics and astronomy at Rockport High School.

For most of human history, the night sky demanded our attention.  The shape-shifting Moon, wandering planets, pointillist stars, and occasional comet enchanted our sensibilities while inspiring diverse tales of origin.  The Milky Way, in particular, exerted a powerful presence on our distant ancestors.  Rippling across the firmament, this irregular band of ghostly light evoked myriad myths of life and death among the stars.  In 1609, Galileo Galilei pointed his telescope heavenward and discovered that the Milky Way is “nothing but a congeries of innumerable stars grouped together in clusters.”  Fast forward 400 years to the present day, and we find that the Milky Way has all but disappeared from our collective consciousness.  Where did it go?

For 25 years as an astronomy educator, I have informally polled hundreds of students, teachers, and the general public regarding their awareness of the night sky.  Invariably, no more than 25 percent have ever seen the Milky Way with their own eyes.  For city dwellers, this is completely understandable.  Unless properly shielded, the artificial lighting from municipal, commercial, and residential sources will spill into the sky and overwhelm the diffuse band of luminescence that is the hallmark of our home galaxy.  The recent video “The City Dark” produced by POV underscores the disruptive aspects that artificial lighting can produce on the life cycles of certain animals – and even upon ourselves.

View from Goodwood, Ontario before and after a power blackout (Courtesy Todd Carlson)

For residents of small towns well away from large cities (such as my own hometown of Rockport, MA), it is much easier to find dark “sanctuaries” where the Milky Way can be spied in all its exquisite beauty.  Yet when I poll Rockport’s sundry inhabitants about having ever seen the Milky Way, I still get a measly 25% positive response.  What’s going on here?

Is it that they don’t care about astronomy and the night sky?  I would have to say that such astronomical indifference is not typical.  Most people in conversations with me will volunteer their fascination for the planets, stars, and the exotica that our universe provides in abundance – from exoplanets to pulsars, black holes, dark matter, and dark energy.  Images from our great space telescopes have also revealed to the casual viewer many marvels of the Milky Way Galaxy, other nearby galaxies, and the remote galaxian cosmos.  Recently, stunning composite images of X-ray, visible, and infrared emission from regions of cosmic tumult have vivified the many powerful dramas that continue to unfold upon the galactic stage.

Supernova remnant Cassiopeia A, as observed 325 years after a massive star exploded. (X-ray: blue), (Visible: green), (Infrared: red) – NASA

Yet, despite popular enthusiasm for the wonders of space, most people still do not bother to find a dark site and witness the source of these wonders for themselves.  Otherwise, my informal polling would have indicated that they knew about the Milky Way as a naked-eye marvel.  I suppose it comes down to the delivery of experiences.  We have grown accustomed to having our experiences conveyed to us in familiar, safe, and readily-accessible packages – be they books, magazines, television programs, planetarium shows, or interactive websites.

Regarding the latter, consider the Zooniverse online portal where anybody with an internet connection can contribute to authentic scientific research.  With just your eyes and hands, you can search for exoplanets around distant suns, trace out star-blown bubbles in our galaxy’s interstellar medium, and categorize the types of galaxies that dwell in deep space.  To date, close to a million people have contributed to  these and sundry other online scientific investigations.

Then there are the mobile apps.  One popular type of app, in particular, has brought millions more people closer to the night sky.  Google Sky Map, Droid Sky View, The Night Sky, and other interactive planetarium simulators enable a smartphone user to point the phone in any direction and see what stars and constellations are located there.  Most of these simulators show the Milky Way as a hazy band, thus cueing the viewer to its existence.  But does that mean that more people are making the effort to find dark sites for smartphone-aided star gazing?  Is participation in amateur astronomy clubs on the rise as a result?  And are star parties at our national parks surging with attendees?  My very limited research on these questions suggests that – yes – ever more people are seeking the sublime wonders of dark skies.  Whether such interactive apps are responsible for these trends remains unknown.  Still, I remain optimistic.

Perhaps our electronic addictions and virtual realities will ultimately re-introduce us to the unembellished Milky Way – and to other direct experiences that Nature so generously provides.  We may be plugged-in as never before, but still we hunger for authentic interactions with the mysterious ways of Nature.  Towards these ends, I urge that we re-double our efforts to preserve the dark night sky through the advocacy of properly-shielded lighting and the establishment of dark-sky sanctuaries.  To help in these regards, please visit the International Dark Sky Society’s webpage.

Zooniverse, GitHub and the future

In case you haven’t noticed I’ve had a pretty busy five years at the Zooniverse. With more than 25 projects launched in fields from astronomy to biodiversity and from climatology all the way to zoology, it’s been an incredible experience to work with so many new science teams hungry for answers to research questions that can only be answered by enlisting the help of a large number of volunteers. This model of citizen science, one where we boil down the often complex analysis task brought to us by a science team to the ‘simplest thing that will work’, build a rich user experience and then ask a bunch of people to help, seems to work pretty well.

For me, one of the best aspects of what I get to do is that I work in a domain that is an inherently open way of doing research. Having joined Zooniverse when we were still ‘just’ Galaxy Zoo, to see the range of projects we host broaden and to watch our community mature has been a remarkable experience. With our latest endeavour – the Galaxy Zoo Quench project – it’s clear that the line between the activities of the ‘science’ team and the ‘volunteers’ is becoming less defined by the day. Citizen-led science in the Zooniverse began with a group of people in the Galaxy Zoo Forum, ‘The Peas Corp’, when they discovered a new class of galaxy, and it continues today with volunteers discovering new types of worms, exotic exoplanets and even, through Quench, analysing and writing a new paper as a group. These of course are just examples I’ve taken from the Zooniverse and there are many more in other projects run by other people, but in each case the result is the same: by engaging the public in a meaningful way Citizen Science is challenging the centuries-old practices of academia and that has to be a good thing.

The opportunity to change the way science is done, whether it’s building software to increase efficiency or developing new collaboration models, is what brought me to the Zooniverse and now it’s what is leading me away. At the end of September this year I’m going to be hanging up my hat as Technical Lead of the Zooniverse and joining GitHub as their ‘science guy’.

As with all big decisions in life this wasn’t an easy one. I feel very fortunate to have had the opportunity to give technical direction to an incredible team of scientists, developers, educators and designers here at the Adler and the wider Zooniverse. But over the past couple of years I’ve also got to know a number of the GitHub folks and I’ve been hugely impressed by their focus on building the very best platform possible for online collaboration. Starting with the very simple idea that ‘it should be easier to work together than alone’ they’ve clearly nailed what it looks like to work on a problem with others in code. But software isn’t the only thing people are sharing on GitHub – legislators are publishing drafts of state law, technicians are documenting scientific laboratory protocols and with tools like the IPython Notebook researchers have defined formats and means of sharing entire research workflows.

The mantra of ‘collaborative versioned science’ has been rattling around my head now for a couple of years. I believe there’s an opportunity for GitHub to be the platform for capturing the process of scientific discovery and I want to help make that happen.

So what does this mean for the Zooniverse? Well, I’m leaving at a pretty good time as the Zooniverse has never been healthier – there’s a first-class web and education team of twelve people I’m going to be leaving behind at the Adler Planetarium in Chicago and we’ve just secured several large grants to expand our sister team at The University of Oxford to ten people (watch this space for job ads).

With all of these people and a number of major development projects in the pipeline we’re going to need a new Technical Lead. If this sounds fun, like you might be a good fit (and you’re able to work in the UK or US) then drop myself and Chris Lintott a line (we’re arfon@zooniverse.org and chris@zooniverse.org) – we’d love to talk. Our software is a mixture of Ruby, Rails and Javascript and we like using technologies like MongoDB, Redis, Amazon Web Services and Hadoop. We get to work on hard data science problems, build custom software for solving crowdsourcing at scale and work with some incredibly smart and creative collaborators.  Whoever takes over is going to have a lot of fun.

Arfon

PS If you’d like to know more about what work looks like as a Technical Lead of the Zooniverse then I’ve written recently about some of the problems we’ve addressed over the past few years here, here and here.

(Many) Zooniverse Papers Now Open Access

You don’t have to hang around the Zooniverse very long to find out that we’re rather proud of our growing list of publications. We think it’s essential that these papers are available to everyone which is why, for example, we’ve been posting versions of the astronomical papers on arXiv’s Astro-Ph. This is where I get papers I want to read, anyway, but there are advantages to occasionally being able to access the ‘real thing’ – the journal’s own version of the paper.

The doors to the Bodleian library in Oxford are labelled by subject. The one on the left here serves both astronomy and rhetoric. Credit: Jim Linwood

I’m delighted, therefore, to say that Oxford University Press, publishers of the journal we most frequently submit papers to, have agreed to make all Zooniverse papers completely free to access. This applies to any Zooniverse paper in Monthly Notices of the Royal Astronomical Society (which is neither monthly nor contains notices of the Royal Astronomical Society), so whether you want to read about bulgeless galaxies, the Solar System’s dust, the supernovae we discovered, Planet Hunters results or Milky Way Project bubbles you can now do so from the journal itself.

How the Zooniverse Works: Keeping It Personal

This is the third post in a series about how, at a high level, the Zooniverse collection of citizen science projects works. In the first post I described the core domain model that we use – something that turns out to be a crucial part of facilitating conversation between scientists and developers. In the second I covered some of the core technologies that keep things running smoothly. In this and the next few posts I’m going to talk about parts of the Zooniverse that are subtle but important optimisations. Things such as how we pick which Subject to show to someone next, how we decide when a Subject is complete, and measuring the quality of a person’s Classifications.

Much of what I’m about to describe probably isn’t obvious to the casual observer but these are some of the pieces of the Zooniverse technical puzzle that as a team we’re most proud of and have taken many iterations over the past five years to get right. This post is about how we decide what to show to you next.

A Quick Refresher

At its most basic, a Zooniverse citizen science project is simply a website that shows you some data – images, audio or plots – asks you to perform some kind of analysis or interpretation on it, and collects back what you said. As I described in my previous post we’ve abstracted most of the data part of that workflow into an API called Ouroboros which handles functionality such as login, serving up Subjects and collecting back user-generated Classifications.

Keeping it Fast

The ability for our infrastructure to scale quickly and predictably is a major technical requirement for us. We’ve been fortunate over the past few years to receive a fair bit of attention in the press which can result in tens or hundreds of thousands of people coming to our projects in a very short period of time. When you’re dealing with visitor numbers at that scale ideally you want everyone to have a pleasant experience.

Let’s think a little more about what absolutely has to happen when a person visits, for example, Galaxy Zoo.

  1. We need to show a login/signup form and send the information provided by the individual back to the server.
  2. Once registration/login is complete we need to serve back some personal information (such as a screen name).
  3. We need to pick some Subjects to show.

For many of the operations that happen in the Zooniverse, a record is written to a database somewhere. When trying to improve the performance of code that involves databases, a key strategy is to avoid querying those databases as much as possible, especially if the queries are complex and the databases are large, as these are often the slowest parts of your application.

What counts as ‘complex’ and ‘big’ in database terms varies based upon the types of records that you are storing, the choices you’ve made about how to index them and the resources you provide to the database server, i.e. how much RAM/CPU you have available.

Keeping it personal

If there’s one place that complex queries are guaranteed to reside in a Zooniverse project codebase then it’s the part where we decide what to show to a particular person next. It’s complex, in need of optimisation and potentially slow for a number of reasons:

  1. When selecting a Subject we need to pick from one that a particular User hasn’t seen before.
  2. Often Subjects are in Groups (such as a collection of records in Notes from Nature) and so these queries have to happen within a particular scope.
  3. We often want to prioritise a certain subset of the Subjects.
  4. These queries happen a lot, at least n * the total number of Subjects (where n is the number of repeat classifications each Subject receives).
  5. The list of Subjects we’re selecting from is often large (many millions).

On first inspection, writing code to achieve the requirements above might not seem that hard but if you add in the requirement that we’d like to be able to select Subjects hundreds of times per second for many thousands of Users then it starts to get tricky.

A ‘poor man’s’ version of this might look something like this:

def self.next_original_for_user(user)
  # Collect the ids of every Subject this user has already classified
  recents = joins(:classifications).where(:classifications => { :zooniverse_user_id => user.id }).select('subjects.id').all
  if recents.any?
    # Pick the first Subject whose id isn't in that list
    where(['id NOT IN (?)', recents.map(&:id)]).first
  else
    # User hasn't classified anything yet – any Subject will do
    first
  end
end

What we’re doing here is finding all the classifications for a given User and grabbing all of the Subject ids for them. Then we do a SQL select to grab the first record that doesn’t have an id matching one of the ones from existing classifications.

While this code is perfectly valid and would work OK for small-scale datasets there are a number of core issues with it:

  1. It’s pretty much guaranteed to get slower over time – as the number of classifications grows for a user, retrieving the recent classifications becomes a bigger and bigger query.
  2. It’s slow from the start – NOT IN queries are notoriously slow.
  3. It’s wasteful – every time we grab a new Subject for a User we essentially run the same query to grab the recent classification Subject ids.

These factors combined make for some serious potential performance issues if we want to execute code like this frequently, for large numbers of people and across large datasets all of which are requirements for the Zooniverse.

A better way

It turns out that there are technologies out there designed to help with this sort of scenario. When we select the next Subject for a user there’s no reason why this operation has to happen in the database that the Subjects are stored in. Instead we can keep ‘proxy’ records stored in lists or sets: if we have a big list of ids of things that are available to be classified, and a list of ids of things that each user has seen so far, then when we want to select a Subject for someone we just subtract those two lists, pick randomly from the difference and pluck that record from the database.

[Diagram: the pool of Subjects still needing classification, minus those a user has already seen, gives the set to pick from]

In the diagram above when Rob (in the middle) comes to one of our sites we subtract from the big list of Subjects that need classifying still (in blue) the list of things that he’s already seen (in green) and then pick randomly from that resulting set. Going by this diagram it looks like we must have to keep a list of available Subjects for each project together with a separate list of Subjects per project per user so that we can do this subtraction and that’s exactly the case. The database technology that we use to do this is called Redis and it’s designed for operations just like this.
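Stripped of Redis, the selection logic really is just set subtraction. A minimal sketch in plain Ruby (in production Redis performs the equivalent – roughly an SDIFF of the two sets followed by a random pick – server-side over its native set type):

```ruby
require 'set'

# The selection step with Redis stripped away: subtract the set of
# Subjects a user has seen from the set still needing classification,
# then pick randomly from the difference.
def next_subject_id(available_ids, seen_ids)
  unseen = available_ids - seen_ids # set difference: never shown before
  return nil if unseen.empty?       # this user has classified everything
  unseen.to_a.sample                # random pick spreads effort evenly
end

available = Set.new(1..10)
seen      = Set.new([1, 2, 3])
next_subject_id(available, seen) # => a random id between 4 and 10
```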

The result

Maturing our codebase to a point where the queries described above are straightforward has been a lot of work, mostly by this guy. What does it look like to actually require this kind of behaviour in code? Just two lines:

class BatDetectiveSubject < Subject
  include SubjectSelector
  include SubjectSelector::Unique
end

This example is selecting ‘unique’ records for each user. We can also select grouped and prioritised unique records for projects like Planet Hunters. Regardless of the selection ‘flavour’ we’re using, it’s now simple for us to implement selection behaviour, and using Redis to perform these selection operations means that everything is insanely quick, typically returning from Redis in ~30ms even for databases with many tens of thousands of Subjects to be classified.
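The internals of SubjectSelector aren’t shown here, but the include-based pattern itself is easy to demonstrate with toy stand-ins (everything below is illustrative, not the actual Ouroboros code – the real modules talk to Redis):

```ruby
# Toy stand-ins for the SubjectSelector modules. The point is the
# pattern: including a module swaps in a different class-level
# selection strategy, so each project's Subject class picks its
# behaviour in a line or two.
module SubjectSelector
  def self.included(base)
    base.extend(ClassMethods)
  end

  module ClassMethods
    # Default strategy: naive, just take the first available Subject.
    def select_for(_user, pool)
      pool.first
    end
  end

  module Unique
    def self.included(base)
      base.extend(ClassMethods)
    end

    module ClassMethods
      # 'Unique' strategy: skip anything this user has already seen.
      def select_for(user, pool)
        (pool - user[:seen]).first
      end
    end
  end
end

class ToySubject
  include SubjectSelector
  include SubjectSelector::Unique # later include wins the lookup
end

ToySubject.select_for({ seen: [:a, :b] }, [:a, :b, :c]) # => :c
```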


Making the routinely hard stuff easier is a continual goal for the Zooniverse development team. That way we can focus maximum effort on the front-end experience and what’s different and hard about each new project we build.

Chasing Storms Online with the New Cyclone Center

Cyclone Center has recorded almost 250,000 classifications from volunteers around the world since its launch in September 2012. We’ve had lots of feedback on the project and have recently made significant changes that we think will make the experience of classifying storms more rewarding.

Patterns in storm imagery are best recognized by the human eye, so the scientists behind Cyclone Center are asking you to help look through 30 years of images of tropical storms. The end product will be a new global tropical cyclone dataset that could not be realistically obtained in any other fashion. We have already found that the pattern matching by our classifiers is doing better in many cases than a computer algorithm on the same images – this is very exciting!

The biggest change to the site is that we’re now targeting storms for classification. We’ve shifted to a system where the whole community will work on particular storms until they’re finished. This produces useful data very quickly – and means we can classify timely and scientifically useful storms as needed. These targeted storms will change frequently as you help us complete each one. You can check a box on the Cyclone Center home page that will mean you get alerted when new targeted storms appear: we hope to recruit a horde of enthusiastic online storm chasers this way.

Cyclone Centre Homepage

We’ve added much more inline classification guidance – gone are the days of clicking on question marks to get help.  For each step in the process, you will be shown information on how to best answer the question. We think this will give you more confidence in what you are doing and hopefully inspire you to do even more!

We’ve improved the tutorial and we’re providing more feedback as you go along – now instead of waiting for several images to see the “Storm Stats” page, you will immediately go there after your first image. We’ve also upgraded Cyclone Center Talk, which allows for better searching and highlights more of the interesting discussions going on between other citizen scientists.

All-in-all it’s a big change for an awesome project. Log in to Cyclone Center today and give the new version a try. Don’t forget to check the box to start getting alerted to new storms as they appear: this will be incredibly useful for the research behind the site, and means you can be the first to classify data on new storms.

[Visit http://www.cyclonecenter.org and see the blog at http://blog.cyclonecenter.org]

Zoo Tools: A New Way to Analyze, View and Share Data

Since the very first days of Galaxy Zoo, our projects have seen amazing contributions from volunteers who have gone beyond the main classification tasks. Many of these examples have led to scientific publications, including Hanny’s Voorwerp, the ‘green pea’ galaxies, and the circumbinary planet PH1b.

One common thread that runs through the many positive experiences we’ve had with the volunteers is the way in which they’ve interacted more deeply with the data. In Galaxy Zoo, much of this has been enabled by linking to the Sloan SkyServer website, where you can find huge amounts of additional information about galaxies on the site (redshift, spectra, magnitudes, etc). We’ve put in similar links on other projects now, linking to the Kepler database on Planet Hunters, or data on the location and water conditions in Seafloor Explorer.

The second part of this that we think is really important, however, is providing ways in which users can actually use and manipulate this data. Some users have already been very resourceful in developing their own analysis tools for Zooniverse projects, or have done lots of offline work pulling data into Excel, IDL, Python, and lots of other programs (see examples here and here). We want to make using the data easier and available to more of our community, which has led to the development of Zoo Tools (http://tools.zooniverse.org). Zoo Tools is still undergoing some development, but we’d like to start by describing what it can do and what sort of data is available.

An Example

Zoo Tools works in an environment which we call the Dashboard – each Dashboard can be thought of as a separate project that you’re working on. You can create new Dashboards yourself, or work collaboratively with other people on the same Dashboard by sharing the URL.

Zoo Tools Main Page

Create a New Dashboard

Within the Dashboard, there are two main functions: selecting/importing data, and then using tools to analyze the data.

The first step for working with the Dashboard is to select the data you’d like to analyze. At the top left of the screen, there’s a tab named “Data”. If you click on this, you’ll see the different databases that Zoo Tools can query. For Galaxy Zoo, for example, it can query the Zooniverse database itself (galaxies that are currently being classified by the project), or you can also analyze other galaxies from the SDSS via their Sky Server website.

Import Data from Zooniverse

Clicking on the “Zooniverse” button, for example, you can select galaxies in one of four ways: from a Collection (either your own or someone else’s), from your recently classified galaxies, from galaxies that you’ve favorited, or by specific Zooniverse IDs. Selecting any of these will import them as a dataset, which you can start to look at and analyze. In this example we’ll import 20 recent galaxies.

Import 20 Recents

After importing your dataset, you can use any of the tools in Dashboard (which you can select under “Tools” at the top of the page) on your data. After selecting a tool, you choose the dataset that you’d like to work with from a dropdown menu, and then you can begin using it. For example: if I want to look at the locations of my galaxies on the sky, I can select the “Map” tool. I then select the data source I’d like to plot (in this case, “Zooniverse–1”) and the tool plots the coordinates of each galaxy on a map of the sky. I can select different wavelength options for the background (visible light, infrared, radio, etc), and could potentially use this to analyze whether my galaxies are likely to have more stars nearby based on their position with respect to the Milky Way.

The other really useful part is that the tools can talk to each other, passing data back and forth. For example: you could import a collection of galaxies and look at their color in a scatterplot. You could then select only certain galaxies in that tool, and plot the positions of just those galaxies on the map. This is what we do in the screenshots below:

[Slideshow: selecting galaxies in the scatterplot and plotting the selection on the map]
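The hand-off between tools can be sketched in code. This is a minimal, hypothetical model – the dataset structure, field names, and helper functions are all assumptions for illustration, not Zoo Tools internals:

```python
# Minimal sketch of two Dashboard tools sharing a dataset: a
# scatterplot-style selection feeding the map tool.
# All names here are illustrative assumptions.

galaxies = [
    {"id": "AGZ0001", "ra": 184.2, "dec": 1.3, "g": 17.1, "r": 16.2},
    {"id": "AGZ0002", "ra": 150.7, "dec": -0.4, "g": 18.9, "r": 18.6},
    {"id": "AGZ0003", "ra": 210.0, "dec": 5.1, "g": 16.4, "r": 15.1},
]

def select_red(dataset, threshold=1.0):
    """'Scatterplot' step: keep only galaxies redder than g - r = threshold."""
    return [gal for gal in dataset if gal["g"] - gal["r"] > threshold]

def map_positions(dataset):
    """'Map' step: extract the (ra, dec) positions to plot on the sky."""
    return [(gal["ra"], gal["dec"]) for gal in dataset]

red = select_red(galaxies)    # selection made in one tool...
print(map_positions(red))     # ...handed to and plotted by another
```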

Making Data Analysis Social

You can also share Dashboards with other people. From the Zoo Tools home page you can access your existing Dashboards, delete them, or share them with others. You can share on Twitter and Facebook or just grab the URL directly. For example, the Dashboard above can be found here – with a few more tools added as a demonstration.

Sharing a Dashboard

This means that once you have a Dashboard set up and ready to use, you can send it to somebody else to use too. Doing this will mean that they see the same tools in the same configuration, but on their own account. They can then either replicate or verify your work – or branch off and use what you were doing as a springboard for something new.

What ‘Tools’ Are There?

Currently, there are eight tools available for both regular Galaxy Zoo and the Galaxy Zoo Quench projects:

  • Histogram: make bar charts of a single data parameter
  • Scatterplot: plot any two data parameters against each other
  • Map: plot the position of objects on the sky, overplotted on maps of the sky at different wavelengths (radio, visible, X-ray, etc.)
  • Statistics: compute some of the most common statistics on your data (e.g. mean, minimum, maximum)
  • Subject viewer: examine individual objects, including both the image and all the metadata associated with that object
  • Spectra: for galaxies in the SDSS with a spectrum, download and examine that spectrum
  • Table: list the metadata for all objects in a dataset. You can also use this tool to create new columns from the existing data – for example, take the difference between two magnitudes to define the color of a galaxy.
  • Color-magnitude: look at how the color and magnitude of a galaxy compare to the total population of Galaxy Zoo – a really nice way of visualizing and analyzing how unusual a particular galaxy might be.
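The Table tool’s “new column” idea – deriving a galaxy’s color from the difference between two magnitudes – can be sketched like this. The row structure and field names are illustrative assumptions, not Zoo Tools’ actual schema:

```python
# Sketch of deriving a new column from existing data, as the Table tool
# allows: color defined as g - r, a common astronomical choice.
# Field names are illustrative assumptions.

def add_color_column(rows):
    """Return rows with a derived 'g_minus_r' color column appended."""
    return [dict(row, g_minus_r=round(row["g"] - row["r"], 2)) for row in rows]

table = [{"id": "AGZ0004", "g": 17.8, "r": 16.9},
         {"id": "AGZ0005", "g": 19.2, "r": 19.0}]

for row in add_color_column(table):
    print(row["id"], row["g_minus_r"])
```

Redder galaxies (typically older, more passive populations) have larger g − r values, which is why this derived column is so useful alongside the Color-magnitude tool.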

We have one tool up and running for Space Warps called Space Warp Viewer. This lets users adjust the color and scale parameters of an image to examine potential gravitational lenses in more detail.
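As a rough idea of what adjusting an image’s scale parameters means, here is a hypothetical sketch of a linear stretch – rescaling raw pixel values so faint features become visible. The function and parameter names are illustrative assumptions, not Space Warp Viewer’s actual implementation:

```python
# Hypothetical sketch of an image stretch: linearly rescale pixel values
# with adjustable scale/offset, clipping to the displayable [0, 1] range.
# Names and behavior are assumptions for illustration.

def stretch(pixels, scale=2.0, offset=0.1):
    """Linearly rescale pixel values and clip the result to [0, 1]."""
    return [min(1.0, max(0.0, scale * (p - offset))) for p in pixels]

# Faint pixels near the offset are suppressed; mid-range values are
# amplified; bright pixels saturate at 1.0.
print(stretch([0.05, 0.2, 0.9]))
```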

Snapshot Serengeti Dashboard

Finally, Snapshot Serengeti has several of the same tools that Galaxy Zoo does, including Statistics, Subject Viewer, Table, and Histogram (aka Bar Graph). There’s also an Image Gallery, where you can examine the still images from your datasets, and we’re working on an Image Player. There are also a few very cool, more advanced tools we started developing last week – they’re not yet deployed, but we’re really excited about them: they’ll let you follow activity over many seasons or focus on particular cameras. Stay tuned. You can see an example Serengeti Dashboard, showing the distribution of cheetahs, here (it’s also shown in the screenshot above).

We hope that Zoo Tools will be an important part of all Zooniverse projects in the future, and we’re looking forward to you trying them out. More to come soon!

Galaxy Zoo Quench: A New Kind of Citizen Science

A new ‘mini’ project went live yesterday called Galaxy Zoo Quench. This project involves new images of 6,004 galaxies drawn from the original Galaxy Zoo. As usual, everyone is invited to come and classify these galaxies, but this project has a twist that makes it special! We hope to take citizen science to the next level by providing the opportunity to take part in the entire scientific process – everything from classifying galaxies to analyzing results to collaborating with astronomers to writing a scientific article!

Galaxy Zoo Quench

Galaxy Zoo Quench examines a sample of galaxies that have recently and abruptly quenched their star formation – aptly named Post-Quenched Galaxies. They provide an ideal laboratory for studying galaxy evolution, and that’s exactly what we want to do, with the help of the Zooniverse community. We hope you’ll join us as we try out a new kind of citizen science project. Visit http://quench.galaxyzoo.org to learn more.

The entire process of classifying, analyzing, discussing, and writing the article will take place over roughly 8–12 weeks. After classifying the galaxies, Quench volunteers can use tools.zooniverse.org to plot the data and look for trends. We also have a special Quench Talk forum for discussing and identifying key results to include in the paper – above you can see examples of some of the cool objects people have already found and discussed.

Have questions about the project? Leave a comment here or ask us on Twitter (@galaxyzoo) or on the Galaxy Zoo Facebook page. In case you’re worried: the regular Galaxy Zoo will continue as normal.

Now go visit http://quench.galaxyzoo.org and start classifying!