We’re sorry to let you know that at 16:29 BST on Wednesday last week we made a change to the Panoptes code which had the unexpected result that it failed to record classifications on six of our newest projects: Season Spotter, Wildebeest Watch, Planet Four: Terrains, Whales as Individuals, Galaxy Zoo: Bar Lengths, and Fossil Finder. The change was checked by two members of the team – unfortunately, neither of them caught the fact that it failed to post classifications back. When we did eventually catch the problem, we fixed it within 10 minutes. Things were back to normal by 20:13 BST on Thursday, though by that time each project had lost a day’s worth of classifications.
To prevent something like this happening in the future we are implementing new code that will monitor the incoming classifications from all projects and send us an alert if any of them go unusually quiet. We will also be putting in even more code checks that will catch any issues like this right away.
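The shape of that kind of monitor is simple enough to sketch. This is purely illustrative – the function names (`recent_counts`, `send_alert`) and the threshold are assumptions of mine, not the actual Panoptes code:

```python
# Hypothetical sketch of a "quiet project" alert. recent_counts maps each
# project to its classification count over a recent window; baselines holds
# each project's typical count for the same window length.
def check_projects(recent_counts, baselines, send_alert, threshold=0.1):
    """Call send_alert for any project whose recent rate falls far below baseline."""
    for project, count in recent_counts.items():
        baseline = baselines.get(project, 0)
        if baseline > 0 and count < threshold * baseline:
            send_alert(project, count, baseline)
```

A real implementation would also need to handle projects that are legitimately paused or newly launched, where a low count isn’t a fault.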
It is so important to all of us at the Zooniverse that we never waste the time of any of our volunteers, and that all of your clicks contribute towards the research goals of the project. If you were one of the people whose contributions were lost we would like to say how very sorry we are, and hope that you can forgive us for making this terrible mistake. We promise to do everything we can to make sure that nothing like this happens again, and we thank you for your continued support of the Zooniverse.
Today, we launch AnnoTate, an art history and transcription project made in partnership with Tate museums and archives. AnnoTate was built with the average time-pressed user in mind, by which I mean the person who does not necessarily have five or ten minutes to spare, but maybe thirty or sixty seconds.
AnnoTate takes a novel approach to crowdsourced text transcription. The task you are invited to do is not a page, not sentences, but individual lines. If the kettle boils, the dog starts yowling or the children are screaming, you can contribute your one line and then go attend to life.
The new transcription system is powered by an algorithm that will show when lines are complete, so that people don’t replicate effort unnecessarily. As in other Zooniverse projects, each task (in this case, a line) is done by several people, so you’re not solely responsible for a line, and it’s ok if your lines aren’t perfect.
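To give a flavour of the idea, here is a toy version of line retirement: treat a line as done once enough volunteers submit matching text. This is an illustrative sketch only – AnnoTate’s actual aggregation algorithm is more sophisticated than simple string matching:

```python
from collections import Counter

def line_complete(transcriptions, min_agree=3):
    """Treat a line as complete once min_agree volunteers submit matching text.

    Transcriptions are normalized (whitespace and case) before comparison;
    a real system would tolerate small spelling differences too.
    """
    normalized = [t.strip().lower() for t in transcriptions]
    counts = Counter(normalized)
    return any(n >= min_agree for n in counts.values())
```

Once a line is complete, it can be retired from the pool so new volunteers are shown lines that still need attention.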
Of course, if you want to trace the progression of an artist’s life and work through their letters, sketchbooks, journals, diaries and other personal papers, you can transcribe whole pages and documents in sequence. Biographies of the artists are also available, and there will be experts on Talk to answer questions.
Every transcription gets us closer to the goal of making these precious documents word searchable for scholars and art enthusiasts around the world. Help us understand the making of twentieth-century British art!
Calling all Zooniverse volunteers! As we transition from the dog days of summer to the pumpkin spice latte days of fall (well, in the Northern hemisphere at least) it’s time to mobilize and do science!
Our Zooniverse community of over 1.3 million volunteers has the ability to focus efforts and get stuff done. Join us for the Sunspotter Citizen Science Challenge! From August 29th to September 5th, it’s a mad sprint to complete 250,000 classifications on Sunspotter.
Sunspotter needs your help so that we can better understand and predict how the Sun’s magnetic activity affects us on Earth. The Sunspotter science team has three primary goals:
Hone a more accurate measure of sunspot group complexity
Improve how well we are able to forecast solar activity
Create a machine-learning algorithm based on your classifications to automate the ranking of sunspot group complexity
In order to achieve these goals, volunteers like you compare two sunspot group images taken by the Solar and Heliospheric Observatory and choose the one you think is more complex. Sunspotter is what we refer to as a “popcorn project”. This means you can jump right into the project and that each classification is quick, about 1–3 seconds.
Let’s all roll up our sleeves and advance our knowledge of heliophysics!
In the previous post, I described the creation of the Zooniverse Project Success Matrix from Cox et al. (2015). In essence, we examined 17 (well, 18, but more on that below) Zooniverse projects, and for each of them combined 12 quantitative measures of performance into one plot of Public Engagement versus Contribution to Science:
The aim of this post is to answer the questions: What does it mean? And what doesn’t it mean?
Discussion of Results
The obvious implication of this plot and of the paper in general is that projects that do well in both public engagement and contribution to science should be considered “successful” citizen science projects. There’s still room to argue over which is more important, but I personally assert that you need both in order to justify having asked the public to help with your research. As a project team member (I’m on the Galaxy Zoo science team), I feel very strongly that I have a responsibility both to use the contributions of my project’s volunteers to advance scientific research and to participate in open, two-way communication with those volunteers. And as a volunteer (I’ve classified on all the projects in this study), those are the 2 key things that I personally appreciate.
It’s apparent just from looking at the success matrix that one can have some success at contributing to science even without doing much public engagement, but it’s also clear that every project that successfully engages the public also does very well at research outputs. So if you ignore your volunteers while you write up your classification-based results, you may still produce science, though that’s not guaranteed. On the other hand, engaging with your volunteers will probably result in more classifications and better/more science.
Surprises, A.K.A. Failing to Measure the Weather
Some of the projects on the matrix didn’t appear quite where we expected. I was particularly surprised by the placement of Old Weather. On this matrix it looks like it’s turning in an average or just-below-average performance, but that definitely seems wrong to me. And I’m not the only one: I think everyone on the Zooniverse team thinks of the project as a huge success. Old Weather has provided robust and highly useful data to climate modellers, in addition to uncovering unexpected data about important topics such as the outbreak and spread of disease. It has also provided publications for more “meta” topics, including the study of citizen science itself.
Additionally, Old Weather has a thriving community of dedicated volunteers who are highly invested in the project and highly skilled at their research tasks. Community members have made millions of annotations on log data spanning centuries, and the researchers keep in touch with both them and the wider public in multiple ways, including a well-written blog that gets plenty of viewers. I think it’s fair to say that Old Weather is an exceptional project that’s doing things right. So what gives?
There are multiple reasons the matrix in this study doesn’t accurately capture the success of Old Weather, and they’re worth delving into as examples of the limitations of this study. Many of them are related to the project being literally exceptional. Old Weather has crossed many disciplinary boundaries, and it’s very hard to put such a unique project into the same box as the others.
Firstly, because of the way we defined project publications, we didn’t really capture all of the outputs of Old Weather. The use of publications and citations to quantitatively measure success is a fairly controversial subject. Some people feel that refereed journal articles are the only useful measure (not all research fields use this system), while others argue that publications are an outdated and inaccurate way to measure success. For this study, we chose a fairly strict measure, trying to incorporate variations between fields of study but also requiring that publications should be refereed or in some other way “accepted”. This means that some projects with submitted (but not yet accepted) papers have lower “scores” than they otherwise might. It also ignores the direct value of the data to the team and to other researchers, which is pretty punishing for projects like Old Weather where the data itself is the main output. And much of the huge variety in other Old Weather outputs wasn’t captured by our metric. If it had been, the “Contribution to Science” score would have been higher.
Secondly, this matrix tends to favor projects that have a large and reasonably well-engaged user base. Projects with a higher number of volunteers have a higher score, and projects where the distribution of work is more evenly spread also have a higher score. This means that projects where a very large fraction of the work is done by a smaller group of loyal followers are at a bit of a disadvantage by these measurements. Choosing a sweet spot in the tradeoff between broad and deep engagement is a tricky task. Old Weather has focused on, and delivered, some of the deepest engagement of all our projects, which meant these measures didn’t do it justice.
To give a quantitative example: the distribution of work is measured by the Gini coefficient (on a scale of 0 to 1), and in our metric lower numbers, i.e. more even distributions, are better. The 3 highest Gini coefficients in the projects we examined were Old Weather (0.95), Planet Hunters (0.93), and Bat Detective (0.91); the average Gini coefficient across all projects was 0.82. It seems clear that a future version of the success matrix should incorporate a more complex use of this measure, as very successful projects can have high Gini coefficients (which is another way of saying that a loyal following is often a highly desirable component of a successful citizen science project).
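For readers unfamiliar with the measure, the Gini coefficient over per-volunteer classification counts can be computed like this (a minimal sketch; this is the standard formula, not the study’s actual analysis pipeline):

```python
def gini(counts):
    """Gini coefficient of per-volunteer classification counts.

    0 means the work was spread perfectly evenly across volunteers;
    values near 1 mean a small group did almost all of it.
    """
    xs = sorted(counts)
    n = len(xs)
    total = sum(xs)
    if n == 0 or total == 0:
        return 0.0
    # Standard formula on sorted data: sum of (2i - n - 1) * x_i, 1-indexed,
    # normalized by n * total.
    weighted = sum((2 * i - n - 1) * x for i, x in enumerate(xs, start=1))
    return weighted / (n * total)
```

So a project where four volunteers each did a quarter of the work scores 0.0, while one where a single volunteer did everything scores close to 1.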
Thirdly, I mentioned in part 1 that these measures of the Old Weather classifications were from the version of the project that launched in 2012. That means that, unlike every other project studied, Old Weather’s measures don’t capture the surge of popularity it had in its initial stages. To understand why that might make a huge difference, it helps to compare it to the only eligible project that isn’t shown on the matrix above: The Andromeda Project.
In contrast to Old Weather, The Andromeda Project had a very short duration: it collected classifications for about 4 weeks total, divided over 2 project data releases. It was wildly popular, so much so that the project never had a chance to settle in for the long haul. A typical Zooniverse project has a burst of initial activity followed by a “long tail” of sustained classifications and public engagement at a much lower level than the initial phase.
The Andromeda Project is an exception to all the other projects because its measures are only from the initial surge. If we were to plot the success matrix including The Andromeda Project in the normalizations, the plot looks like this:
Because we try to control for project duration, the very short duration of the Andromeda Project means it gets a big boost. Thus it’s a bit unfair to compare all the other projects to The Andromeda Project, because the data isn’t quite the same.
However, that’s also true of Old Weather — but instead of only capturing the initial surge, our measurements for Old Weather omit it. These measurements only capture the “slow and steady” part of the classification activity, where the most faithful members contribute enormously but where our metrics aren’t necessarily optimized. That unfairly makes Old Weather look like it’s not doing as well.
In fact, comparing these 2 projects has made us realize that projects probably move around significantly in this diagram as they evolve. Old Weather’s other successes aren’t fully captured by our metrics anyway, and we should keep those imperfections and caveats in mind when we apply this or any other success measure to citizen science projects in the future; but one of the other things I’d really like to see in the future is a study of how a successful project can expect to evolve across this matrix over its life span.
Why do astronomy projects do so well?
There are multiple explanations for why astronomy projects seem to preferentially occupy the upper-right quadrant of the matrix. First, the Zooniverse was founded by astronomers and still has a high percentage of astronomers or ex-astronomers on the payroll. For many team members, astronomy is in our wheelhouse, and it’s likely this has affected decisions at every level of the Zooniverse, from project selection to project design. That’s starting to change as we diversify into other fields and recruit much-needed expertise in, for example, ecology and the humanities. We’ve also launched the new project builder, which means we no longer filter the list of potential projects: anyone can build a project on the Zooniverse platform. So I think we can expect the types of projects appearing in the top-right of the matrix to broaden considerably in the next few years.
The second reason astronomy seems to do well is just time. Galaxy Zoo 1 is the first and oldest project (in fact, it pre-dates the Zooniverse itself), and all the other Galaxy Zoo versions were more like continuations, so they hit the ground running because the science team didn’t have a steep learning curve. In part because the early Zooniverse was astronomer-dominated, many of the earliest Zooniverse projects were astronomy related, and they’ve just had more time to do more with their big datasets. More publications, more citations, more blog posts, and so on. We try to control for project age and duration in our analysis, but it’s possible there are some residual advantages to having extra years to work with a project’s results.
Moreover, those early astronomy projects might have gotten an additional boost from each other: they were more likely to be popular with the established Zooniverse community, compared to similarly early non-astronomy projects which may not have had such a clear overlap with the established Zoo volunteers’ interests.
The citizen science project success matrix presented in Cox et al. (2015) is the first time such a diverse array of project measures have been combined into a single matrix for assessing the performance of citizen science projects. We learned during this study that public engagement is well worth the effort for research teams, as projects that do well at public engagement also make better contributions to science.
It’s also true that this matrix, like any system that tries to distill such a complex issue into a single measure, is imperfect. There are several ways we can improve the matrix in the future, but for now, used mindfully (and noting clear exceptions), this is generally a useful way to assess the health of a citizen science project like those we have in the Zooniverse.
What makes one citizen science project flourish while another flounders? Is there a foolproof recipe for success when creating a citizen science project? As part of building and helping others build projects that ask the public to contribute to diverse research goals, we think and talk a lot about success and failure at the Zooniverse.
But while our individual definitions of success overlap quite a bit, we don’t all agree on which factors are the most important. Our opinions are informed by years of experience, yet before this year we hadn’t tried incorporating our data into a comprehensive set of measures — or “metrics”. So when our collaborators in the VOLCROWE project proposed that we try to quantify success in the Zooniverse using a wide variety of measures, we jumped at the chance. We knew it would be a challenge, and we also knew we probably wouldn’t be able to find a single set of metrics suitable for all projects, but we figured we should at least try to write down one possible approach and note its strengths and weaknesses so that others might be able to build on our ideas.
In this study, we only considered projects that were at least 18 months old, so that all the projects considered had a minimum amount of time to analyze their data and publish their work. For a few of our earliest projects, we weren’t able to source the raw classification data and/or get the public-engagement data we needed, so those projects were excluded from the analysis. We ended up with a case study of 17 projects in all (plus the Andromeda Project, about which more in part 2).
In late July I led a week-long course about crowdsourcing and data visualization at the Digital Humanities Oxford Summer School. I taught the crowdsourcing part, while my friend and collaborator Sarah, from Google, led the data visualization part. We had six participants from fields as diverse as history, archeology, botany, and literature, as well as museum and library curation. Everyone brought a small batch of images and used the new Zooniverse Project Builder (“Panoptes”) to create their own projects. We asked participants: What were their most pressing research questions? If the dataset were larger, why would crowdsourcing be an appropriate methodology, instead of doing the tasks themselves? What would interest the crowd most? What string of questions or tasks might render the best data to work with later in the week?
Within two days everyone had a project up and running. We experienced some teething problems along the way (Panoptes is still in active development) but we got there in the end! Everyone’s project looked swish, if you ask me.
Participants had to ‘sell’ their projects in person and on social media to attract a crowd. The rates of participation were pretty impressive for a 24-hour sprint. Several hundred classifications were contributed, which gave each project owner enough data to work with.
But of course, a good looking website and good participation rates do not equate to easy-to-use or even good data! Several of us found that overly complex marking tasks rendered very convoluted data and clearly lost people’s attention. After working at the Zooniverse for over a year I knew this by rote, but I’d never really had the experience of setting up a workflow and seeing what came out in such a tangible way.
Despite the variable data, everyone was able to do something interesting with their results. The archeologist working on pottery shards investigated whether there was a correlation between clay color and decoration. Clay is regional, but are decorative fashions regional or do they travel? He found, to his surprise, that they were widespread.
In the end, everyone agreed that they would create simpler projects next time around. Our urge to catalogue and describe everything about an object—a natural result of our training in the humanities and GLAM sectors—has to be reined in when designing a crowdsourcing project. On the other hand, our ability to tell stories, and this particular group’s willingness to get to grips with quantitative results, points to a future where humanities specialists use crowdsourcing and quantitative methods to open up their research in new and exciting ways.
Anyone heading over to the Zooniverse today will spot a few changes (there may also be some associated down-time, but in this event we will get the site up again as soon as possible). There’s a new layout for the homepage, a few new projects have appeared and there’s a new area and a new structure to Talk to enable you to discuss the Zooniverse and citizen science in general, something we hope will bring together conversations that until now have been stuck within individual projects.
What you won’t see immediately is that the site is running on a new version of the Zooniverse software, codenamed ‘Panoptes’. Panoptes has been designed so that it’s easier for us to update and maintain, and to allow more powerful tools for project builders. It’s also open source from the start, and if you find bugs or have suggestions about the new site you can note them on Github (or, if you’re so inclined, contribute to the codebase yourself). We certainly know we have a lot more to do; today is a milestone, but not the end of our development. We’re looking forward to continuing to work on the platform as we see how people are using it.
Panoptes allows the Zooniverse to be open in another way too. At its heart is a project building tool. Anyone can log in and start to build their own Zooniverse-style project; it takes only a moment to get started and I reckon not much more than half an hour to get to something really good. These projects can be made public and shared with friends, colleagues and communities – or by pressing a button can be submitted to the Zooniverse team for a review (to make sure our core guarantee of never wasting people’s time is preserved), beta test (to make sure it’s usable!), and then launch.
We’ve done this because we know that finding time and funding for web development is the bottleneck that prevents good projects being built. For the kind of simple interactions supported by the project builder, we’ve built enough examples that we know what a good and engaging project looks like. We’ll still build new and novel custom projects helping the Zooniverse to grow, but today’s launch should mean a much greater number of engaging and exciting projects that will lead to more research, achieved more quickly.
We hope you enjoy the new Zooniverse, and comments and feedback are very welcome. I’m looking forward to seeing what people do with our new toy.
PS You can read more about building a project here, about policies for which projects are promoted to the Zooniverse community here and get stuck into the new projects at www.zooniverse.org/#/projects.
PPS We’d be remiss if we didn’t thank our funders, principally our Google Global Impact award and the Alfred P. Sloan Foundation, and I want to thank the heroic team of developers who have got us to this point. I shall be buying them all beer. Or gin. Or champagne. Or all three.
Orchid Observers, the latest Zooniverse project, is perhaps at first glance a project like all the others. If you visit the site, you’ll be asked to sort through records of these amazing and beguiling plants, drawn from the collections of the Natural History Museum and from images provided by orchid fans from across the country. There’s a scientific goal, related to identifying how orchid flowering times are changing across the UK, a potential indicator of the effects of climate change, and we will of course be publishing our results in scientific journals.
Yet the project is, we hope, also a pointer to one way of creating a richer experience for Zooniverse volunteers. While other projects, such as iNaturalist, have made great efforts in mobilizing volunteers to carry out data collection, this is the first time we’ve combined that sort of effort with ‘traditional’ Zooniverse data analysis. We hope that those in a position to contribute images of their own will also take part in the online phase of the project, both as classifiers and by sharing their expertise online – if you’re interested, there’s an article in the most recent BSBI News that team member Kath Castillo wrote to encourage that magazine’s audience to get involved in both phases of the project.
BSBI News – published by the Botanical Society of Britain and Ireland, and not as far as I know available online – is a common place for the environmental and naturalist communities to advertise citizen science projects in this way, and so it also serves as a place where people talk about citizen science. The same edition that contains Kath’s article also includes a piece by Kew research associate Richard Bateman chewing over the thorny issue of funding distributed networks of volunteers to participate in (and indeed, to coordinate) projects like these. He alludes to the ConSciCom project in which we’re partners, and which has funded the development of both Orchid Observers and another Zooniverse project, Science Gossip, suggesting that we view volunteers as either a freely available source of expertise or, worse, as ‘laboratory rats’.
Neither rings true to me. While the work that gets done in and around Zooniverse projects couldn’t happen without the vast number of hours contributed by volunteers, we’re very conscious of the need to go beyond just passively accepting clicks. We view our volunteers as our collaborators – that’s why they appear on author lists for papers, and why when you take part in a Zooniverse project, then we should take on the responsibility of communicating the results back to you in a form that’s actually useful. The collaboration with the historians in ConSciCom, who study the 19th century – a time when the division between ‘professional’ and ‘citizen’ scientist was much less clear – has been hugely useful in helping us think this through (see, for example, Sally Frampton’s discussion of correspondence in the medical journals of the period). Similarly, it’s been great to work with the Natural History Museum who have a long and distinguished history in working with all sorts of naturalist groups. We’ve been working hard on directly involving volunteers in more than mere clickwork too, and ironically enough, the kind of collaboration with volunteer experts we hope to foster in Orchid Observers is part of the solution.
I hope you enjoy the new project – and as ever, comments and thoughts on how we can improve are welcome, either here or via the project’s own discussion space.
PS This debate is slightly different, but it reminds me of the discussions we’ve had over the years about whether ‘citizen’ science is actually science, or just mere clickwork. Here are some replies from 2010 and from 2013.
We are often asked – by project scientists, sociologists, and the community itself – who our community are. A recent Oxford study tried to find out, and working with the researchers we conducted a survey of volunteers. The results were interesting, and when combined with various statistics that we have at the Zooniverse (web logs, analytics, etc.) they start to give us a pretty good picture of who volunteers at the Zooniverse.
Much of what follows comes from a survey conducted last summer as part of Masters student Victoria Homsy’s thesis, though the results are broadly consistent with other surveys we have performed. We asked a small subset of the Zooniverse community to answer an online questionnaire. We contacted about 3,000 people regarding the survey and around 300 responded. They were not a random sample of users; rather, they were people who had logged in to the Zooniverse at least once in the three months before we emailed them.
The remaining aspects of this post involve data gathered by our own system (classification counts, log-in rates, etc) and data from our use of Google Analytics.
So with that preamble done: let’s see who you are…
This visualisation is of Talk data from last Summer. It doesn’t cover every project (e.g. Planet Hunters is missing) but it gives you a good flavour for how our community is structured. Each node (circle) is one volunteer, sized proportionally according to how many posts they have made overall. You can see one power-mod who has commented more than 16,000 times on Talk near the centre. Volunteers are connected to others by talking in the same threads (a proxy for having conversations). They have been automatically coloured by network analysis, to reflect sub-networks within the Zooniverse as a whole. The result is that we see the different projects’ Talk sites.
There are users that rise largely out of those sub-communities and talk across many sites, but mostly people stick to one group. You can also see how relatively few power users help glue the whole together, and how there are individuals talking to large numbers of others, who in turn may not participate much otherwise – these are likely examples of experienced users answering questions from others.
One thing we can’t tell from our own metrics is a person’s gender, but we did ask in the survey. The Zooniverse community seems to be in a 60/40 split, which in some ways is not as bad as I would have thought. However, we can do better, and this provides a metric to measure ourselves against in the future.
It is also interesting to note that there is very little skew in the ages of our volunteers. There is a slight tilt away from older people, but overall the community appears to be made up of people of all ages. This reflects the experience of chatting to people on Talk.
We know that the Zooniverse is English-language dominated, and specifically UK/US dominated. This is always where we have found the best press coverage, and where we have the most links ourselves. The breakdown between US/UK/the rest is basically a three-way split. This split is seen not just in this survey but also generally in our analytics overall.
Only 2% of the users responding to our survey came from the developing world. As you can see in a recent blog post, we do get visitors from all over the world. It may be that the survey had the effect of filtering out these people (it was conducted via an online form), or maybe there is a language barrier.
We also asked people about their employment status. We find that about half of our community is employed (either full- or part-time). Looking at the age distribution, we might expect a fifth or a sixth of people to be retired (15% is fairly close). This leaves us with about 10% unemployed, nearly twice the UK or US unemployment rate, and about 4% unable to work due to disability (about the UK average, by comparison). This is interesting, especially in relation to the next question, on motivation for participating.
We also asked them to tell us what they do and the result is the above word cloud (thanks, Wordle!) which shows a wonderful array of occupations including professor, admin, guard, and dogsbody. You should note a high instance of technical jobs on this list, possibly indicating that people need to have, or be near, a computer to work on Zooniverse projects in their daily life.
When asked why they take part in Zooniverse projects, we find that the most common response (91%) is a desire to contribute to progress. How very noble. Closely following that (84%) are the many people who are interested in the subject matter. It falls off rapidly then to ‘entertainment’, ‘distraction’ and ‘other’. We are forever telling people that the community is motivated mainly by science and contribution, and for whatever reason they usually don’t believe us. It’s nice to see this result reproducing an important part of the Raddick et al. 2009 study, which first demonstrated it.
It is roughly what I would have expected: people tend to classify mostly in their spare time, and most don’t have dedicated ‘Zooniverse’ time every day. It’s more interesting to see why they tend to stop and start – i.e. why they answered in the purple category above. Here is a word cloud showing the reasons people stop participating in the Zooniverse. TL;DR: they have the rest of their lives to get on with.
We’ll obviously have to fix this by making Zooniverse their whole life!
This is my final blog post as a part of the Zooniverse team. It has been my pleasure to work at the Zooniverse for the last five years. Much of that time has been spent trying to motivate and engage the amazing community of volunteers who come to click, chat, and work on all our projects. You’re an incredible bunch, motivated by science and a desire to be part of something important and worthwhile online. I think you’re awesome. In the last five years I have seen the Zooniverse grow into a community of more than one million online volunteers, willing to tackle big questions and trying to understand the world around us.
Thank you for your enthusiasm and your time. I’ll see you online…
I and other Galaxy Zoo and Zooniverse scientists are looking forward to the Citizen Science Association (CSA) and American Association for the Advancement of Science (AAAS) meetings in San Jose, California this week.
As I mentioned in an earlier post, we’ve organized an AAAS session titled “Citizen Science from the Zooniverse: Cutting-Edge Research with 1 Million Scientists,” which will take place on Friday afternoon. It fits well with the AAAS’s theme this year: “Innovations, Information, and Imaging.” Our excellent line-up includes Laura Whyte (Adler) on Zooniverse, Brooke Simmons (Oxford) on Galaxy Zoo, Alexandra Swanson (U. of Minnesota) on Snapshot Serengeti, Kevin Wood (U. of Washington) on Old Weather, Paul Pharoah (Cambridge) on Cell Slider, and Phil Marshall (Stanford) on Space Warps.
And in other recent Zooniverse news, which you may have heard already, citizen scientists from the Milky Way Project examined infrared images from NASA’s Spitzer Space Telescope and found lots of “yellow balls” in our galaxy. It turns out that these are indications of early stages of massive star formation, such that the new stars heat up the dust grains around them. Charles Kerton and Grace Wolf-Chase have published the results in the Astrophysical Journal.
But let’s get back to the AAAS meeting. It looks like many other talks, sessions, and papers presented there involve citizen science too. David Baker (FoldIt) will give a plenary lecture on post-evolutionary biology and protein structures on Saturday afternoon. Jennifer Shirk (Cornell), Meg Domroese and others from CSA have a session Sunday morning, in which they will describe ways to utilize citizen science for public engagement. (See also this related session on science communication.) Then in a session Sunday afternoon, people from the European Commission and other institutions will speak about global earth observation systems and citizen scientists tackling urban environmental hazards.
Before all of that, we’re excited to attend the CSA’s pre-conference on Wednesday and Thursday. (See their online program.) Chris Filardi (Director of Pacific Programs, Center for Biodiversity and Conservation, American Museum of Natural History) and Amy Robinson (Executive Director of EyeWire, a game to map the neural circuits of the brain) will give the keynote addresses there. For the rest of the meeting, as with the AAAS, there will be parallel sessions.
The first day of the CSA meeting will include:
many sessions on education and learning at multiple levels
sessions on diversity, inclusion, and broadening engagement
a session on defining and measuring engagement, participation, and motivations
a session on CO2 and air quality monitoring
a session on CS in biomedical research
sessions on best practices for designing and implementing CS projects, including a talk by Chris Lintott on the Zooniverse and one by Nicole Gugliucci on CosmoQuest
The second day will bring many more talks and presentations along these and related themes, including one by Julie Feldt about educational interventions in Zooniverse projects and one by Laura Whyte about Chicago Wildlife Watch.
I also just heard that the Commons Lab at the Woodrow Wilson Center is releasing two new reports today, and hardcopies will be available at the CSA meeting. One report is by Muki Haklay (UCL) about “Citizen Science and Policy: A European Perspective” and the other is by Teresa Scassa & Haewon Chung (U. of Ottawa) about “Typology of Citizen Science Projects from an Intellectual Property Perspective.” Look here for more information.
In any case, we’re looking forward to these meetings, and we’ll keep you updated!
The Zooniverse Blog. We're the world's largest and most successful citizen science platform and a collaboration between the University of Oxford, The Adler Planetarium, and friends