
AI Ethics Workshop Series: Update #1

This post is part of our Kavli Foundation-funded series, Ethical Considerations for Machine Learning in Public-Engaged Research. Read our project announcement blog post here.

We’d like to thank everyone who participated in the first of four surveys to help shape the future of AI and public-engaged research. We received over 1000 responses to the first survey, which informed priorities for the first workshop and helped Zooniverse leadership understand some of your interests, concerns, and ideas around this important topic.

Our second survey launches today and will accept responses through July 18th. We hope you will participate!

In case you missed it, check out the project announcement blog post to learn more about Zooniverse’s effort to develop recommendations for running AI-engaged projects on the Zooniverse platform.

Who is running this study? The Project Director is Dr. Samantha Blickhan, Zooniverse Co-Director and Digital Humanities Lead.

Who is funding this research? This research is funded by The Kavli Foundation.

How can I contact the team? Questions can be addressed to hillary@zooniverse.org or samantha@zooniverse.org.

Ethical Considerations for Machine Learning in Public-Engaged Research

Highlights

  • With support from the Kavli Foundation, the Zooniverse team is launching a project to help us develop a set of recommendations for running Machine Learning (ML) and Artificial Intelligence (AI)-engaged projects on the Zooniverse platform.
  • The project will bring together subject matter experts, Zooniverse leadership, and platform participants in a series of workshops and working sessions.
  • The project deepens partnerships among Zooniverse and its participant community, as well as the Kavli Institute for Cosmological Physics, UC-Berkeley Kavli Center for Ethics, Science, and the Public, and the SkAI AI Astro Institute. 
  • Zooniverse participants have an opportunity to get involved and follow along in a number of ways!

Developing recommendations for ML/AI projects on Zooniverse

As ML/AI has become more prevalent—now used in about ⅓ of Zooniverse projects—it has sparked a range of reactions on the Talk message boards within the participant community, reflecting broader societal discourse. Zooniverse participants have surfaced concerns and insights on issues like ownership, agency, transparency, and trust. It is crucial that we address the risks, opportunities, challenges, and broader ethical questions this shift raises.

In response, we developed a project to create a set of recommendations for running ML/AI-engaged projects on the Zooniverse platform. In this project we will explore the tensions of integrating ML/AI within online public-engaged research. We hope that these recommendations will also be useful for related fields incorporating ML/AI in public-engaged research processes. 

Collaborative workshops

With funding from The Kavli Foundation, this project will bring together Zooniverse leadership, platform participants, researchers, and experts in topics like communications, ethics, law, and ML/AI in a series of workshops and working sessions. The project deepens partnerships among Zooniverse and its participant community, as well as the Kavli Institute for Cosmological Physics, UC-Berkeley Kavli Center for Ethics, Science, and the Public, and the SkAI AI Astro Institute.

Workshop themes cover topics raised by Zooniverse participants and project research teams as well as gaps in existing knowledge, resources, and guidance. 

  • Workshop 1 (June) will focus on Transparency and Communication Best Practices. It will inform guidelines that will support researchers in effectively communicating with participants when integrating ML/AI into their public-engaged research projects. 
  • Workshop 2 (July) will cover Ethical Approaches to ML/AI. It will invite discussions that explore and identify foundational elements of an ethical approach to ML/AI-focused public-engaged research, addressing risks while leveraging opportunities. 
  • Workshop 3 (August) will focus on Deepening Contextual Understanding. It will expand on the ethical considerations raised in Workshop 2 by examining a matrix of factors including disciplinary differences, task type affordances, and the varied needs of stakeholders (e.g., researchers, participants, platform maintainers). We anticipate that ethical principles may at times conflict within this matrix, making it essential to foster a shared understanding of how, why, and when we will draw from different elements as we develop these recommendations. 
  • Workshop 4 (September) will consider Downstream Data Protection. It will inform recommendations for licensing frameworks to use with public-engaged research data outputs that align with platform values, particularly in relation to projects that incorporate ML/AI. 

Call to action: We want you to participate!

Zooniverse participants have an opportunity to get involved and follow along in a number of ways:

1. Help shape the future of ML/AI and public-engaged research. Options include:

  • Complete four short surveys throughout the duration of the project, starting with this one.
  • Survey responses will be considered as we draft the recommendations for running ML/AI-engaged projects on the Zooniverse platform.
  • We’ll also be reaching out to a subset of our community about participating in the workshops.

2. Follow along:

  • We’ll be posting updates on Talk and on our Zooniverse blog during the process, and project results will be shared broadly.
  • You can opt in to receive project updates by completing the first survey here.


Who is running this study? The Project Director is Dr. Samantha Blickhan, Zooniverse Co-Director and Digital Humanities Lead.

Who is funding this research? This research is funded by the Kavli Foundation.

How can I contact the team? Questions can be addressed to hillary@zooniverse.org or samantha@zooniverse.org.

Experiments on the Zooniverse

Occasionally we run studies in collaboration with external researchers in order to better understand our community and improve our platform. These studies can involve methods such as A/B splits, where we show a slightly different version of the site to one group of volunteers and measure how it affects their participation, e.g. whether it influences how many classifications they make or their likelihood to return to the project for subsequent sessions.
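To give a flavour of how an A/B split works, here is a minimal sketch of one common approach: hashing a volunteer identifier to assign each person to a stable cohort. This is purely illustrative (the function and identifiers are hypothetical), not actual Zooniverse platform code.

```python
# Illustrative sketch only: a hypothetical cohort-assignment helper,
# not real Zooniverse platform code.
import hashlib

def assign_cohort(volunteer_id: str, experiment_name: str) -> str:
    """Deterministically assign a volunteer to cohort 'A' or 'B'.

    Hashing the volunteer ID together with the experiment name yields a
    roughly 50/50 split, and the same volunteer always lands in the same
    cohort, so they see a consistent version of the site across sessions.
    """
    digest = hashlib.sha256(f"{experiment_name}:{volunteer_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: this volunteer sees the same version every time they return.
cohort = assign_cohort("volunteer-42", "messaging-experiment")
```

The key design point is determinism: assignment depends only on stable inputs, so no per-volunteer state needs to be stored for the experiment to behave consistently.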

One example of such a study was the messaging experiment we ran on Galaxy Zoo. We worked with researchers from Ben Gurion University and Microsoft Research to test whether the specific content and timing of messages presented in the classification interface could help alleviate the issue of volunteers disengaging from the project. You can read more about that experiment and its results in this Galaxy Zoo blog post: https://blog.galaxyzoo.org/2018/07/12/galaxy-zoo-messaging-experiment-results/.

As the Zooniverse has teams based at different institutions in the UK and the USA, the procedures for ethics approval differ depending on who is leading the study. After recent discussions with staff at the University of Oxford ethics board to check that our procedure was up to date, our Oxford-based team will be changing the way in which we gain approval for, and report the completion of, these types of studies. As before, all study designs in which Oxford staff take part in the analysis will be submitted to CUREC, as we have done for the last few years. What is new is that, from now on, once the data-gathering stage of a study has been run, we will provide all volunteers involved with a debrief message.

The debrief will explain to our volunteers that they have been involved in a study, along with providing information about the exact set-up of the study and what the research goals were. The most significant change is that, before the data analysis is conducted, we will contact all volunteers involved in the study and allow a period of time for them to state that they would like to withdraw their consent to the use of their data. We will then remove all data associated with any volunteer who does not wish to be involved before the data is analysed and the findings are presented. The debrief will also contain contact details for the researchers in the event of any concerns or complaints. You can see an example of such a debrief in our original post about the Galaxy Zoo messaging experiment here: https://blog.galaxyzoo.org/2015/08/10/messaging-test/.

As always, our primary focus is the research enabled by our volunteer community on our individual projects. We run experiments like these in order to better understand how to create a more efficient and productive platform that benefits both our volunteers and the researchers we support. Every click our volunteers make is used in the science outcomes of our projects, whether or not it is part of an A/B split experiment. We strive never to waste any volunteer time or effort.

We thank you for all that you do, and for helping us learn how to build a better Zooniverse.