All posts by Helen Spiers

Microscopy Masters draws to a close

The guest post below was written by Jacob Bruggemann, a graduate student based at the Scripps Research Institute, who helped lead the biomedical Zooniverse project, Microscopy Masters.

Read on to find out about the findings of this project, made possible by the efforts of our fantastic citizen scientist volunteers. If you would like to learn more, you can access a preprint publication about this work here.

 With thanks to Zooniverse Volunteer Becky Kennard for editing this piece. 

– Helen 

 

Microscopy Masters draws to a close

Microscopy Masters, the cryo-electron microscopy (cryo-EM) project building complex 3D models of proteins, is approaching its initial conclusion. Over the two years the project has been running, we’ve collected over 17,000 classifications and built a dataset of 209,696 unique protein particles. The primary dataset used in this project was the 26S proteasome lid complex generated by the Lander Lab at the Scripps Research Institute. We have also annotated other, smaller datasets.

The proteasome is a large multi-protein complex responsible for breaking down unwanted proteins into reusable parts, kind of like a large recycling center for the cell. Studying its structure could reveal the mechanisms behind how the proteasome lid only opens for proteins marked for recycling, and give insights into problems caused by the lid malfunctioning.

This project is centered on an important tenet in biology, ‘form follows function.’ On the molecular level of biology, what this means is that the shapes of large biological molecules, such as proteins and nucleic acids, are evolved to perform specific functions. By studying and understanding the structure of biological complexes, researchers can better understand how all the little moving parts of life interact, which will allow them to better combat diseases and disorders.

Scientists are often too busy in the lab to come up with catchy names, meaning that the techniques they invent are usually given pretty self-explanatory titles. In the case of cryo-EM, everything you need to know is in the name.

Imagine some scientists are interested in studying a protein, say our subject, the 26S proteasome lid. Cells containing large quantities of the protein are lysed (a scientific way to say ‘popped like balloons’) and the contents of the cells are put into a solution. That solution is purified so that it only contains the protein the scientists are interested in. The purified solution is then flash-frozen in extremely thin ice (cryo) and put under an electron microscope (EM) to obtain images of the proteins. These images are then put through sophisticated reconstruction software to obtain a detailed 3D model of the protein. This technique is so powerful that scientists can identify individual atoms in the protein complex, giving them deep insights into how it interacts with its environment.


Figure 1. A schematic of how cryo-EM is done. Taken from dx.doi.org/10.1038/nature19948.

Of course, years of work (and some very, very expensive microscopes!) went into the cryo-EM workflow I just explained in four sentences. A particularly time-consuming task for cryo-microscopists is picking the individual proteins out of the microscopy images (micrographs), a step called ‘particle picking.’

Scientists used to do this by hand, but since they often have thousands of these images to process, this can take weeks of work. So, they usually rely on software to extract the protein images. But because some proteins are so complex, it can be difficult for software to identify them in the noisy micrographs. For this reason, we decided to train citizen scientists to pick the particles from our proteasome lid data and see if it could be used to build a detailed molecular model.


Figure 2. On the left is a blank micrograph; on the right is a micrograph in which the proteins have been manually picked.
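For readers curious how thousands of individual volunteer clicks turn into a single particle dataset, here is a minimal Python sketch of one way to merge picks on a micrograph by clustering nearby clicks and keeping only locations marked by several people. The function, radius and vote threshold are illustrative assumptions, not the actual Microscopy Masters aggregation code.

```python
import numpy as np

def consensus_picks(clicks, radius=20.0, min_votes=3):
    """Merge volunteer clicks (x, y) on one micrograph into consensus picks.

    Clicks falling within `radius` pixels of each other are treated as the
    same particle; a pick is kept only if at least `min_votes` volunteers
    marked it. All parameter values here are illustrative.
    """
    clicks = np.asarray(clicks, dtype=float)
    unassigned = np.ones(len(clicks), dtype=bool)
    picks = []
    for i in range(len(clicks)):
        if not unassigned[i]:
            continue
        # Gather every still-unassigned click near click i.
        distances = np.linalg.norm(clicks - clicks[i], axis=1)
        members = unassigned & (distances <= radius)
        unassigned[members] = False
        if members.sum() >= min_votes:
            picks.append(clicks[members].mean(axis=0))  # centroid = consensus pick
    return np.array(picks)

# Example: three volunteers roughly agree on one particle; a stray click is dropped.
clicks = [(101, 205), (98, 210), (103, 207), (400, 50)]
print(consensus_picks(clicks, radius=20, min_votes=2))
```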

Using the data from our volunteers, we made a full 3D reconstruction of the proteasome lid. We compared this model to one made with an automatic particle picker; both are shown below. Although they look very similar, what matters to microscopists is the ‘resolution’ of the reconstructions. In this context, resolution reflects how consistent the reconstructions are when the process is repeated on independent subsets of the data.

In this case, each dataset was divided into two random halves and made into two separate models, which were then compared to determine a resolution. Even though the resolution for the computer-made model is lower (better) in this case, this is partly because the computer-picked dataset had so many more particles. For this reason, we also did a reconstruction using a subset of the computer-picked data with the same number of particles as the crowdsourced dataset. This brought the resolution to 4.036 Å, closer to the crowdsourced dataset but still lower.


Figure 3. The final reconstructions of the crowdsourced and computational datasets. The resolution of each is listed; the lower, the better.
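As a rough illustration of the two steps just described, the sketch below splits a particle set into two random halves (the basis of the resolution estimate, which the reconstruction software computes by comparing the two half-maps) and subsamples a larger, computer-picked set down to the crowdsourced particle count. The computer-picked set size is a placeholder; only the crowdsourced count comes from our data.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_halves(particle_ids):
    """Split a particle set into two random, non-overlapping halves, as is
    done before building the two independent half-maps whose agreement
    determines the reported resolution."""
    ids = np.array(particle_ids)
    rng.shuffle(ids)
    mid = len(ids) // 2
    return ids[:mid], ids[mid:]

def matched_subset(larger_set, target_size):
    """Randomly subsample the larger (computer-picked) dataset so it has the
    same number of particles as the crowdsourced one, making the two
    resolutions more directly comparable."""
    return rng.choice(np.array(larger_set), size=target_size, replace=False)

crowd = np.arange(209_696)        # size of the crowdsourced particle set
computer = np.arange(500_000)     # hypothetical size for the auto-picked set
half_a, half_b = random_halves(crowd)
subset = matched_subset(computer, len(crowd))
print(len(half_a), len(half_b), len(subset))
```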

Even though the crowdsourced reconstruction came out at a slightly poorer resolution, we believe this is a fantastic example of the power of citizen science. We built an entirely hand-picked dataset from people who had little to no experience with cryo-EM. This dataset allowed us to build a detailed 3D model of a complex protein with a resolution similar to the one produced by a team of trained scientists using state-of-the-art software.

This was the first time we have run a project of this nature, and we believe that with tweaking and better feedback systems (which were only implemented by the fantastic Zooniverse team late into our project’s run) we can process data better and faster than we did in our first run.

As a side experiment in figuring out how to better engage users, some of our project’s participants might remember being sent newsletters about ‘sprint’ datasets, which were small datasets of 15-20 images of other proteins. These were not intended to build an entire particle dataset, but to provide data that the researchers could feed into their automatic particle-picking software. We found that giving the images different color schemes from the traditional black-and-white was a nice way to ‘spice up’ the micrographs for users, and we were able to provide researchers with usable data to start their processing in a matter of days.

Although we have currently put the Microscopy Masters project on hold, we are excited about the results and are in the process of submitting a publication describing our initial findings. I would like to thank everybody involved with the Zooniverse for building a fantastic platform on which to try our project. In particular, I would like to thank the Zooniverse team for answering my questions and helping me get Microscopy Masters up and running.

And lastly, thank you to all the hard-working participants in Microscopy Masters and everybody who participates in this great website!

 

Find out more:

You can check out more of the great work done here at the Su lab on our website.

The Lander Lab is consistently pushing the boundaries of cryo-EM; go see the work they are doing at their homepage.

Our publication can be previewed as a preprint on bioRxiv.

 

 


Step into the Zoo

Dr Sam Illingworth, Senior Lecturer in Science Communication and poet, wrote the following Zooniverse-inspired poem for us: ‘Research for All’.

If you’d like to read more of his work, check out Sam’s blog here.

– Helen

Research for All

 

Detecting bubbles in the Milky Way,

Or sorting a muon and gamma ray;

Identifying planets and their stars,

Then codifying ice geysers on Mars.

 

From mapping out old weather lost at sea,

To counting jungle rhythms in a tree;

With floating forests hiding in plain sight,

Sometimes research just needs a brighter light.

 

Etching a cell to analyse their state,

And bashing bugs to keep drugs up-to-date;

The history of what has gone before,

Can help predict what science has in store.

 

Transcribing ancient texts and works of art,

Unearthing words that set Shakespeare apart;

Revealing secret lives and hidden gist,

By searching for what others might have missed.

 

In answering the questions left to find,

We need the help of more than just one mind;

A Universe of projects yet to do,

The door is open, step into the zoo.

 

 

 

A Late Night at the Museum

At the end of October, the Zooniverse team was invited to the Natural History Museum in London to be part of the Museum’s monthly Lates event program.

(Photos courtesy of the Etch A Cell team)

The event was organised by the ConSciCom team who have partnered with the Zooniverse to create two very successful projects – Science Gossip and Orchid Observers. The theme for the evening was to explore the role images, such as illustrations and photographs, have played within natural history and scientific research.

From studying animal behaviour using photos taken by camera traps, to advancing our understanding of cell biology with photos from microscopes, many Zooniverse projects improve our understanding of the world around us through the help of citizen scientist volunteers.

Teams from multiple Zooniverse projects, including BashTheBug, Etch A Cell, Notes from Nature, Orchid Observers, Science Gossip and Seabird Watch, attended the event and spent the evening speaking to people about their projects and showing how anyone can contribute to real research through citizen science.

(Photos courtesy of the Etch A Cell team and Jim O’Donnell)

Illustrator Dr Makayla Lewis led a live gallery drawing event, asking visitors to pick up a pencil and spend 15 minutes sketching their favourite exhibits.


(Photos courtesy of Jim O’Donnell)

Thanks to everyone who got involved, including Fiona (Penguin Watch), Freddie (University of Oxford), Jim (Zooniverse Developer), Makayla (Illustrator), Martin (Etch A Cell), Nathan (University of Oxford) and Phil (BashTheBug), and especially all our volunteers who attended the event!

 

Six months of bashing bugs

Below is a guest blog post from Dr Philip Fowler, lead researcher on our award-winning biomedical research project Bash the Bug. Read on to find out more about this project and how you can get involved!

– Helen

 

Our bug-squishing project, BashTheBug, turned six months old this month. Since launching on 7th April 2017, over seven thousand Zooniverse volunteers have contributed nearly half a million classifications between them, an average of 58 classifications per person.

The bug our volunteers have been bashing is the bacterium responsible for tuberculosis (TB): Mycobacterium tuberculosis. Many people think of TB as a disease of the past, to be found only in the books of Charles Dickens. However, the reality is quite different: TB is now responsible for more deaths each year than HIV/AIDS; in 2015 this disease killed 1.8 million people. To make matters worse, like all other bacterial diseases, TB is evolving resistance to the antibiotics used to treat it. It is this problem that inspired the BashTheBug project, which aims to improve both the diagnosis and treatment of TB.

At the heart of this project is the simple idea that, in order to find out which antibiotics are effective at killing a particular TB strain, we have to try growing that strain in the presence of a range of antibiotics at different doses. If an antibiotic stops the bacterium growing at a dose that can be used safely within the human body, then bingo! That antibiotic can be used to treat that strain. To make doing this simpler, the CRyPTIC project (an international consortium of TB research institutions) has designed a 96-well plate with 14 different anti-TB drugs, each at a range of doses, freeze-dried into its wells.


Figure 1. A 96-well microtitre plate

These plates are common in science and are about the size of a large mobile phone. When a patient comes into clinic with TB, a sample of the bacterium they are infected with is taken, grown for a couple of weeks, and then some is added to each of the 96 wells. The plate is then incubated for two weeks and examined to see which wells have TB growing in them and which do not. As each antibiotic is included on the plate at different doses, it is possible to work out the minimum concentration of antibiotic that stops the bug from growing.
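To make the dose-reading step concrete, here is a small Python sketch of how the minimum inhibitory concentration (MIC) for one drug might be read off from its row of wells. The dose values and growth readings are invented for illustration; the real plate layout and analysis are, of course, more involved.

```python
def minimum_inhibitory_concentration(doses, growth):
    """Return the lowest dose at which no growth was seen.

    `doses` lists one drug's concentrations (mg/L) in increasing order and
    `growth` records whether TB grew in each corresponding well.
    These values are illustrative, not the CRyPTIC plate's real layout.
    """
    for dose, grew in zip(doses, growth):
        if not grew:
            return dose
    return None  # growth in every well: resistant at all doses tested

# Hypothetical readings for one drug across a doubling-dilution series.
doses  = [0.03, 0.06, 0.12, 0.25, 0.5, 1.0]
growth = [True, True, True, False, False, False]
print(minimum_inhibitory_concentration(doses, growth))  # -> 0.25
```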

But why are we doing this? Well, the genome of each TB sample will also be sequenced. This will allow us to build two large datasets: one of the mutations in the TB genome, and another listing which antibiotics work for each sample (and which do not). Using these two datasets, we will then be able to infer which genetic mutations are responsible for resistance to specific antibiotics. With me still? Good. This will give researchers a large and accurate catalogue that will allow anyone to predict which antibiotics will work on any TB infection, simply by sequencing its genome. This is particularly important for the diagnosis and treatment of TB; currently used approaches are notoriously slow, taking up to eight weeks to identify which antibiotics can be used for effective treatment. If you were a clinician, would you want to wait two months before starting your patient on treatment? Of course not.


Figure 2. A photograph of M. tuberculosis that has been growing on a plate for two weeks.
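As a toy illustration of how the two datasets could be cross-tabulated, the sketch below counts, for each mutation, how many samples carrying it were resistant or susceptible to a given drug. The record format, field names and mutation labels are illustrative only, not CRyPTIC's actual data model.

```python
from collections import defaultdict

def mutation_catalogue(samples, drug):
    """Cross-tabulate mutations against resistance to one drug.

    Each sample record pairs the mutations found in its genome with the
    antibiotics it was resistant to; both fields are hypothetical here.
    """
    counts = defaultdict(lambda: {"resistant": 0, "susceptible": 0})
    for sample in samples:
        outcome = "resistant" if sample["resistant"][drug] else "susceptible"
        for mutation in sample["mutations"]:
            counts[mutation][outcome] += 1
    return dict(counts)

# Three made-up samples carrying well-known resistance-associated mutations.
samples = [
    {"mutations": {"rpoB_S450L"}, "resistant": {"rifampicin": True}},
    {"mutations": {"rpoB_S450L", "katG_S315T"}, "resistant": {"rifampicin": True}},
    {"mutations": {"katG_S315T"}, "resistant": {"rifampicin": False}},
]
print(mutation_catalogue(samples, "rifampicin"))
```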

You might scoff at this point and say, pah, using genetics like this in hospitals will never happen. Well, it already is. Since March 2017, all routine testing for tuberculosis in England has been done by sequencing the genome of each sample that is sent to either of the two Public Health England reference laboratories. A report is returned to the clinician in around 9 days. Surprisingly, this costs less than the old, traditional methods for TB diagnosis and treatment. Sequencing TB samples also provides other valuable information; for example, you can compare the genomes of different infections to determine whether an outbreak is underway, at no extra cost.

So far, so good. The main challenge for this project, though, is size. We will be collecting around 100,000 samples from people with TB from around the world between now and 2020. Every single sample will have its genome sequenced and its susceptibility to different antibiotics tested on our 96-well plates. Each of these plates then needs to be looked at, and any errors or inconsistencies in how this huge number of 96-well plates is read could lead to false conclusions about which mutations confer resistance and which don’t.

This problem is why we need your help! You might not be clinical microbiologists (although a few of you no doubt are!) but there are many, many more of you than there are experienced and trained scientists. In fact, each plate will only be looked at by one, maybe two, scientists, so it is highly likely that, without the help of volunteers, our final dataset will be riddled with inconsistencies arising from how different people in different labs have read the plates. The inconvenient truth, however much we’d like to think otherwise, is that staring at a small white circle and deciding whether there is any M. tuberculosis growing is a highly subjective task. Take a look at the strip of wells below; the two wells in the top left have no antibiotic at all, so they give you an idea of how this strain of TB grows normally.


Figure 3. Is there a dose above which the bacteria don’t grow?

In the BashTheBug project, you are asked if there is a dose of antibiotic above which the bacteria don’t grow. If you think there is, you are then asked for the number of the first well that doesn’t have any TB growing. For the example image above, I might be cautious and say, well, I can see that there appears to be less and less growth as we go to the right and the dosage increases, but it never entirely goes away; there is a very, very faint dot in well #8. So I’m going to say that actually I think there is bacterial growth in all eight wells. You might be optimistic (or even just in a good mood) and disagree with me and say, yes, but by the time you get to well #6 that dot is so small compared to the growth in the control wells that either the antibiotic is doing its job, or, you know what, I’m not convinced that the dot isn’t some sediment or something else entirely.

There is no correct answer. We are probably both right to some extent; there IS something in well #8, but maybe this antibiotic would still be an effective treatment, as it would kill enough of the bacteria for your immune system to then finish off the remainder of the infection. Therefore, the aim of BashTheBug is to identify the antibiotic dose that multiple people agree is the one above which the bacteria no longer grow. Our result from this project is the consensus we get from showing each image to multiple people. Yes, the volunteers might, on average, take a slightly different view to an experienced clinical microbiologist, but that doesn’t matter, as they will, on average, be consistent across all the plates, which is vital if we are to uncover which genetic mutations confer resistance to antibiotics.
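Here is a minimal sketch of one simple way such a consensus could be computed: take the median of the well numbers reported by several volunteers, and fall back to "grows everywhere" if most volunteers saw growth in every well. This rule is an assumption for illustration, not necessarily BashTheBug's actual aggregation algorithm.

```python
import statistics

def consensus_well(volunteer_answers):
    """Combine volunteers' answers for one strip of wells.

    Each answer is the number of the first well with no growth (1-8), or
    None if that volunteer thought the bacteria grew in every well.
    """
    numeric = [a for a in volunteer_answers if a is not None]
    if len(numeric) < len(volunteer_answers) / 2:
        return None  # most volunteers saw growth everywhere
    return statistics.median(numeric)

# Example: opinions split between well 6, well 7 and "it grew in all of them".
print(consensus_well([6, 6, 7, None, 6]))  # -> 6.0
```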

None of this would be possible without the hard work of all our volunteers. So, if you’ve done any classifications, thank you for all your help. Here’s to another six months, many more classifications, and the first results from the hard work done by the many volunteers who have taken part in the project to date.

Find out more:

  • Contribute to the project here
  • Read the official BashTheBug blog here
  • Follow @BashTheBug on Twitter here
  • BashTheBug won the Online Community Award of the NIHR Let’s Get Digital Competition, read more here

Check out other coverage of BashTheBug:

The Universe Inside Our Cells

Below is the first in a series of guest blog posts from researchers working on one of our recently launched biomedical projects, Etch A Cell.

Read on to let Dr Martin Jones tell you about the work they’re doing to further understanding of the universe inside our cells!

– Helen

 

Having trained as a physicist, with many friends working in astronomy, I’ve been aware of Galaxy Zoo and the Zooniverse from the very early days. My early research career was in quantum mechanics, unfortunately not an area where people’s intuitions are much use! However, since finding myself in biology labs, now at the Francis Crick Institute in London, I have been working on various aspects of microscopy – a much more visual enterprise and one where human analysis is still the gold standard. This is particularly true in electron microscopy, where the busy nature of the images means that many regions inside a cell look very similar. In order to make sense of the images, a person is able to draw on a whole range of extra context and prior knowledge in a way that computers, for the most part, are simply unable to do. This makes analysis a slow and labour-intensive process. As if this wasn’t already a hard enough problem, in recent years it has been compounded by new technologies that mean the microscopes now capture images around 100 times faster than before.

Focused ion beam scanning electron microscope

 

Ten years ago it was more or less possible to manually analyse the images at the same rate as they were acquired, keeping the in-tray and out-tray nicely balanced. Now, however, that’s not the case. To illustrate that, here’s an example of a slice through a group of cancer cells, known as HeLa cells:


We capture an image like this and then remove a very thin layer – sometimes as thin as 5 nanometres (one nanometre is a billionth of a metre) – and then repeat… a lot! Building up enormous stacks of these images can help us understand the 3D nature of the cells and the structures inside them. For a sense of scale, this whole image is about the width of a human hair, around 80 millionths of a metre.

Zooming in to one of the cells, you can see many different structures, all of which are of interest in biomedical research. For this project, however, we’re just focusing on the nucleus for now. This is the large, mostly empty region in the middle, where the DNA – the instruction set for building the whole body – is contained.


By manually drawing lines around the nucleus on each slice, we can build up a 3D model that allows us to make comparisons between cells, for example, understanding whether a treatment for a disease is able to stop its progression by disrupting the cells’ ability to pass on their genetic information.


Animated gif of 3D model of a nucleus
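For the technically minded, here is a small Python sketch of the basic idea: each hand-drawn outline is filled in as a binary mask on its slice, and the masks are stacked along the cutting axis into a 3D volume. It is purely illustrative; real pipelines also align slices, interpolate between them and smooth the result.

```python
import numpy as np
from matplotlib.path import Path

def contours_to_volume(contours, shape):
    """Stack per-slice nucleus outlines into a 3D binary mask.

    `contours` maps slice index -> list of (x, y) vertices drawn on that
    slice; `shape` is (n_slices, height, width). Both are toy inputs here.
    """
    n_slices, height, width = shape
    volume = np.zeros(shape, dtype=bool)
    yy, xx = np.mgrid[0:height, 0:width]
    pixel_centres = np.column_stack([xx.ravel(), yy.ravel()])
    for z, vertices in contours.items():
        inside = Path(vertices).contains_points(pixel_centres)
        volume[z] = inside.reshape(height, width)
    return volume

# Tiny example: a rough square outline drawn on two neighbouring slices.
contours = {0: [(2, 2), (8, 2), (8, 8), (2, 8)],
            1: [(2, 2), (9, 2), (9, 9), (2, 9)]}
volume = contours_to_volume(contours, shape=(2, 12, 12))
print(volume.sum(axis=(1, 2)))  # voxels enclosed on each slice
```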

However, images are now being generated so rapidly that the in-tray is filling too quickly for the standard “single expert” method – one sample can produce up to a terabyte of data, made up of more than a thousand 64 megapixel images captured overnight. We need new tricks!

 

Why citizen science?

With all of the advances in software that are becoming available, you might think that automating image analysis of this kind would be quite straightforward for a computer. After all, people can do it relatively easily. Even pigeons can be trained in certain image analysis tasks! (http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0141357). However, there is a long history of underestimating just how hard it is to automate image analysis with a computer. Back in the very early days of artificial intelligence, in 1966 at MIT, Marvin Minsky (who also invented the confocal microscope) and his colleague Seymour Papert set the “summer vision project”, which they saw as a simple problem to keep their undergraduate students busy over the holidays. Many decades later, we’ve discovered it’s not that easy!


(from https://www.xkcd.com/1425/)

Our project, Etch a Cell, is designed to allow citizen scientists to draw segmentations directly onto our images in the Zooniverse web interface. The first task we have set is to mark the nuclear envelope that separates the nucleus from the rest of the cell – a vital structure where defects can cause serious problems. These segmentations are extremely useful in their own right for helping us understand the structures, but citizen science offers something beyond the already lofty goal of matching the output of an expert. By allowing several people to annotate each image, we can see how the lines vary from user to user. This variability gives insight into the certainty that a given pixel or region belongs to a particular object, information that simply isn’t available from a single line drawn by one person. Disagreement between experts is not unheard of, unfortunately!

The images below show preliminary results with the expert analysis on the left and a combination of 5 citizen scientists’ segmentations on the right.

Example of expert vs. citizen scientist annotation
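A minimal sketch of how that variability can be quantified: averaging several volunteers' binary masks gives a per-pixel agreement map, and thresholding it gives a simple majority-vote consensus. This is an illustration only; the aggregation actually used for Etch a Cell may well be more sophisticated.

```python
import numpy as np

def agreement_map(masks):
    """Average several volunteers' binary segmentations of the same image
    to get, for each pixel, the fraction of volunteers who marked it."""
    return np.stack([np.asarray(m, dtype=float) for m in masks]).mean(axis=0)

# Three hypothetical 4x4 annotations of the same region.
rng = np.random.default_rng(1)
masks = [rng.integers(0, 2, size=(4, 4)) for _ in range(3)]
agreement = agreement_map(masks)
consensus = agreement >= 0.5      # pixels at least half the volunteers marked
print(agreement)
print(consensus.astype(int))
```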

In fact, we can go even further to maximise the value of our citizen scientists’ work. The field of machine learning, in particular deep learning, has burst onto the scene in several sectors in recent years, revolutionising many computational tasks. This new generation of image analysis techniques is much more closely aligned with how animal vision works. The catch, however, is that the “learning” part of machine learning often requires enormous amounts of time and resources (remember you’ve had a lifetime to train your brain!). To train such a system, you need a huge supply of so-called “ground truth” data, i.e. something that an expert has pre-analysed and can provide the correct answer against which the computer’s attempts are compared. Picture it as the kind of supervised learning that you did at school: perhaps working through several old exam papers in preparation for your finals. If the computer is wrong, you tweak the setup a bit and try again. By presenting thousands or even millions of images and ensuring your computer makes the same decision as the expert, you can become increasingly confident that it will make the correct decision when it sees a new piece of data. Using the power of citizen science will allow us to collect the huge amounts of data that we need to train these deep learning systems, something that would be impossible by virtually any other means.
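To give a flavour of what the 'learning' loop looks like in code, here is a deliberately tiny PyTorch sketch of supervised training for segmentation, with random arrays standing in for image crops and their "ground truth" masks. The architecture, data and numbers are placeholders for illustration, not any model actually used for Etch a Cell.

```python
import torch
import torch.nn as nn

# Toy stand-ins for image crops and expert/consensus segmentation masks.
images = torch.rand(8, 1, 64, 64)                    # 8 greyscale 64x64 crops
masks = (torch.rand(8, 1, 64, 64) > 0.7).float()     # "correct answer" labels

# A deliberately tiny fully convolutional segmenter; real work would use
# something like a U-Net trained on far more data.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, kernel_size=3, padding=1),
)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(5):
    optimiser.zero_grad()
    prediction = model(images)          # the computer's attempt
    loss = loss_fn(prediction, masks)   # compare against the provided answer
    loss.backward()                     # if it is wrong, tweak the setup a bit...
    optimiser.step()                    # ...and try again
    print(epoch, loss.item())
```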

We are now busily capturing images that we plan to upload to Etch a Cell, allowing us to analyse data from a range of experiments. Differences in cell type, sub-cellular organelle, microscope, sample preparation and other factors mean the images can look different across experiments, so analysing cells from a range of different conditions will allow us to build an atlas of information about sub-cellular structure. The results from Etch a Cell will mean that whenever new data arrives, we can quickly extract information that will help us work towards treatments and cures for many different diseases.