
Improving Z-resolution of lightsheet data using CARE upsampling

After posting the first version of our cephalic furrow pre-print, I received an email asking for further technical details about using the CARE upsampling module to restore the Z-resolution of lightsheet datasets:

I am curious about some details of your “Pre-patterned epithelial invagination prevents mechanical instability during fly gastrulation” paper. You mention that you trained a model to increase the z resolution of your microscopy data using the care framework. Would you mind sharing the model, or do you have some code on how you trained the model that you are willing to share? Additionally, I am trying to wrap my head around your mounting method in the Z1 Lightsheet, but I am having a hard time imagining it. Would you mind shooting a small video or simply one or two images the next time you or someone else mounts some fly embryos on the lightsheet?

Edited for clarity and to remove personal information.

I wrote a fairly detailed reply but then forgot about it. Because the information might be useful to other people, I converted my reply into a blog post. Read it below.

This email could have been a blog post

Old blogging philosophy from the mid-2000s ([citation needed])

CARE workflow

The code for training the CARE model is available in the paper’s GitHub repository and in the Zenodo archive.

CARE upsampling documentation.
README describing our CARE upsampling workflow at https://github.com/bruvellu/cephalic-furrow/tree/main/0-data/care

Take a look at the Python notebook named CoverCARE.ipynb. It's a modified version of the original upsampling3D example provided by the developers of CARE, which I used to test for the best training parameters. However, training a proper model can take several days to finish, so I made scripts for running the training on a cluster.

Essentially, there's a config.py file with information about the dataset and the parameters for the UpsamplingCARE model; a train.py with the code that does the training; and a predict.py with the code that restores your other datasets using the trained model. The files train.sh and predict.sh are just Bash scripts that create the jobs for running the Python code on the cluster.
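To give an idea of how these pieces fit together, here's a rough sketch of the training and prediction steps using the CSBDeep API that CARE is built on. It is not a copy of the repository scripts: the file names, the patch archive, the upsampling factor, and the training parameters below are placeholders, so check config.py, train.py, and predict.py for the actual values we used.

    # Minimal sketch of a CARE upsampling workflow with CSBDeep.
    # File names and parameter values are placeholders, not the settings from the paper.
    from tifffile import imread
    from csbdeep.io import load_training_data, save_tiff_imagej_compatible
    from csbdeep.models import Config, UpsamplingCARE

    # --- Training (roughly what train.py does) ---
    # Load training patches previously generated with csbdeep.data.create_patches.
    (X, Y), (X_val, Y_val), axes = load_training_data(
        'my_training_data.npz', validation_split=0.1, verbose=True)

    config = Config(axes, n_channel_in=1, n_channel_out=1,
                    train_epochs=100, train_steps_per_epoch=400)
    model = UpsamplingCARE(config, 'my_upsampling_model', basedir='models')
    model.train(X, Y, validation_data=(X_val, Y_val))

    # --- Prediction (roughly what predict.py does) ---
    # Reload the trained model and restore an experimental stack.
    model = UpsamplingCARE(None, 'my_upsampling_model', basedir='models')
    stack = imread('my_experiment_stack.tif')               # ZYX stack to restore
    restored = model.predict(stack, axes='ZYX', factor=4)   # upsample along Z
    save_tiff_imagej_compatible('restored_stack.tif', restored, axes='ZYX')

In practice, the training step runs as a long cluster job (that's what train.sh and predict.sh take care of), while the notebook is handy for trying out parameters on a small subset first.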

Re-usability of CARE models

The models I trained for the paper will be in the Zenodo repository for the next release. I can send them to you but, unless you’ve got the same imaging setup, they’re not going to work well.

The model is specific to the data it was trained on (fluorescent signal, optics, lasers, medium, stage, etc.). For plain denoising, there's a chance that other people's models could work, as long as the type of signal and the microscopy setup are the same. But I wouldn't trust it.

Furthermore, the CARE upsampling parameters used to train the model are specific to the dataset and to the acquisition parameters, which makes it even more unlikely that a borrowed model will work well, or at all. Therefore, I highly recommend training your own model.

Acquiring training data

The trick with CARE is that you need to acquire the training dataset properly. You need pairs of identical images with high and low signal-to-noise ratios. If you have 3D data, that means acquiring both conditions slice by slice, and you'll probably need to customize the microscope settings to optimize this acquisition. It requires some work.

But most importantly, the acquisition parameters of your training data need to match the acquisition parameters of your experiments. After all, they’re the data that you want to restore. That means you must optimize your experimental conditions before acquiring the training data for the CARE model.
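For the patch generation itself, CSBDeep provides helpers to turn matched acquisitions into training data. The sketch below shows the generic paired-folder pattern, assuming a low/ and a high/ folder with identically named stacks; the folder names, patch size, and number of patches are made up for the example. For the upsampling model specifically, the patch generation also has to account for the different Z sampling of source and target, so follow the upsampling3D example (and our notebook) for that part.

    # Sketch of creating training patches from matched low/high quality stacks.
    # Folder layout and parameter values are placeholders.
    from csbdeep.data import RawData, create_patches

    # 'low' holds the low-quality acquisitions, 'high' the matching
    # high-quality acquisitions, with identical file names in both folders.
    raw_data = RawData.from_folder(
        basepath='training_data',
        source_dirs=['low'],
        target_dir='high',
        axes='ZYX',
    )

    # Cut the stacks into small patches and save them for training.
    X, Y, XY_axes = create_patches(
        raw_data=raw_data,
        patch_size=(16, 64, 64),
        n_patches_per_image=1024,
        save_file='my_training_data.npz',
    )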

The good news is that, if done right, the results are excellent.

Of course, one needs to be sure that the model is not making up structures. But using CARE upsampling on a membrane marker to improve segmentation, for example, works great. I'm also not aware of other denoising methods that do upsampling.

High-throughput mounting

As for the mounting, I'll include a supplementary figure to describe it better. The approach is to stick several embryos in a single row on a coverslip or glass capillary. It's based on this protocol. Here's how it looks in my hands:

High-throughput mounting of Drosophila embryos for lightsheet microscopy.
High-throughput mounting strategy used for the paper “Patterned embryonic invagination evolved in response to mechanical instability” (https://doi.org/10.1101/2023.03.30.534554).

I hope this helps! Happy to answer other questions if you have any.
