Whole brain microscopy analysis

With BrainGlobe and napari

Welcome

Schedule

  • Installing BrainGlobe and napari (30 mins)
  • Introduction to image analysis with napari (60 mins)
  • Introduction to BrainGlobe (30 mins)
  • Registering whole brain microscopy images with brainreg (30 mins)
  • Segmenting structures in whole brain microscopy images with brainglobe-segmentation (30 mins)
  • Detecting cells in large 3D images with cellfinder (90 mins)
  • Analysing cell positions in atlas space (30 mins)
  • Visualisation of data in atlas space with brainrender and brainrender-napari (30 mins)
  • Tour of other BrainGlobe tools and wrap-up (30 mins)

Installation

Install napari and BrainGlobe

Everyone

conda create -n brainglobe-env python=3.12
conda activate brainglobe-env
pip install brainglobe

Apple silicon Mac users (additionally)

conda install niftyreg

Double-check installation

Check that running

napari

opens a new napari window, with BrainGlobe plugins available under the Plugins menu.

Install BrainGlobe atlases

Run

brainglobe install -a allen_mouse_50um
brainglobe install -a allen_mouse_10um

To check whether this worked:

brainglobe list

Image Analysis

Adapted from https://github.com/HealthBioscienceIDEAS/microscopy-novice/ (under CC BY 4.0 license)

Opening napari

napari
A screenshot of the default napari user interface

Opening images

File > Open File(s), then navigate to the calcium imaging folder and open translation1_00001_ce.tif

A screenshot of some fluorescent cells

napari’s user interface

A screenshot of napari with the main user interface sections labelled

Canvas

Try moving around the image with the following controls:

  • Pan: click and drag
  • Zoom: scroll in/out

A screenshot of some fluorescent cells, zoomed in further than before.

Dimension sliders

A close-up of napari's dimension slider, with labels

Three screenshots of the cells image in napari, at different z depths

Viewer buttons

The viewer buttons (the row of buttons at the bottom left of napari) control various aspects of the napari viewer:

  • Console

  • 2D/3D

  • Roll dimensions

  • Transpose dimensions

  • Grid

  • Home

A screenshot of some fluorescent cells

Layer list

A screenshot of some fluorescent cells

Layer controls

This area shows controls only for the currently selected layer (i.e. the one that is highlighted in blue in the layer list).

  • Opacity
  • Contrast limits

A screenshot of some fluorescent cells

Layer buttons

Create and remove layers.

  • Points

  • Shapes

  • Labels

  • Remove layer

A screenshot of some fluorescent cells

Other layer types

Note that some layer types, such as Surface, Tracks, and Vectors, can’t be added by clicking buttons in the user interface.

These require calling Python commands in napari’s console or an external Python script, as in the sketch below.
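
For example, a Vectors layer can be added from the console, where viewer is predefined. A minimal sketch (the coordinates below are made up for illustration):

import numpy as np

# Each vector is a pair: (start position, direction), in pixel coordinates
vectors = np.array([
    [[100, 100], [50, 0]],
    [[200, 150], [0, 50]],
])
viewer.add_vectors(vectors, edge_width=5)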

Key points

  • napari’s user interface is split into a few main sections, including the canvas, layer list, and layer controls
  • Layers can be of different types, e.g. Image, Points, Labels
  • Different layer types have different layer controls

A deeper look

File > Open File(s), then navigate to the calcium imaging folder and open translation1_00001_ce.tif

A screenshot of some fluorescent cells

Pixels

A screenshot of napari, with the mouse cursor hovering over a pixel and highlighting the corresponding pixel value

Images are arrays of numbers

Open napari’s built-in console (via the console button in the viewer buttons) and run:

# Get the image data for the first layer in Napari
image = viewer.layers[0].data

# Print the image values and type
print(image)
print(type(image))

A screenshot of napari's console

Image dimensions

We can find out the image dimensions by running the following in napari’s console:

image.shape
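# e.g. (30, 512, 512) for a 30-slice 3D stack (illustrative values)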

A screenshot of napari's console

Image data type

The other key feature of an image array is its ‘data type’ (dtype), which controls what values can be stored in it.

image.dtype.name

The data type consists of a type and a bit depth.

A screenshot of napari's console

Type

The type determines what kind of values can be stored in the array, for example:

  • Unsigned integer: positive whole numbers
  • Signed integer: positive and negative whole numbers
  • Float: positive and negative numbers with a decimal point e.g. 3.14

Bit depth

The bit depth determines the range of values that can be stored, e.g. a 16-bit unsigned integer can only hold values between 0 and \(2^{16}-1 = 65535\).

print(image.min())
print(image.max())
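
To see the full range a data type allows (rather than the extreme values actually present in this image), NumPy can report it directly. A quick sketch for uint16:

import numpy as np

# Full range representable by a 16-bit unsigned integer
info = np.iinfo("uint16")
print(info.min, info.max)  # 0 65535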

Common data types

NumPy supports a very wide range of data types, but there are a few that are most common for image data:

| NumPy datatype | Full name | Range of values |
|---|---|---|
| uint8 | Unsigned integer 8-bit | 0…255 |
| uint16 | Unsigned integer 16-bit | 0…65535 |
| float32 | Float 32-bit | \(-3.4 \times 10^{38} \ldots +3.4 \times 10^{38}\) |
| float64 | Float 64-bit | \(-1.7 \times 10^{308} \ldots +1.7 \times 10^{308}\) |

uint8 and uint16 are most common for images from light microscopes. float32 and float64 are common during image processing (as we will see in later episodes).
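
As a brief, illustrative sketch of why float types appear during processing (the values below are made up): unsigned integer arithmetic that exceeds the dtype's range wraps around, while floats do not:

import numpy as np

img = np.array([10, 65530], dtype="uint16")

# uint16 arithmetic wraps around past 65535 (modulo 2**16)
print(img + 10)                    # [20  4] (65540 wrapped to 4)

# Converting to float first avoids the wrap-around
print(img.astype("float64") + 10)  # [   20. 65540.]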

Coordinate system

Diagram comparing a standard graph coordinate system (left) and the image coordinate system (right). A diagram showing how pixel coordinates change over a simple 4x4 image.

  • For 2D images, y is the first coordinate
  • For 3D images, z is the first coordinate
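
This matters when indexing image arrays: NumPy uses image[y, x] in 2D (or image[z, y, x] in 3D). A quick sketch:

import numpy as np

image = np.zeros((4, 4), dtype="uint8")

# Row 0 (top), column 3 (right): this sets the top-right pixel
image[0, 3] = 255
print(image)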

Key points

  • Digital images are made of pixels
  • Digital images store these pixels as arrays of numbers
  • napari (and Python more widely) uses NumPy arrays to store images - these have a shape and dtype
  • Most images are 8-bit or 16-bit unsigned integer
  • Images use a coordinate system with (0,0) at the top left, x increasing to the right, and y increasing down

Processing images

Now that we understand what an image is and how to look at it in napari, we can start measuring things! But first we need to find (“segment”) the “things”!

A screenshot of some fluorescent cells

Reduce noise with a median filter

from scipy.signal import medfilt2d

# Get the image data from the first layer in napari
image = viewer.layers[0].data
# Apply a 2D median filter (3x3 kernel by default) to reduce noise
filtered = medfilt2d(image)
# Show the result as a new image layer
viewer.add_image(filtered)
A median-filtered version of the example image

Isolate neurons with a threshold

Example of “semantic” segmentation

# Keep pixels brighter than a hand-picked intensity threshold
thresholded = filtered > 8000  # boolean (True/False) array
# Convert to 8-bit unsigned integers: 0 (background) or 255 (signal)
thresholded = (thresholded * 255).astype("uint8")
viewer.add_image(thresholded)
A thresholded version of the example image
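
The cutoff of 8000 above was picked by eye for this image. scikit-image can also estimate a threshold from the image histogram; a sketch using Otsu's method (one of several options):

from skimage.filters import threshold_otsu

# Estimate a threshold automatically instead of hard-coding one
t = threshold_otsu(filtered)
thresholded = ((filtered > t) * 255).astype("uint8")
viewer.add_image(thresholded)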

Label each neuron with a number

Example of “instance” segmentation

from skimage.measure import label, regionprops

# Assign a unique integer label to each connected region
labelled = label(thresholded)
# Show as a Labels layer, giving each region its own colour
viewer.add_labels(labelled)
A labelled version of the example image

Pixels in each neuron

# Measure properties of each labelled region
properties = regionprops(labelled)
# 'area' is the number of pixels in a 2D region
pixels_in_each_region = [prop.area for prop in properties]
print(pixels_in_each_region)
A list of pixel counts for each region of the example image
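
As a follow-on sketch, regionprops also reports each region's centroid, which can be displayed back in napari as a Points layer:

# Centroid of each labelled region, in (y, x) pixel coordinates
centroids = [prop.centroid for prop in properties]
viewer.add_points(centroids, size=10)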

Key points

  • Segmentation can be broadly split into ‘semantic segmentation’ (e.g. neuron vs background) and ‘instance segmentation’ (e.g. individual neuron).
  • Segmentations are represented in the computer in the same way as images, but pixels represent an abstraction, rather than light intensity.
  • napari uses “Labels” layers for segmentations.
  • Segmentation is helpful for analysis.

BrainGlobe

BrainGlobe Initiative

Established in 2020 with three aims:

  1. Develop general-purpose tools to help others build interoperable software
  2. Develop specialist software for specific analysis and visualisation needs
  3. Reduce barriers to entry, and facilitate the building of an ecosystem of computational neuroanatomy tools

BrainGlobe Atlas API

Initial observation: many similar communities were working independently, separated by:

  • Model species
  • Imaging modality
  • Anatomical focus
  • Developmental stage

BrainGlobe Atlas API

brainglobe-atlasapi

Current atlases

| Atlas Name | Resolution | Ages | Reference Images |
|---|---|---|---|
| Allen Mouse Brain Atlas | 10, 25, 50, and 100 micron | P56 | STPT |
| Allen Human Brain Atlas | 100 micron | Adult | MRI |
| Max Planck Zebrafish Brain Atlas | 1 micron | 6-dpf | FISH |
| Enhanced and Unified Mouse Brain Atlas | 10, 25, 50, and 100 micron | P56 | STPT |
| Smoothed version of the Kim et al. mouse reference atlas | 10, 25, 50, and 100 micron | P56 | STPT |
| Gubra’s LSFM mouse brain atlas | 20 micron | 8 to 10 weeks postnatal | LSFM |
| 3D version of the Allen mouse spinal cord atlas | 20 x 10 x 10 micron | Adult | Nissl |
| AZBA: A 3D Adult Zebrafish Brain Atlas | 4 micron | 15-16 weeks postnatal | LSFM |
| Waxholm Space atlas of the Sprague Dawley rat brain | 39 micron | P80 | MRI |
| 3D Edge-Aware Refined Atlases Derived from the Allen Developing Mouse Brain Atlases | 16, 16.75, and 25 micron | E13, E15, E18, P4, P14, P28 & P56 | Nissl |
| Princeton Mouse Brain Atlas | 20 micron | >P56 (older animals included) | LSFM |
| Kim Lab Developmental CCF | 10 micron | P56 | STP, LSFM (iDISCO) and MRI (a0, adc, dwo, fa, MTR, T2) |
| Blind Mexican Cavefish Brain Atlas | 2 micron | 1 year | IHC |
| BlueBrain Barrel Cortex Atlas | 10 and 25 micron | P56 | STPT |
| UNAM Axolotl Brain Atlas | 40 micron | ~3 months post hatching | MRI |
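
The same list can be printed from Python. A quick sketch, assuming a recent brainglobe-atlasapi release (which exposes show_atlases at the package top level):

from brainglobe_atlasapi import show_atlases

# Print a table of all downloadable atlases, with local and online versions
show_atlases()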

BrainGlobe Atlas API

from brainglobe_atlasapi.bg_atlas import BrainGlobeAtlas
atlas = BrainGlobeAtlas("allen_mouse_25um")

reference_image = atlas.reference
print(reference_image.shape)
# (528, 320, 456)

annotation_image = atlas.annotation
print(annotation_image.shape)
# (528, 320, 456)

from pprint import pprint
VISp = atlas.structures["VISp"]
pprint(VISp)

# {'acronym': 'VISp',
#  'id': 385,
#  'mesh': None,
#  'mesh_filename': PosixPath('/home/user/.brainglobe/allen_mouse_25um_v0.3/meshes/385.obj'),
#  'name': 'Primary visual area',
#  'rgb_triplet': [8, 133, 140],
#  'structure_id_path': [997, 8, 567, 688, 695, 315, 669, 385]}
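
Building on the objects above, here is a minimal sketch of extracting the voxels belonging to one structure. In the Allen annotation, VISp's voxels are labelled by its layer subdivisions, so we use get_structure_mask (available in recent brainglobe-atlasapi versions), which includes descendants:

# Volume labelling every voxel of VISp and its descendant structures
visp_mask = atlas.get_structure_mask("VISp")
print((visp_mask > 0).sum(), "voxels in VISp")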

Version 1

Whole brain microscopy

  • Serial section two-photon tomography
  • Fluorescence micro-optical sectioning tomography
  • Light sheet fluorescence microscopy

Whole-brain registration

brainreg

3D cell detection

cellfinder

Spatial analysis

brainglobe-segmentation

Visualisation

brainrender

Version 2

Expanding access

Adding new atlases

  • Latest version of Waxholm rat
  • Axolotl
  • Prairie Vole
  • Princeton RAtlas
  • Cuttlefish
  • Developmental Mouse Brain atlas
  • NHP (non-human primate)
  • Human

Building novel atlases

More raw data processing

Consistent napari environment

Support for more data types

BrainGlobe website

Tutorials

Registering whole brain microscopy images with brainreg

Tutorial

Segmenting structures in whole brain microscopy images with brainglobe-segmentation

Tutorial (1D)

Tutorial (2D/3D)

Detecting cells in large 3D images with cellfinder

Cell detection tutorial

Retraining tutorial

Analysing cell positions in atlas space

Tutorial

Visualisation of data in atlas space with brainrender and brainrender-napari

Tutorial (brainrender-napari)

Example brainrender scripts

Tour of other BrainGlobe tools and wrap-up

Wrap up

Resources

You are welcome to contribute to BrainGlobe - get in touch anytime and we will support you!

Thank you!