With BrainGlobe and napari
Installing BrainGlobe and Napari (30 mins)
Introduction to Image analysis with Napari (60 mins)
Introduction to BrainGlobe (30 mins)
Registering whole brain microscopy images with brainreg (30 mins)
Segmenting structures in whole brain microscopy images with brainglobe-segmentation (30 mins)
Detecting cells in large 3D images with cellfinder (90 mins)
Combining `brainreg` and `cellfinder` on the command line (`brainmapper`) (30 mins)
Unstructured time (debugging problems, networking, discussions) (60 mins)
Analysing `brainmapper` outputs in napari (30 mins)
Visualising atlases with `brainrender-napari` (30 mins)
Visualising cells in atlas space (30 mins)
Visualising cell density with `brainglobe-heatmap` (30 mins)
Scripting a visualisation with `brainrender` (30 mins)
Tour of other BrainGlobe tools (30 mins)
Where to get help (30 mins)
Contributing to BrainGlobe and the Scientific Python ecosystem (30 mins)
Contributing an atlas to BrainGlobe (30 mins)
Unstructured time (debugging problems, networking, discussions, scripting) (60 mins)
Hackday ideas (30 mins)
Run:

```bash
brainglobe install -a allen_mouse_50um
brainglobe install -a allen_mouse_10um
brainglobe install -a ccfv3augmented_mouse_25um
```
To check whether this worked:
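One option (assuming the `brainglobe` CLI's `list` subcommand from brainglobe-atlasapi; the course may have suggested a different check) is to list the atlases available locally:

```bash
brainglobe list
```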
Adapted from https://github.com/HealthBioscienceIDEAS/microscopy-novice/ (under CC BY 4.0 license)
File > Open File(s), then navigate to the calcium imaging folder and open translation1_00001_ce.tif
Try moving around the image with the following commands:
- Pan: click and drag
- Zoom: scroll in/out
The viewer buttons (the row of buttons at the bottom left of napari) control various aspects of the napari viewer:
- Console
- Toggle 2D/3D view
- Roll dimensions
- Transpose dimensions
- Grid
- Home
This area shows controls only for the currently selected layer (i.e. the one that is highlighted in blue in the layer list).
The layer buttons create and remove layers:
- Points
- Shapes
- Labels
- Remove layer
Note that there are some layer types, like `Image`, `Point`, and `Label` layers created from data, that can't be added via clicking buttons in the user interface. These require calling Python commands in napari's console or an external Python script.
The type determines what kind of values can be stored in the array, for example integers or floating-point numbers. The bit depth determines the range of values that can be stored, e.g. a `uint16` can only hold values between 0 and \(2^{16}-1\).
NumPy supports a very wide range of data types, but there are a few that are most common for image data:
| NumPy datatype | Full name | Range of values |
|---|---|---|
| `uint8` | Unsigned integer 8-bit | 0…255 |
| `uint16` | Unsigned integer 16-bit | 0…65535 |
| `float32` | Float 32-bit | \(-3.4 \times 10^{38} \ldots +3.4 \times 10^{38}\) |
| `float64` | Float 64-bit | \(-1.7 \times 10^{308} \ldots +1.7 \times 10^{308}\) |
`uint8` and `uint16` are most common for images from light microscopes. `float32` and `float64` are common during image processing (as we will see in later episodes).
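These ranges can be confirmed directly with NumPy (a quick sketch):

```python
import numpy as np

# Query the value range of each integer dtype
print(np.iinfo(np.uint8))    # min = 0, max = 255
print(np.iinfo(np.uint16))   # min = 0, max = 65535

# And the largest representable float values
print(np.finfo(np.float32).max)  # ~3.4e38
print(np.finfo(np.float64).max)  # ~1.8e308
```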
For image arrays:
- `y` is the first coordinate of a 2D image
- `z` is the first coordinate of a 3D image
- an array's `shape` and `dtype` tell us its size and data type (see the sketch below)
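In napari's built-in console, `viewer` is predefined, so we can check both (the output values depend on the loaded data):

```python
# Get the data array of the first layer in the viewer
image = viewer.layers[0].data
print(image.shape)  # e.g. (512, 512): y is the first coordinate of a 2D image
print(image.dtype)  # e.g. uint16
```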
Now that we understand what an image is and how to look at it in napari, we can start measuring things! But first we need to find ("segment") the "things"!
```python
from scipy.signal import medfilt2d

# Get the data array of the first layer in the viewer
image = viewer.layers[0].data
# Apply a 2D median filter to smooth out noise
filtered = medfilt2d(image)
# Add the filtered result back to the viewer as a new layer
viewer.add_image(filtered)
```
Example of “semantic” segmentation
Example of “instance” segmentation
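The instance-segmentation snippet below uses a binary image called `thresholded` that is not defined above. A minimal way to create one (assuming Otsu thresholding of the filtered image; the course may use a different method):

```python
from skimage.filters import threshold_otsu

# "Semantic" segmentation: classify every pixel as foreground or background
threshold = threshold_otsu(filtered)
thresholded = filtered > threshold
```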
```python
from skimage.measure import label, regionprops

# "Instance" segmentation: give each connected region a unique integer label
labelled = label(thresholded)
viewer.add_labels(labelled)

# Measure properties (area, centroid, ...) of each labelled region
properties = regionprops(labelled)
pixels_in_each_region = [prop.area for prop in properties]
print(pixels_in_each_region)
```
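Relatedly, scikit-image's `regionprops_table` collects the same measurements as named columns, which is handy for building a table (a sketch; the course may not cover this function):

```python
from skimage.measure import regionprops_table

# One entry per property; each row corresponds to a labelled region
measurements = regionprops_table(labelled, properties=("label", "area", "centroid"))
print(measurements["area"])
```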
Established in 2020 with three aims:
Initial observation: lots of similar communities working independently.
Atlas Name | Resolution | Ages | Reference Images |
---|---|---|---|
Allen Mouse Brain Atlas | 10, 25, 50, and 100 micron | P56 | STPT |
Allen Human Brain Atlas | 100 micron | Adult | MRI |
Max Planck Zebrafish Brain Atlas | 1 micron | 6-dpf | FISH |
Enhanced and Unified Mouse Brain Atlas | 10, 25, 50, and 100 micron | P56 | STPT |
Smoothed version of the Kim et al. mouse reference atlas | 10, 25, 50 and 100 micron | P56 | STPT |
Gubra’s LSFM mouse brain atlas | 20 micron | 8 to 10 weeks post natal | LSFM |
3D version of the Allen mouse spinal cord atlas | 20 x 10 x 10 micron | Adult | Nissl |
AZBA: A 3D Adult Zebrafish Brain Atlas | 4 micron | 15-16 weeks post natal | LSFM |
Waxholm Space atlas of the Sprague Dawley rat brain | 39 micron | P80 | MRI |
3D Edge-Aware Refined Atlases Derived from the Allen Developing Mouse Brain Atlases | 16, 16.75, and 25 micron | E13, E15, E18, P4, P14, P28 & P56 | Nissl |
Princeton Mouse Brain Atlas | 20 micron | >P56 (older animals included) | LSFM |
Kim Lab Developmental CCF | 10 micron | P56 | STP, LSFM (iDISCO) and MRI (a0, adc, dwo, fa, MTR, T2) |
Blind Mexican Cavefish Brain Atlas | 2 micron | 1 year | IHC |
BlueBrain Barrel Cortex Atlas | 10 and 25 micron | P56 | STPT |
UNAM Axolotl Brain Atlas | 40 micron | ~ 3 months post hatching | MRI |
```python
from brainglobe_atlasapi import BrainGlobeAtlas

# Download (on first use) and load the 25-micron Allen Mouse Brain Atlas
atlas = BrainGlobeAtlas("allen_mouse_25um")

# The reference image: the template brain the atlas is built on
reference_image = atlas.reference
print(reference_image.shape)
# (528, 320, 456)

# The annotation image: each voxel holds the ID of the region it belongs to
annotation_image = atlas.annotation
print(annotation_image.shape)
# (528, 320, 456)
```
```python
from pprint import pprint

# Look up a structure by its acronym
VISp = atlas.structures["VISp"]
pprint(VISp)
# {'acronym': 'VISp',
#  'id': 385,
#  'mesh': None,
#  'mesh_filename': PosixPath('/home/user/.brainglobe/allen_mouse_25um_v0.3/meshes/385.obj'),
#  'name': 'Primary visual area',
#  'rgb_triplet': [8, 133, 140],
#  'structure_id_path': [997, 8, 567, 688, 695, 315, 669, 385]}
```
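Building on this, the annotation volume can be turned into a binary mask of a region. The sketch below assumes `get_structure_mask`, a `BrainGlobeAtlas` method that includes a structure's sub-regions (check the brainglobe-atlasapi docs if it errors):

```python
import numpy as np

# Voxels inside VISp (including its sub-regions) carry the structure's id;
# everywhere else is 0
visp_mask = atlas.get_structure_mask("VISp") > 0
print(np.count_nonzero(visp_mask), "voxels lie in VISp")
```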
- STPT: Serial section two-photon tomography
- fMOST: Fluorescence micro-optical sectioning tomography
- LSFM: Light sheet fluorescence microscopy
Highlighted tools: `brainreg`, `cellfinder`, `brainglobe-segmentation`, `brainrender`.
Expanding access
By the end of the course (maybe on your own data):
- use `cellfinder` to fine-tune cell detection
- analyse the `MS_cx_left` data in `napari`

Data orientation:
- `psl`: pixel (0, 0, 0) is Posterior, Superior, Left
- `asr`: pixel (0, 0, 0) is Anterior, Superior, Right
`brainmapper` CLI

`brainmapper` is the name of our command line tool to combine cell detection and atlas registration.
- `-s`: the primary signal channel
- `-b`: the secondary autofluorescence channel
- `-o`: the output directory
- `--orientation`: e.g. `psl`
- `-v`: the voxel spacing (microns) in the same order as the data orientation (`psl`): 5 2 2
- `--atlas`: the BrainGlobe atlas to register to

Orientation reminder:
- `psl`: pixel (0, 0, 0) is Posterior, Superior, Left
- `asr`: pixel (0, 0, 0) is Anterior, Superior, Right
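Putting these options together, a full command might look like this sketch (the paths are placeholders for your own data folders):

```bash
brainmapper -s /data/MS_cx_left/signal \
            -b /data/MS_cx_left/background \
            -o /data/MS_cx_left/output \
            -v 5 2 2 \
            --orientation psl \
            --atlas allen_mouse_10um
```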
open-software-week/brainglobe-course-data/MS_CX_left_cellfinder
- Run `brainmapper` on the `MS_cx_left` data
- `brainglobe-heatmap` scripts. Task:
  - Visualise the cell density/region of a coronal slice (a sketch follows below)
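For the heatmap task, a minimal `brainglobe-heatmap` sketch (the region names and values below are invented placeholders; in practice they would come from `brainmapper` cell counts):

```python
import brainglobe_heatmap as bgh

# Hypothetical per-region values, e.g. cell counts per atlas region
values = {"VISp": 120, "MOs": 40, "TH": 75}

# Render a 2D coronal ("frontal") slice coloured by the values above
bgh.Heatmap(
    values,
    position=5000,          # position of the slicing plane, in microns
    orientation="frontal",  # coronal slice
    format="2D",
).show()
```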
- `brainrender` scripts. Tasks:
  - Visualise a region in the atlas (a sketch follows below)
  - Visualise the cells from the large sample data
  - (Bonus) Make an animation
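For the first `brainrender` task, a minimal sketch (the atlas name is one choice among several):

```python
from brainrender import Scene

# Build a 3D scene of the mouse brain and highlight one region
scene = Scene(atlas_name="allen_mouse_25um", title="VISp")
scene.add_brain_region("VISp", alpha=0.4)
scene.render()  # opens an interactive 3D window
```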
You are welcome to contribute to BrainGlobe - get in touch anytime and we will support you!
BrainGlobe makes the scientific Python ecosystem accessible to neuroscientists.
We depend on the scientific Python ecosystem.
Other neuroscience tools depend on BrainGlobe.
Before starting, you will need:

Achievable (?) hackday idea: use `zarr` to "stream" atlases. More info on the BrainGlobe website.
Please fill out the feedback form.