Whole brain microscopy analysis
Create a new conda environment and install napari
conda create --name brainglobe python=3.10 -y
conda activate brainglobe
conda install -c conda-forge napari pyqt
Double-check that running napari opens a new napari window.
More details about using conda are available at brainglobe.info
Create a new conda environment and install napari with pip
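The pip commands themselves did not survive in this copy of the notes. A minimal sketch, assuming the same conda environment as above and napari's documented PyPI install (the [all] extra pulls in a Qt backend):

```shell
conda create --name brainglobe python=3.10 -y
conda activate brainglobe
python -m pip install "napari[all]"
```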
Double-check that running napari opens a new napari window.
Install all brainglobe tools with
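The command itself is missing from this copy. Assuming BrainGlobe's metapackage on PyPI (name not verified here), the step would be something like:

```shell
# assumed metapackage name; check brainglobe.info for the current install command
python -m pip install brainglobe
```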
Established in 2020 with three aims:
Initial observation: lots of similar communities working independently.
Currently implemented atlases
from pprint import pprint
from bg_atlasapi.bg_atlas import BrainGlobeAtlas
atlas = BrainGlobeAtlas("allen_mouse_25um")
# reference image
reference_image = atlas.reference
print(reference_image.shape)
# (528, 320, 456)
# hemispheres image (value 1 in left hemisphere, 2 in right)
hemispheres_image = atlas.hemispheres
print(hemispheres_image.shape)
# (528, 320, 456)
VISp = atlas.structures["VISp"]
pprint(VISp)
# {'acronym': 'VISp',
# 'id': 385,
# 'mesh': None,
# 'mesh_filename': PosixPath('/home/user/.brainglobe/allen_mouse_25um_v0.3/meshes/385.obj'),
# 'name': 'Primary visual area',
# 'rgb_triplet': [8, 133, 140],
# 'structure_id_path': [997, 8, 567, 688, 695, 315, 669, 385]}
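The structure_id_path lists a region's ancestors from the root down to the region itself, so hierarchy queries reduce to plain list operations. A minimal sketch using the VISp path printed above (no atlas download needed; 315 is the isocortex id in the Allen hierarchy):

```python
# structure_id_path for VISp, as printed above: root (997) down to VISp (385)
visp_path = [997, 8, 567, 688, 695, 315, 669, 385]

def is_descendant_of(path, ancestor_id):
    """A region descends from `ancestor_id` if that id appears in its
    structure_id_path before the region's own id (the last entry)."""
    return ancestor_id in path[:-1]

print(is_descendant_of(visp_path, 315))  # True: VISp sits inside the isocortex
print(is_descendant_of(visp_path, 385))  # False: a region is not its own ancestor
```

This is the same test used below to collect all hippocampal regions by checking for the hippocampus id in each region's path.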
Serial section two-photon tomography
Fluorescence micro-optical sectioning tomography
Light sheet fluorescence microscopy
On its own, napari can:
It leverages Python well:
The immature ecosystem means:
Video re-used from the napari project under the BSD-3 license.
Run napari and follow along with the live demo.
After following along, your screen should look something like:
Open the napari console via Window > Console and run:
import napari
from pathlib import Path
from cellfinder_core.tools.IO import read_with_dask
viewer = napari.viewer.current_viewer()
# adapt "path/to/data" to your folder of tiffs
path_to_data = Path("path/to/data")
viewer.open(path_to_data)
print(viewer.layers)
# which of these is quicker?
%timeit -r 3 -n 1 viewer.open(path_to_data)
%timeit -r 3 -n 1 viewer.add_image(read_with_dask(str(path_to_data)))
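The difference comes down to eager versus lazy reading: viewer.open reads every tiff up front, while read_with_dask only builds a lazy array and defers disk reads until planes are actually displayed. A plain-Python sketch of that idea, with hypothetical stand-in loaders (no dask or napari required):

```python
# Hypothetical stand-ins for a folder of image planes.
N_PLANES = 1000
reads = {"eager": 0, "lazy": 0}

def read_plane_eager(i):
    reads["eager"] += 1  # simulate reading plane i from disk
    return f"plane-{i}"

def open_eagerly():
    # like viewer.open: every plane is read before anything is shown
    return [read_plane_eager(i) for i in range(N_PLANES)]

def open_lazily():
    # like read_with_dask: return a recipe; nothing is read yet
    def get_plane(i):
        reads["lazy"] += 1
        return f"plane-{i}"
    return get_plane

stack = open_eagerly()
lazy_stack = open_lazily()
print(reads)     # {'eager': 1000, 'lazy': 0}
lazy_stack(500)  # only now is a single plane read
print(reads)     # {'eager': 1000, 'lazy': 1}
```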
After following along, your screen should look something like:
Open Plugins > Install/Uninstall plugins...
Search for brainrender-napari and click the Install button.
Restart napari, then open Plugins > Brainrender (brainrender-napari).
After following along, your screen should look something like:
napari and the atlas API can also be used from a script rather than the napari console (bg-atlasapi is installed as a dependency of brainrender-napari):
import napari
from bg_atlasapi import BrainGlobeAtlas
from brainrender_napari.napari_atlas_representation import NapariAtlasRepresentation
# setup a napari viewer and a brainglobe atlas
viewer = napari.viewer.Viewer()
viewer.dims.ndisplay = 3 # set to 3d mode
atlas = BrainGlobeAtlas("allen_mouse_100um")
# find all hippocampal regions
hip_id = 1080 # the id of the hippocampus is 1080
hip_regions = [
    region["acronym"]
    for region in atlas.structures_list
    if hip_id in region["structure_id_path"]
]
# make a representation of the brainglobe atlas in napari
napari_atlas = NapariAtlasRepresentation(atlas, viewer)
# add all hippocampal regions to the napari viewer
for hip_region in hip_regions:
    napari_atlas.add_structure_to_viewer(hip_region)
# add the whole brain mesh to help with orientation
napari_atlas.add_structure_to_viewer("root")
# start the napari event loop when run as a script
if __name__ == "__main__":
    napari.run()
You are welcome to contribute to BrainGlobe - get in touch anytime and we will support you!
Please give us some feedback on this pilot course.
SWC | 2023-12-06