Image

The Image class, which represents one medical image, stores a 4D tensor, whose voxels encode, e.g., signal intensity or segmentation labels, and the corresponding affine transform, typically a rigid (Euclidean) transform, to convert voxel indices to world coordinates in mm. Arbitrary fields such as acquisition parameters may also be stored.

Subclasses are used to indicate specific types of images, such as ScalarImage and LabelMap, which are used to store, e.g., CT scans and segmentations, respectively.

An instance of Image can be created using a filepath, a PyTorch tensor, or a NumPy array. This class uses lazy loading, i.e., the data is not loaded from disk at instantiation time. Instead, the data is only loaded when needed for an operation (e.g., if a transform is applied to the image).

The figure below shows two instances of Image. The instance of ScalarImage contains a 4D tensor representing a diffusion MRI, which contains four 3D volumes (one per gradient direction), and the associated affine matrix. Additionally, it stores the strength and direction of each of the four gradients. The instance of LabelMap contains a brain parcellation of the same subject, the associated affine matrix, and the name and color of each brain structure.
- class torchio.ScalarImage(*args, **kwargs)

Bases: torchio.data.image.Image

Image whose pixel values represent scalars.

Example:

>>> import torch
>>> import torchio as tio
>>> # Loading from a file
>>> t1_image = tio.ScalarImage('t1.nii.gz')
>>> dmri = tio.ScalarImage(tensor=torch.rand(32, 128, 128, 88))
>>> image = tio.ScalarImage('safe_image.nrrd', check_nans=False)
>>> data, affine = image.data, image.affine
>>> affine.shape
(4, 4)
>>> image.data is image[tio.DATA]
True
>>> image.data is image.tensor
True
>>> type(image.data)
torch.Tensor

See Image for more information.
- class torchio.LabelMap(*args, **kwargs)

Bases: torchio.data.image.Image

Image whose pixel values represent categorical labels.

Example:

>>> import torch
>>> import torchio as tio
>>> labels = tio.LabelMap(tensor=torch.rand(1, 128, 128, 68) > 0.5)
>>> labels = tio.LabelMap('t1_seg.nii.gz')  # loading from a file
>>> tpm = tio.LabelMap(                     # loading from files
...     'gray_matter.nii.gz',
...     'white_matter.nii.gz',
...     'csf.nii.gz',
... )

Intensity transforms are not applied to these images.

Nearest neighbor interpolation is always used to resample label maps, regardless of the interpolation type specified when the transform is instantiated.

See Image for more information.
- class torchio.Image(path: Optional[Union[pathlib.Path, str, Sequence[Union[pathlib.Path, str]]]] = None, type: Optional[str] = None, tensor: Optional[Union[torch.Tensor, numpy.ndarray]] = None, affine: Optional[Union[torch.Tensor, numpy.ndarray]] = None, check_nans: bool = False, channels_last: bool = False, reader: Callable = <function read_image>, **kwargs: Dict[str, Any])

Bases: dict

TorchIO image.

For information about medical image orientation, check out the NiBabel docs, the 3D Slicer wiki, Graham Wideman’s website, the FSL docs, or the SimpleITK docs.

Parameters:

- path – Path to a file or sequence of paths to files that can be read by SimpleITK or nibabel, or to a directory containing DICOM files. If tensor is given, the data in path will not be read. If a sequence of paths is given, data will be concatenated on the channel dimension, so spatial dimensions must match.
- type – Type of image, such as torchio.INTENSITY or torchio.LABEL. This will be used by the transforms to decide whether to apply an operation, or which interpolation to use when resampling. For example, preprocessing and augmentation intensity transforms will only be applied to images with type torchio.INTENSITY. Spatial transforms will be applied to all types, and nearest neighbor interpolation is always used to resample images with type torchio.LABEL. The type torchio.SAMPLING_MAP may be used with instances of WeightedSampler.
- tensor – If path is not given, tensor must be a 4D torch.Tensor or NumPy array with dimensions \((C, W, H, D)\).
- affine – \(4 \times 4\) matrix to convert voxel coordinates to world coordinates. If None, an identity matrix will be used. See the NiBabel docs on coordinates for more information.
- check_nans – If True, issues a warning if NaNs are found in the image. If False, images will not be checked for the presence of NaNs.
- channels_last – If True, the read tensor will be permuted so the last dimension becomes the first. This is useful, e.g., when NIfTI images have been saved with the channels dimension being the fourth instead of the fifth.
- reader – Callable object that takes a path and returns a 4D tensor and a 2D, \(4 \times 4\) affine matrix. This can be used if your data is saved in a custom format, such as .npy (see example below). If the affine matrix is None, an identity matrix will be used.
- **kwargs – Items that will be added to the image dictionary, e.g., acquisition parameters.

TorchIO images are lazy loaders, i.e., the data is only loaded from disk when needed.

Example:

>>> import torchio as tio
>>> import numpy as np
>>> image = tio.ScalarImage('t1.nii.gz')  # subclass of Image
>>> image  # not loaded yet
ScalarImage(path: t1.nii.gz; type: intensity)
>>> times_two = 2 * image.data  # data is loaded and cached here
>>> image
ScalarImage(shape: (1, 256, 256, 176); spacing: (1.00, 1.00, 1.00); orientation: PIR+; memory: 44.0 MiB; type: intensity)
>>> image.save('doubled_image.nii.gz')
>>> numpy_reader = lambda path: (np.load(path), np.eye(4))
>>> image = tio.ScalarImage('t1.npy', reader=numpy_reader)
- property affine: numpy.ndarray

Affine matrix to transform voxel indices into world coordinates.
- as_pil(transpose=True)

Get the image as an instance of PIL.Image.

Note: values will be clamped to 0–255 and cast to uint8.

Note: to use this method, Pillow needs to be installed: pip install Pillow.
- axis_name_to_index(axis: str) → int

Convert an axis name to an axis index.

Parameters:

- axis – Possible inputs are 'Left', 'Right', 'Anterior', 'Posterior', 'Inferior', 'Superior'. Lower-case versions and first letters are also valid, as only the first letter will be used.

Note: if you are working with animals, you should probably use 'Superior', 'Inferior', 'Anterior' and 'Posterior' for 'Dorsal', 'Ventral', 'Rostral' and 'Caudal', respectively.

Note: if your images are 2D, you can use 'Top', 'Bottom', 'Left' and 'Right'.
- property bounds: numpy.ndarray

Positions of the centers of the voxels with the smallest and largest coordinates.

- property data: torch.Tensor

Tensor data. Same as Image.tensor.
- get_bounds() → Tuple[Tuple[float, float], Tuple[float, float], Tuple[float, float]]

Get minimum and maximum world coordinates occupied by the image.

- get_center(lps: bool = False) → Tuple[float, float, float]

Get image center in RAS+ or LPS+ coordinates.

Parameters:

- lps – If True, the coordinates will be in LPS+ orientation, i.e., the first dimension grows towards the left, etc. Otherwise, the coordinates will be in RAS+ orientation.
- property height: int

Image height, if 2D.

- property itemsize

Element size of the data type.

- load() → None

Load the image from disk. The data is stored as a 4D tensor of size \((C, W, H, D)\) together with a 2D \(4 \times 4\) affine matrix that converts voxel indices to world coordinates.

- property memory: float

Number of bytes that the tensor takes up in RAM.

- property num_channels: int

Get the number of channels in the associated 4D tensor.

- numpy() → numpy.ndarray

Get a NumPy array containing the image data.

- property orientation: Tuple[str, str, str]

Orientation codes.
- save(path: Union[pathlib.Path, str], squeeze: bool = True) → None

Save image to disk.

Parameters:

- path – String or instance of pathlib.Path.
- squeeze – If True, singleton dimensions will be removed before saving.
- set_data(tensor: Union[torch.Tensor, numpy.ndarray])

Store a 4D tensor in the data key and attribute.

Parameters:

- tensor – 4D tensor with dimensions \((C, W, H, D)\).
- property shape: Tuple[int, int, int, int]

Tensor shape as \((C, W, H, D)\).

- property spacing: Tuple[float, float, float]

Voxel spacing in mm.

- property spatial_shape: Tuple[int, int, int]

Tensor spatial shape as \((W, H, D)\).
- property tensor: torch.Tensor

Tensor data. Same as Image.data.

- property width: int

Image width, if 2D.