Preprocessing

Intensity
RescaleIntensity

class torchio.transforms.RescaleIntensity(out_min_max: Union[float, Tuple[float, float]] = (0, 1), percentiles: Union[float, Tuple[float, float]] = (0, 100), masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, in_min_max: Optional[Tuple[float, float]] = None, **kwargs)

Bases: torchio.transforms.preprocessing.intensity.normalization_transform.NormalizationTransform
Rescale intensity values to a certain range.
Parameters:
- out_min_max – Range \((n_{min}, n_{max})\) of output intensities. If only one value \(d\) is provided, \((n_{min}, n_{max}) = (-d, d)\).
- percentiles – Percentile values of the input image that will be mapped to \((n_{min}, n_{max})\). They can be used for contrast stretching, as in this scikit-image example. For example, Isensee et al. use (0.5, 99.5) in their nnU-Net paper. If only one value \(d\) is provided, the percentiles used will be \((0, d)\).
- masking_method – See NormalizationTransform.
- in_min_max – Range \((m_{min}, m_{max})\) of input intensities that will be mapped to \((n_{min}, n_{max})\). If None, the minimum and maximum input intensities will be used.
- **kwargs – See Transform for additional keyword arguments.
Example
>>> import torchio as tio
>>> ct = tio.ScalarImage('ct_scan.nii.gz')
>>> ct_air, ct_bone = -1000, 1000
>>> rescale = tio.RescaleIntensity(out_min_max=(-1, 1), in_min_max=(ct_air, ct_bone))
>>> ct_normalized = rescale(ct)
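Percentile clipping can be combined with the output range for robust contrast stretching, as in the nnU-Net normalization mentioned above. A minimal sketch, using a placeholder image path:
>>> import torchio as tio
>>> t1 = tio.ScalarImage('t1.nii.gz')  # placeholder path
>>> rescale = tio.RescaleIntensity(out_min_max=(0, 1), percentiles=(0.5, 99.5))
>>> t1_rescaled = rescale(t1)  # intensities outside the 0.5 and 99.5 percentiles are clipped before rescaling to [0, 1]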
ZNormalization

class torchio.transforms.ZNormalization(masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, **kwargs)

Bases: torchio.transforms.preprocessing.intensity.normalization_transform.NormalizationTransform
Subtract mean and divide by standard deviation.
Parameters:
- masking_method – See NormalizationTransform.
- **kwargs – See Transform for additional keyword arguments.
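A minimal usage sketch with a placeholder image path; see the NormalizationTransform example below for the masking options:
>>> import torchio as tio
>>> t1 = tio.ScalarImage('t1.nii.gz')  # placeholder path
>>> znorm = tio.ZNormalization()
>>> t1_normalized = znorm(t1)  # output has zero mean and unit standard deviation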
HistogramStandardization

class torchio.transforms.HistogramStandardization(landmarks: Union[pathlib.Path, str, Dict[str, Union[pathlib.Path, str, numpy.ndarray]]], masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, **kwargs)

Bases: torchio.transforms.preprocessing.intensity.normalization_transform.NormalizationTransform
Perform histogram standardization of intensity values.
Implementation of New variants of a method of MRI scale standardization.
See example in torchio.transforms.HistogramStandardization.train().

Parameters:
- landmarks – Dictionary (or path to a PyTorch file with .pt or .pth extension in which a dictionary has been saved) whose keys are image names in the subject and values are NumPy arrays or paths to NumPy arrays defining the landmarks after training with torchio.transforms.HistogramStandardization.train().
- masking_method – See NormalizationTransform.
- **kwargs – See Transform for additional keyword arguments.
Example
>>> import torch
>>> import torchio as tio
>>> landmarks = {
...     't1': 't1_landmarks.npy',
...     't2': 't2_landmarks.npy',
... }
>>> transform = tio.HistogramStandardization(landmarks)
>>> torch.save(landmarks, 'path_to_landmarks.pth')
>>> transform = tio.HistogramStandardization('path_to_landmarks.pth')
classmethod train(images_paths: Sequence[Union[pathlib.Path, str]], cutoff: Optional[Tuple[float, float]] = None, mask_path: Optional[Union[pathlib.Path, str, Sequence[Union[pathlib.Path, str]]]] = None, masking_function: Optional[Callable] = None, output_path: Optional[Union[pathlib.Path, str]] = None) → numpy.ndarray

Extract average histogram landmarks from images used for training.
Parameters:
- images_paths – List of image paths used to train.
- cutoff – Optional minimum and maximum quantile values, respectively, that are used to select a range of intensity of interest. Equivalent to \(pc_1\) and \(pc_2\) in Nyúl and Udupa’s paper.
- mask_path – Path (or list of paths) to a binary image that will be used to select the voxels used to compute the stats during histogram training. If None, all voxels in the image will be used.
- masking_function – Function used to extract voxels used for histogram training.
- output_path – Optional file path with extension .txt or .npy, where the landmarks will be saved.
Example
>>> import torch
>>> import numpy as np
>>> from pathlib import Path
>>> from torchio.transforms import HistogramStandardization
>>>
>>> t1_paths = ['subject_a_t1.nii', 'subject_b_t1.nii.gz']
>>> t2_paths = ['subject_a_t2.nii', 'subject_b_t2.nii.gz']
>>>
>>> t1_landmarks_path = Path('t1_landmarks.npy')
>>> t2_landmarks_path = Path('t2_landmarks.npy')
>>>
>>> t1_landmarks = (
...     t1_landmarks_path
...     if t1_landmarks_path.is_file()
...     else HistogramStandardization.train(t1_paths)
... )
>>> torch.save(t1_landmarks, t1_landmarks_path)
>>>
>>> t2_landmarks = (
...     t2_landmarks_path
...     if t2_landmarks_path.is_file()
...     else HistogramStandardization.train(t2_paths)
... )
>>> torch.save(t2_landmarks, t2_landmarks_path)
>>>
>>> landmarks_dict = {
...     't1': t1_landmarks,
...     't2': t2_landmarks,
... }
>>>
>>> transform = HistogramStandardization(landmarks_dict)
NormalizationTransform

class torchio.transforms.preprocessing.intensity.NormalizationTransform(masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, **kwargs)

Bases: torchio.transforms.intensity_transform.IntensityTransform
Base class for intensity preprocessing transforms.
Parameters:
- masking_method – Defines the mask used to compute the normalization statistics. It can be one of:
  - None: the mask image is all ones, i.e., all values in the image are used.
  - A string: key to a torchio.LabelMap in the subject which is used as a mask, OR an anatomical label ('Left', 'Right', 'Anterior', 'Posterior', 'Inferior', 'Superior') which specifies a side of the mask volume to be ones.
  - A function: the mask image is computed as a function of the intensity image. The function must receive and return a torch.Tensor.
- **kwargs – See Transform for additional keyword arguments.
Example
>>> import torchio as tio
>>> subject = tio.datasets.Colin27()
>>> subject
Colin27(Keys: ('t1', 'head', 'brain'); images: 3)
>>> transform = tio.ZNormalization()  # ZNormalization is a subclass of NormalizationTransform
>>> transformed = transform(subject)  # use all values to compute mean and std
>>> transform = tio.ZNormalization(masking_method='brain')
>>> transformed = transform(subject)  # use only values within the brain
>>> transform = tio.ZNormalization(masking_method=lambda x: x > x.mean())
>>> transformed = transform(subject)  # use values above the image mean
Spatial

CropOrPad

class torchio.transforms.CropOrPad(target_shape: Union[int, Tuple[int, int, int]], padding_mode: Union[str, float] = 0, mask_name: Optional[str] = None, **kwargs)

Bases: torchio.transforms.preprocessing.spatial.bounds_transform.BoundsTransform
Crop and/or pad an image to a target shape.
This transform modifies the affine matrix associated with the volume so that the physical positions of the voxels are maintained.
Parameters:
- target_shape – Tuple \((W, H, D)\). If a single value \(N\) is provided, then \(W = H = D = N\).
- padding_mode – Same as padding_mode in Pad.
- mask_name – If None, the centers of the input and output volumes will be the same. If a string is given, the output volume center will be the center of the bounding box of non-zero values in the image named mask_name.
- **kwargs – See Transform for additional keyword arguments.
Example
>>> import torchio as tio
>>> subject = tio.Subject(
...     chest_ct=tio.ScalarImage('subject_a_ct.nii.gz'),
...     heart_mask=tio.LabelMap('subject_a_heart_seg.nii.gz'),
... )
>>> subject.chest_ct.shape
torch.Size([1, 512, 512, 289])
>>> transform = tio.CropOrPad(
...     (120, 80, 180),
...     mask_name='heart_mask',
... )
>>> transformed = transform(subject)
>>> transformed.chest_ct.shape
torch.Size([1, 120, 80, 180])
static _get_six_bounds_parameters(parameters: numpy.ndarray) → Tuple[int, int, int, int, int, int]

Compute bounds parameters for ITK filters.
Parameters:
- parameters – Tuple \((w, h, d)\) with the number of voxels to be cropped or padded.

Returns:
Tuple \((w_{ini}, w_{fin}, h_{ini}, h_{fin}, d_{ini}, d_{fin})\), where \(n_{ini} = \left \lceil \frac{n}{2} \right \rceil\) and \(n_{fin} = \left \lfloor \frac{n}{2} \right \rfloor\).
Example
>>> import numpy as np
>>> p = np.array((4, 0, 7))
>>> CropOrPad._get_six_bounds_parameters(p)
(2, 2, 0, 0, 4, 3)
EnsureShapeMultiple

class torchio.transforms.EnsureShapeMultiple(target_multiple: Union[int, Tuple[int, int, int]], *, method: Optional[str] = 'pad', **kwargs)

Bases: torchio.transforms.spatial_transform.SpatialTransform
Crop or pad an image to a shape that is a multiple of \(N\).
Parameters:
- target_multiple – Tuple \((w, h, d)\). If a single value \(n\) is provided, then \(w = h = d = n\).
- method – Either 'crop' or 'pad'.
- **kwargs – See Transform for additional keyword arguments.
Example
>>> import torchio as tio
>>> image = tio.datasets.Colin27().t1
>>> image.shape
(1, 181, 217, 181)
>>> transform = tio.EnsureShapeMultiple(8, method='pad')
>>> transformed = transform(image)
>>> transformed.shape
(1, 184, 224, 184)
>>> transform = tio.EnsureShapeMultiple(8, method='crop')
>>> transformed = transform(image)
>>> transformed.shape
(1, 176, 216, 176)
>>> image_2d = image.data[..., :1]
>>> image_2d.shape
torch.Size([1, 181, 217, 1])
>>> transformed = transform(image_2d)
>>> transformed.shape
torch.Size([1, 176, 216, 1])
Crop

class torchio.transforms.Crop(cropping: Union[int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]], **kwargs)

Bases: torchio.transforms.preprocessing.spatial.bounds_transform.BoundsTransform
Crop an image.
Parameters:
- cropping – Tuple \((w_{ini}, w_{fin}, h_{ini}, h_{fin}, d_{ini}, d_{fin})\) defining the number of values cropped from the edges of each axis. If the initial shape of the image is \(W \times H \times D\), the final shape will be \((- w_{ini} + W - w_{fin}) \times (- h_{ini} + H - h_{fin}) \times (- d_{ini} + D - d_{fin})\). If only three values \((w, h, d)\) are provided, then \(w_{ini} = w_{fin} = w\), \(h_{ini} = h_{fin} = h\) and \(d_{ini} = d_{fin} = d\). If only one value \(n\) is provided, then \(w_{ini} = w_{fin} = h_{ini} = h_{fin} = d_{ini} = d_{fin} = n\).
- **kwargs – See Transform for additional keyword arguments.
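A minimal sketch with a placeholder image path; the six-value form crops each edge independently:
>>> import torchio as tio
>>> image = tio.ScalarImage('t1.nii.gz')  # placeholder path
>>> crop = tio.Crop((10, 10, 10, 10, 0, 0))  # remove 10 voxels from both ends of the W and H axes
>>> cropped = crop(image)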
Pad

class torchio.transforms.Pad(padding: Union[int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]], padding_mode: Union[str, float] = 0, **kwargs)

Bases: torchio.transforms.preprocessing.spatial.bounds_transform.BoundsTransform
Pad an image.
Parameters:
- padding – Tuple \((w_{ini}, w_{fin}, h_{ini}, h_{fin}, d_{ini}, d_{fin})\) defining the number of values padded to the edges of each axis. If the initial shape of the image is \(W \times H \times D\), the final shape will be \((w_{ini} + W + w_{fin}) \times (h_{ini} + H + h_{fin}) \times (d_{ini} + D + d_{fin})\). If only three values \((w, h, d)\) are provided, then \(w_{ini} = w_{fin} = w\), \(h_{ini} = h_{fin} = h\) and \(d_{ini} = d_{fin} = d\). If only one value \(n\) is provided, then \(w_{ini} = w_{fin} = h_{ini} = h_{fin} = d_{ini} = d_{fin} = n\).
- padding_mode – See possible modes in NumPy docs. If it is a number, the mode will be set to 'constant'.
- **kwargs – See Transform for additional keyword arguments.
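A minimal sketch with a placeholder image path; 'reflect' is one of the NumPy modes mentioned above:
>>> import torchio as tio
>>> image = tio.ScalarImage('t1.nii.gz')  # placeholder path
>>> pad = tio.Pad(4, padding_mode='reflect')  # add 4 voxels at both ends of every axis
>>> padded = pad(image)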
Resample

class torchio.transforms.Resample(target: Optional[Union[float, Tuple[float, float, float], str, pathlib.Path, torchio.data.image.Image]] = 1, image_interpolation: str = 'linear', pre_affine_name: Optional[str] = None, scalars_only: bool = False, **kwargs)

Bases: torchio.transforms.spatial_transform.SpatialTransform
Change voxel spacing by resampling.
- Parameters
target – Tuple \((s_h, s_w, s_d)\). If only one value \(n\) is specified, then \(s_h = s_w = s_d = n\). If a string or
Path
is given, all images will be resampled using the image with that name as reference or found at the path. An instance ofImage
can also be passed.pre_affine_name – Name of the image key (not subject key) storing an affine matrix that will be applied to the image header before resampling. If
None
, the image is resampled with an identity transform. See usage in the example below.image_interpolation – See Interpolation.
scalars_only – Apply only to instances of
ScalarImage
. SeeRandomAnisotropy
.**kwargs – See
Transform
for additional keyword arguments.
Example
>>> import torch
>>> import torchio as tio
>>> transform = tio.Resample(1)  # resample all images to 1mm iso
>>> transform = tio.Resample((2, 2, 2))  # resample all images to 2mm iso
>>> transform = tio.Resample('t1')  # resample all images to 't1' image space
>>> # Example: using a precomputed transform to MNI space
>>> ref_path = tio.datasets.Colin27().t1.path  # this image is in the MNI space, so we can use it as reference/target
>>> affine_matrix = tio.io.read_matrix('transform_to_mni.txt')  # from a NiftyReg registration. Would also work with e.g. .tfm from SimpleITK
>>> image = tio.ScalarImage(tensor=torch.rand(1, 256, 256, 180), to_mni=affine_matrix)  # 'to_mni' is an arbitrary name
>>> transform = tio.Resample(ref_path, pre_affine_name='to_mni')
>>> transformed = transform(image)  # "image" is now in the MNI space
ToCanonical

class torchio.transforms.ToCanonical(p: float = 1, copy: bool = True, include: Optional[Sequence[str]] = None, exclude: Optional[Sequence[str]] = None, keys: Optional[Sequence[str]] = None, keep: Optional[Dict[str, str]] = None)

Bases: torchio.transforms.spatial_transform.SpatialTransform
Reorder the data to be closest to canonical (RAS+) orientation.
This transform reorders the voxels and modifies the affine matrix so that the voxel orientations are nearest to:
- First voxel axis goes from left to Right
- Second voxel axis goes from posterior to Anterior
- Third voxel axis goes from inferior to Superior
See NiBabel docs about image orientation for more information.
Parameters:
- **kwargs – See Transform for additional keyword arguments.
Note
The reorientation is performed using nibabel.as_closest_canonical().
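A minimal sketch with a placeholder image path; the orientation attribute reports the axis codes of a TorchIO image:
>>> import torchio as tio
>>> image = tio.ScalarImage('t1.nii.gz')  # placeholder path, e.g. an LPS-oriented scan
>>> to_canonical = tio.ToCanonical()
>>> ras_image = to_canonical(image)
>>> ras_image.orientation
('R', 'A', 'S')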
Label

RemapLabels

class torchio.transforms.RemapLabels(remapping: Dict[int, int], masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, **kwargs)

Bases: torchio.transforms.preprocessing.label.label_transform.LabelTransform
Remap the integer IDs of labels in a LabelMap.
This transformation may not be invertible if two labels are combined by the remapping. A masking method can be used to correctly split the label into two during the inverse transformation (see example).
Parameters:
- remapping – Dictionary that specifies how labels should be remapped. The keys are the old label IDs, and the corresponding values replace them.
- masking_method – Defines a mask for where the label remapping is applied. It can be one of:
  - None: the mask image is all ones, i.e., all values in the image are used.
  - A string: key to a torchio.LabelMap in the subject which is used as a mask, OR an anatomical label ('Left', 'Right', 'Anterior', 'Posterior', 'Inferior', 'Superior') which specifies a side of the mask volume to be ones.
  - A function: the mask image is computed as a function of the intensity image. The function must receive and return a torch.Tensor.
- **kwargs – See Transform for additional keyword arguments.
Example
>>> import torchio as tio
>>> # Target label map has the following labels:
>>> # {'left_ventricle': 1, 'right_ventricle': 2, 'left_caudate': 3, 'right_caudate': 4,
>>> #  'left_putamen': 5, 'right_putamen': 6, 'left_thalamus': 7, 'right_thalamus': 8}
>>> transform = tio.RemapLabels({2: 1, 4: 3, 6: 5, 8: 7})
>>> # Merge right side labels with left side labels
>>> transformed = transform(subject)
>>> # Undesired behavior: the inverse transform will remap ALL left side labels to right side labels,
>>> # so the label map only has right side labels.
>>> inverse_transformed = transformed.apply_inverse_transform()
>>> # Here's the *right* way to do it, with masking:
>>> transform = tio.RemapLabels({2: 1, 4: 3, 6: 5, 8: 7}, masking_method='Right')
>>> # Remap the labels on the right side only (no difference yet).
>>> transformed = transform(subject)
>>> # Apply the inverse on the right side only. The labels are correctly split into left/right.
>>> inverse_transformed = transformed.apply_inverse_transform()
RemoveLabels

class torchio.transforms.RemoveLabels(labels: Sequence[int], background_label: int = 0, masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, **kwargs)

Bases: torchio.transforms.preprocessing.label.remap_labels.RemapLabels
Remove labels from a label map by remapping them to the background label.
This transformation is not invertible.
Parameters:
- labels – A sequence of label integers that will be removed.
- background_label – Integer that specifies which label is considered to be background (generally 0).
- masking_method – See RemapLabels.
- **kwargs – See Transform for additional keyword arguments.
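A minimal sketch, assuming subject holds a label map that contains labels 2 and 4:
>>> import torchio as tio
>>> remove = tio.RemoveLabels([2, 4])
>>> transformed = remove(subject)  # voxels labeled 2 or 4 are now background (0)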
SequentialLabels

class torchio.transforms.SequentialLabels(masking_method: Optional[Union[str, Callable[[torch.Tensor], torch.Tensor], int, Tuple[int, int, int], Tuple[int, int, int, int, int, int]]] = None, **kwargs)

Bases: torchio.transforms.preprocessing.label.label_transform.LabelTransform
Remap the integer IDs of labels in a LabelMap to be sequential.
For example, if a label map has 6 labels with IDs (3, 5, 9, 15, 16, 23), then this will apply a RemapLabels transform with remapping={3: 1, 5: 2, 9: 3, 15: 4, 16: 5, 23: 6}. This transformation is always fully invertible.

Parameters:
- masking_method – See RemapLabels.
- **kwargs – See Transform for additional keyword arguments.
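A minimal sketch, assuming subject holds a label map with non-sequential IDs:
>>> import torchio as tio
>>> sequential = tio.SequentialLabels()
>>> transformed = sequential(subject)  # e.g. IDs (3, 5, 9) become (1, 2, 3)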
OneHot
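OneHot reencodes a LabelMap using one-hot encoding, turning a single-channel label map into a multichannel binary image. A minimal sketch; the num_classes parameter is stated from memory, so verify it against your installed version:
>>> import torchio as tio
>>> one_hot = tio.OneHot(num_classes=4)  # num_classes is an assumption; labels 0..3 become 4 channels
>>> transformed = one_hot(subject)  # subject is assumed to contain a LabelMap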
Contour

class torchio.transforms.Contour(p: float = 1, copy: bool = True, include: Optional[Sequence[str]] = None, exclude: Optional[Sequence[str]] = None, keys: Optional[Sequence[str]] = None, keep: Optional[Dict[str, str]] = None)

Bases: torchio.transforms.preprocessing.label.label_transform.LabelTransform
Keep only the borders of each connected component in a binary image.
Parameters:
- **kwargs – See Transform for additional keyword arguments.
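A minimal sketch, assuming subject holds a binary label map:
>>> import torchio as tio
>>> contour = tio.Contour()
>>> transformed = contour(subject)  # each connected component keeps only its border voxels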
KeepLargestComponent

class torchio.transforms.KeepLargestComponent(p: float = 1, copy: bool = True, include: Optional[Sequence[str]] = None, exclude: Optional[Sequence[str]] = None, keys: Optional[Sequence[str]] = None, keep: Optional[Dict[str, str]] = None)

Bases: torchio.transforms.preprocessing.label.label_transform.LabelTransform
Keep only the largest connected component in a binary label map.
Parameters:
- **kwargs – See Transform for additional keyword arguments.
Note
For now, this transform only works for binary images, i.e., label maps with a background and a foreground class. If you are interested in extending this transform, open a new issue.
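A minimal sketch, assuming subject holds a binary segmentation with spurious islands:
>>> import torchio as tio
>>> keep_largest = tio.KeepLargestComponent()
>>> transformed = keep_largest(subject)  # only the largest connected component remains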