Commit 5f98efe2 by simonabottani

Rename PET SPM pipeline

parent a279a7a6
# `pet-preprocess-volume` - <VERY_SHORT_DESCRIPTION>
<SHORT_DESCRIPTION>
#### Contents
- [Dependencies](#dependencies)
- [Running the pipeline (command line)](#running-the-pipeline-command-line)
- [Outputs](#outputs)
- [Visualization of the results](#visualization-of-the-results)
- [Describing this pipeline in your paper](#describing-this-pipeline-in-your-paper)
- [Appendix](#appendix)
## Dependencies
If you installed the Docker image of Clinica, nothing more is required.
If you only installed the core of Clinica, this pipeline requires **<software_package>** to be installed on your computer. Instructions for installing this software can be found on the [installation](docs/BeforeYouInstall) page.
## Running the pipeline (command line)
The pipeline can be run with the following command line:
```
clinica run pet-preprocess-volume bids_directory caps_directory
```
where:
- `bids_directory` is the input folder containing the dataset in a [BIDS](docs/BIDS) hierarchy
- `caps_directory` is the output folder containing the results in a [CAPS](docs/CAPS) hierarchy
- `<ARG_1>` <ARG_1_DESCRIPTION>
- `<ARG_2>` <ARG_2_DESCRIPTION>
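For example, assuming your dataset lives in `./my_bids_dataset` and you want the results written to `./my_caps_output` (both paths are hypothetical), the invocation could be:
```
clinica run pet-preprocess-volume ./my_bids_dataset ./my_caps_output
```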
## Outputs
Results are stored in the following folder of the [CAPS hierarchy](docs/CAPS): `subjects/sub-<participant_label>/ses-<session_label>/<some_folder>`.
The main output files are:
- `<source_file>_main_output_1`: description of main output 1.
- `<source_file>_main_output_2`: description of main output 2.
The full list of output files can be found in the [ClinicA Processed Structure (CAPS) Specification](https://docs.google.com/document/d/14mjXbqRceHK0fD0BIONniLK713zY7DbQHJEV7kxqsd8/edit#heading=h.f4ddnk971gkn).
## Visualization of the results
After the execution of the pipeline, you can check the outputs of a subject by running the command:
> **Notes:**
>
> _The visualization command is not available for the moment. Please come back later, this section will be updated ASAP._
## Describing this pipeline in your paper
> **Example of paragraph:**
>
>_These results have been obtained using the my-pipeline pipeline of Clinica. More precisely ..._
## Appendix
Further information can be found on [this supplementary page](docs/Pipelines/<My_Pipeline_Appendix>).
{
    "id": "aramislab/pet-preprocess-volume",
    "author": "SAMPER-GONZALEZ Jorge",
    "version": "0.1.0",
    "dependencies": [
        {
            "type": "software",
            "name": "spm",
            "version": ">=12"
        }
    ]
}
# coding: utf8

import clinica.engine as ce

__author__ = "Jorge Samper Gonzalez"
__copyright__ = "Copyright 2016-2018, The Aramis Lab Team"
__credits__ = ["Jorge Samper Gonzalez"]
__license__ = "See LICENSE.txt file"
__version__ = "0.1.0"
__maintainer__ = "Jorge Samper Gonzalez"
__email__ = "jorge.samper-gonzalez@inria.fr"
__status__ = "Development"


class PETPreprocessVolumeCLI(ce.CmdParser):

    def define_name(self):
        """Define the sub-command name used to run this pipeline."""
        self._name = 'pet-volume'

    def define_description(self):
        """Define a description of this pipeline."""
        self._description = ('SPM-based pre-processing of PET images:\n'
                             'http://clinica.run/doc/Pipelines/PET_Preprocessing/')

    def define_options(self):
        """Define the sub-command arguments."""
        self._args.add_argument("bids_directory",
                                help='Path to the BIDS directory.')
        self._args.add_argument("caps_directory",
                                help='Path to the CAPS directory.')
        self._args.add_argument("group_id",
                                help='Current group name.')
        self._args.add_argument("-tsv", "--subjects_sessions_tsv",
                                help='TSV file containing the subjects with their sessions.')
        self._args.add_argument("-fwhm", "--fwhm_tsv",
                                help='TSV file containing the fwhm_x, fwhm_y and fwhm_z for each PET image.')
        self._args.add_argument("-pet_type", "--pet_type", type=str, default='fdg', choices=['fdg', 'av45'],
                                help='PET image type. Possible values are fdg and av45.')
        self._args.add_argument("-mask", "--mask_tissues", nargs='+', type=int, default=[1, 2, 3], choices=range(1, 7),
                                help="Tissue classes (gray matter, GM; white matter, WM; cerebrospinal fluid, CSF...) to use for masking the PET image. Ex: 1 2 3 is GM, WM and CSF")
        self._args.add_argument("-threshold", "--mask_threshold", type=float, default=0.3,
                                help='Value used as threshold to binarize the tissue masks.')
        self._args.add_argument("-pvc_mask", "--pvc_mask_tissues", nargs='+', type=int, default=[1, 2, 3], choices=range(1, 7),
                                help="Tissue classes (gray matter, GM; white matter, WM; cerebrospinal fluid, CSF...) to use as mask for PVC. Ex: 1 2 3 is GM, WM and CSF")
        self._args.add_argument("-smooth", "--smooth", nargs='+', type=int, default=[8],
                                help="A list of integers specifying the different isotropic FWHM values, in millimeters, used to smooth the image.")
        self._args.add_argument("-atlases", "--atlases", nargs='+', type=str,
                                default=['AAL2', 'LPBA40', 'Neuromorphometrics', 'AICHA', 'Hammers'],
                                choices=['AAL2', 'LPBA40', 'Neuromorphometrics', 'AICHA', 'Hammers'],
                                help='A list of atlases used to calculate the mean GM concentration in each region.')
        self._args.add_argument("-wd", "--working_directory",
                                help='Temporary directory to store intermediate pipeline results.')
        self._args.add_argument("-np", "--n_procs", type=int,
                                help='Number of cores used to run in parallel.')
        self._args.add_argument("-sl", "--slurm", action='store_true',
                                help='Run the pipeline using SLURM.')

    def run_command(self, args):
        """Build and run the pipeline from the parsed arguments."""
        from pet_preprocess_volume_pipeline import PETPreprocessVolume

        pipeline = PETPreprocessVolume(bids_directory=self.absolute_path(args.bids_directory),
                                       caps_directory=self.absolute_path(args.caps_directory),
                                       tsv_file=self.absolute_path(args.subjects_sessions_tsv),
                                       group_id=args.group_id,
                                       fwhm_tsv=self.absolute_path(args.fwhm_tsv))
        pipeline.parameters.update({'pet_type': args.pet_type,
                                    'mask_tissues': args.mask_tissues,
                                    'mask_threshold': args.mask_threshold,
                                    'pvc_mask_tissues': args.pvc_mask_tissues,
                                    'smooth': args.smooth,
                                    'atlas_list': args.atlases})
        pipeline.base_dir = self.absolute_path(args.working_directory)

        # Select the execution plugin: parallel (MultiProc), SLURM, or sequential
        if args.n_procs:
            pipeline.run(plugin='MultiProc', plugin_args={'n_procs': args.n_procs})
        elif args.slurm:
            pipeline.run(plugin='SLURM')
        else:
            pipeline.run()
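# Example (hypothetical): with the options defined above, a typical invocation
# of this sub-command could look like the following, where the paths and group
# name are made up:
#
#     clinica run pet-volume ./BIDS ./CAPS MyGroup -pet_type fdg -smooth 8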
# coding: utf8
__author__ = "Jorge Samper Gonzalez"
__copyright__ = "Copyright 2016-2018, The Aramis Lab Team"
__credits__ = ["Jorge Samper Gonzalez"]
__license__ = "See LICENSE.txt file"
__version__ = "0.1.0"
__maintainer__ = "Jorge Samper Gonzalez"
__email__ = "jorge.samper-gonzalez@inria.fr"
__status__ = "Development"
def read_psf(json_path):
    """Read the JSON file associated with a PET volume and extract the information
    needed to perform partial volume correction: the effective in-plane resolution
    and the effective axial resolution.

    Args:
        json_path (string): Path to the JSON file containing the information.

    Returns:
        (float) The effective in-plane resolution.
        (float) The effective axial resolution.
    """
    import json
    import os

    if not os.path.exists(json_path):
        raise Exception('File ' + json_path + ' does not exist. Point spread function information must be provided.')

    # Standard way of reading data from a JSON file
    with open(json_path) as df:
        data = json.load(df)
    in_plane = data['Psf'][0]['EffectiveResolutionInPlane']
    axial = data['Psf'][0]['EffectiveResolutionAxial']
    return in_plane, axial
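# A sidecar JSON compatible with read_psf needs at least the fields the function
# indexes. A hypothetical example (values are made up):
#
#     {
#         "Psf": [
#             {
#                 "EffectiveResolutionInPlane": 4.5,
#                 "EffectiveResolutionAxial": 5.0
#             }
#         ]
#     }
#
# Calling read_psf on such a file would return the tuple (4.5, 5.0).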
def create_binary_mask(tissues, threshold=0.3):
    """Sum the given tissue probability maps and binarize the result.

    Args:
        tissues: List of paths to tissue probability maps (NIfTI files), all in the same space.
        threshold: Value used to binarize the summed tissue maps.

    Returns:
        Path to the binary mask written in the current working directory.
    """
    import nibabel as nib
    import numpy as np
    from os import getcwd
    from os.path import join, basename

    if len(tissues) == 0:
        raise RuntimeError('The length of the list of tissues must be greater than zero.')

    img_0 = nib.load(tissues[0])
    shape = list(img_0.get_data().shape)
    data = np.zeros(shape=shape)

    # Sum the tissue probability maps, then binarize with the threshold
    for image in tissues:
        data = data + nib.load(image).get_data()
    data = (data > threshold) * 1.0

    out_mask = join(getcwd(), basename(tissues[0]) + '_brainmask.nii')
    mask = nib.Nifti1Image(data, img_0.affine, header=img_0.header)
    nib.save(mask, out_mask)
    return out_mask
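# Example (hypothetical file names): given GM/WM/CSF probability maps registered
# to the same space, the call below would write
# '<cwd>/c1sub-01_T1w.nii_brainmask.nii' and return its path:
#
#     create_binary_mask(['c1sub-01_T1w.nii', 'c2sub-01_T1w.nii', 'c3sub-01_T1w.nii'],
#                        threshold=0.3)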
def apply_binary_mask(image, binary_mask):
    """Multiply an image voxel-wise by a binary mask and save the result."""
    import nibabel as nib
    from os import getcwd
    from os.path import join, basename

    original_image = nib.load(image)
    mask = nib.load(binary_mask)

    # Voxel-wise multiplication: voxels outside the mask are set to zero
    data = original_image.get_data() * mask.get_data()

    masked_image_path = join(getcwd(), 'masked_' + basename(image))
    masked_image = nib.Nifti1Image(data, original_image.affine, header=original_image.header)
    nib.save(masked_image, masked_image_path)
    return masked_image_path
def create_pvc_mask(tissues):
    """Build the 4D mask required for partial volume correction (PVC).

    The output volume stacks the given tissue probability maps along a fourth
    dimension and appends a background map computed as one minus the sum of
    all tissues.
    """
    import nibabel as nib
    import numpy as np
    from os import getcwd
    from os.path import join

    if len(tissues) == 0:
        raise RuntimeError('The length of the list of tissues must be greater than zero.')

    img_0 = nib.load(tissues[0])
    shape = img_0.get_data().shape
    background = np.zeros(shape=shape)

    # One 3D volume per tissue, plus one for the background
    shape += tuple([len(tissues) + 1])
    data = np.empty(shape=shape, dtype=np.float64)

    for i in range(len(tissues)):
        image = nib.load(tissues[i])
        data[..., i] = np.array(image.get_data())
        background = background + image.get_data()

    # The background is the complement of the summed tissue probabilities
    background = 1.0 - background
    data[..., len(tissues)] = np.array(background)

    out_mask = join(getcwd(), 'pvc_mask.nii')
    mask = nib.Nifti1Image(data, img_0.affine, header=img_0.header)
    nib.save(mask, out_mask)
    return out_mask
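# For instance (hypothetical shapes): with three tissue maps of shape
# (121, 145, 121), the saved 'pvc_mask.nii' is a 4D volume of shape
# (121, 145, 121, 4), whose last component along the fourth axis is the
# background map.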
def pet_pvc_name(pet_image, pvc_method):
    """Build the file name of a PVC-corrected PET image from the original name and the PVC method."""
    from os.path import basename

    pet_pvc_path = 'pvc-' + pvc_method.lower() + '_' + basename(pet_image)
    return pet_pvc_path
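# Example: pet_pvc_name('sub-01_ses-M00_pet.nii', 'RBV') returns
# 'pvc-rbv_sub-01_ses-M00_pet.nii' (the file name shown here is hypothetical).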
def normalize_to_reference(pet_image, region_mask):
    """Compute a SUVR image by dividing the PET image by the mean uptake in a reference region."""
    import nibabel as nib
    import numpy as np
    from os import getcwd
    from os.path import basename, join

    pet = nib.load(pet_image)
    ref = nib.load(region_mask)

    # Mean PET value inside the reference region, ignoring zero voxels
    region = np.multiply(pet.get_data(), ref.get_data())
    region_mean = np.nanmean(np.where(region != 0, region, np.nan))

    data = pet.get_data() / region_mean

    suvr_pet_path = join(getcwd(), 'suvr_' + basename(pet_image))
    suvr_pet = nib.Nifti1Image(data, pet.affine, header=pet.header)
    nib.save(suvr_pet, suvr_pet_path)
    return suvr_pet_path
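# Example (hypothetical inputs): with a PET volume and a binary mask of the
# chosen reference region, the call below writes '<cwd>/suvr_sub-01_pet.nii',
# in which a voxel value of 1.0 means uptake equal to the reference-region mean:
#
#     normalize_to_reference('sub-01_pet.nii', 'reference_region_mask.nii')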
def atlas_statistics(in_image, in_atlas_list):
    """For each atlas name provided, compute the mean of the input image over each
    region of the atlas and save the result to a TSV file.

    Args:
        in_image: Path to a NIfTI image.
        in_atlas_list: List of names of the atlases to be applied.

    Returns:
        List of paths to the TSV files.
    """
    from os import getcwd
    from os.path import abspath, join
    from nipype.utils.filemanip import split_filename
    from clinica.utils.atlas import AtlasAbstract
    from clinica.utils.statistics import statistics_on_atlas

    orig_dir, base, ext = split_filename(in_image)
    atlas_classes = AtlasAbstract.__subclasses__()
    atlas_statistics_list = []

    # Match each requested atlas name against the available atlas classes
    for atlas in in_atlas_list:
        for atlas_class in atlas_classes:
            if atlas_class.get_name_atlas() == atlas:
                out_atlas_statistics = abspath(join(getcwd(), base + '_space-' + atlas + '_statistics.tsv'))
                statistics_on_atlas(in_image, atlas_class(), out_atlas_statistics)
                atlas_statistics_list.append(out_atlas_statistics)
                break

    return atlas_statistics_list
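# Example (hypothetical input): for an image 'sub-01_pet.nii' and the atlas
# list ['AAL2'], this would write '<cwd>/sub-01_pet_space-AAL2_statistics.tsv'
# and return a one-element list containing its path.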
def pet_container_from_filename(pet_filename):
    """Extract the subject and session labels from a BIDS/CAPS file name and build
    the corresponding output sub-folder of the CAPS hierarchy.
    """
    import re
    from os.path import join

    m = re.search(r'(sub-[a-zA-Z0-9]+)_(ses-[a-zA-Z0-9]+)_', pet_filename)
    if m is None:
        raise ValueError('Input filename is not in a BIDS or CAPS compliant format. '
                         'It does not contain the subject and session information.')

    subject = m.group(1)
    session = m.group(2)
    return join('subjects', subject, session, 'pet/preprocessing')
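# Example: pet_container_from_filename('sub-01_ses-M00_task-rest_pet.nii.gz')
# returns 'subjects/sub-01/ses-M00/pet/preprocessing' (the file name is hypothetical).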
def expand_into_list(in_field, n_tissues):
    return [in_field] * n_tissues


def get_from_list(in_list, index):
    return in_list[index]