mdt.lib package

Submodules

mdt.lib.batch_utils module

Routines for fitting models on multiple subjects.

The most important parts of this module are the batch profiles. These encapsulate information about the subjects and about the modelling settings. Suppose you have a directory full of subjects that you want to analyze with a few models. One way to do that is to write scripts yourself that walk through the directory and fit the models to the subjects. The other way is to implement a BatchProfile that contains the details about your directory structure and let mdt.batch_fit() fetch all the subjects for you.

Batch profiles contain a list with subject information (see SubjectInfo) and a list of models we wish to apply to these subjects. Furthermore, each profile should support some functionality that checks if the profile is suitable for a given directory. Using those functions, mdt.batch_fit() can auto-detect which batch profile to use by selecting the suitable profile that returns the most subjects.
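
Example (a minimal sketch of the second approach; the folder path and model names are hypothetical, and the exact signature of mdt.batch_fit() should be checked in your installed version):

import mdt

# Fit two models to every subject that the auto-detected batch
# profile finds in the given data folder.
mdt.batch_fit('/data/my_study', ['BallStick_r1', 'NODDI'])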

class mdt.lib.batch_utils.AllSubjects[source]

Bases: mdt.lib.batch_utils.BatchSubjectSelection

Selects all subjects for use in the processing.

get_selection(subject_ids)[source]

Get the selection of subjects from the given list of subjects.

Parameters:subject_ids (list of str) – the list of subject ids from which we can choose which one to process
Returns:the subject ids we want to use
Return type:list of str
get_subjects(subjects)[source]
Parameters:subjects (list of SubjectInfo) – the subjects loaded from the batch profile
Returns:the (sub)set of subjects we will actually use during the computations
Return type:list of SubjectInfo
class mdt.lib.batch_utils.BatchFitProtocolLoader(base_dir, protocol_fname=None, protocol_columns=None, bvec_fname=None, bval_fname=None)[source]

Bases: object

A simple protocol loader for loading a protocol from a protocol file or bvec/bval files.

This either loads the protocol file if present, or autoloads the protocol using auto_load_protocol() from the protocol module.

get_protocol()[source]
class mdt.lib.batch_utils.BatchFitSubjectOutputInfo(output_path, subject_id, model_name)[source]

Bases: object

This class is used in conjunction with the function run_function_on_batch_fit_output().

Parameters:
  • output_path (str) – the full path to the directory with the maps
  • subject_id (str) – the id of the current subject
  • model_name (str) – the name of the model (not a path)
class mdt.lib.batch_utils.BatchProfile[source]

Bases: object

get_subjects(data_folder)[source]

Get the information about all the subjects in the current folder.

Parameters:data_folder (str) – the data folder from which to load the subjects
Returns:the information about the found subjects
Return type:list of SubjectInfo
is_suitable(data_folder)[source]

Check if this batch fitting profile can load subjects from the given directory.

This is used for auto-detecting the best batch fitting profile for loading subjects from the given base directory.

Parameters:data_folder (str) – the data folder from which to load the subjects
Returns:
true if this batch fitting profile can use the subjects in the current base directory,
false otherwise.
Return type:boolean
class mdt.lib.batch_utils.BatchSubjectSelection[source]

Bases: object

get_selection(subject_ids)[source]

Get the selection of subjects from the given list of subjects.

Parameters:subject_ids (list of str) – the list of subject ids from which we can choose which one to process
Returns:the subject ids we want to use
Return type:list of str
get_subjects(subjects)[source]
Parameters:subjects (list of SubjectInfo) – the subjects loaded from the batch profile
Returns:the (sub)set of subjects we will actually use during the computations
Return type:list of SubjectInfo
class mdt.lib.batch_utils.SelectedSubjects(subject_ids=None, indices=None)[source]

Bases: mdt.lib.batch_utils.BatchSubjectSelection

Only process the selected subjects.

This method allows selection either by subject name/ID (generally the safer option) or by index (unsafe, since the order of the subjects may change).

This essentially creates a subset of the given list of subjects per specified option, and only the subjects in the intersection of those subsets are processed.

Set any one of the options to None to ignore that option.

Parameters:
  • subject_ids (str or Iterable[str]) – the list of names of subjects to process
  • indices (int or Iterable[int]) – the list of indices of subjects we wish to process
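
Example (a minimal sketch; the subject ids are hypothetical):

from mdt.lib.batch_utils import SelectedSubjects

# Select by id (generally safer) or by index (order-dependent);
# leave the other option at None to ignore it.
by_id = SelectedSubjects(subject_ids=['subj01', 'subj02'])
by_index = SelectedSubjects(indices=[0, 1])
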
get_selection(subject_ids)[source]

Get the selection of subjects from the given list of subjects.

Parameters:subject_ids (list of str) – the list of subject ids from which we can choose which one to process
Returns:the subject ids we want to use
Return type:list of str
get_subjects(subjects)[source]
Parameters:subjects (list of SubjectInfo) – the subjects loaded from the batch profile
Returns:the (sub)set of subjects we will actually use during the computations
Return type:list of SubjectInfo
class mdt.lib.batch_utils.SimpleBatchProfile(*args, **kwargs)[source]

Bases: mdt.lib.batch_utils.BatchProfile

A base class for quickly implementing a batch profile.

Implementing classes need only implement the method _get_subjects(), then this class will handle the rest.

Parameters:base_directory (str) – the base directory from which we will load the subjects information
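
Example (a hypothetical profile for a flat layout like <data_folder>/<subject_id>/data.nii.gz with bvec/bval files; the layout and filenames are assumptions, and we assume _get_subjects() receives the data folder just like get_subjects() does):

import os

from mdt.lib.batch_utils import (BatchFitProtocolLoader,
                                 SimpleBatchProfile, SimpleSubjectInfo)

class FlatDirProfile(SimpleBatchProfile):

    def _get_subjects(self, data_folder):
        subjects = []
        for subject_id in sorted(os.listdir(data_folder)):
            base = os.path.join(data_folder, subject_id)
            if not os.path.isdir(base):
                continue
            # Load the protocol from this subject's bvec/bval pair.
            protocol_loader = BatchFitProtocolLoader(
                base,
                bvec_fname=os.path.join(base, 'data.bvec'),
                bval_fname=os.path.join(base, 'data.bval'))
            subjects.append(SimpleSubjectInfo(
                data_folder, base, subject_id,
                os.path.join(base, 'data.nii.gz'),
                protocol_loader,
                None))  # mask_fname=None: auto-generate a mask
        return subjects
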
get_subjects(data_folder)[source]

Get the information about all the subjects in the current folder.

Parameters:data_folder (str) – the data folder from which to load the subjects
Returns:the information about the found subjects
Return type:list of SubjectInfo
is_suitable(data_folder)[source]

Check if this batch fitting profile can load subjects from the given directory.

This is used for auto-detecting the best batch fitting profile for loading subjects from the given base directory.

Parameters:data_folder (str) – the data folder from which to load the subjects
Returns:
true if this batch fitting profile can use the subjects in the current base directory,
false otherwise.
Return type:boolean
class mdt.lib.batch_utils.SimpleSubjectInfo(data_folder, subject_base_folder, subject_id, dwi_fname, protocol_loader, mask_fname, gradient_deviations=None, noise_std=None)[source]

Bases: mdt.lib.batch_utils.SubjectInfo

This class contains all the information about found subjects during batch fitting.

It is returned by the method get_subjects() from the class BatchProfile.

Parameters:
  • data_folder (str) – the data folder used by the batch profile
  • subject_base_folder (str) – the base folder of this subject
  • subject_id (str) – the subject id
  • dwi_fname (str) – the filename with path to the dwi image
  • protocol_loader (ProtocolLoader) – the protocol loader that can load the protocol for us
  • mask_fname (str) – the filename of the mask to use. If None, a mask is auto-generated.
  • gradient_deviations (str) –
  • noise_std (float, ndarray, str) – either None for automatic noise detection or a float with the noise STD to use during fitting or an ndarray with one value per voxel.
data_folder

The data folder in which this subject was found.

Returns:the data folder used by the batch profile when loading this subject.
Return type:str
get_input_data(use_gradient_deviations=False)[source]

Get the input data for this subject.

This is the data we will use during model fitting.

Parameters:use_gradient_deviations (boolean) – if we should enable the use of the gradient deviations
Returns:the input data to use during model fitting
Return type:MRIInputData
subject_base_folder

Get the data base folder of this subject.

Returns:the folder with the main data of this subject, this subject’s home folder.
Return type:str
subject_id

Get the ID of this subject.

Returns:the id of this subject
Return type:str
class mdt.lib.batch_utils.SubjectInfo[source]

Bases: object

data_folder

The data folder in which this subject was found.

Returns:the data folder used by the batch profile when loading this subject.
Return type:str
get_input_data(use_gradient_deviations=False)[source]

Get the input data for this subject.

This is the data we will use during model fitting.

Parameters:use_gradient_deviations (boolean) – if we should enable the use of the gradient deviations
Returns:the input data to use during model fitting
Return type:MRIInputData
subject_base_folder

Get the data base folder of this subject.

Returns:the folder with the main data of this subject, this subject’s home folder.
Return type:str
subject_id

Get the ID of this subject.

Returns:the id of this subject
Return type:str
mdt.lib.batch_utils.batch_apply(data_folder, func, batch_profile=None, subjects_selection=None, extra_args=None)[source]

Apply a function on the subjects found in the batch profile.

Parameters:
  • func (callable) – the function we will apply for every subject, should accept as single argument an instance of SubjectInfo.
  • data_folder (str) – The data folder to process
  • batch_profile (BatchProfile or str) – the batch profile to use, or the name of a batch profile to use. If not given it is auto detected.
  • subjects_selection (BatchSubjectSelection or Iterable[Union[str, int]]) – the subjects to use for processing. If None, all subjects are processed. If a list is given instead of a BatchSubjectSelection instance we apply the following: if the elements in that list are strings we use them as subject ids, if they are integers we use them as subject indices.
  • extra_args (list) – a list of additional arguments that are passed to the function. If this is set, the callback function must accept these additional args.
Returns:

per subject id the output from the function

Return type:

dict
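
Example (a brief sketch; the folder path is hypothetical):

from mdt.lib.batch_utils import batch_apply

# Collect, per subject id, the base folder of that subject.
folders = batch_apply('/data/my_study',
                      lambda subject_info: subject_info.subject_base_folder)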

mdt.lib.batch_utils.batch_profile_factory(batch_profile, base_directory)[source]

Wrapper function for getting a batch profile.

Parameters:
  • batch_profile (None, string or BatchProfile) – indication of the batch profile to use. If a string is given, the profile is loaded from the user's home folder. Else, the best matching profile is returned.
  • base_directory (str) – the data folder we want to use the batch profile on.
Returns:

If the given batch profile is None we return the output from get_best_batch_profile().

If the batch profile is a string we load it from the batch profiles loader. Else, we return the input as-is.

Return type:

BatchProfile

mdt.lib.batch_utils.get_best_batch_profile(data_folder)[source]

Get the batch profile that best matches the given directory.

Parameters:data_folder (str) – the directory for which to get the best batch profile.
Returns:the best matching batch profile.
Return type:BatchProfile
mdt.lib.batch_utils.get_subject_information(data_folder, subject_ids, batch_profile=None)[source]

Loads a batch profile and finds the subjects with the given subject ids.

Parameters:
  • data_folder (str) – The data folder from which to load the subjects
  • subject_ids (str or list of str) – the subject we would like to retrieve, or a list of subject ids.
  • batch_profile (BatchProfile or str) – the batch profile to use, or the name of a batch profile to use. If not given it is auto detected.
Returns:

the subject info, or the list of subject infos, of the requested subjects.

Return type:

Union[mdt.lib.batch_utils.SubjectInfo, List[mdt.lib.batch_utils.SubjectInfo]]

Raises:

ValueError – if one of the subjects could not be found.

mdt.lib.batch_utils.get_subject_selection(subjects_selection)[source]

Load a subject selection object from the polymorphic input.

Parameters:subjects_selection (BatchSubjectSelection or iterable) – the subjects to use for processing. If None, all subjects are processed. If a list is given instead of a BatchSubjectSelection instance we apply the following: if the elements in that list are strings we use them as subject ids, if they are integers we use them as subject indices.
Returns:a subject selection object.
Return type:mdt.lib.batch_utils.BatchSubjectSelection
Raises:ValueError – if a list is given with mixed strings and integers.
mdt.lib.batch_utils.run_function_on_batch_fit_output(func, output_folder, subjects_selection=None, model_names=None)[source]

Run a function on the output of a batch fitting routine.

This enables you to run a function on every model output from every subject. This expects the output directory to contain directories and files like <subject_id>/<model_name>/<map_name>.nii.gz

Parameters:
  • func (Callable[[BatchFitSubjectOutputInfo], any]) – the python function we should call for every map and model. This should accept as single parameter a BatchFitSubjectOutputInfo.
  • output_folder (str) – the folder containing the output of the batch fitting routine
  • subjects_selection (BatchSubjectSelection or iterable) – the subjects to use for processing. If None all subjects are processed.
  • model_names (list) – the list of model names to process. If not given we will run the function on all models.
Returns:

indexed by subject->model_name, the values are the return values of the user's function

Return type:

dict
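
Example (a brief sketch; the output folder and model name are hypothetical):

from mdt.lib.batch_utils import run_function_on_batch_fit_output

def report(output_info):
    # output_info is a BatchFitSubjectOutputInfo
    print(output_info.subject_id, output_info.model_name,
          output_info.output_path)

run_function_on_batch_fit_output(report, '/data/output',
                                 model_names=['NODDI'])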

mdt.lib.components module

mdt.lib.components.add_component(component_type, name, cls, meta_info=None)[source]

Adds a component class to the library.

Parameters:
  • component_type (str) – the type of the component, see supported_component_types.
  • name (str) – the name of the component
  • cls (class) – the class or constructor function for the component
  • meta_info (dict) – a dictionary with meta information about the component
mdt.lib.components.add_template_component(template)[source]

Adds a component template to the library.

Parameters:template (mdt.component_templates.base.ComponentTemplateMeta) – the template for constructing the component class.
mdt.lib.components.get_batch_profile(batch_profile)[source]

Load the class of one of the available batch profiles

Parameters:batch_profile (str) – The name of the batch profile class to load
Returns:the batch profile class
Return type:cls
mdt.lib.components.get_component(component_type, name)[source]

Get the component class for the component of the given type and name.

Parameters:
  • component_type (str) – the type of the component, see supported_component_types.
  • name (str) – the name of the component
Returns:

the component class.

Return type:

class

mdt.lib.components.get_component_list(component_type)[source]

Get a list of available components by component type.

Parameters:component_type (str) – the type of the component, see supported_component_types.
Returns:list of available components
Return type:list of str
mdt.lib.components.get_meta_info(component_type, name)[source]

Get the meta information dictionary for the component of the given type and name.

Parameters:
  • component_type (str) – the type of the component, see supported_component_types.
  • name (str) – the name of the component
Returns:

the meta information

Return type:

dict

mdt.lib.components.get_model(model_name)[source]

Load the class of one of the available models.

Parameters:model_name (str) – One of the models from the composite models
Returns:A composite model.
Return type:class
mdt.lib.components.get_template(component_type, name)[source]

Get the template class for the given component.

This may not be supported for all component types and components. That is, since components can either be added as classes or as templates, we can not guarantee a template class for any requested component.

Parameters:
  • component_type (str) – the type of the component, see supported_component_types.
  • name (str) – the name of the component
Returns:

a template class if possible.

Return type:

mdt.component_templates.base.ComponentTemplateMeta

Raises:

ValueError – if no component of the given name could be found.

mdt.lib.components.has_component(component_type, name)[source]

Check if a component is available.

Parameters:
  • component_type (str) – the type of the component, see supported_component_types.
  • name (str) – the name of the component
Returns:

if we have a component available of the given type and given name.

Return type:

boolean
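
Example (the component type string 'composite_models' is an assumption here; see supported_component_types for the valid values):

from mdt.lib.components import get_component, has_component

if has_component('composite_models', 'NODDI'):
    model_class = get_component('composite_models', 'NODDI')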

mdt.lib.components.reload()[source]

Clear the component library and reload all default components.

This will load the components from the user home folder and from the MOT library.

mdt.lib.components.remove_last_entry(component_type, name)[source]

Removes the last entry of the given component.

Parameters:
  • component_type (str) – the type of the component, see supported_component_types.
  • name (str) – the name of the component
mdt.lib.components.temporary_component_updates()[source]

Creates a context that keeps track of the component mutations and undoes them when the context exits.

This can be useful to temporarily add or remove some components from the library.
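
Example (a brief sketch; the component type and name are placeholders):

from mdt.lib.components import remove_last_entry, temporary_component_updates

with temporary_component_updates():
    # Temporarily hide a component; the removal is undone
    # automatically when the context exits.
    remove_last_entry('composite_models', 'NODDI')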

mdt.lib.deferred_mappings module

class mdt.lib.deferred_mappings.DeferredActionDict(func, items, cache=True)[source]

Bases: collections.abc.MutableMapping

Applies the given function on the given items at the moment of data request.

The moment one of the keys of this dict is requested, we apply the given function to the corresponding item and return the result of that function. The advantage of this class is that it defers an expensive operation until it is needed.

Items added to this dictionary after creation are assumed to be final, that is, we won’t run the function on them.

Parameters:
  • func (Function) –

    the callback function to apply on the given items at request, with signature:

    def callback(key, value)
    
  • items (collections.MutableMapping) – the items on which we operate
  • cache (boolean) – if we want to cache computed results
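
Example (a small sketch of the deferred behaviour):

from mdt.lib.deferred_mappings import DeferredActionDict

# The callback only runs when a key is first accessed; with
# cache=True the computed result is then stored.
items = DeferredActionDict(lambda key, value: value * 2,
                           {'a': 1, 'b': 2})
print(items['a'])  # the callback runs here and prints 2
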
class mdt.lib.deferred_mappings.DeferredActionTuple(func, items, cache=True)[source]

Bases: collections.abc.Sequence

Applies the given function on the given items at the moment of request.

The moment one of the elements is requested, we apply the given function to the corresponding item and return the result of that function. The advantage of this class is that it defers an expensive operation until it is needed.

Parameters:
  • func (Function) –

    the callback function to apply on the given items at request, with signature:

    def callback(index, value)
    
  • items (list, tuple) – the items on which we operate
  • cache (boolean) – if we want to cache computed results
class mdt.lib.deferred_mappings.DeferredFunctionDict(items, cache=True)[source]

Bases: collections.abc.MutableMapping

The items should map keys to functions that we apply at the moment of request.

The moment one of the keys of this dict is requested, we apply the function stored in the items dict for that key and return the result of that function. The advantage of this class is that it defers an expensive operation until it is needed.

Items set to this dictionary are assumed to be final, that is, we won’t run the function on them.

Parameters:
  • items (collections.MutableMapping) – the items on which we operate, each value should contain a function with no parameters that we run to return the results.
  • cache (boolean) – if we want to cache computed results
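
Example (a small sketch):

from mdt.lib.deferred_mappings import DeferredFunctionDict

# Each value is a parameterless function, evaluated on first access.
items = DeferredFunctionDict({'total': lambda: sum(range(1000))})
print(items['total'])  # computes 499500 on this first request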

mdt.lib.exceptions module

exception mdt.lib.exceptions.DoubleModelNameException[source]

Bases: Exception

Thrown when there are two models with the same name.

exception mdt.lib.exceptions.InsufficientProtocolError[source]

Bases: Exception

Indicates that the protocol contains insufficient information for fitting a specific model.

This can be raised if a model is missing a column it needs in the protocol, or if there are not enough shells, etc.

exception mdt.lib.exceptions.NoiseStdEstimationNotPossible[source]

Bases: Exception

An exception that can be raised by any ComplexNoiseStdEstimator.

This indicates that the noise std cannot be estimated by the estimation routine.

exception mdt.lib.exceptions.NonUniqueComponent[source]

Bases: Exception

Raised when there are two components of the same type with the same name in the dynamically loadable components.

If this is raised, please double check your components for items with non-unique names.

exception mdt.lib.exceptions.ProtocolIOError[source]

Bases: Exception

Custom exception class for protocol input/output errors.

This can be raised if a protocol is inconsistent or incomplete. It should not be raised for general IO errors; use the IO exception for that.

mdt.lib.fsl_sampling_routine module

mdt.lib.log_handlers module

Implements multiple handles that hook into the Python logging module.

These handlers can for example echo the log entry to the terminal, write it to a file or dispatch it to another class. They are typically configured in the MDT configuration file.

class mdt.lib.log_handlers.LogDispatchHandler(*args, **kwargs)[source]

Bases: logging.StreamHandler

This class is able to dispatch messages to all the attached log listeners.

You can add listeners by adding them to the list of listeners. This list is a class variable and as such is available to all instances and subclasses.

The listeners should be of instance LogListenerInterface.

This enables for example the GUI to hook a log listener indirectly into the logging module.

In general only one copy of this class should be used.

static add_listener(listener)[source]

Add a listener to the dispatch handler.

Parameters:listener (LogListenerInterface) – listener that implements the log listener interface.
Returns:the listener id number. You can use this to remove the listener again.
Return type:int
emit(record)[source]

Emit a record.

If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.

static remove_listener(listener_id)[source]

Remove a listener from the log dispatcher.

Parameters:listener_id (int) – the id of the listener to remove
class mdt.lib.log_handlers.LogListenerInterface[source]

Bases: object

Interface for listeners to work in conjunction with LogDispatchHandler

emit(record, formatted_message)[source]
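
Example (a minimal listener sketch hooked into the dispatcher):

from mdt.lib.log_handlers import LogDispatchHandler, LogListenerInterface

class PrintListener(LogListenerInterface):

    def emit(self, record, formatted_message):
        # Echo every dispatched log entry to the terminal.
        print(formatted_message)

listener_id = LogDispatchHandler.add_listener(PrintListener())
# ... later, detach the listener again:
LogDispatchHandler.remove_listener(listener_id)
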
class mdt.lib.log_handlers.ModelOutputLogHandler(mode='a', encoding=None)[source]

Bases: logging.StreamHandler

This logger logs information about a model optimization to the folder of the model that is being optimized.

It is by default (see the MDT configuration) already constructed and added to the logging module. To set a new file, or to disable this logger, set the output_file property.

close()[source]

Tidy up any resources used by the handler.

This version removes the handler from an internal map of handlers, _handlers, which is used for handler lookup by name. Subclasses should ensure that this gets called from overridden close() methods.

emit(record)[source]

Emit a record.

If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.

output_file
class mdt.lib.log_handlers.StdOutHandler(stream=None)[source]

Bases: logging.StreamHandler

A redirect for stdout.

Emits all log entries to the stdout.

Parameters:stream – the IO stream to which to emit the log entries. If not given we use sys.stdout.
emit(record)[source]

Emit a record.

If a formatter is specified, it is used to format the record. The record is then written to the stream with a trailing newline. If exception information is present, it is formatted using traceback.print_exception and appended to the stream. If the stream has an ‘encoding’ attribute, it is used to determine how to do the output to the stream.

mdt.lib.masking module

mdt.lib.masking.create_median_otsu_brain_mask(dwi_info, protocol, mask_threshold=0, fill_holes=True, **kwargs)[source]

Create a brain mask using the given volume.

Parameters:
  • dwi_info (string or tuple or image) –

    The information about the volume, either:

    • the filename of the input file
    • or a tuple with as first index an ndarray with the DWI and as second index the header
    • or only the image as an ndarray
  • protocol (string or Protocol) – The filename of the protocol file or a Protocol object
  • mask_threshold (float) – everything below this b-value threshold is masked away (value in s/m^2)
  • fill_holes (boolean) – if we will fill holes after the median otsu algorithm and before the thresholding
  • **kwargs – the additional arguments for median_otsu.
Returns:

The created brain mask

Return type:

ndarray
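
Example (a brief sketch; the file paths, including the .prtcl protocol file, are hypothetical):

from mdt.lib.masking import create_median_otsu_brain_mask

mask = create_median_otsu_brain_mask('subject01/data.nii.gz',
                                     'subject01/data.prtcl')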

mdt.lib.masking.create_write_median_otsu_brain_mask(dwi_info, protocol, output_fname, **kwargs)[source]

Write a brain mask using the given volume and output as the given volume.

Parameters:
  • dwi_info (string or tuple or ndarray) – the filename of the input file, or a tuple with as first index an ndarray with the DWI and as second index the header, or only the image as an ndarray.
  • protocol (string or Protocol) – The filename of the protocol file or a Protocol object
  • output_fname (string) – the filename of the output file (the extracted brain mask) If None, no output is written. If dwi_info is an ndarray also no file is written (we don’t have the header).
Returns:

The created brain mask

Return type:

ndarray

mdt.lib.masking.generate_simple_wm_mask(scalar_map, whole_brain_mask, threshold=0.3, median_radius=1, nmr_filter_passes=2)[source]

Generate a simple white matter mask by thresholding the given map and smoothing it using a median filter.

Everything below the given threshold will be masked (not used). It also applies the regular brain mask to only retain values inside the brain.

Parameters:
  • scalar_map (str or ndarray) – the path to the FA file
  • whole_brain_mask (str or ndarray) – the general brain mask used in the FA model fitting
  • threshold (double) – the FA threshold. Everything below this threshold is masked (set to 0). To be precise: where fa_data < fa_threshold set the value to 0.
  • median_radius (int) – the radius of the median filter
  • nmr_filter_passes (int) – the number of passes we apply the median filter
mdt.lib.masking.median_otsu(unweighted_volume, median_radius=4, numpass=4, dilate=1)[source]

Simple brain extraction tool for dMRI data.

This function is inspired from the median_otsu function from dipy and is copied here to remove a dependency.

It uses median filter smoothing of the unweighted_volume and an automatic histogram Otsu thresholding technique, hence the name median_otsu.

This function is also inspired by MRtrix's bet, which uses the default values median_radius=3, numpass=2. However, from tests on multiple 1.5T and 3T datasets from GE, Philips and Siemens scanners, the most robust choice is median_radius=4, numpass=4.

Parameters:
  • unweighted_volume (ndarray) – ndarray of the unweighted brain volume(s)
  • median_radius (int) – radius (in voxels) of the applied median filter (default 4)
  • numpass (int) – number of passes of the median filter (default 4)
  • dilate (None or int) – optional number of iterations for binary dilation
Returns:

a 3D ndarray with the binary brain mask

Return type:

ndarray
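
Example (a synthetic stand-in for a real unweighted (b0) volume):

import numpy as np

from mdt.lib.masking import median_otsu

unweighted = np.random.rand(64, 64, 30)
brain_mask = median_otsu(unweighted, median_radius=4, numpass=4)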

mdt.lib.model_fitting module

mdt.lib.model_sampling module

mdt.lib.nifti module

class mdt.lib.nifti.NiftiInfo(header=None, filepath=None)[source]

Bases: object

A nifti information object to store meta data alongside an array.

Parameters:
  • header – the nibabel nifti header
  • filepath (str) – the on-disk filepath of the corresponding nifti file
class mdt.lib.nifti.NiftiInfoDecorated[source]

Bases: object

The additional type of an array after it has been subclassed by nifti_info_decorate_array().

This subclass can be used to check if an array has nifti info attached to it.

nifti_info

Get the nifti information attached to the subclass.

Returns:the nifti information object
Return type:NiftiInfo
mdt.lib.nifti.get_all_nifti_data(directory, map_names=None, deferred=True)[source]

Get the data of all the nifti volumes in the given directory.

If map_names is given we will only load the given map names. Else, we load all .nii and .nii.gz files in the given directory.

Parameters:
  • directory (str) – the directory from which we want to read a number of maps
  • map_names (list of str) – the names of the maps we want to use. If given, we only use and return these maps.
  • deferred (boolean) – if True we return a deferred loading dictionary instead of a dictionary with the values loaded as arrays.
Returns:

A dictionary with the volumes. The keys of the dictionary are the filenames (without the extension) of the .nii(.gz) files in the given directory.

Return type:

dict

mdt.lib.nifti.is_nifti_file(file_name)[source]

Check if the given file is a nifti file.

This only checks if the extension of the given file ends with .nii or with .nii.gz

Parameters:file_name (str) – the name of the file
Returns:true if the file looks like a nifti file, false otherwise
Return type:boolean
mdt.lib.nifti.load_all_niftis(directory, map_names=None)[source]

Loads all niftis in the given directory as nibabel nifti files.

This does not load the data directly, it loads the niftis in a dictionary. To get a direct handle to the image data use the function get_all_nifti_data().

If map_names is given we will only load the given maps. Else, we will load all .nii and .nii.gz files. The map name is the filename of a nifti without the extension.

In the case that both a .nii and a .nii.gz file with the same name exist, we load the .nii as the main map and the .nii.gz under its name including the extension.

Parameters:
  • directory (str) – the directory from which we want to load the niftis
  • map_names (list of str) – the names of the maps we want to use. If given, we only use and return these maps.
Returns:

A dictionary with the loaded nibabel proxies (see load_nifti()).

The keys of the dictionary are the filenames without the extension of the .nii(.gz) files in the given directory.

Return type:

dict

mdt.lib.nifti.load_nifti(nifti_volume)[source]

Load and return a nifti file.

This will apply path resolution if a filename without extension is given. See the function nifti_filepath_resolution() for details.

Parameters:nifti_volume (string) – The filename of the volume to use.
Returns:the loaded nifti image
Return type:nibabel.nifti2.Nifti2Image
mdt.lib.nifti.nifti_filepath_resolution(file_path)[source]

Tries to resolve the filename to a nifti based on only the filename.

For example, this resolves the path: /tmp/mask to:

  • /tmp/mask if it exists
  • /tmp/mask.nii if it exists
  • /tmp/mask.nii.gz if it exists

Hence, the lookup order is: path, path.nii, path.nii.gz

If a file with an extension is given we will do no further resolving and return the path as is.

Parameters:file_path (str) – the path to the nifti file, can be without extension.
Returns:the file path we resolved to the final file.
Return type:str
Raises:ValueError – if no nifti file could be found
mdt.lib.nifti.nifti_info_decorate_array(array, nifti_info=None)[source]

Decorate the provided numpy array with nifti information.

This can be used to ensure that an array is of additional subclass NiftiInfoDecorated.

Parameters:
  • array (ndarray) – the numpy array to decorate
  • nifti_info (NiftiInfo) – the nifti info to attach to the array
mdt.lib.nifti.nifti_info_decorate_nibabel_image(nifti_obj)[source]

Decorate the nibabel image container such that the get_data method returns a NiftiInfoDecorated ndarray.

Parameters:nifti_obj – a nibabel nifti object
mdt.lib.nifti.unzip_nifti(input_filename, output_filename)[source]

Unzips the given nifti file.

This will create the output directories if they do not yet exist.

Parameters:
  • input_filename (str) – the nifti file we would like to unzip. Should have the extension .gz.
  • output_filename (str) – the location for the output file. Should have the extension .nii.
Raises:

ValueError – if the extensions of either the input or output filename are not correct.

mdt.lib.nifti.write_all_as_nifti(volumes, directory, nifti_header=None, overwrite_volumes=True, gzip=True)[source]

Write a number of volume maps to the specific directory.

Parameters:
  • volumes (dict) – the volume maps (in 3d) with the results we want to write. The filenames are generated using the keys in the given volumes
  • directory (str) – the directory to write to
  • nifti_header – the nifti header to use for each of the volumes.
  • overwrite_volumes (boolean) – defaults to True, if we want to overwrite the volumes if they exist
  • gzip (boolean) – if True we write the files as .nii.gz, if False we write the files as .nii
mdt.lib.nifti.write_nifti(data, output_fname, header=None, affine=None, use_data_dtype=True, **kwargs)[source]

Write data to a nifti file.

This will create the output directory if it does not yet exist.

Parameters:
  • data (ndarray) – the data to write to that nifti file
  • output_fname (str) – the name of the resulting nifti file, this function will append .nii.gz if no suitable extension is given.
  • header (nibabel header) – the nibabel header to use as header for the nifti file. If None we will use a default header.
  • affine (ndarray) – the affine transformation matrix
  • use_data_dtype (boolean) – if we want to use the dtype from the data instead of that from the header when saving the nifti.
  • **kwargs – other arguments to Nifti2Image from NiBabel
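
Example (a combined sketch with load_nifti() and the path resolution described above; the output path is hypothetical):

import numpy as np

from mdt.lib.nifti import load_nifti, write_nifti

data = np.zeros((10, 10, 10))
write_nifti(data, '/tmp/example')  # '.nii.gz' is appended
img = load_nifti('/tmp/example')   # resolves to /tmp/example.nii.gz
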
mdt.lib.nifti.yield_nifti_info(directory)[source]

Get information about the nifti volumes in the given directory.

Parameters:directory (str) – the directory to get the names of the available maps from
Yields:tuple – (path, map_name, extension) for every map found

mdt.lib.post_processing module

This module contains various standard post-processing routines for use after optimization or sampling.

class mdt.lib.post_processing.DKIMeasures[source]

Bases: object

static extra_optimization_maps(parameters_dict)[source]

Calculate DKI statistics like the mean, axial and radial kurtosis.

The Mean Kurtosis (MK) is calculated by averaging the Kurtosis over orientations on the unit sphere. The Axial Kurtosis (AK) is obtained using the principal direction of diffusion (fe; first eigenvec) from the Tensor as its direction and then averaging the Kurtosis over +fe and -fe. Finally, the Radial Kurtosis (RK) is calculated by averaging the Kurtosis over a circle of directions around the first eigenvec.

Parameters:parameters_dict (dict) – the fitted Kurtosis parameters, this requires a dictionary with at least the elements: ‘d’, ‘dperp0’, ‘dperp1’, ‘theta’, ‘phi’, ‘psi’, ‘W_0000’, ‘W_1000’, ‘W_1100’, ‘W_1110’, ‘W_1111’, ‘W_2000’, ‘W_2100’, ‘W_2110’, ‘W_2111’, ‘W_2200’, ‘W_2210’, ‘W_2211’, ‘W_2220’, ‘W_2221’, ‘W_2222’.
Returns:maps for the Mean Kurtosis (MK), Axial Kurtosis (AK) and Radial Kurtosis (RK).
Return type:dict
class mdt.lib.post_processing.DTIMeasures[source]

Bases: object

static extra_optimization_maps(results)[source]

Return some interesting measures like FA, MD, RD and AD.

This function is meant to be used as a post processing routine in Tensor-like compartment models.

Parameters:results (dict) – Dictionary containing at least theta, phi, psi, d, dperp0 and dperp1. We will use these to generate some standard measures from the diffusion Tensor.
Returns:
a dictionary with as keys typical elements like ‘FA’ and ‘MD’ as interesting output, and as values the corresponding maps. These maps are per voxel, and optionally per instance per voxel.
Return type:dict
static extra_sampling_maps(results)[source]

Return some interesting measures derived from the samples.

Please note that this function expects the results dictionary to be keyed by the plain parameter names, that is, it expects the elements d, dperp0 and dperp1 to be present.

Parameters:results (dict[str, ndarray]) – a dictionary containing the samples for each of the parameters.
Returns:a set of additional maps with one value per voxel.
Return type:dict
static fractional_anisotropy(d, dperp0, dperp1)[source]

Calculate the fractional anisotropy (FA).

Returns:the fractional anisotropy for each voxel.
Return type:ndarray
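
Example (a sketch with diffusivities in SI units, m^2/s):

import numpy as np

from mdt.lib.post_processing import DTIMeasures

d = np.array([1.7e-9])
dperp0 = np.array([0.4e-9])
dperp1 = np.array([0.3e-9])
fa = DTIMeasures.fractional_anisotropy(d, dperp0, dperp1)
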
static fractional_anisotropy_std(d, dperp0, dperp1, d_std, dperp0_std, dperp1_std, covariances=None)[source]

Calculate the standard deviation of the fractional anisotropy (FA) using error propagation.

Parameters:
  • d (ndarray) – a 1d array
  • dperp0 (ndarray) – a 1d array
  • dperp1 (ndarray) – a 1d array
  • d_std (ndarray) – a 1d array
  • dperp0_std (ndarray) – a 1d array
  • dperp1_std (ndarray) – a 1d array
  • covariances (dict) – optionally, a matrix holding the covariances. This expects the keys to be like: ‘<param_0>_to_<param_1>’. The order of the parameter names does not matter.
Returns:

the standard deviation of the fractional anisotropy using error propagation of the diffusivities.

Return type:

ndarray

class mdt.lib.post_processing.NODDIMeasures[source]

Bases: object

static noddi_bingham_extra_optimization_maps(results)[source]

Computes the ODIs and the Dispersion Anisotropic Index (DAI) for the NODDI Bingham model.

static noddi_bingham_extra_sampling_maps(results)[source]

Computes the ODIs and the Dispersion Anisotropic Index (DAI) for the NODDI Bingham model.

This computes the indices per sample and takes the mean and std. over that.

static noddi_watson_extra_optimization_maps(results)[source]

Computes the NDI and ODI for the NODDI Watson model

static noddi_watson_extra_sampling_maps(results)[source]

Computes the NDI and ODI per sample and average over the derived values.

mdt.lib.post_processing.noddi_dti_maps(results)[source]

Compute NODDI-like statistics from Tensor/Kurtosis parameter fits.

This uses mdt.utils.compute_noddi_dti() for the computation, see there for more information.

Parameters:results (mdt.models.composite.ExtraOptimizationMapsInfo) –

the results data, should contain at least:

  • d (ndarray): principal diffusivity
  • dperp0 (ndarray): primary perpendicular diffusivity
  • dperp1 (ndarray): secondary perpendicular diffusivity

And, if present, we also use these:

  • FA (ndarray): if computed already, the Fractional Anisotropy of the given diffusivities
  • MD (ndarray): if computed already, the Mean Diffusivity of the given diffusivities
  • MK (ndarray): if computing for Kurtosis, the computed Mean Kurtosis. If not given, we assume unity.
Returns:maps for the NODDI-DTI, NDI and ODI measures.
Return type:dict

mdt.lib.processing_strategies module

mdt.lib.shell_utils module

class mdt.lib.shell_utils.BasicShellApplication[source]

Bases: object

classmethod console_script()[source]

Method used to start the command when launched from a distutils console script.

get_documentation_arg_parser()[source]

Get the argument parser that can be used for writing the documentation

Returns:the argument parser
Return type:argparse.ArgumentParser
run(args, extra_args)[source]

Run the application with the given arguments.

Parameters:
  • args – the arguments from the argparser.
  • extra_args
start(run_args=None)[source]

Starts the command and registers signal handlers.

Parameters:run_args (list) – the list of run arguments. If None we use sys.argv[1:].
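
Example (an illustration only; a real MDT shell script will typically override more methods, such as the argument parser construction):

from mdt.lib.shell_utils import BasicShellApplication

class HelloApp(BasicShellApplication):

    def run(self, args, extra_args):
        print('a minimal shell application sketch')
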
mdt.lib.shell_utils.get_argparse_extension_checker(choices, dir_allowed=False)[source]

Get an argparse.Action class that can check for correct extensions.

Returns:a class (not an instance) of an argparse action.
Return type:argparse.Action

mdt.lib.sorting module

This module contains some routines for sorting volumes and lists voxel-wise.

For example, in some applications it can be desired to sort volume fractions voxel-wise over an entire volume. This module contains functions for creating sort index matrices (determining the sort order), sorting volumes and lists and anti-sorting volumes (reversing the sort operation based on the sort index).

mdt.lib.sorting.create_2d_sort_matrix(input_volumes, reversed_sort=False)[source]

Create an index matrix that sorts the given input on the second dimension, from small to large values.

Parameters:
  • input_volumes (ndarray or list) – either a list with 1d matrices or a 2d volume to use directly.
  • reversed_sort (boolean) – if True we reverse the sort and we sort from large to small.
Returns:

a 2d matrix with on the second dimension the indices of the elements in sorted order.

Return type:

ndarray

mdt.lib.sorting.create_4d_sort_matrix(input_volumes, reversed_sort=False)[source]

Create an index matrix that sorts the given input on the 4th dimension from small to large values (per element).

Parameters:
  • input_volumes (ndarray or list) – either a list with 3d volumes (or 4d with a singleton on the fourth dimension), or a 4d volume to use directly.
  • reversed_sort (boolean) – if True we reverse the sort and we sort from large to small.
Returns:

a 4d matrix with on the 4th dimension the indices of the elements in sorted order.

Return type:

ndarray

mdt.lib.sorting.sort_orientations(data_input, weight_names, extra_sortable_maps)[source]

Sort the orientations of multi-direction models voxel-wise.

This expects as input 3d/4d volumes. Do not use this with 2d arrays.

This can be used to sort, for example, simulations of the BallStick_r3 model (with three Sticks). For the model itself there is no voxel-wise order over the Sticks since they are all equal compartments. However, when using optimization or ARD with sampling, there is an order within the compartments since the ARD is commonly placed on the second and third Sticks, meaning that these Sticks and their corresponding orientations are compressed to zero if they are not supported. In that case, the Stick with the primary orientation of diffusion has to be the first.

This method accepts as input results from (MDT) model fitting and is able to sort all the maps belonging to a given set of equal compartments per voxel.

Example:

sort_orientations('./output/BallStick_r3',
                  ['w_stick0.w', 'w_stick1.w', 'w_stick2.w'],
                  [['Stick0.theta', 'Stick1.theta', 'Stick2.theta'],
                   ['Stick0.phi', 'Stick1.phi', 'Stick2.phi'], ...])
Parameters:
  • data_input (str or dict) – either a directory or a dictionary containing the maps
  • weight_names (iterable of str) – The names of the maps we use for sorting all other maps. These will be sorted as well.
  • extra_sortable_maps (iterable of iterable) – the list of additional maps to sort. Every element in the given list should be another list with the names of the maps. The length of each of these second-layer lists should match the length of weight_names.
Returns:

the sorted results in a new dictionary. This returns all input maps with some of them sorted.

Return type:

dict

mdt.lib.sorting.sort_volumes_per_voxel(input_volumes, sort_matrix)[source]

Sort the given volumes per voxel using the sort index in the given matrix.

What this essentially does is to look per voxel from which map we should take the first value. Then we place that value in the first volume and we repeat for the next value and finally for the next voxel.

If the length of the 4th dimension is > 1, we shift the 4th dimension to the 5th dimension and sort the array as if the 4th dimension values were a single value. This is useful for sorting (eigen)vector matrices.

Parameters:
  • input_volumes (list) – list of 4d ndarray
  • sort_matrix (ndarray) – 4d ndarray with for every voxel the sort index
Returns:

the same input volumes but then with every voxel sorted according to the given sort index.

Return type:

list

mdt.lib.sorting.undo_sort_volumes_per_voxel(input_volumes, sort_matrix)[source]

Undo the voxel-wise sorting of volumes based on the original sort matrix.

This uses the original sort matrix to place the elements back in their original order. For example, suppose we had data [a, b, c] with sort matrix [1, 2, 0]; then the sorted results are [b, c, a]. Given the sort matrix [1, 2, 0] and the results [b, c, a], this function returns the original matrix [a, b, c].

Parameters:
  • input_volumes (list) – list of 4d ndarray
  • sort_matrix (ndarray) – 4d ndarray with for every voxel the sort index
Returns:

the same input volumes but then with every voxel anti-sorted according to the given sort index.

Return type:

list
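
Example (a combined sketch of the sort/anti-sort round trip):

import numpy as np

from mdt.lib.sorting import (create_4d_sort_matrix,
                             sort_volumes_per_voxel,
                             undo_sort_volumes_per_voxel)

# Two hypothetical volume fraction maps as 4d volumes with a
# singleton fourth dimension.
w0 = np.random.rand(4, 4, 4, 1)
w1 = np.random.rand(4, 4, 4, 1)

sort_matrix = create_4d_sort_matrix([w0, w1])
sorted_maps = sort_volumes_per_voxel([w0, w1], sort_matrix)

# The anti-sort restores the original input volumes.
restored = undo_sort_volumes_per_voxel(sorted_maps, sort_matrix)
assert np.allclose(restored[0], w0)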

Module contents