mdt package

mdt.configuration module

Contains the runtime configuration of MDT.

This consists of two parts: functions to get the current runtime settings, and configuration actions to update these settings. To set a new configuration, create a new ConfigAction and apply it within a context manager using config_context(). Example:

import mdt
from mdt.configuration import YamlStringAction, config_context

config = '''
    optimization:
        general:
            name: 'Powell'
            settings:
                patience: 2
'''
with config_context(YamlStringAction(config)):
    mdt.fit_model(...)
class mdt.configuration.ActivePostProcessingLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Load the default settings for the post-processing calculations.

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.ConfigAction[source]

Bases: object

Defines a configuration action for use in a configuration context.

Subclasses should define apply() and unapply() methods that set and unset the given configuration options.

The apply() method needs to remember the state before applying the action, so that unapply() can restore it.

apply()[source]

Apply the current action to the current runtime configuration.

unapply()[source]

Reset the current configuration to the previous state.

class mdt.configuration.ConfigSectionLoader[source]

Bases: object

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
update(config_dict, updates)[source]

Update the given configuration dictionary with the values in the given updates dict.

This enables automated updating of a configuration file. Updates are written in place.

Parameters:
  • config_dict (dict) – the current configuration dict
  • updates (dict) – the updated values to add to the given config dict.
class mdt.configuration.LoggingLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Loader for the top level key logging.

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.OptimizationSettingsLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Loads the optimization section

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.OutputFormatLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Loader for the top level key output_format.

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.ProcessingStrategySectionLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Loads the config section processing_strategies

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.RuntimeSettingsLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
update(config_dict, updates)[source]

Update the given configuration dictionary with the values in the given updates dict.

This enables automated updating of a configuration file. Updates are written in place.

Parameters:
  • config_dict (dict) – the current configuration dict
  • updates (dict) – the updated values to add to the given config dict.
class mdt.configuration.SampleSettingsLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Loads the sample section

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.SetGeneralOptimizer(optimizer_name, settings=None)[source]

Bases: mdt.configuration.SimpleConfigAction

classmethod from_object(optimizer)[source]
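
For example, this action can be used directly in a configuration context. A minimal sketch (the optimizer name and settings follow the YAML examples above):

import mdt
from mdt.configuration import SetGeneralOptimizer, config_context

with config_context(SetGeneralOptimizer('Powell', settings={'patience': 2})):
    mdt.fit_model(...)
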
class mdt.configuration.SetGeneralSampler(sampler_name, settings=None)[source]

Bases: mdt.configuration.SimpleConfigAction

class mdt.configuration.SimpleConfigAction[source]

Bases: mdt.configuration.ConfigAction

Defines a default implementation of a configuration action.

This simple config implements a default apply() method that saves the current state and a default unapply() that restores the previous state.

Subclasses can most easily add extra behaviour by implementing _apply().

apply()[source]

Apply the current action to the current runtime configuration.

unapply()[source]

Reset the current configuration to the previous state.

class mdt.configuration.TmpResultsDirSectionLoader[source]

Bases: mdt.configuration.ConfigSectionLoader

Load the section tmp_results_dir

load(value)[source]

Load the given configuration value into the current configuration.

Parameters:value – the value to use in the configuration
class mdt.configuration.VoidConfigAction[source]

Bases: mdt.configuration.ConfigAction

Does nothing. Meant as a placeholder so callers do not have to check for None everywhere.

Defines a configuration action for use in a configuration context.

This should define apply() and unapply() methods that set and unset the given configuration options.

The apply() method needs to remember the state before applying the action, so that unapply() can restore it.

apply()[source]

Apply the current action to the current runtime configuration.

unapply()[source]

Reset the current configuration to the previous state.

class mdt.configuration.YamlStringAction(yaml_str)[source]

Bases: mdt.configuration.SimpleConfigAction

mdt.configuration.config_context(config_action)[source]

Creates a temporary configuration context with the given config action.

This will temporarily alter the given configuration keys to the given values. After the context is executed the configuration will revert to the original settings.

Example usage:

config = '''
    optimization:
        general:
            name: 'Nelder-Mead'
            settings:
                patience: 10
'''
with mdt.config_context(mdt.configuration.YamlStringAction(config)):
    mdt.fit_model(...)

or, equivalently:

config = '''
    ...
'''
with mdt.config_context(config):
    ...

This loads the configuration from a YAML string and uses that configuration as the context.

Parameters:config_action (mdt.configuration.ConfigAction or str) – the configuration action to apply. If a string is given, it is wrapped in a YamlStringAction config action.
mdt.configuration.ensure_exists(keys)[source]

Ensure that the given hierarchy of keys exists in the configuration.

Parameters:keys (list of str) – the positions to ensure exist
mdt.configuration.get_active_post_processing()[source]

Get the overview of active post processing switches.

Returns:
a dictionary holding two dictionaries, one called ‘optimization’ and one called ‘sampling’.
Both these dictionaries hold keys of elements to add to the respective post processing phase.
Return type:dict
mdt.configuration.get_config_dir()[source]

Get the location of the components directory.

Returns:the path to the components directory
Return type:str
mdt.configuration.get_config_option(option_name)[source]

Get the current configuration option for the given option name.

Parameters:option_name (list of str or str) – the name of the option, or a path to the option.
Returns:the raw configuration value defined for that option
Return type:object
mdt.configuration.get_general_optimizer_name()[source]

Get the name of the currently configured general optimizer

Returns:the name of the currently configured optimizer
Return type:str
mdt.configuration.get_general_optimizer_options()[source]

Get the settings of the currently configured general optimizer

Returns:the settings of the currently configured optimizer
Return type:dict
mdt.configuration.get_general_sampling_settings()[source]

Get the general sampling settings.

Returns:the configured sampler for use in MDT
Return type:Sampler
mdt.configuration.get_logging_configuration_dict()[source]

Get the configuration dictionary for the logging.dictConfig().

MDT uses a few special logging configuration options to log to files and GUIs. These options are defined in a configuration dictionary which this function returns.

Returns:the configuration dict for use with dictConfig of the Python logging modules
Return type:dict
mdt.configuration.get_model_config(model_name, config)[source]

Get from the given dictionary the config for the given model.

This tries to find the best match between the given config items (by key) and the given model name.

Parameters:
  • model_name (str) – the name of the model we want to match.
  • config (dict) – the config items, with composite model regexes as keys
Returns:

The config content of a matching key.

mdt.configuration.get_optimizer_for_model(model_name)[source]

Get the optimizer for this specific model.

Parameters:model_name (str) – the name of the composite model for which we want to get the optimizer to use.
Returns:the optimizer to use for optimizing the specific model
Return type:Optimizer
mdt.configuration.get_processing_strategy(processing_type, *args, **kwargs)[source]

Get the correct processing strategy for the given model.

Parameters:
  • processing_type (str) – ‘optimization’, ‘sampling’ or any other of the processing_strategies defined in the config
  • *args – passed to the constructor of the loaded processing strategy.
  • **kwargs – passed to the constructor of the loaded processing strategy.
Returns:

the processing strategy to use for this model

Return type:

ModelProcessingStrategy

mdt.configuration.get_section_loader(section)[source]

Get the section loader to use for the given top level section.

Parameters:section (str) – the section key we want to get the loader for
Returns:the config section loader for this top level section of the configuration.
Return type:ConfigSectionLoader
mdt.configuration.get_tmp_results_dir()[source]

Get the default tmp results directory.

This is the default directory for saving temporary computation results. Set to None to disable this and use the model directory.

Returns:the tmp results dir to use during optimization and sampling
Return type:str or None
mdt.configuration.gzip_optimization_results()[source]

Check whether the volume maps from optimization should be written gzipped.

Returns:True if the results of optimization computations should be gzipped, False otherwise.
Return type:boolean
mdt.configuration.gzip_sampling_results()[source]

Check whether the volume maps from sampling should be written gzipped.

Returns:True if the results of sampling computations should be gzipped, False otherwise.
Return type:boolean
mdt.configuration.load_builtin()[source]

Load the config file from the skeleton in mdt/data/mdt.conf

mdt.configuration.load_from_dict(config_dict)[source]

Load configuration options from a given dictionary.

Please note that this will change the global configuration, i.e. this is a persistent change. If you do not want a persistent state change, consider using config_context() instead.

Parameters:config_dict (dict) – the dictionary from which to use the configurations
mdt.configuration.load_from_yaml(yaml_str)[source]

Can be called to use configuration options from a YAML string.

This will update the current configuration with the new options.

Please note that this will change the global configuration, i.e. this is a persistent change. If you do not want a persistent state change, consider using config_context() instead.

Parameters:yaml_str (str) – The string containing the YAML config to parse.
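
For example, to persistently (for the current process) switch the general optimizer; a minimal sketch using the keys from the examples above:

from mdt.configuration import load_from_yaml

load_from_yaml('''
    optimization:
        general:
            name: 'Powell'
            settings:
                patience: 5
''')
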
mdt.configuration.load_specific(file_name)[source]

Can be called by the application to use the config from a specific file.

This assumes that the given file contains YAML content, that is, we want to process it with the function load_from_yaml().

Please note that the last configuration loaded overwrites the values of the previously loaded config files.

Also, please note that this will change the global configuration, i.e. this is a persistent change. If you do not want a persistent state change, consider using config_context() instead.

Parameters:file_name (str) – The name of the file to use.
mdt.configuration.load_user_gui()[source]

Load the GUI-specific config file from the user's home directory.

mdt.configuration.load_user_home()[source]

Load the config file from the user's home directory.

mdt.configuration.set_config_option(option_name, value)[source]

Set the current configuration option for the given option name.

This will overwrite the current configuration for that option with the given value. Be careful, this will change the global configuration value.

Provided values should be objects and not YAML strings. For updating the configuration with YAML strings, please use the function load_from_yaml().

Parameters:
  • option_name (list of str or str) – the name of the option, or a path to the option.
  • value – the object to set for that option
Returns:

the raw configuration value defined for that option

Return type:

object
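
Example usage, a minimal sketch (the option path follows the YAML examples above):

from mdt.configuration import get_config_option, set_config_option

set_config_option(['optimization', 'general', 'name'], 'Powell')
print(get_config_option(['optimization', 'general', 'name']))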

mdt.configuration.update_gui_config(update_dict)[source]

Update the GUI configuration file with the given settings.

Parameters:update_dict (dict) – the items to update in the GUI config file
mdt.configuration.update_write_config(config_file, update_dict)[source]

Update a given configuration file with updated values.

If the configuration file does not exist, a new one is created.

Parameters:
  • config_file (str) – the location of the config file to update
  • update_dict (dict) – the items to update in the config file

mdt.protocols module

class mdt.protocols.Protocol(columns=None)[source]

Bases: collections.abc.Mapping

Create a new protocol. Optionally initializes the protocol with the given set of columns.

Please note that we use SI units throughout MDT. Take care when loading the data that you load it in SI units.

For example:

  • G (gradient amplitude) in T/m (Tesla per meter)
  • Delta (time interval) in seconds
  • delta (gradient duration) in seconds
Parameters:columns (dict) – The initial set of columns used by this protocol; the keys should be the names of the parameters (the same as those used in the model functions) and the values should be numpy arrays of equal length.
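
For example, a minimal construction sketch (the column names follow the bvec/bval loaders described below; which columns a model actually needs is model-dependent):

import numpy as np
from mdt.protocols import Protocol

protocol = Protocol(columns={
    'gx': np.array([0., 1., 0.]),
    'gy': np.array([0., 0., 1.]),
    'gz': np.array([0., 0., 0.]),
    'b': np.array([0., 1e9, 1e9]),    # in SI units (s/m^2), not s/mm^2
    'Delta': np.array([30e-3] * 3),   # seconds
    'delta': np.array([20e-3] * 3),   # seconds
})
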
append_protocol(protocol)[source]

Append another protocol to this protocol and return the result as a new protocol.

This will add the columns of the other protocol to the columns of (a copy of) this protocol. This assumes that both protocols have the same columns.

column_names

Get the names of the columns.

This only lists the real columns, not the estimated ones.

Returns:The names of the columns.
Return type:list of str
deepcopy()[source]

Return a deep copy of this protocol.

Returns:A deep copy of this protocol.
Return type:Protocol
estimated_column_names

Get the names of the virtual columns.

This will only return the names of the virtual columns for which no real column exists.

gamma_h

Get the gamma of the H atom used by this protocol.

Returns:The gamma of the H atom used by this protocol.
Return type:float
get_all_columns()[source]

Get all real (known) columns as a big array.

Returns:All the real columns of this protocol.
Return type:ndarray
get_b_values_shells(width=100000000.0)[source]

Get the b-values of the unique shells in this protocol.

Parameters:width (float) – assume a certain bandwidth of b-values around each shell. B-values are grouped into one shell if they are no more than this width apart.
Returns:
per b-value the information about that shell as a dictionary. Each of these dicts contains the
b_value and the nmr_volumes keys.
Return type:list
Raises:KeyError – This function may throw a key error if the ‘b’ column in the protocol could not be loaded.
get_column(column_name)[source]

Get the column associated by the given column name.

Parameters:column_name (str) – The name of the column we want to return.
Returns:The column we would like to return. This is returned as a 2d matrix with shape (n, 1).
Return type:ndarray
Raises:KeyError – If the column could not be found.
get_columns(column_names)[source]

Get a matrix containing the requested column names in the order given.

Returns:A 2d matrix with the requested columns concatenated.
Return type:ndarray
get_indices_bval_in_range(start=0, end=1000000000.0)[source]

Get the indices of the b-values in the range [start, end].

This can be used to get the indices of gradients whose b-value is in the range suitable for a specific analysis.

Note that we use SI units and you need to specify the values in units of s/m^2 and not in s/mm^2.

Also note that specifying 0 as the start of the range does not automatically mean that the unweighted volumes are returned. It can happen that the b-value of the unweighted volumes is higher than 0 even if the gradient g is [0 0 0]. This function does not make any assumptions about that and just returns indices in the given range.

If you want to include the unweighted volumes, make a call to get_unweighted_indices() yourself.

Parameters:
  • start (float) – b-value of the start of the range (inclusive) we want to get the indices of the volumes from. Should be positive. We subtract epsilon for float comparison
  • end (float) – b-value of the end of the range (inclusive) we want to get the indices of the volumes from. Should be positive. We add epsilon for float comparison
  • epsilon (float) – the epsilon we use in the range.
Returns:

a list of indices of all volumes whose b-value is in the given range.

Return type:

list
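
Continuing the construction example above, a minimal sketch (values in SI units):

# volumes with b between 0 and 1.5e9 s/m^2 (i.e. 0 to 1500 s/mm^2)
indices = protocol.get_indices_bval_in_range(start=0, end=1.5e9)
subset = protocol.get_new_protocol_with_indices(indices)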

get_new_protocol_with_indices(indices)[source]

Create a new protocol object with all the columns but as rows only those of the given indices.

Parameters:indices – the indices we want to use in the new protocol
Returns:a protocol with all the data of the given indices
Return type:Protocol
get_nmr_shells()[source]

Get the number of unique shells in this protocol.

This is measured by counting the number of unique weighted bvals in this protocol.

Returns:The number of unique weighted b-values in this protocol
Return type:int
Raises:KeyError – This function may throw a key error if the ‘b’ column in the protocol could not be loaded.
get_unweighted_indices(unweighted_threshold=None)[source]

Get the indices to the unweighted volumes.

If the column ‘b’ could not be found, assume that all measurements are unweighted.

Parameters:unweighted_threshold (float) – the threshold under which we call it unweighted.
Returns:A list of indices to the unweighted volumes.
Return type:list of int
get_weighted_indices(unweighted_threshold=None)[source]

Get the indices to the weighted volumes.

Parameters:unweighted_threshold (float) – the threshold under which we call it unweighted.
Returns:A list of indices to the weighted volumes.
Return type:list of int
has_column(column_name)[source]

Check if this protocol has a column with the given name.

This will also return true if the column can be estimated from the other columns. See is_column_real() to get information for columns that are really known.

Returns:true if there is a column with the given name, false otherwise.
Return type:boolean
is_column_real(column_name)[source]

Check if this protocol has real column information for the column with the given name.

For example, has_column('G') will normally return true since 'G' can be estimated from 'b'. This function will return false if the column needs to be estimated and true if real data is available for the column.

Returns:true if there is really a column with the given name, false otherwise.
Return type:boolean
length

Get the length of this protocol.

Returns:The length of the protocol.
Return type:int
number_of_columns

Get the number of columns in this protocol.

This only counts the real columns, not the estimated ones.

Returns:The number of columns in this protocol.
Return type:int
with_added_column_from_file(name, file_name, multiplication_factor=1)[source]

Create a copy of this protocol with the given column (loaded from a file) added to this protocol.

The given file can either contain a single value or one value per protocol line.

Parameters:
  • name (str) – The name of the column to add.
  • file_name (str) – The file to get the column from.
  • multiplication_factor (double) – we might need to scale the data by a constant. For example, if the data in the file is in ms we might need to scale it to seconds by multiplying with 1e-3
Returns:

the new protocol with the column added

Return type:

Protocol

with_column_removed(column_name)[source]

Create a copy of this protocol with the given column removed.

Parameters:column_name (str) – The name of the column to remove
Returns:the new updated protocol
Return type:Protocol
with_columns_removed(column_names)[source]

Create a copy of this protocol with the given columns removed.

Parameters:column_names (list of str) – The name of the columns to remove
Returns:the new updated protocol
Return type:Protocol
with_new_column(name, data)[source]

Create a copy of this protocol with the given column updated/added.

Parameters:
  • name (str) – The name of the column to add
  • data (ndarray) – The vector to add to this protocol.
Returns:

the new protocol with the updated columns

Return type:

Protocol

with_rows_removed(rows)[source]

Create a copy of the protocol with a list of rows removed from all the columns.

Please note that the protocol is 0-indexed.

Parameters:rows (list of int) – List with indices of the rows to remove
with_update(name, data)[source]

Create a copy of the protocol with the given column updated to a new value.

Synonymous with with_new_column().

Parameters:
  • name (str) – The name of the column to add
  • data (ndarray or float) – The value or vector to add to this protocol.
Returns:

the updated protocol

Return type:

Protocol

with_updates(additional_columns)[source]

Creates a copy of this protocol with the given columns added.

Parameters:additional_columns (dict) – the additional columns to add
Returns:the new updated protocol
Return type:Protocol
class mdt.protocols.SimpleVirtualColumn(name, generate_function)[source]

Bases: mdt.protocols.VirtualColumn

Create a simple virtual column that uses the given generate function to get the column.

Parameters:
  • name (str) – the name of the column
  • generate_function (python function) – the function to generate the column
get_values(parent_protocol)[source]

Get the column given the information in the given protocol.

Parameters:parent_protocol (Protocol) – the protocol object to use as a basis for generating the column
Returns:the single column as a row vector or 2d matrix of shape nx1
Return type:ndarray
class mdt.protocols.VirtualColumn(name)[source]

Bases: object

The interface for generating virtual columns.

Virtual columns are columns generated on the fly from the other parts of the protocol. They are generally only used if the column they generate is not actually present in the protocol.

In the Protocol they are used separately from the RealColumns. The VirtualColumns can always be added to the Protocol, but are only used when needed. The RealColumns can overrule VirtualColumns by their presence.

Parameters:name (str) – the name of the column this object generates.
get_values(parent_protocol)[source]

Get the column given the information in the given protocol.

Parameters:parent_protocol (Protocol) – the protocol object to use as a basis for generating the column
Returns:the single column as a row vector or 2d matrix of shape nx1
Return type:ndarray
class mdt.protocols.VirtualColumnB[source]

Bases: mdt.protocols.VirtualColumn

get_values(parent_protocol)[source]

Get the column given the information in the given protocol.

Parameters:parent_protocol (Protocol) – the protocol object to use as a basis for generating the column
Returns:the single column as a row vector or 2d matrix of shape nx1
Return type:ndarray
class mdt.protocols.VirtualColumn_g_spherical[source]

Bases: mdt.protocols.VirtualColumn

get_values(parent_protocol)[source]

Get the column given the information in the given protocol.

Parameters:parent_protocol (Protocol) – the protocol object to use as a basis for generating the column
Returns:the single column as a row vector or 2d matrix of shape nx1
Return type:ndarray
mdt.protocols.auto_load_protocol(directory, bvec_fname=None, bval_fname=None, bval_scale='auto', protocol_columns=None)[source]

Load a protocol from the given directory.

This function will only auto-search files in the top directory and not in the sub-directories.

This will first try to use the first .prtcl file found. If none is present, it will try to find bvec and bval files to use, and then try to find the protocol option files.

The protocol_columns argument should be a dictionary mapping protocol items to filenames. If given, we only use the items in that dictionary. If not given, we try to autodetect the protocol option files in the given directory.

The search order is (continue until matched):

  1. anything ending in .prtcl
  2. the given bvec and bval files
  3. anything containing bval or b-val
  4. anything containing bvec or b-vec
    • This will prefer a bvec file that also has 'fsl' in the name, to enable auto-use of the HCP MGH bvec directions.
  5. protocol options
    • using the given protocol_columns dict
    • matching filenames exactly to the available protocol options (e.g. finding a file named TE for the TE values)

The available protocol options are:

  • TE: the TE in seconds; either a file, a single value, or one value per bvec line
  • TR: the TR in seconds; either a file, a single value, or one value per bvec line
  • Delta: the big Delta in seconds; either a file, a single value, or one value per bvec line. Can alternatively be named big_delta.
  • delta: the small delta in seconds; either a file, a single value, or one value per bvec line
  • maxG: the maximum gradient amplitude G in T/m. Used in estimating G, Delta and delta if not given.
Parameters:
  • directory (str) – the directory to use the protocol from
  • bvec_fname (str) – if given, the filename of the bvec file (as a subpath of the given directory)
  • bval_fname (str) – if given, the filename of the bval file (as a subpath of the given directory)
  • bval_scale (double or str) – the scale by which to scale the values in the bval file. Used when loading from bvec and bval files. If 'auto' we try to guess the units/scale.
  • protocol_columns (dict) – mapping protocol items to filenames (as a subpath of the given directory) or mapping them to values (one value or one value per bvec line)
Returns:

the loaded protocol.

Return type:

Protocol

Raises:

ValueError – if not enough information could be found. (No protocol or no bvec/bval combo).
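
Example usage, a minimal sketch (the directory name and TE value are hypothetical):

from mdt.protocols import auto_load_protocol

protocol = auto_load_protocol('./subject01/', protocol_columns={'TE': 0.07})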

mdt.protocols.create_protocol(out_file=None, bvecs=None, bvals=None, **kwargs)[source]

Create and write a protocol from the given keywords.

Please note that all given columns should be in SI units.

Parameters:
  • out_file (str) – the output filename, if not given we will not write the protocol.
  • bvecs (str or ndarray) – either an [n, 3] array or a string to a bvec file
  • bvals (str or ndarray) – either an [n, 1] array or a string to a bval file. This expects a typical bval file with units in s/mm^2.
  • kwargs – other protocol columns, for example Delta=30e-3
Returns:

the created protocol

Return type:

Protocol
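
Example usage, a minimal sketch (filenames and timing values are hypothetical; timings in seconds):

from mdt.protocols import create_protocol

protocol = create_protocol(
    out_file='protocol.prtcl',   # omit to skip writing to disk
    bvecs='data.bvec',
    bvals='data.bval',           # typical bval file, in s/mm^2
    Delta=30e-3,
    delta=20e-3)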

mdt.protocols.get_g_columns(bvec_file, column_based='auto')[source]

Get the columns of a bvec file, automatically transposing if needed.

Parameters:
  • bvec_file (str) – The filename of the bvec file
  • column_based (boolean) – If true, this supposes that the bvec file (the vector file) has 3 rows (gx, gy, gz) and is space or tab separated. If false, each vector is on its own line. If 'auto' it is autodetected; this is the default.
Returns:

the loaded bvec matrix separated into ‘gx’, ‘gy’ and ‘gz’

Return type:

dict

mdt.protocols.get_sequence_timings(protocol)[source]

Return G, Delta and delta, estimate them if necessary.

If Delta and delta are available, they are used instead of estimated Delta and delta.

Parameters:protocol (Protocol) – the protocol for which we want to get the sequence timings.
Returns:the columns G, Delta and delta
Return type:dict
mdt.protocols.load_bvec_bval(bvec, bval, column_based='auto', bval_scale='auto')[source]

Load a protocol from a bvec and bval file.

This supposes that the bvec file (the vector file) has 3 rows (gx, gy, gz) and is space or tab separated, and that the bval file (the b-values) is a single line of space or tab separated b-values.

Parameters:
  • bvec (str) – The filename of the bvec file
  • bval (str) – The filename of the bval file
  • column_based (boolean) – If true, this supposes that the bvec file (the vector file) has 3 rows (gx, gy, gz) and is space or tab separated, and that the bval file (the b-values) is a single line of space or tab separated b-values. If false, each vector and b-value is on its own line. If 'auto' it is autodetected; this is the default.
  • bval_scale (float or str) – The amount by which we want to scale (multiply) the b-values. Typically bval files are in units of s/mm^2, while MDT uses s/m^2 in computations. To rescale, this function checks if the b-values are lower than 1e4 and if so multiplies them by 1e6.
Returns:

the loaded protocol

Return type:

Protocol
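
Example usage, a minimal sketch (filenames are hypothetical):

from mdt.protocols import load_bvec_bval

# with bval_scale='auto', typical s/mm^2 b-values are rescaled
# to the SI units (s/m^2) used by MDT
protocol = load_bvec_bval('data.bvec', 'data.bval')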

mdt.protocols.load_protocol(data_source)[source]

Load a protocol from the given data source.

If the given file could not be found, it tries once more after appending .prtcl to the filename.

Parameters:data_source (string, Protocol) – Either a filename, a directory path or a Protocol object to load. If a filename is given we load the protocol from file, if a directory is given we try to load a protocol from that directory. If an Protocol object is given we return it directly.
Returns:A protocol object with all the columns loaded.
Return type:Protocol
mdt.protocols.write_bvec_bval(protocol, bvec_fname, bval_fname, column_based=True, bval_scale=1)[source]

Write the given protocol to bvec and bval files.

This writes the bvector and bvalues to the given filenames.

Parameters:
  • protocol (Protocol) – The protocol to write to bvec and bval files.
  • bvec_fname (string) – The bvector filename
  • bval_fname (string) – The bval filename
  • column_based (boolean, optional, default true) – If true, the bvec file (the vector file) will have 3 rows (gx, gy, gz) and will be space or tab separated, and the bval file (the b-values) will be a single line of space or tab separated b-values.
  • bval_scale (double or str) – the amount by which we want to scale (multiply) the b-values. If 'auto', this checks if the first b-value is higher than 1e4 and if so multiplies by 1e-6, else multiplies by 1.
mdt.protocols.write_protocol(protocol, fname, columns_list=None)[source]

Write the given protocol to a file.

Parameters:
  • protocol (Protocol) – The protocol to write to file
  • fname (string) – The filename to write to
  • columns_list (tuple) – The tuple with the columns names to write (and in that order). If None, all the columns are written to file.
Returns:

the parameters that were written (and in that order)

Return type:

tuple

mdt.simulations module

mdt.simulations.add_rician_noise(signals, noise_level, seed=None)[source]

Make the given signal Rician distributed.

To calculate the noise level, divide the signal of the unweighted volumes by the desired SNR. For example, for an unweighted signal b0=1e4 and a desired SNR of 20, you need a noise level of 1e4/20 = 500.

Parameters:
  • signals – the signals to make Rician distributed
  • noise_level – the level of noise to add. The actual Rician stdev depends on the signal. See ricestat in the mathworks library. The noise level can be calculated using b0/SNR.
  • seed (int) – if given, the seed for the random number generation
Returns:

the input signals with Rician distributed noise added to every element

Return type:

ndarray
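
Example usage, a minimal sketch with synthetic signals:

import numpy as np
from mdt.simulations import add_rician_noise

signals = np.ones((1000, 64)) * 1e4                   # noise-free signals, b0 = 1e4
noisy = add_rician_noise(signals, 1e4 / 20, seed=0)   # noise level for an SNR of 20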

mdt.simulations.create_signal_estimates(model, input_data, parameters)[source]

Create the signals estimates for your estimated model parameters.

This function is typically used to obtain signal estimates from optimization results.

This function evaluates the model exactly as in model fitting and sampling. That is, it includes the gradient deviations (if set in the input data) and loads all static and fixed parameter maps.

Parameters:
  • model (str or model) – the model or the name of the model to use for estimating the signals
  • input_data (mdt.lib.input_data.MRIInputData) – the input data object, we will set this to the model
  • parameters (str or dict) – either a directory name or a dictionary containing the optimization results. Each element is assumed to be a 4d volume with the voxels we are using for the simulations.
Returns:

the 4d array with the signal estimates per voxel

Return type:

ndarray

mdt.simulations.simulate_signals(model, protocol, parameters)[source]

Estimate the signals of a given model for the given combination of protocol and parameters.

In contrast to the function create_signal_estimates(), this function does not incorporate the gradient deviations. Furthermore, this function expects a two dimensional list of parameters and this function will simply evaluate the model for each set of parameters.

Parameters:
  • model (str or model) – the model or the name of the model to use for estimating the signals
  • protocol (mdt.protocols.Protocol) – the protocol we will use for the signal simulation
  • parameters (dict or ndarray) – the parameters for which to simulate the signal. Either a matrix with one row per set of model parameters, or a dictionary mapping each parameter name to a 1d array.
Returns:

a 2d array with for every parameter combination the simulated model signal

Return type:

ndarray
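
Example usage, a minimal sketch (the parameter names are hypothetical; they must match the free parameters of the chosen model, and protocol is a previously loaded Protocol object):

import numpy as np
from mdt.simulations import simulate_signals

parameters = {
    'S0.s0': np.array([1e4, 1e4]),
    'w_stick0.w': np.array([0.4, 0.6]),
    'Stick0.theta': np.array([0.0, np.pi / 2]),
    'Stick0.phi': np.array([0.0, 0.0]),
}
signals = simulate_signals('BallStick_r1', protocol, parameters)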

mdt.utils module

class mdt.utils.AutoDict[source]

Bases: collections.defaultdict

Create an auto-vivifying dictionary (missing keys are created on access).

to_normal_dict()[source]

Convert this dictionary to a normal dict (recursive).

Returns:a normal dictionary with the items in this dictionary.
Return type:dict
class mdt.utils.InitializationData[source]

Bases: object

apply_to_model(model, input_data)[source]

Apply all information in this initialization data to the given model.

This applies the information in this init data to the given model, in place.

Parameters:
  • model – the model to apply the initializations on
  • input_data (mdt.lib.input_data.MRIInputData) – the input data used in the fit
get_fixes()[source]

Determines which parameters need to be fixed and to which values.

Returns:the fixation values, per map either a scalar or a 3d/4d volume
Return type:dict
get_inits()[source]

Get the initialization values.

Returns:the initialization values, per map either a scalar or a 3d/4d volume
Return type:dict
get_lower_bounds()[source]

Get the lower bounds to use in the model processing.

Returns:the lower bound values, per map either a scalar or a 3d/4d volume
Return type:dict
get_upper_bounds()[source]

Get the upper bounds to use in the model processing.

Returns:the upper bound values, per map either a scalar or a 3d/4d volume
Return type:dict
class mdt.utils.PathJoiner(*args, make_dirs=False)[source]

Bases: object

The path joining class.

To construct use something like:

>>> pjoin = PathJoiner(r'/my/images/dir/')

or:

>>> pjoin = PathJoiner('my', 'images', 'dir')

Then, you can call it like:

>>> pjoin()
/my/images/dir

At least, it returns the above on Linux. On Windows it will return my\images\dir. You can also call it with an additional path element that is (temporarily) appended to the path:

>>> pjoin('/brain_mask.nii.gz')
/my/images/dir/brain_mask.nii.gz

To add a path permanently to the path joiner use:

>>> pjoin.append('results')

This will extend the stored path to /my/images/dir/results/:

>>> pjoin('/brain_mask.nii.gz')
/my/images/dir/results/brain_mask.nii.gz

You can reset the path joiner to the state at object construction using:

>>> pjoin.reset()

You can also create a copy of this class with extended path elements by calling:

>>> pjoin2 = pjoin.create_extended('results')

This returns a new PathJoiner instance with as path the current path plus the items in the arguments.

>>> pjoin2('brain_mask.nii.gz')
/my/images/dir/results/brain_mask.nii.gz
Parameters:
  • *args – the initial path element(s).
  • make_dirs (boolean) – if set to True we will automatically create the directory this path points to. Similar to calling make_dirs() on the resulting object.
append(*args)[source]

Extend the stored path with the given elements

create_extended(*args, make_dirs=False, make_dirs_mode=None)[source]

Create and return a new PathJoiner instance with the path extended by the given arguments.

Parameters:
  • make_dirs (boolean) – if set to True we will automatically create the directory this path is pointing to. Similar to calling make_dirs() on the resulting object.
  • make_dirs_mode (int) – the mode for the call to make_dirs().
make_dirs(dir=None, mode=None)[source]

Create the directories if they do not exist.

This first creates the directory mentioned in the path joiner. Afterwards, it will create the additional specified directory.

This uses os.makedirs to make the directories. The given argument mode is handed to os.makedirs.

Parameters:
  • dir (str or list of str) – single additional directory to create; can be a nested directory.
  • mode (int) – the mode parameter for os.makedirs, defaults to 0o777
reset()[source]

Reset the path to the path at construction time

class mdt.utils.SimpleInitializationData(inits=None, fixes=None, lower_bounds=None, upper_bounds=None, unfix=None)[source]

Bases: mdt.utils.InitializationData

A storage class for initialization data during model fitting and sampling.

Every element is supposed to be a dictionary with parameter names as keys and, as values, a scalar or a 3d/4d volume.

Parameters:
  • inits (dict) –

    indicating the initialization values for the parameters. Example of use:

    inits = {'Stick.theta': np.pi,
             'Stick.phi': './my_init_map.nii.gz'}
    
  • fixes (dict) –

    indicating fixations of a parameter. Example of use:

    fixes = {'Ball.d': 3.0e-9}
    

    As values it accepts scalars and maps but also strings defining dependencies.

  • lower_bounds (dict) – the lower bounds per parameter
  • upper_bounds (dict) – the upper bounds per parameter
  • unfix (list or tuple) – the list of parameters to unfix
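
Combining the above, a minimal construction sketch (the parameter names are hypothetical):

import numpy as np
from mdt.utils import SimpleInitializationData

init_data = SimpleInitializationData(
    inits={'Stick.theta': np.pi},
    fixes={'Ball.d': 3.0e-9},
    upper_bounds={'Stick.phi': np.pi})
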
apply_to_model(model, input_data)[source]

Apply all information in this initialization data to the given model.

This applies the information in this init data to the given model, in place.

Parameters:
  • model – the model to apply the initializations on
  • input_data (mdt.lib.input_data.MRIInputData) – the input data used in the fit
get_fixes()[source]

Determines which parameters need to be fixed and to which values.

Returns:the fixation values, per map either a scalar or a 3d/4d volume
Return type:dict
get_inits()[source]

Get the initialization values.

Returns:the initialization values, per map either a scalar or a 3d/4d volume
Return type:dict
get_lower_bounds()[source]

Get the lower bounds to use in the model processing.

Returns:the lower bound values, per map either a scalar or a 3d/4d volume
Return type:dict
get_upper_bounds()[source]

Get the upper bounds to use in the model processing.

Returns:the upper bound values, per map either a scalar or a 3d/4d volume
Return type:dict
mdt.utils.apply_mask(volumes, mask, inplace=True)[source]

Apply a mask to the given input.

Parameters:
  • volumes (str, ndarray, list, tuple or dict) – The input file path or the image itself or a list, tuple or dict.
  • mask (str or ndarray) – The filename of the mask or the mask itself
  • inplace (boolean) – if True we apply the mask in place on the volume image. If false we do not.
Returns:

Depending on the input, either a single image of the same size as the input image, or a list, tuple or dict. This sets, for all output images, the values to zero where the mask is zero.
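
Example usage, a minimal sketch (filenames are hypothetical):

from mdt.utils import apply_mask

masked = apply_mask('dwi.nii.gz', 'brain_mask.nii.gz', inplace=False)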

mdt.utils.apply_mask_to_file(input_fname, mask, output_fname=None)[source]

Apply a mask to the given input (nifti) file.

If no output filename is given, the input file is overwritten.

Parameters:
  • input_fname (str) – The input file path
  • mask (str or ndarray) – The mask to use
  • output_fname (str) – The filename for the output file (the masked input file).
mdt.utils.calculate_point_estimate_information_criterions(log_likelihoods, k, n)[source]

Calculate various point estimate information criterions.

These are meant to be used after maximum likelihood estimation as they assume you have a point estimate of your likelihood per problem.

Parameters:
  • log_likelihoods (1d np array) – the array with the log likelihoods
  • k (int) – number of parameters
  • n (int) – the number of instances, protocol length
Returns:

a dict with the BIC, AIC and AICc, which stand for the Bayesian, Akaike and corrected Akaike Information Criterion
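
For reference, a sketch of the standard formulas for these criteria (MDT's exact implementation may differ in detail):

import numpy as np

def aic(log_likelihood, k):
    return 2 * k - 2 * log_likelihood

def bic(log_likelihood, k, n):
    return k * np.log(n) - 2 * log_likelihood

def aicc(log_likelihood, k, n):
    # AIC with a correction for small sample sizes
    return aic(log_likelihood, k) + (2 * k * (k + 1)) / (n - k - 1)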

mdt.utils.cartesian_to_spherical(vectors, ensure_right_hemisphere=True)[source]

Create spherical coordinates (theta and phi) from the given cartesian coordinates.

This expects a n-dimensional matrix with on the last axis a set of cartesian coordinates as (x, y, z). From that, this function will calculate two n-dimensional matrices for the inclinations theta and the azimuths phi.

By default the range of the output is [0, pi] for both theta and phi, meaning that the y-coordinate must be positive (such that all points are on the right hemisphere). For points with negative y-coordinate, this function will transform the coordinate to the antipodal point on the sphere and return the angles for that point. This behaviour can be disabled by setting ensure_right_hemisphere to false.

Also note that this considers the input to be unit vectors; if they are not, they are normalized first.

Parameters:vectors (ndarray) – the n-dimensional set of cartesian coordinates (last axis should have 3 items).
Returns:the matrices for theta and phi.
Return type:tuple
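
Example usage, a minimal sketch:

import numpy as np
from mdt.utils import cartesian_to_spherical

vectors = np.array([[0., 0., 1.],
                    [1., 0., 0.]])
theta, phi = cartesian_to_spherical(vectors)
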
mdt.utils.check_user_components()[source]

Check if the components in the user’s home folder are up to date with this version of MDT

Returns:True if the .mdt folder for this version exists. False otherwise.
Return type:bool
mdt.utils.combine_dict_to_array(data, param_names)[source]

Create an array out of the given data dictionary.

The final array will consist of elements of the data dictionary, concatenated on the second dimension based on the order and names of the param_names list.

This is basically the inverse of split_array_to_dict().

Parameters:
  • data (dict) – matrices, each of shape (n, 1) or (n,) which we will concatenate on the second dimension
  • param_names (List[str]) – the items we extract from the data, in that order
Returns:

the dictionary elements compressed as a 2d array of size (n, len(param_names)).

Return type:

ndarray
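
Example usage, a minimal sketch:

import numpy as np
from mdt.utils import combine_dict_to_array

data = {'theta': np.zeros((10, 1)), 'phi': np.ones((10, 1))}
array = combine_dict_to_array(data, ['theta', 'phi'])   # shape (10, 2)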

mdt.utils.compute_noddi_dti(model, input_data, results, noddi_d=1.7e-09)[source]

Compute NODDI-like statistics from Tensor/Kurtosis parameter fits.

Several authors have noted a correspondence between NODDI parameters and DTI parameters [1, 2]. This function computes the neurite density index (NDI) and NODDI's measure of neurite dispersion (ODI) using Tensor parameters.

Parameters:
  • model (str or EstimableModel) – The model we used to compute the results. Can be the name of a composite model or an implementation of a composite model. Can be any model that uses the Tensor or KurtosisTensor compartment models. We need this information because we need to determine the volumes used to compute the results.
  • input_data (mdt.lib.input_data.MRIInputData) – the input data used for computing the model results
  • results (dict) –

    the results data, should contain at least:

    • <model>.d (ndarray): the principal diffusivity
    • <model>.dperp0 (ndarray): the primary perpendicular diffusivity
    • <model>.dperp1 (ndarray): the secondary perpendicular diffusivity

    And, if present, we also use these:

    • <model>.FA (ndarray): if computed already, the Fractional Anisotropy of the given diffusivities
    • <model>.MD (ndarray): if computed already, the Mean Diffusivity of the given diffusivities
    • <model>.MK (ndarray): if computing for Kurtosis, the computed Mean Kurtosis.
      If not given, we assume unity.

    Where <model> can be either ‘Tensor’ or ‘KurtosisTensor’ or the empty string in which case we take maps without a model name prefix.

  • noddi_d (float) – the intrinsic diffusivity of the intra-neurite compartment of NODDI. This is typically assumed to be d = 1.7x10^-9 m^2/s.
Returns:

maps for the NODDI-DTI measures NDI and ODI.

Return type:

dict

References

  1. Edwards LJ, Pine KJ, Ellerbrock I, Weiskopf N, Mohammadi S. NODDI-DTI: Estimating neurite orientation and
    dispersion parameters from a diffusion tensor in healthy white matter. Front Neurosci. 2017;11(DEC):1-15. doi:10.3389/fnins.2017.00720.
  2. Lampinen B, Szczepankiewicz F, Martensson J, van Westen D, Sundgren PC, Nilsson M. Neurite density
    imaging versus imaging of microscopic anisotropy in diffusion MRI: A model comparison using spherical tensor encoding. Neuroimage. 2017;147(July 2016):517-531. doi:10.1016/j.neuroimage.2016.11.053.
mdt.utils.configure_per_model_logging(output_path, overwrite=False)[source]

Set up logging for one specific model.

Parameters:
  • output_path – the output path where the model results are stored.
  • overwrite (boolean) – if we want to overwrite or append. If overwrite is True we overwrite the file, if False we append.
mdt.utils.covariance_to_correlation(input_maps)[source]

Transform the covariance maps to correlation maps.

This function is meant to be used on standard MDT output maps. It will look for maps named Covariance_{m0}_to_{m1} and {m[0-1]}.std where m0 and m1 are two map names. It will use the std. maps of m0 and m1 to transform the covariance map into a correlation map.

Typical use case examples (both are equivalent):

covariance_to_correlation('./BallStick_r1/')
covariance_to_correlation(mdt.load_volume_maps('./BallStick_r1/'))
Parameters:input_maps (dict or str) – either a dictionary containing the input maps or a string with a folder name
Returns:the correlation maps computed from the input maps. The naming scheme is Correlation_{m0}_to_{m1}.
Return type:dict
mdt.utils.create_blank_mask(volume4d_path, output_fname=None)[source]

Create a blank mask for the given 4d volume.

Sometimes you want to use all the voxels in the given dataset, without masking any voxel. Since the optimization routines require a mask, you have to submit one. The solution is to use a blank mask, that is, a mask that masks nothing.

Parameters:
  • volume4d_path (str) – the path to the 4d volume you want to create a blank mask for
  • output_fname (str) – the path to the result mask. If not given, we will use the name of the input file and append ‘_mask’ to it.
mdt.utils.create_brain_mask(dwi_info, protocol, output_fname=None, **kwargs)[source]

Create a brain mask.

At the moment this uses the median-otsu algorithm, in future versions this might support better masking algorithms.

Parameters:
  • dwi_info (string or tuple or image) –

    the dwi info, either:

    • the filename of the input file;
    • or a tuple with as first index a ndarray with the DWI and as second index the header;
    • or only the image as an ndarray
  • protocol (string or Protocol) – The filename of the protocol file or a Protocol object
  • output_fname (string) – the filename of the output file. If None, no output is written. If dwi_info is only an image also no file is written.
  • **kwargs – the additional arguments for the function median_otsu.
Returns:

The created brain mask

Return type:

ndarray

mdt.utils.create_covariance_matrix(nmr_voxels, results, names, result_covars=None)[source]

Create the covariance matrix for the given output maps.

Parameters:
  • nmr_voxels (int) – the number of voxels in the output covariance matrix.
  • results (dict) – the results dictionary from optimization, containing the standard deviation maps as ‘<name>.std’ for each of the given names. If a map is not present we will use 0 for that variance.
  • names (List[str]) – the names of the maps to load, the order of the names is the order of the diagonal elements.
  • result_covars (dict) – dictionary of covariance terms with the names specified as ‘<name>_to_<name>’. Since the order is undefined, this tests for both <x>_to_<y> and <y>_to_<x>.
Returns:

matrix of size (n, m) for n voxels and m names.

If no covariance elements are given, we use zero for all off-diagonal terms.

Return type:

ndarray

mdt.utils.create_index_matrix(brain_mask)[source]

Get a matrix with on every 3d position the linear index number of that voxel.

This function is useful if you want to locate a voxel in the ROI given the position in the volume.

Parameters:brain_mask (str or 3d array) – the brain mask you would like to use
Returns:
a 3d volume of the same size as the given mask, where every non-zero element holds the position
of that voxel in the linear ROI list.
Return type:3d ndarray
mdt.utils.create_median_otsu_brain_mask(dwi_info, protocol, output_fname=None, **kwargs)[source]

Create a brain mask and optionally write it.

It will always return the mask. If output_fname is set it will also write the mask.

Parameters:
  • dwi_info (string or tuple or image) –

    the dwi info, either:

    • the filename of the input file;
    • or a tuple with as first index a ndarray with the DWI and as second index the header;
    • or only the image as an ndarray
  • protocol (string or Protocol) – The filename of the protocol file or a Protocol object
  • output_fname (string) – the filename of the output file. If None, no output is written. If dwi_info is only an image also no file is written.
  • **kwargs – the additional arguments for the function median_otsu.
Returns:

The created brain mask

Return type:

ndarray

mdt.utils.create_roi(data, brain_mask)[source]

Create and return masked data of the given brain volume and mask

Parameters:
  • data (string, ndarray or dict) – a brain volume with four dimensions (x, y, z, w) where w is the length of the protocol; or a list, tuple or dictionary of volumes; or a string with the filename of a dataset to use; or a directory containing the maps to load.
  • brain_mask (ndarray or str) – the mask indicating the region of interest with dimensions: (x, y, z) or the string to the brain mask to use
Returns:

If a single ndarray is given we will return the ROI for that array. If an iterable is given we will return a tuple. If a dict is given we return a dict. For each result the axes are: (voxels, protocol).

Return type:

ndarray, tuple, dict

mdt.utils.create_slice_roi(brain_mask, roi_dimension, roi_slice)[source]

Create a region of interest out of the given brain mask by taking one specific slice out of the mask.

Parameters:
  • brain_mask (ndarray) – The brain_mask used to create the new brain mask
  • roi_dimension (int) – The dimension to take a slice out of
  • roi_slice (int) – The index on the given dimension.
Returns:

A brain mask of the same dimensions as the original mask, but with only one slice activated.

mdt.utils.estimate_noise_std(input_data)[source]

Estimate the noise standard deviation.

This calculates, per voxel (in the brain mask), the std over all unweighted volumes and takes the mean of those estimates as the standard deviation of the noise.

The method is taken from Camino (http://camino.cs.ucl.ac.uk/index.php?n=Man.Estimatesnr).

Parameters:input_data (mdt.lib.input_data.SimpleMRIInputData) – the input data we can use to do the estimation
Returns:the noise std estimated from the data. This can either be a single float, or an ndarray.
Raises:NoiseStdEstimationNotPossible – if the noise could not be estimated
mdt.utils.extract_volumes(input_volume_fname, input_protocol, output_volume_fname, output_protocol, volume_indices)[source]

Extract volumes from the given volume and save them to separate files.

This will index the given input volume in the 4th dimension, as is usual in multi shell DWI files.

Parameters:
  • input_volume_fname (str) – the input volume from which to get the specific volumes
  • input_protocol (str or Protocol) – the input protocol, either a file or preloaded protocol object
  • output_volume_fname (str) – the output filename for the selected volumes
  • output_protocol (str) – the output protocol for the selected volumes
  • volume_indices (list) – the desired indices, indexing the input_volume
mdt.utils.flatten(input_it)[source]

Flatten an iterable into a new iterable.

Parameters:input_it (iterable) – the input iterable to flatten
Returns:a new iterable with a flattened version of the original iterable.
mdt.utils.get_cl_devices(indices=None, device_type=None)[source]

Get a list of all CL devices in the system.

The indices of the devices can be used in the model fitting/sampling functions for ‘cl_device_ind’.

Parameters:
  • indices (List[int] or int) – the indices of the CL devices to use. Either set this or preferred_device_type.
  • device_type (str) – the preferred device type, one of ‘CPU’, ‘GPU’ or ‘APU’. If set, we ignore the indices parameter.
Returns:

A list of CLEnvironments, one for each device in the system.
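
Example usage, a minimal sketch:

from mdt.utils import get_cl_devices

for ind, environment in enumerate(get_cl_devices()):
    print(ind, environment)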

mdt.utils.get_example_data(output_directory)[source]

Get the MDT example data that is accompanying the installation.

This will write the MDT example data (b1k_b2k and b6k datasets) to the indicated directory. You can use this data for testing MDT on your computer. These example datasets are contained within the MDT package and as such are available on every machine with MDT installed.

Parameters:output_directory (str) – the directory to write the files to
mdt.utils.get_intermediate_results_path(output_dir, tmp_dir)[source]

Get a temporary results path for processing.

Parameters:
  • output_dir (str) – the output directory of the results
  • tmp_dir (str) – a preferred tmp dir. If not given we create a temporary directory in the output_dir.
Returns:

a path for saving intermediate computation results

Return type:

str

mdt.utils.get_slice_in_dimension(volume, dimension, index)[source]

From the given volume get a slice on the given dimension (x, y, z, …) and then on the given index.

Parameters:
  • volume (ndarray) – the volume, 3d, 4d or more
  • dimension (int) – the dimension on which we want a slice
  • index (int) – the index of the slice
Returns:

A slice (plane) or hyperplane of the given volume

Return type:

ndarray

mdt.utils.get_temporary_results_dir(user_value)[source]

Get the temporary results dir from the user value and from the config.

Parameters:user_value (string, boolean or None) – if a string is given we will use that directly. If a boolean equal to True is given we will use the configuration defined value. If None/False is given we will not use a specific temporary results dir.
Returns:either the temporary results dir or None
Return type:str or None
mdt.utils.init_user_settings(pass_if_exists=True)[source]

Initializes the user settings folder using a skeleton.

This will create all the necessary directories for adding components to MDT. It will also create a basic configuration file for setting global MDT options. Also, it will copy the user components from the previous version to this version.

Each MDT version will have its own sub-directory in the config directory.

Parameters:pass_if_exists (boolean) – if the folder for this version already exists, we might do nothing (if True)
Returns:the path the user settings skeleton was written to
Return type:str
mdt.utils.is_scalar(value)[source]

Test if the given value is a scalar.

This function also works with memmapped array values, in contrast to the numpy isscalar method.

Parameters:value – the value to test for being a scalar value
Returns:true if the value is a scalar, false otherwise.
Return type:boolean
mdt.utils.load_brain_mask(data_source)[source]

Load a brain mask from the given data.

Parameters:data_source (string, ndarray, tuple, nifti) – Either a filename, a ndarray, a tuple as (ndarray, nifti header) or finally a nifti object having the method ‘get_data()’.
Returns:boolean array with every voxel with a value higher than 0 set to 1 and all other values set to 0.
Return type:ndarray
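
Example (a minimal sketch; 'mask.nii.gz' is a hypothetical filename):

from mdt.utils import load_brain_mask

mask = load_brain_mask('mask.nii.gz')
print(mask.sum())  # the number of voxels inside the mask
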
mdt.utils.load_sample(fname, mode='r')[source]

Load a matrix of samples from a .samples.npy file.

This will open the samples as a numpy memory mapped array.

Parameters:
  • fname (str) – the name of the file to load, suffix of .samples.npy is not required.
  • mode (str) – the mode in which to open the memory mapped sample files (see numpy mode parameter)
Returns:

a memory mapped array with the results

Return type:

ndarray

mdt.utils.load_samples(data_folder, mode='r')[source]

Load sampled results as a dictionary of numpy memmaps.

Parameters:
  • data_folder (str) – the folder from which to use the samples
  • mode (str) – the mode in which to open the memory mapped sample files (see numpy mode parameter)
Returns:

the memory-mapped samples per sampled parameter.

Return type:

dict
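
Example (a sketch, assuming a hypothetical samples folder written by sample_model()):

from mdt.utils import load_samples

samples = load_samples('/my/output/BallStick_r1/samples')
for param_name, param_samples in samples.items():
    print(param_name, param_samples.shape)  # one memory mapped array per parameter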

mdt.utils.load_volume_maps(directory, map_names=None, deferred=True)[source]

Read a number of Nifti volume maps from a directory.

Parameters:
  • directory (str) – the directory from which we want to read a number of maps
  • map_names (list or tuple) – the names of the maps we want to use. If given we only use and return these maps.
  • deferred (boolean) – if True we return a deferred-loading dictionary instead of a dictionary with the values loaded as arrays.
Returns:

A dictionary with the volumes. The keys of the dictionary are the filenames (without the extension) of the files in the given directory.

Return type:

dict

mdt.utils.model_output_exists(model, output_folder, append_model_name_to_path=True)[source]

A rudimentary check if the output for the given model exists.

This checks if the output folder exists and contains at least the result file for each of the free parameters of the model.

When using this to skip subjects during batch fitting, note that it may return False if one of the models cannot be calculated for a given subject. For example, NODDI requires two shells; if these are not present we cannot calculate the model, no maps are generated, and this check returns False.

Parameters:
  • model (AbstractModel, or str) – the model to check for existence. If a string is given, we try to load the model from the components loader.
  • output_folder (str) – the folder where the output folder of the results should reside in
  • append_model_name_to_path (boolean) – by default we will append the name of the model to the output folder. This is consistent with the way the model fitting routine places the results in the <output folder>/<model_name> directories. Sometimes, however, you might want to skip this appending.
Returns:

true if the output folder exists and contains files for all the parameters of the model.

Return type:

boolean

mdt.utils.natural_key_sort_cb(_str)[source]

Sorting transformation to use when you want to sort a list using natural key sorting.

Parameters:_str (str) – the string to sort
Returns:the key to use for sorting the current element.
Return type:list
mdt.utils.per_model_logging_context(output_path, overwrite=False)[source]

A logging context wrapper for the function configure_per_model_logging.

Parameters:
  • output_path – the output path where the model results are stored.
  • overwrite (boolean) – if we want to overwrite an existing file (if True), or append to it (if False)
mdt.utils.protocol_merge(protocol_paths, output_fname, sort=False)[source]

Merge a list of protocols files. Writes the result as a file.

You can enable sorting the list of protocol names based on a natural key sort. This is the most convenient option in the case of globbing files. By default this behaviour is disabled.

Example usage with globbing:

mdt.protocol_merge(glob.glob('*.prtcl'), 'merged.prtcl', True)
Parameters:
  • protocol_paths (list of str) – the list with the input protocol filenames
  • output_fname (str) – the output filename
  • sort (boolean) – if true we natural sort the list of protocol files before we merge them. If false we don’t. The default is False.
Returns:

the list with the filenames in the order of concatenation.

Return type:

list of str

mdt.utils.restore_volumes(data, brain_mask, with_volume_dim=True)[source]

Restore the given data to a whole brain volume.

The data can be a list, tuple or dictionary with two dimensional arrays, or a 2d array itself.

Parameters:
  • data (ndarray) – the data as an x-dimensional list of voxels, or a list, tuple, or dict of those voxel lists
  • brain_mask (ndarray) – the brain_mask which was used to generate the data list
  • with_volume_dim (boolean) – If true we always return values with at least 4 dimensions. The extra dimension is for the volume index. If false we return at least 3 dimensions.
Returns:

Either a single whole volume, or a list, tuple or dict of whole volumes, depending on the given data. If with_volume_dim is set we return values with 4 dimensions (x, y, z, 1). If not set we return only three dimensions.
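
Example (a minimal sketch; 'mask.nii.gz' is a hypothetical mask file):

import numpy as np
from mdt.utils import load_brain_mask, restore_volumes

mask = load_brain_mask('mask.nii.gz')
roi_values = np.ones((np.count_nonzero(mask), 1))  # one value per voxel in the mask
volume = restore_volumes(roi_values, mask)  # restored to a whole (x, y, z, 1) volume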

mdt.utils.roi_index_to_volume_index(roi_indices, brain_mask)[source]

Get the 3d index of a voxel given the linear index in a ROI created with the given brain mask.

This is the inverse function of volume_index_to_roi_index().

This function is useful if you, for example, have sample results of a specific voxel and you want to locate that voxel in the brain maps.

Please note that this function can be memory intensive for a large list of roi_indices.

Parameters:
  • roi_indices (int or ndarray) – the index in the ROI created by that brain mask
  • brain_mask (str or 3d array) – the brain mask you would like to use
Returns:

the 3d voxel location(s) of the indicated voxel(s)

Return type:

ndarray
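
Example (a sketch of a round trip with the inverse function; 'mask.nii.gz' is a hypothetical mask file and the exact return shapes may differ):

from mdt.utils import roi_index_to_volume_index, volume_index_to_roi_index

volume_index = roi_index_to_volume_index(100, 'mask.nii.gz')
roi_index = volume_index_to_roi_index(volume_index, 'mask.nii.gz')
# roi_index should equal 100 again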

mdt.utils.rotate_orthogonal_vector(basis, to_rotate, psi)[source]

Uses Rodrigues’ rotation formula to rotate the given vector v by psi around k.

If a matrix is given, the operation will be applied on the last dimension.

This function assumes that the given two vectors (or matrix of vectors) are orthogonal for every voxel. This assumption allows for some speedup in the rotation calculation.

Parameters:
  • basis – the unit vector defining the rotation axis (k)
  • to_rotate – the vector to rotate by the angle psi (v)
  • psi – the rotation angle (psi)
Returns:

the rotated vector

Return type:

vector

mdt.utils.rotate_vector(basis, to_rotate, psi)[source]

Uses Rodrigues’ rotation formula to rotate the given vector v by psi around k.

If a matrix is given, the operation will be applied on the last dimension.

Parameters:
  • basis – the unit vector defining the rotation axis (k)
  • to_rotate – the vector to rotate by the angle psi (v)
  • psi – the rotation angle (psi)
Returns:

the rotated vector

Return type:

vector

mdt.utils.setup_logging(disable_existing_loggers=None)[source]

Setup global logging.

This uses the loaded config settings to set up the logging.

Parameters:disable_existing_loggers (boolean) – If we would like to disable the existing loggers when creating this one. None means use the default from the config, True and False overwrite the config.
mdt.utils.spherical_to_cartesian(theta, phi)[source]

Convert polar coordinates in 3d space to cartesian unit coordinates.

This might return points lying on the entire sphere. End-users will have to manually ensure that the points lie on the right hemisphere with a positive y-axis (multiply the vector by -1 if y < 0).

x = sin(theta) * cos(phi)
y = sin(theta) * sin(phi)
z = cos(theta)
Parameters:
  • theta (ndarray) – The matrix with the inclinations
  • phi (ndarray) – The matrix with the azimuths
Returns:

A matrix with the same shape as the input (at least two dimensions), with the [x, y, z] coordinates of each vector on the last axis.

Return type:

ndarray
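
Example (a quick numeric check of the formulas above):

import numpy as np
from mdt.utils import spherical_to_cartesian

theta = np.array([0, np.pi / 2])
phi = np.array([0, np.pi / 2])
print(spherical_to_cartesian(theta, phi))
# approximately [[0, 0, 1], [0, 1, 0]]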

mdt.utils.split_array_to_dict(data, param_names)[source]

Create a dictionary out of an array.

This basically splits the given nd-matrix into sub matrices based on the second dimension. The length of the parameter names should match the length of the second dimension. If a two dimensional matrix of shape (d, p) is given we return p matrices of shape (d,). If a matrix of shape (d, p, s_1, s_2, …, s_n) is given, we return p matrices of shape (d, s_1, s_2, …, s_n).

This is basically the inverse of combine_dict_to_array().

Parameters:
  • data (ndarray) – a multidimensional matrix we index based on the second dimension.
  • param_names (list of str) – the names of the parameters, one per column
Returns:

the results packed in a dictionary

Return type:

dict
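
Example (a small sketch of the splitting behaviour):

import numpy as np
from mdt.utils import split_array_to_dict

data = np.arange(6).reshape(3, 2)  # shape (d, p): 3 voxels, 2 parameters
result = split_array_to_dict(data, ['theta', 'phi'])
# result['theta'] holds data[:, 0], result['phi'] holds data[:, 1]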

mdt.utils.split_dataset(dataset, split_dimension, split_index)[source]

Split the given dataset along the given dimension on the given index.

Parameters:
  • dataset (ndarray, list, tuple, dict, string) – The single volume or list of volumes to split in two
  • split_dimension (int) – The dimension along which to split the dataset
  • split_index (int) – The index on the given dimension to split the volume(s)
Returns:

If the dataset is a single volume, we return a tuple of two volumes which, when concatenated, give the original volume back. If it is a list, tuple or dict, we return a tuple containing two lists, tuples or dicts, with the same indices and each holding half of the split data.

Return type:

ndarray, list, tuple, dict

mdt.utils.split_image_path(image_path)[source]

Split the path to an image into three parts, the directory, the basename and the extension.

Parameters:image_path (str) – the path to an image
Returns:the path, the basename and the extension (extension includes the dot)
Return type:list of str
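
Example (a sketch, assuming the usual double nifti extension is recognized as one extension):

from mdt.utils import split_image_path

directory, basename, extension = split_image_path('/data/subject01/dwi.nii.gz')
# directory: '/data/subject01/', basename: 'dwi', extension: '.nii.gz'
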
mdt.utils.tensor_cartesian_to_spherical(first_eigen_vector, second_eigen_vector)[source]

Compute the spherical coordinates theta, phi and psi to match the given eigen vectors.

Only the first two eigen vectors are needed to calculate the correct angles; the last eigen vector follows automatically from the cross product of the first two eigen vectors.

Since the Tensor model in MDT uses theta, phi and psi in the range [0, pi], this function may reflect the given eigenvectors to comply with those ranges. In particular, two transformations are possible. The first applies if the first eigen vector lies in the left hemisphere (negative y-value); if so, it is reflected to its antipodal point on the right hemisphere. The second applies if the second eigen vector does not lie in the semicircle described by psi in [0, pi]; if not, the second eigen vector is reflected to its antipodal point within the range of psi in [0, pi].

Parameters:
  • first_eigen_vector (ndarray) – the first eigen vectors, with on the last dimension 3 items for [x, y, z]
  • second_eigen_vector (ndarray) – the second eigen vectors, with on the last dimension 3 items for [x, y, z]
Returns:

theta, phi, psi for every voxel given.

Return type:

tuple

mdt.utils.tensor_spherical_to_cartesian(theta, phi, psi)[source]

Calculate the eigenvectors for a Tensor given the three angles.

This will return the eigenvectors unsorted, since this function knows nothing about the eigenvalues. The caller of this function will have to sort them by eigenvalue if necessary.

Parameters:
  • theta (ndarray) – matrix of list of theta’s
  • phi (ndarray) – matrix of list of phi’s
  • psi (ndarray) – matrix of list of psi’s
Returns:

The three eigenvectors for every voxel given. The returned matrix for every eigenvector is of the given shape + [3].

Return type:

tuple

mdt.utils.unzip_nifti(in_file, out_file=None, remove_old=False)[source]

Unzip a gzipped nifti file.

Parameters:
  • in_file (str) – the nifti file to unzip
  • out_file (str) – if given, the name of the output file. If not given, we will use the input filename without the .gz.
  • remove_old (boolean) – if we want to remove the old (zipped) file or not
mdt.utils.volume_index_to_roi_index(volume_index, brain_mask)[source]

Get the ROI index given the volume index (in 3d).

This is the inverse function of roi_index_to_volume_index().

This function is useful if you want to locate a voxel in the ROI given the position in the volume.

Parameters:
  • volume_index (tuple) – the volume index, a tuple or list of length 3
  • brain_mask (str or 3d array) – the brain mask you would like to use
Returns:

the index of the given voxel in the ROI created by the given mask

Return type:

int

mdt.utils.volume_merge(volume_paths, output_fname, sort=False)[source]

Merge a list of volumes on the 4th dimension. Writes the result as a file.

You can enable sorting the list of volume names based on a natural key sort. This is the most convenient option in the case of globbing files. By default this behaviour is disabled.

Example usage with globbing:

mdt.volume_merge(glob.glob('*.nii'), 'merged.nii.gz', True)
Parameters:
  • volume_paths (list of str) – the list with the input filenames
  • output_fname (str) – the output filename
  • sort (boolean) – if true we natural sort the list of DWI images before we merge them. If false we don’t. The default is False.
Returns:

the list with the filenames in the order of concatenation.

Return type:

list of str

mdt.utils.voxelwise_vector_matrix_vector_product(a, B, c)[source]

Compute the dot product a*B*c, assuming the first axes are voxel-wise dimensions.

This function can be used in error propagation, where you multiply the gradient (assuming a univariate function) with the covariance matrix and with the transposed gradient.

Parameters:
  • a (ndarray) – of size (n, m) or (x, y, z, m), vector elements per voxel
  • B (ndarray) – of size (n, m, m) or (x, y, z, m, m), matrix elements per voxel
  • c (ndarray) – of size (n, m) or (x, y, z, m), vector elements per voxel
Returns:

either of size (n, 1) or of size (x, y, z, 1), the voxelwise matrix multiplication of aBc.

Return type:

ndarray
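
Example (a sketch of the error propagation use case with stand-in arrays):

import numpy as np
from mdt.utils import voxelwise_vector_matrix_vector_product

nmr_voxels, m = 1000, 3  # stand-in sizes
gradient = np.random.rand(nmr_voxels, m)
covariance = np.random.rand(nmr_voxels, m, m)

variances = voxelwise_vector_matrix_vector_product(gradient, covariance, gradient)
# per voxel this computes the scalar a*B*c, comparable to
# np.einsum('...i,...ij,...j', gradient, covariance, gradient)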

mdt.utils.write_slice_roi(brain_mask_fname, roi_dimension, roi_slice, output_fname, overwrite_if_exists=False)[source]

Create a region of interest out of the given brain mask by taking one specific slice out of the mask.

This will both write and return the created slice ROI.

We need a filename as input brain mask since we need the header of the file to be able to write the output file with the same header.

Parameters:
  • brain_mask_fname (string) – The filename of the brain_mask used to create the new brain mask
  • roi_dimension (int) – The dimension to take a slice out of
  • roi_slice (int) – The index on the given dimension.
  • output_fname (string) – The output filename
  • overwrite_if_exists (boolean, optional, default false) – If we want to overwrite the file if it already exists
Returns:

A brain mask of the same dimensions as the original mask, but with only one slice set to one.

mdt.utils.zip_nifti(in_file, out_file=None, remove_old=False)[source]

Zip a nifti file.

Parameters:
  • in_file (str) – the nifti file to zip
  • out_file (str) – if given, the name of the output file. If not given, we will use the input filename with .gz appended at the end.
  • remove_old (boolean) – if we want to remove the old (non-zipped) file or not

Module contents

mdt.batch_fit(data_folder, models_to_fit, output_folder=None, batch_profile=None, subjects_selection=None, recalculate=False, cl_device_ind=None, dry_run=False, double_precision=False, tmp_results_dir=True, use_gradient_deviations=False)[source]

Run all the available and applicable models on the data in the given folder.

The idea is that a single folder is enough to start the model fitting computations. One can optionally give the batch_profile to use for the fitting. If not given, this function will attempt to use the batch_profile that fits the data folder best.

Parameters:
  • data_folder (str) – The data folder to process
  • models_to_fit (list of str) – A list of models to fit to the data.
  • output_folder (str) – the folder in which to place the output; if not given, we place an output folder next to the data_folder by default.
  • batch_profile (BatchProfile or str) – the batch profile to use, or the name of a batch profile to use. If not given it is auto detected.
  • subjects_selection (BatchSubjectSelection or iterable) – the subjects to use for processing. If None, all subjects are processed. If a list is given instead of a BatchSubjectSelection instance, we apply the following: if the elements of the list are strings we use them as subject ids, if they are integers we use them as subject indices.
  • recalculate (boolean) – If we want to recalculate the results if they are already present.
  • cl_device_ind (int or list of int) – the index of the CL device to use. The index refers to the list returned by the function get_cl_devices().
  • dry_run (boolean) – a dry run will do no computations, but will list all the subjects found in the given directory.
  • double_precision (boolean) – if we would like to do the calculations in double precision
  • tmp_results_dir (str, True or None) – The temporary dir for the calculations. Set to a string to use that path directly, set to True to use the config value, set to None to disable.
  • use_gradient_deviations (boolean) – if you want to use the gradient deviations if present
Returns:

The list of subjects we will calculate / have calculated.
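
Example (a minimal sketch; '/data/study' is a hypothetical folder matching a batch profile):

import mdt

# a dry run does no computations, it only lists the subjects found
subjects = mdt.batch_fit('/data/study', ['BallStick_r1'], dry_run=True)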

mdt.block_plots(use_qt=True)[source]

A small function to block matplotlib plots and Qt GUI instances.

This basically calls either plt.show() or QtApplication.exec_(), depending on use_qt.

Parameters:use_qt (boolean) – if True we block Qt windows, if False we block matplotlib windows
mdt.bootstrap_model(model, input_data, optimization_results, output_folder, bootstrap_method=None, bootstrap_options=None, nmr_samples=None, optimization_method=None, optimizer_options=None, recalculate=False, cl_device_ind=None, double_precision=False, keep_samples=True, tmp_results_dir=True, initialization_data=None)[source]

Resample the model using residual bootstrapping.

This is typically used to construct confidence intervals on the optimized parameters.

Parameters:
  • model (str or EstimableModel) – the model to sample
  • input_data (MRIInputData) – the input data object containing all the info needed for the model fitting.
  • optimization_results (dict or str) – the optimization results, either a dictionary with results or the path to a folder.
  • output_folder (string) – The path to the folder where to place the output, we will make a subdir with the model name in it (for the optimization results) and then a subdir with the samples output.
  • bootstrap_method (str) – the bootstrap method we want to use, ‘residual’, or ‘wild’. Defaults to ‘wild’.
  • bootstrap_options (dict) – bootstrapping options specific for the bootstrap method in use
  • nmr_samples (int) – the number of samples we would like to compute. Defaults to 1000.
  • optimization_method (str) –

    The optimization method to use, one of:

    • ‘Levenberg-Marquardt’
    • ‘Nelder-Mead’
    • ‘Powell’
    • ‘Subplex’

    If not given, defaults to ‘Powell’.

  • optimizer_options (dict) – extra options passed to the optimization routines.
  • recalculate (boolean) – If we want to recalculate the results if they are already present.
  • cl_device_ind (int) – the index of the CL device to use. The index refers to the list returned by the function utils.get_cl_devices().
  • double_precision (boolean) – if we would like to do the calculations in double precision
  • keep_samples (boolean) – determines if we keep any of the chains. If set to False, the chains will be discarded after generating the mean and standard deviations.
  • tmp_results_dir (str, True or None) – The temporary dir for the calculations. Set to a string to use that path directly, set to True to use the config value, set to None to disable.
  • initialization_data (dict) –

    provides (extra) initialization data to use during model fitting. This dictionary can contain the following elements:

    • inits: dictionary with per parameter an initialization point
    • fixes: dictionary with per parameter a fixed point, this will remove that parameter from the fitting
    • lower_bounds: dictionary with per parameter a lower bound
    • upper_bounds: dictionary with per parameter a upper bound
    • unfix: a list of parameters to unfix

    For example:

    initialization_data = {
        'fixes': {'Stick0.theta': np.array(...), ...},
        'inits': {...}
    }
    
Returns:

if keep_samples is True we return the samples per parameter as a numpy memmap. If keep_samples is False we return None.

Return type:

dict
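
Example (a sketch of the typical workflow; the input filenames are hypothetical):

import mdt

input_data = mdt.load_input_data('dwi.nii.gz', 'dwi.prtcl', 'mask.nii.gz')
fit_results = mdt.fit_model('BallStick_r1', input_data, '/my/output')
mdt.bootstrap_model('BallStick_r1', input_data, fit_results, '/my/output',
                    bootstrap_method='wild', nmr_samples=100)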

mdt.compute_fim(model, input_data, optimization_results, output_folder=None, cl_device_ind=None, cl_load_balancer=None, initialization_data=None)[source]

Compute the Fisher Information Matrix (FIM).

This is typically done as post-processing step during the model fitting process, but can also be performed separately after optimization.

Since the FIM depends on which parameters were optimized, results will change if different parameters are fixed. That is, this function will compute the FIM for every estimable parameter (free-non-fixed parameters). If you want to have the exact same FIM results as when you computed the FIM as optimization post-processing it is important to have exactly the same maps fixed.

Contrary to the post-processing of the optimization maps, all FIM results are written to a single sub-folder in the provided output folder.

Parameters:
  • model (str or EstimableModel) – The name of a composite model or an implementation of a composite model.
  • input_data (MRIInputData) – the input data object containing all the info needed for the model fitting.
  • optimization_results (dict or str) – the optimization results, either a dictionary with results or the path to a folder.
  • output_folder (string) – Optionally, the path to the folder where to place the output
  • cl_device_ind (List[Union[mot.lib.cl_environments.CLEnvironment, int]] or mot.lib.cl_environments.CLEnvironment or int) – the CL devices to use. Either provide MOT CLEnvironments or indices into the list returned by the function mdt.get_cl_devices().
  • cl_load_balancer (mot.lib.load_balancers.LoadBalancer or Tuple[float]) – the load balancer to use. Can also be an array of fractions (summing to 1) with one fraction per device. For example, for two devices one can specify cl_load_balancer = [0.3, 0.7] to let one device do more work than another.
  • initialization_data (dict) –

    provides (extra) initialization data to use during model fitting. This dictionary can contain the following elements:

    • inits: dictionary with per parameter an initialization point
    • fixes: dictionary with per parameter a fixed point, this will remove that parameter from the fitting
    • lower_bounds: dictionary with per parameter a lower bound
    • upper_bounds: dictionary with per parameter a upper bound
    • unfix: a list of parameters to unfix

    For example:

    initialization_data = {
        'fixes': {'Stick0.theta': np.array(...), ...},
        'inits': {...}
    }
    
Returns:

all the computed FIM maps in a flattened dictionary.

Return type:

dict

mdt.fit_model(model, input_data, output_folder, method=None, optimizer_options=None, recalculate=False, cl_device_ind=None, cl_load_balancer=None, double_precision=False, tmp_results_dir=True, initialization_data=None, use_cascaded_inits=True, post_processing=None)[source]

Run the optimizer on the given model.

Parameters:
  • model (str or EstimableModel) – The name of a composite model or an implementation of a composite model.
  • input_data (MRIInputData) – the input data object containing all the info needed for the model fitting.
  • output_folder (string) – The path to the folder where to place the output, we will make a subdir with the model name in it.
  • method (str) –

    The optimization method to use, one of:

    • ‘Levenberg-Marquardt’
    • ‘Nelder-Mead’
    • ‘Powell’
    • ‘Subplex’

    If not given, defaults to ‘Powell’.

  • optimizer_options (dict) – extra options passed to the optimization routines.
  • recalculate (boolean) – If we want to recalculate the results if they are already present.
  • cl_device_ind (List[Union[mot.lib.cl_environments.CLEnvironment, int]] or mot.lib.cl_environments.CLEnvironment or int) – the CL devices to use. Either provide MOT CLEnvironments or indices into the list returned by the function mdt.get_cl_devices().
  • cl_load_balancer (mot.lib.load_balancers.LoadBalancer or Tuple[float]) – the load balancer to use. Can also be an array of fractions (summing to 1) with one fraction per device. For example, for two devices one can specify cl_load_balancer = [0.3, 0.7] to let one device do more work than another.
  • double_precision (boolean) – if we would like to do the calculations in double precision
  • tmp_results_dir (str, True or None) – The temporary dir for the calculations. Set to a string to use that path directly, set to True to use the config value, set to None to disable.
  • initialization_data (dict) –

    provides (extra) initialization data to use during model fitting. This dictionary can contain the following elements:

    • inits: dictionary with per parameter an initialization point
    • fixes: dictionary with per parameter a fixed point, this will remove that parameter from the fitting
    • lower_bounds: dictionary with per parameter a lower bound
    • upper_bounds: dictionary with per parameter a upper bound
    • unfix: a list of parameters to unfix

    For example:

    initialization_data = {
        'fixes': {'Stick0.theta': np.array(...), ...},
        'inits': {...}
    }
    
  • use_cascaded_inits (boolean) – if set, we initialize the model parameters using get_optimization_inits(). You can also overrule the default initializations using the initialization_data attribute.
  • post_processing (dict) – a dictionary with flags for post-processing options to enable or disable. For valid elements, please see the configuration file settings for optimization under post_processing. Valid input for this parameter is for example: {‘covariance’: False} to disable automatic calculation of the covariance from the Hessian.
Returns:

The result maps for the given composite model or the last model in the cascade.

This returns the results as 3d/4d volumes for every output map.

Return type:

dict
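
Example (a minimal sketch; the input filenames are hypothetical):

import mdt

input_data = mdt.load_input_data('dwi.nii.gz', 'dwi.prtcl', 'mask.nii.gz')
result_maps = mdt.fit_model('BallStick_r1', input_data, '/my/output', method='Powell')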

mdt.get_models_list()[source]

Get a list of all available composite models

Returns:A list of available model names.
Return type:list of str
mdt.get_models_meta_info()[source]

Get the meta information tags for all the models returned by get_models_list()

Returns:
The first dictionary indexes the model names to the meta tags, the second holds the meta information.
Return type:dict of dict
mdt.get_optimization_inits(model_name, input_data, output_folder, cl_device_ind=None, method=None, optimizer_options=None, double_precision=False)[source]

Get better optimization starting points for the given model.

Since initialization can make quite a difference in optimization results, this function can generate a good initialization starting point for the given model. The idea is that before you call the fit_model() function, you call this function to get a better starting point. A usage example would be:

input_data = mdt.load_input_data(..)

init_data = get_optimization_inits('BallStick_r1', input_data, '/my/folder')

fit_model('BallStick_r1', input_data, '/my/folder',
          initialization_data={'inits': init_data})

Where the init data returned by this function can directly be used as input to the initialization_data argument of the fit_model() function.

Please note that this function only supports models shipped by default with MDT.

Parameters:
  • model_name (str) – The name of a model for which we want the optimization starting points.
  • input_data (MRIInputData) – the input data object containing all the info needed for model fitting of intermediate models.
  • output_folder (string) – The path to the folder where to place the output, we will make a subdir with the model name in it.
  • cl_device_ind (int or list) – the index of the CL device to use. The index refers to the list returned by the function utils.get_cl_devices(). This can also be a list of device indices.
  • method (str) –

    The optimization method to use, one of:

    • ‘Levenberg-Marquardt’
    • ‘Nelder-Mead’
    • ‘Powell’
    • ‘Subplex’

    If not given, defaults to ‘Powell’.

  • optimizer_options (dict) – extra options passed to the optimization routines.
  • double_precision (boolean) – if we would like to do the calculations in double precision
Returns:

a dictionary with initialization points for the selected model

Return type:

dict

mdt.get_volume_names(directory)[source]

Get the names of the Nifti volume maps in the given directory.

Parameters:directory – the directory to get the names of the available maps from.
Returns:A list with the names of the volumes.
Return type:list
mdt.make_path_joiner(*args, make_dirs=False)[source]

Generates and returns an instance of utils.PathJoiner to quickly join path names.

Parameters:
  • *args – the initial directory or list of directories to concatenate
  • make_dirs (boolean) – if we should make the referenced directory if it does not yet exist
Returns:

easy path manipulation path joiner

Return type:

mdt.utils.PathJoiner
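
Example (a sketch, assuming the returned PathJoiner can be called with further path parts):

import mdt

pjoin = mdt.make_path_joiner('/my/output', 'BallStick_r1')
fname = pjoin('FS.nii.gz')  # '/my/output/BallStick_r1/FS.nii.gz'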

mdt.reload_components()[source]

Reload all the dynamic components.

This can be useful after changing some of the dynamically loadable modules. This function will remove all cached components and reload the directories.

mdt.reset_logging()[source]

Reset the logging to reflect the current configuration.

This is commonly called after updating the logging configuration to let the changes take effect.

mdt.sample_model(model, input_data, output_folder, nmr_samples=None, burnin=None, thinning=None, method=None, recalculate=False, cl_device_ind=None, cl_load_balancer=None, double_precision=False, store_samples=True, sample_items_to_save=None, tmp_results_dir=True, initialization_data=None, post_processing=None, post_sampling_cb=None, sampler_options=None)[source]

Sample a composite model using Markov Chain Monte Carlo sampling.

Parameters:
  • model (str or EstimableModel) – the model to sample
  • input_data (MRIInputData) – the input data object containing all the info needed for the model fitting.
  • output_folder (string) – The path to the folder where to place the output, we will make a subdir with the model name in it (for the optimization results) and then a subdir with the samples output.
  • nmr_samples (int) – the number of samples we would like to return.
  • burnin (int) – the number of samples to burn-in, that is, to discard before returning the desired number of samples
  • thinning (int) – how many samples we wait before storing a new one. This will draw extra samples such that the total number of samples generated is nmr_samples * thinning, while the number of samples stored is nmr_samples. If set to one or lower we store every sample after the burn-in.
  • method (str) –

    The sampling method to use, one of:

    • ‘AMWG’, for the Adaptive Metropolis-Within-Gibbs
    • ‘SCAM’, for the Single Component Adaptive Metropolis
    • ‘FSL’, for the sampling method used in the FSL toolbox
    • ‘MWG’, for the Metropolis-Within-Gibbs (simple random walk Metropolis without updates)

    If not given, defaults to ‘AMWG’.

  • recalculate (boolean) – If we want to recalculate the results if they are already present.
  • cl_device_ind (List[Union[mot.lib.cl_environments.CLEnvironment, int]] or mot.lib.cl_environments.CLEnvironment or int) – the CL devices to use. Either provide MOT CLEnvironments or indices into the list returned by the function mdt.get_cl_devices().
  • cl_load_balancer (mot.lib.load_balancers.LoadBalancer or Tuple[float]) – the load balancer to use. Can also be an array of fractions (summing to 1) with one fraction per device. For example, for two devices one can specify cl_load_balancer = [0.3, 0.7] to let one device do more work than another.
  • double_precision (boolean) – if we would like to do the calculations in double precision
  • store_samples (boolean) – determines if we store any of the samples. If set to False we will store none of the samples.
  • sample_items_to_save (list) – list of output names we want to store the samples of. If given, we only store the items specified in this list. Valid items are the free parameter names of the model and the items ‘LogLikelihood’ and ‘LogPrior’.
  • tmp_results_dir (str, True or None) – The temporary dir for the calculations. Set to a string to use that path directly, set to True to use the config value, set to None to disable.
  • initialization_data (dict) –

    provides (extra) initialization data to use during model fitting. This dictionary can contain the following elements:

    • inits: dictionary with per parameter an initialization point
    • fixes: dictionary with per parameter a fixed point, this will remove that parameter from the fitting
    • lower_bounds: dictionary with per parameter a lower bound
    • upper_bounds: dictionary with per parameter a upper bound
    • unfix: a list of parameters to unfix

    For example:

    initialization_data = {
        'fixes': {'Stick0.theta': np.array(...), ...},
        'inits': {...}
    }
    
  • post_processing (dict) – a dictionary with flags for post-processing options to enable or disable. For valid elements, please see the configuration file settings for sample under post_processing. Valid input for this parameter is for example: {‘univariate_normal’: True} to enable automatic calculation of the univariate normal distribution for the model parameters.
  • post_sampling_cb (Callable[[mot.sample.base.SamplingOutput, mdt.models.composite.DMRICompositeModel], Optional[Dict]]) – additional post-processing called after sampling. This function can optionally return a (nested) dictionary with as keys dir-/file-names and as values maps to be stored in the results directory.
  • sampler_options (dict) – specific options for the MCMC routine. These will be provided to the sampling routine as additional keyword arguments to the constructor.
Returns:

if store_samples is True then we return the samples per parameter as a numpy memmap. If store_samples is False we return None.

Return type:

dict
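
Example (a minimal sketch; the input filenames are hypothetical):

import mdt

input_data = mdt.load_input_data('dwi.nii.gz', 'dwi.prtcl', 'mask.nii.gz')
samples = mdt.sample_model('BallStick_r1', input_data, '/my/output',
                           nmr_samples=1000, burnin=500, thinning=2)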

mdt.sort_maps(input_maps, reversed_sort=False, sort_index_matrix=None)[source]

Sort the values of the given maps voxel by voxel.

This first creates a sort matrix to index the maps in sorted order per voxel. Next, it creates the output maps for the maps we sort on.

Parameters:
  • input_maps (list) – a list of string (filenames) or ndarrays we will sort
  • reversed_sort (boolean) – if we want to sort from large to small instead of small to large. This is not used if a sort index matrix is provided.
  • sort_index_matrix (ndarray) – if given we use this sort index map instead of generating one by sorting the maps_to_sort_on. Supposed to be an integer matrix.
Returns:

the list of sorted volumes

Return type:

list
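
Example (a small sketch with two stand-in weight maps):

import numpy as np
import mdt

w0 = np.random.rand(10, 10, 10)
w1 = np.random.rand(10, 10, 10)
sorted_maps = mdt.sort_maps([w0, w1], reversed_sort=True)
# sorted_maps[0] holds, per voxel, the larger of the two input values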

mdt.start_gui(base_dir=None, app_exec=True)[source]

Start the model fitting GUI.

Parameters:
  • base_dir (str) – the starting directory for the file opening actions
  • app_exec (boolean) – if True we execute the Qt application; set to False to disable. This is only important if you want to start this GUI from within an existing Qt application: if you leave this at True in that case, we would try to start a new Qt application, which may create problems.
mdt.view_maps(data, config=None, figure_options=None, block=True, show_maximized=False, use_qt=True, window_title=None, save_filename=None)[source]

View a number of maps using the MDT Maps Visualizer.

Parameters:
  • data (str, dict, DataInfo, list, tuple) – the data we are showing, either a dictionary with result maps, a string with a path name, a DataInfo object or a list with filenames and/or directories.
  • config (str, dict, MapPlotConfig) – either a Yaml string or a dictionary with configuration settings, or a MapPlotConfig object to use directly
  • figure_options (dict) – Used when use_qt is False or when write_figure is used. Sets the figure options for the matplotlib Figure. If figsize is not given you can also specify two ints, width and height, to indicate the pixel size of the resulting figure; together with the dpi these are used to calculate the figsize.
  • block (boolean) – if we block the plots or not
  • show_maximized (boolean) – if we show the window maximized or not
  • window_title (str) – the title for the window
  • use_qt (boolean) – if we want to use the Qt GUI, or show the results directly in matplotlib
  • save_filename (str) – save the figure to file. If set, we will not display the viewer.
mdt.view_result_samples(data, **kwargs)[source]

View the samples from the given results set.

Parameters:
  • data (string or dict) – The location of the maps to use the samples from, or the samples themselves.
  • kwargs (kwargs) – see SampleVisualizer for all the supported keywords
mdt.with_logging_to_debug()[source]

A context in which the logging is temporarily set to WARNING.

Example of usage:

with mdt.with_logging_to_debug():
    your_computations()

During the function your_computations only WARNING level logging will show up.

mdt.write_volume_maps(maps, directory, header=None, overwrite_volumes=True, gzip=True)[source]

Write a dictionary with maps to the given directory using the given header.

Parameters:
  • maps (dict) – The maps with as keys the map names and as values 3d or 4d maps
  • directory (str) – The dir to write to
  • header – The Nibabel Image Header
  • overwrite_volumes (boolean) – If we want to overwrite the volumes if they are present.
  • gzip (boolean) – if we want to write the results gzipped
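
Example (a minimal sketch with a stand-in map; '/my/output' is a hypothetical directory):

import numpy as np
import mdt

maps = {'example_map': np.zeros((10, 10, 10))}
mdt.write_volume_maps(maps, '/my/output')  # pass a nibabel header to control the affine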