Components are a modular sub-system of MDT allowing you to add models and other functionality to MDT.

At the moment MDT supports the following component types: composite models, compartment models, parameters, library functions and batch profiles.

## Defining components

There are two ways of adding or updating components in MDT, Global definitions, by adding a component to your configuration folder or Dynamic definitions by defining it dynamically in your modeling scripts.

### Global definitions

For persistent model definitions you can use the .mdt folder in your home folder. This folder contains diffusion MRI models and other functionality that you can extend without needing to reinstall or recompile MDT.

The .mdt folder contains, for every version of MDT that existed on your machine, a directory containing the configuration files and a folder with the dynamically loadable modules. A typical layout of the .mdt directory is:

- .mdt/
  - &lt;version&gt;/
    - mdt.default.conf
    - mdt.conf
    - components/

The configuration files are discussed in Advanced configuration; the components folder is used for the global model definitions.

The components folder consists of two sub-folders, standard and user, with an identical folder structure for the contained modules:

- components/
  - standard
    - compartment_models
    - composite_models
  - user
    - compartment_models
    - composite_models

By editing the contents of these folders, the user can add, extend and/or remove functionality globally and persistently. The folder named standard contains modules that come pre-supplied with MDT. These modules can change from version to version and any change you make in this folder will be lost after an update. To make persistent changes you can add your modules to the user folder. The content of this folder is automatically copied to a new version.
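As an illustration of this layout, the path to a user component folder could be constructed as follows. This is a sketch only; the version directory name used here is illustrative and depends on your installed MDT version:

```python
import os

def user_component_path(mdt_version, component_type):
    """Build the path to a user component folder inside ~/.mdt.

    Illustrative sketch based on the folder layout described above;
    'component_type' would be e.g. 'compartment_models' or
    'composite_models'. The version string is hypothetical.
    """
    return os.path.join(os.path.expanduser('~'), '.mdt', mdt_version,
                        'components', 'user', component_type)
```

A Python file defining a template placed in such a folder is then picked up automatically on the next run.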

### Dynamic definitions

Alternatively, it is also possible to define components on the fly in your analysis scripts. This is as simple as defining a template in your script prior to using it in your analysis. For example, prior to calling the fit model function, you can define a new model as:

```python
import mdt
from mdt import CompositeModelTemplate

class BallZeppelin(CompositeModelTemplate):
    model_expression = '''
        S0 * ( (Weight(w_csf) * Ball) +
               (Weight(w_res) * Zeppelin) )
    '''

mdt.fit_model('BallZeppelin', ...)
```


It is also possible to overwrite existing models on the fly, for example:

```python
import mdt

class Tensor(mdt.get_template('composite_models', 'Tensor')):
    likelihood_function = 'Rician'

mdt.fit_model('Tensor', ...)
```


Breaking this up, in the first part:

```python
class Tensor(mdt.get_template('composite_models', 'Tensor')):
    likelihood_function = 'Rician'
```


we load the last available Tensor model template from MDT (using get_template('composite_models', 'Tensor')) and use it as a basis for an updated template. Then, since this class is also named Tensor (by saying class Tensor(...)) this new template will override the previous Tensor. The body of this template then updates the previous Tensor, in this case by changing the likelihood function.

In the second part:

```python
mdt.fit_model('Tensor', ...)
```


we simply call mdt.fit_model with Tensor as the model. MDT will then load the model using the last known definition. As such, the new Tensor model with the updated likelihood function will be used in the model fitting.

To remove an entry, you can use, for example:

```python
mdt.remove_last_entry('composite_models', 'Tensor')
```


## Composite models

The composite models, or multi-compartment models, are the models that MDT actually optimizes. Composite models are formed by a combination, or composition, of compartment models.

When asked to optimize (or sample) a composite model, MDT combines the CL code of the compartments into one objective function and combines it with a likelihood function (Rician, OffsetGaussian, Gaussian). Since the compartments already contain the CL code, no further CL modeling code is necessary in the multi-compartment models.

Composite models are defined by inheriting from CompositeModelTemplate. The following is a minimal example of a composite (multi-compartment) model in MDT:

```python
class BallStick_r2(CompositeModelTemplate):

    model_expression = '''
        S0 * ( (Weight(w_ball) * Ball) +
               (Weight(w_stick0) * Stick(Stick0)) +
               (Weight(w_stick1) * Stick(Stick1)) )
    '''
```


The model expression is a string that expresses the model in an MDT-specific mini-language. This language, which only accepts the operators *, /, + and -, can be used to combine your compartments in any way possible (within the grammar of the mini-language). MDT parses this string, loads the compartments from the compartment models and uses the CL code of these compartments to create the CL objective function for your complete composite model.
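Since the mini-language's operators are a subset of Python's, its structure can be illustrated with Python's own ast module. The helper below is purely illustrative (it is not how MDT parses expressions); it simply lists the names appearing in a model expression:

```python
import ast

def names_in_expression(model_expression):
    """List all compartment and weight names appearing in a model
    expression, by parsing it as a Python expression (the operators
    *, /, + and - are shared with Python)."""
    tree = ast.parse(model_expression.strip(), mode='eval')
    return sorted({node.id for node in ast.walk(tree)
                   if isinstance(node, ast.Name)})
```

Applied to the BallStick expression above, this would list S0, the Weight and Stick names, and their nicknames.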

The example above combines the compartments (Ball and Stick) as a weighted summation using the special compartment Weight for the compartment weighting (these weights are sometimes called volume fractions).

The example also shows compartment renaming. Since it is possible to use a compartment multiple times, it is necessary to rename the duplicate compartments to ensure that all the compartments have a unique name. This renaming is done by specifying the nickname in parentheses after the compartment. For example, Stick(Stick0) refers to a Stick compartment that has been renamed to Stick0. This new name is then used to refer to that specific compartment in the rest of the composite model attributes.

The composite models have more functionality than what is shown here. For example, they support parameter dependencies, initialization values, parameter fixations and protocol options.

### Parameter dependencies

Parameter dependencies make explicit the dependency of one parameter on another. For example, some models have both an intra- and an extra-axonal compartment that both feature the theta and phi fibre orientation parameters. It could be desired that these angles are exactly the same for both compartments, that is, that they both reflect the exact same fibre orientation. One possibility to solve this would be to create a new compartment having the features of both the intra- and the extra-axonal compartment. This however lowers the reusability of the compartments. Instead, one could define parameter dependencies in the composite model. For example:

```python
class NODDI(CompositeModelTemplate):
    ...
    fixes = {
        ...
        'Ball.d': 3.0e-9,
        'NODDI_EC.dperp0': 'NODDI_EC.d * (w_ec.w / (w_ec.w + w_ic.w))',
        'NODDI_EC.kappa': 'NODDI_IC.kappa',
        'NODDI_EC.theta': 'NODDI_IC.theta',
        'NODDI_EC.phi': 'NODDI_IC.phi'
    }
```


In this example, we used the attribute fixes to specify dependencies and parameter fixations. The attribute fixes accepts a dictionary with the parameter name as key and, as value, a scalar, a map or a dependency. The dependency can be given either as a string or as a dependency object.

In the example above we added two simple assignment dependencies that lock the theta and phi of the NODDI_EC compartment to those of the NODDI_IC compartment, ensuring that both the intra-cellular and extra-cellular models reflect the same fibre orientation.
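The effect of such assignment dependencies can be sketched in plain Python. This is a simplification: MDT resolves dependencies inside the generated CL code, and the helper below only supports scalars and direct parameter-to-parameter assignments:

```python
def apply_fixes(param_values, fixes):
    """Resolve a 'fixes' dictionary against known parameter values.

    Sketch only: scalars are assigned directly; a string is treated as
    a direct assignment dependency referring to another parameter.
    """
    resolved = dict(param_values)
    for name, value in fixes.items():
        if isinstance(value, str):
            # assignment dependency: copy the referenced parameter's value
            resolved[name] = resolved[value]
        else:
            # scalar (or map) fixation
            resolved[name] = value
    return resolved
```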

### Weights sum to one

Most composite models consist of a weighted sum of compartment models. An implicit dependency in this set-up is that those weights must exactly sum to one. To ensure this, MDT adds, by default, a dependency to the last Weight compartment in the composite model definition. This dependency first normalizes, if needed, the first $$n-1$$ Weight compartments by their sum $$s = \sum_{i}^{n-1}w_{i}$$. The last Weight, which is not optimized explicitly, is then set as $$w_{n} = 1 - s$$, or to zero if $$s \geq 1$$.
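In plain Python, this implicit dependency can be sketched as follows (a simplified illustration of the behavior described above, not MDT's actual implementation):

```python
def resolve_weights(explicit_weights):
    """Given the n-1 explicitly optimized weights, normalize them if
    their sum s exceeds one, then derive the last weight as 1 - s
    (which is zero whenever normalization was applied)."""
    s = sum(explicit_weights)
    if s > 1:
        explicit_weights = [w / s for w in explicit_weights]
        s = 1.0
    return explicit_weights + [1.0 - s]
```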

If you wish to disable this feature, for example in a model that does not have a linear sum of weighted compartments, you can set the attribute enforce_weights_sum_to_one to False, e.g.:

```python
class MyModel(CompositeModelTemplate):
    ...
    enforce_weights_sum_to_one = False
```


### Protocol options

It is possible to add dMRI volume selection to a composite model using the "protocol options". These protocol options allow the composite model to select, using the protocol, only those volumes that it can use for optimization. For example, the Tensor model is defined to work with b-values up to 1500 s/mm^2, yet the user might be using a dataset with more shells, some of them above that b-value threshold. To prevent the user from having to load a separate dataset for the Tensor model and another dataset for the other models, we implemented model protocol options in MDT. This way, the end user can provide the whole protocol file and each model will pick from it what it needs.

Please note that these volume selections only work with columns in the protocol, not with the extra_protocol maps.

There are two ways to enable this mechanism in your composite model. The first is to add the volume_selection directive to your model:

```python
class Tensor(CompositeModelTemplate):
    ...
    volume_selection = {'b': [(0, 1.5e9 + 0.1e9)]}
```


This directive specifies that we wish to use a subset of the weighted volumes, that is, a single b-value range with b-values between b = 0 and b = 1.5e9 s/m^2 (plus a small margin of 0.1e9). Each key in volume_selection should refer to a column in the protocol file and each value should be a list of ranges.
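The selection mechanism can be sketched as follows (illustrative only; MDT applies this internally based on the protocol):

```python
def select_volumes(column_values, ranges):
    """Return the indices of the volumes whose protocol column value
    falls inside any of the given (start, end) ranges."""
    return [index for index, value in enumerate(column_values)
            if any(start <= value <= end for start, end in ranges)]
```

For example, selecting with the range (0, 1.6e9) from b-values [0, 1e9, 3e9] keeps the first two volumes.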

The second method is to add the bound function _get_suitable_volume_indices to your model definition. For example:

```python
...
from mdt.component_templates.base import bind_function

class Tensor(CompositeModelTemplate):
    ...

    @bind_function
    def _get_suitable_volume_indices(self, input_data):
        return input_data.protocol.get_indices_bval_in_range(start=0, end=1.5e9 + 0.1e9)
```


This function should then return a list of integers specifying the volumes (and therefore protocol rows) you wish to use in the analysis of this model. To use all volumes you can use something like this:

```python
@bind_function
def _get_suitable_volume_indices(self, input_data):
    return list(range(input_data.protocol.length))
```


### Extra result maps

It is also possible to add additional parameter maps to the fitting and sampling results. These maps are meant to be helpful to the end-user by providing additional output. Extra result maps can be added by the composite model as well as by the compartment models.

Just as with compartment models, one can add extra output maps to the optimization results and to the sampling results as:

```python
class MyModel(CompositeModelTemplate):
    ...
    extra_optimization_maps = [
        lambda results: ...
    ]

    extra_sampling_maps = [
        lambda samples: ...
    ]
```


where each callback function should return a dictionary with extra maps to add to the output.
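For example, a callback could compute a derived map from the fitted parameters. The following is hypothetical: it assumes the results dictionary contains the maps 'w_stick0.w' and 'w_stick1.w':

```python
def total_stick_fraction(results):
    # Hypothetical extra map: the summed volume fractions of two sticks.
    # Assumes 'w_stick0.w' and 'w_stick1.w' are present in the results.
    return {'FS': results['w_stick0.w'] + results['w_stick1.w']}

# it would then be registered as:
# extra_optimization_maps = [total_stick_fraction]
```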

### Likelihood functions

Models are optimized by finding the set of free parameter values $$x \in R^{n}$$ that maximize the likelihood of the modeling errors $$(O - S(x))$$, with $$O$$ the observed data and $$S(x)$$ the model signal estimate. In diffusion MRI the common likelihood models are the Gaussian, Rician and OffsetGaussian models. Each has different characteristics and treats the modeling errors $$(O - S(x))$$ in a slightly different way. Following (Harms 2017), we use by default the Offset Gaussian likelihood model for all models. To change the likelihood model for one of your models you can override the likelihood_function attribute, for example:

```python
class MyModel(CompositeModelTemplate):
    ...
    likelihood_function = 'Rician'
```


By default the likelihood_function attribute is set to OffsetGaussian. The likelihood function can either be defined as a string or as an object. Using a string, the possible options are Gaussian, OffsetGaussian and Rician. Using an object, you must provide an instance of mdt.model_building.likelihood_functions.LikelihoodFunction. For example:

```python
...
from mdt.model_building.likelihood_functions import RicianLikelihoodFunction

class MyModel(CompositeModelTemplate):
    ...
    likelihood_function = RicianLikelihoodFunction()
```


All listed likelihood functions require a standard deviation $$\sigma$$ representing the noise in the input data. This value is typically taken from the noise of the images in the complex domain and is provided in the input data (see Input data).
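As an illustration of the role of $$\sigma$$, the following sketches an Offset Gaussian log-likelihood for a single data point. This is a simplified, non-authoritative rendering (see Harms 2017 for the exact formulations used by MDT): the signal estimate is offset by the noise floor before the Gaussian residual is computed.

```python
import math

def offset_gaussian_log_likelihood(observation, estimate, sigma):
    """Sketch: log-likelihood of one observation under an Offset
    Gaussian model, where the signal estimate S is offset by the
    noise floor sqrt(S^2 + sigma^2)."""
    offset_estimate = math.sqrt(estimate ** 2 + sigma ** 2)
    return (-(observation - offset_estimate) ** 2 / (2 * sigma ** 2)
            - math.log(sigma * math.sqrt(2 * math.pi)))
```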

### Constraints

It is possible to add additional inequality constraints to a composite model, using the constraints attribute. These constraints are specified as functions $$g(x)$$ for which the optimizer enforces $$g(x) \leq 0$$.

For example, in the NODDIDA model we implemented the constraint that the intra-cellular diffusivity must be larger than the extra-cellular diffusivity, following Kunz et al., NeuroImage 2018. Mathematically, this constraint can be stated as $$d_{ic} \geq d_{ec}$$. For implementation in MDT, we will state it as $$d_{ec} - d_{ic} \leq 0$$ and implement it as:

```python
class NODDIDA(CompositeModelTemplate):
    ...
    constraints = '''
        constraints[0] = NODDI_EC.d - NODDI_IC.d;
    '''
```


This constraints attribute can hold arbitrary OpenCL C code, as long as it contains the literal constraints[i] for each additional constraint i.

From this constraints string, MDT creates a function with the same dependencies and parameters as the composite model. This function is then provided to the optimization routines, which enforce it using the penalty method (https://en.wikipedia.org/wiki/Penalty_method).
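The penalty method itself can be sketched in a few lines (illustrative; MDT's optimization routines implement this internally): violated constraints add a quadratic cost to the objective, steering the optimizer back toward the feasible region.

```python
def penalized_objective(objective, constraints, x, penalty_weight=1e6):
    """Sketch of the penalty method: add penalty_weight * max(0, g(x))^2
    for every violated constraint g(x) <= 0."""
    penalty = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + penalty_weight * penalty
```

A feasible point keeps the original objective value; an infeasible point is penalized in proportion to its constraint violation.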

## Compartment models

The compartment models are the building blocks of the composite models. They consist essentially of two parts: a list of parameters (see Parameters) and the model code in OpenCL C (see CL code). At runtime, MDT loads the C/CL code of the compartment model and combines it with the other compartments to form the composite model.

Compartment models can be defined using the templating mechanism by inheriting from CompartmentTemplate. For example, the Stick model can be defined as:

```python
from mdt.component_templates.compartment_models import CompartmentTemplate

class Stick(CompartmentTemplate):

    parameters = ('g', 'b', 'd', 'theta', 'phi')
    cl_code = '''
        float4 n = (float4)(cos(phi) * sin(theta),
                            sin(phi) * sin(theta),
                            cos(theta),
                            0);

        return exp(-b * d * pown(dot(g, n), 2));
    '''
```


This Stick example contains all the basic definitions required for a compartment model: a parameter list and CL code.

### Defining parameters

The elements of the parameter list can either be strings referencing one of the parameters in the library (as shown in the example above), or direct instances of a parameter. For example, this is also a valid parameter list:

```python
class special_param(FreeParameterTemplate):
    ...

class MyModel(CompartmentTemplate):

    parameters = ('g', 'b', special_param())
    ...
```


where the parameters g and b are loaded from the dynamically loadable parameters while the special_param is given as a parameter instance. It is also possible to provide a nickname for a parameter by stating something like:

```python
parameters = ('my_theta(theta)', ...)
```


Here, the parameter my_theta is loaded with the nickname theta. This allows you to use simpler names for the parameters of a compartment and allows you to swap a parameter for a different type while still using the same (external) name.
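The 'name(nickname)' convention can be illustrated with a small parser. This is a sketch, not MDT's actual implementation:

```python
import re

def parse_parameter_spec(spec):
    """Split a parameter specification into (library_name, local_name).

    'my_theta(theta)' -> ('my_theta', 'theta'); a bare name maps to
    itself.
    """
    match = re.match(r'^(\w+)\((\w+)\)$', spec)
    if match:
        return match.group(1), match.group(2)
    return spec, spec
```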

### Dependency list

Some models may depend on other compartment models or on library functions. These dependencies can be specified using the dependencies attribute of the compartment model definition. As an example:

```python
dependencies = ('erfi', 'MRIConstants', 'CylinderGPD')
```


This list should contain strings with references to either library functions or other compartment models. In this example the erfi library function is loaded from MOT, MRIConstants from MDT and CylinderGPD is another compartment model which our example depends on.

Adding items to this list means that the corresponding CL functions of these components are included into the optimized OpenCL kernel and allows you to use the corresponding CL code in your compartment model.

For example, in the dependency list above, the MRIConstants dependency adds multiple constants to the kernel, like GAMMA_H, the gyromagnetic ratio of the hydrogen nucleus in units of rad s^-1 T^-1. By adding MRIConstants as a compartment dependency, this constant can then be used in your compartment model function.

### Defining extra functions for your code

Sometimes a compartment model needs auxiliary functions that are too small to warrant their own library function. These can be added to the compartment model using the cl_extra attribute. For example:

```python
class MyCompartment(CompartmentTemplate):

    parameters = ('g', 'b', 'd')
    cl_code = 'return other_function(g, b, d);'
    cl_extra = '''
        double other_function(
                float4 g,
                mot_float_type b,
                mot_float_type d){

            ...
        }
    '''
```


### Extra result maps

It is possible to add additional parameter maps to the fitting and sampling results. These maps are meant to be helpful to the end-user by providing additional output. Extra result maps can be added by the composite model as well as by the compartment models. By defining them in a compartment model, one ensures that all composite models using that compartment profit from the additional output maps.

Just as with composite models, one can add extra output maps to the optimization results and to the sampling results as:

```python
class MyCompartment(CompartmentTemplate):
    ...
    extra_optimization_maps = [
        lambda results: ...
    ]

    extra_sampling_maps = [
        lambda samples: ...
    ]
```


where each callback function should return a dictionary with extra maps to add to the output.

### Constraints

It is possible to add additional inequality constraints to a compartment model, using the constraints attribute. These constraints are specified as functions $$g(x)$$ for which the optimizer enforces $$g(x) \leq 0$$.

For example, in the Tensor model we implemented the constraint that the diffusivities must be in a strict order, such that $$d_{\parallel} \geq d_{\perp_{0}} \geq d_{\perp_{1}}$$.

For implementation in MDT, we will state this as the two constraints $$d_{\perp_{0}} \leq d_{\parallel}$$ and $$d_{\perp_{1}} \leq d_{\perp_{0}}$$, and implement it as:

```python
class Tensor(CompartmentTemplate):
    ...
    constraints = '''
        constraints[0] = dperp0 - d;
        constraints[1] = dperp1 - dperp0;
    '''
```


This constraints attribute can hold arbitrary OpenCL C code, as long as it contains the literal constraints[i] for each additional constraint i.

From this constraints string, MDT creates a function with the same dependencies and parameters as the compartment model. This function is then provided to the optimization routines, which enforce it using the penalty method (https://en.wikipedia.org/wiki/Penalty_method).

## Parameters

Parameters form the building blocks of the compartment models. They define how data is provided to the model and form a bridge between the model and the Input data.

The type of a parameter determines how the model uses that parameter. For example, compare these two parameters:

```python
class theta(FreeParameterTemplate):
    ...

class b(ProtocolParameterTemplate):
    ...
```


In this example the theta parameter is defined as a free parameter while the b parameter is defined as a protocol parameter. The type matters and defines how MDT handles the parameter. There are only two types available: free parameters and protocol parameters.

See the sections below for more details on each type.

### Free parameters

These parameters are normally supposed to be optimized by the optimization routines. They contain some meta-information such as a lower- and upper- bound, sampling prior, parameter transformation function and more. During optimization, parameters of this type can be fixed to a specific value, which means that they are no longer optimized but that their values (per voxel) are provided by a scalar or a map. When fixed, these parameters are still classified as free parameters (you can consider them as fixed free parameters).

To fix these parameters you can either do so in the composite model definition or via the Python API before model optimization:

```python
mdt.fit_model('CHARMED_r1',
              ...,
              initialization_data=dict(
                  fixes={},
                  inits={}
              ))
```


A free parameter is identified by having the super class FreeParameterTemplate.

Below we list some details that are important when adding a new free parameter to MDT.

#### Parameter transformations

Panagiotaki (2012) and Harms (2017) describe the use of parameter transformations to limit the range of each parameter to biophysical meaningful values and to scale the parameters to a range better suited for optimization. They work by injecting a parameter transformation before model evaluation that limits the parameters between bounds. See (Harms 2017) for more details on which transformations are used in MDT. You can define the transformation function used by setting the parameter_transform attribute. For an overview of the available parameter transformations, see transformations in MOT.
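As a sketch of the idea, the pair below shows one common choice of bounded transformation (an arcsine/sine pair; it is illustrative only, MOT offers several transformations):

```python
import math

def to_unconstrained(value, lower, upper):
    """Map a bounded parameter value to an unconstrained space using an
    arcsine transformation (illustrative sketch)."""
    return math.asin(2.0 * (value - lower) / (upper - lower) - 1.0)

def to_bounded(x, lower, upper):
    """Inverse: map an unconstrained value back between the bounds."""
    return lower + (math.sin(x) + 1.0) * (upper - lower) / 2.0
```

The optimizer works on the unconstrained value, while the model always receives a value within [lower, upper].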

#### Sampling

For sampling, one needs to define per parameter a prior and a proposal function. These can easily be added in MDT using the attributes sampling_proposal and sampling_prior. Additionally, one can define a sampling statistic function sampling_statistics, which is run after sampling and returns statistics on the observed samples.

### Protocol parameters

These parameters are meant to be fulfilled by the values in the Protocol file (see Protocol) or in the extra protocol maps. During model optimization, MDT checks the model for protocol parameters and tries to match the names of the protocol parameters with available protocol data. Protocol data can be submitted using either the Protocol file, or using voxel-based protocol maps.

By matching parameter names with input values, the user can add protocol values dynamically by ensuring a common name between the protocol parameter and the provided values.
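A sketch of this name-matching (simplified; MDT performs this internally when preparing the model for optimization):

```python
def match_protocol_parameters(parameter_names, protocol_columns, extra_protocol):
    """Match each protocol parameter to a same-named protocol column or
    extra protocol map, raising an error if no data is found."""
    resolved = {}
    for name in parameter_names:
        if name in protocol_columns:
            resolved[name] = protocol_columns[name]
        elif name in extra_protocol:
            resolved[name] = extra_protocol[name]
        else:
            raise ValueError('No input data found for protocol parameter: ' + name)
    return resolved
```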

A protocol parameter is identified by having the super class ProtocolParameterTemplate.

## Library functions

Library functions are Python wrappers around reusable CL functions. When specified in a compartment model's dependencies, they are included in the CL kernel, making their CL functions usable from within the compartment models.

To create one, please look at one of the existing library functions. To use one, add to your compartment model the attribute dependencies and add there one or more of the library functions to include in the CL code.

Next to the library functions available in MDT, there are also a few loadable functions in MOT, see library_functions for a list.

## Batch profiles

Batch profiles are part of the batch processing engine in MDT. They specify how all the necessary data for model fitting (protocol, nifti volumes, brain mask, noise std, etc.) can be loaded from a directory containing one or more subjects.

The general idea is that the batch profiles indicate how the data is supposed to be loaded per subject. This batch profile can then be used by functions like batch_apply() to apply a function to all subjects in the batch profile.

For example, suppose you have a directory containing many subjects on which you want to run a NODDI analysis using MDT. One way to do this would be to write a script that loops over the directories and calls the right fitting commands per subject. Another approach is to write a batch profile and use the batch processing utilities in MDT to process the datasets automatically. The advantage of the latter is that MDT offers multiple batch processing tools, such as batch_fit(), batch_apply() and run_function_on_batch_fit_output(), that allow you to easily manipulate large groups of subjects.

Since the batch profiles form a specification of the directory structure, MDT can guess which batch profile to use for a given directory. This makes batch processing in some instances as simple as using:

```shell
$ mdt-batch-fit . 'BallStick_r1'
```


This fits BallStick_r1 to all subjects found, assuming a suitable batch profile was detected.