rsatoolbox.vis.model_plot module

Barplot for model comparison based on a results file

rsatoolbox.vis.model_plot.draw_hor_arrow(ax, x1, x2, y, style, ah_L, ah_R)[source]

Draws a horizontal arrow from (x1, y) to (x2, y) if style is ‘->’ and in the reverse direction if style is ‘<-’. If style is ‘<->’, this function draws a double arrow.
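The direction convention above can be sketched as a small helper that resolves tail and head coordinates from the style string. This is an illustrative sketch, not rsatoolbox's implementation; the function name `arrow_endpoints` is hypothetical.

```python
def arrow_endpoints(x1, x2, style):
    """Resolve (tail_x, head_x, double_headed) from an arrow style.

    '->'  : tail at x1, head at x2
    '<-'  : reversed, tail at x2, head at x1
    '<->' : double-headed arrow, endpoint order irrelevant
    """
    if style == '->':
        return x1, x2, False
    if style == '<-':
        return x2, x1, False
    if style == '<->':
        return x1, x2, True
    raise ValueError(f"unknown arrow style: {style!r}")
```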

rsatoolbox.vis.model_plot.plot_arrows(axbar, significant)[source]

Summarizes the significances with arrows. The argument significant is a binary matrix of pairwise model comparisons. A nonzero value (or True) indicates that the model specified by the row index beats the model specified by the column index. Only the lower triangular part of significant is used, so the upper triangular part need not be filled in symmetrically. The summary will be most concise if models are ordered by performance (using the sort argument of plot_model_comparison).
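The lower-triangular convention can be illustrated by building such a matrix from per-sample model performances. The winning criterion below (a sign-test-style majority of samples) and the name `build_significant` are illustrative assumptions, not rsatoolbox's actual test.

```python
def build_significant(perf, threshold=0.95):
    """Sketch: fill the lower triangle of a pairwise comparison matrix.

    perf: list of per-model performance samples, one list per model,
    all of equal length. significant[i][j] (i > j) is True when model i
    outperforms model j in at least `threshold` of the samples.
    """
    n = len(perf)
    m = len(perf[0])
    significant = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):  # lower triangle only, as plot_arrows reads it
            wins = sum(a > b for a, b in zip(perf[i], perf[j]))
            significant[i][j] = wins / m >= threshold
    return significant
```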

rsatoolbox.vis.model_plot.plot_cliques(axbar, significant)[source]

Plots the results of the pairwise inferential model comparisons in the form of a set of maximal cliques of models that do not differ significantly in performance. One bar is drawn for each clique, with open circles indicating the clique members. Within a clique of models, no pairwise comparison is significant; all pairwise comparisons not indicated as insignificant are significant.

Parameters:
  • axbar – Matplotlib axes handle to plot in

  • significant – Boolean matrix of model comparisons

Returns:
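The clique construction described above can be sketched as a brute-force enumeration over the graph whose edges connect models that are not significantly different. This is an illustrative sketch (fine for the small model counts these plots handle), not rsatoolbox's implementation.

```python
from itertools import combinations

def maximal_cliques(significant):
    """Enumerate maximal cliques of mutually non-different models.

    Two models are connected when their comparison is NOT significant;
    only the lower triangle of `significant` is consulted.
    """
    n = len(significant)

    def connected(i, j):
        return not significant[max(i, j)][min(i, j)]

    cliques = []
    # largest candidate sets first, so smaller subsets of an already
    # found clique are recognized as non-maximal and skipped
    for size in range(n, 0, -1):
        for cand in combinations(range(n), size):
            if all(connected(i, j) for i, j in combinations(cand, 2)):
                if not any(set(cand) <= set(c) for c in cliques):
                    cliques.append(cand)
    return cliques
```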

rsatoolbox.vis.model_plot.plot_golan_wings(axbar, significant, perf, sort, colors=None, always_black=False, version=3)[source]

Plots the results of the pairwise inferential model comparisons in the form of black horizontal bars with a tick mark at the reference model and a circular bulge at each significantly different model, similar to the visualization in Golan, Raju, and Kriegeskorte (2020).

Parameters:
  • axbar – Matplotlib axes handle to plot in

  • significant – Boolean matrix of model comparisons

  • version

    • 0 (single wing: solid circle anchor and open circles),

    • 1 (single wing: tick anchor and circles),

    • 2 (single wing: circle anchor and up and down feathers)

    • 3 (double wings: circle anchor, downward dominance-indicating feathers, from bottom to top in model order)

    • 4 (double wings: circle anchor, downward dominance-indicating feathers, from bottom to top in performance order)

Returns:

rsatoolbox.vis.model_plot.plot_model_comparison(result, sort=False, colors=None, alpha=0.01, test_pair_comparisons=True, multiple_pair_testing='fdr', test_above_0=True, test_below_noise_ceil=True, error_bars='sem', test_type='t-test')[source]

Plots the results of RSA inference on a set of models as a bar graph, with one bar for each model indicating its predictive performance. The function also shows the noise ceiling, whose upper edge is an upper bound on the performance the true model could achieve (given noise and inter-subject variability) and whose lower edge is an estimate of a lower bound on the performance of the true model. In addition, all pairwise inferential model comparisons are shown in the upper part of the figure. The only mandatory input is a “result” object containing model evaluations for bootstrap samples and crossvalidation folds. These are used here to construct confidence intervals and to perform the significance tests.

All string inputs are case insensitive.

Parameters:
  • result (rsatoolbox.inference.result.Result) – model evaluation result

  • sort (Boolean or string) –

    False (default): plot bars in the order passed

    ’descend[ing]’: plot bars in descending order of model performance

    ’ascend[ing]’: plot bars in ascending order of model performance

  • colors (list of lists, numpy array, matplotlib colormap) –

    None (default):

    default blue for all bars

    single color:

    list or numpy array of 3 or 4 values (RGB, RGBA) specifying the color for all bars

    multiple colors:

    list of lists or numpy array (number of colors by 3 or 4 channels – RGB, RGBA). If the number of colors matches the number of models, each color is used for the bar corresponding to one model (in the order of the models as passed). If the number of colors does not match the number of models, the list is linearly interpolated to assign a color to each model (in the order of the models as passed). For example, two colors will become a gradient, unless there are exactly two models. Instead of a list of lists or numpy array, a matplotlib colormap object may also be passed (e.g. colors = cm.coolwarm).

  • alpha (float) – significance threshold (p threshold or FDR q threshold)

  • test_pair_comparisons (Boolean or string) –

    False or None:

    do not plot pairwise model comparison results

    True (default):

    plot pairwise model comparison results using default settings

    ’arrows’:

    plot results in arrows style, indicating pairs of sets between which all differences are significant

    ’nili’:

    plot results as Nili bars (Nili et al. 2014), indicating each significant difference by a horizontal line (or each nonsignificant difference if the string contains a ‘2’, e.g. ‘nili2’)

    ’golan’:

    plot results as Golan wings (Golan et al. 2020), with one wing (graphical element) indicating all dominance relationships for one model.

    ’cliques’: plot results as cliques of insignificant differences

  • multiple_pair_testing (Boolean or string) –

    False or ‘none’:

    do not adjust for multiple testing for the pairwise model comparisons

    ’FDR’ or ‘fdr’ (default):

    control the false-discovery rate at q = alpha

    ’FWER’, ’fwer’, or ’Bonferroni’:

    control the familywise error rate using the Bonferroni method

  • test_above_0 (Boolean or string) –

    False or None:

    do not plot results of statistical comparison of each model performance against 0

    True (default):

    plot results of statistical comparison of each model performance against 0 using default settings (‘dewdrops’)

    ’dewdrops’:

    place circular “dewdrops” at the baseline to indicate models whose performance is significantly greater than 0

    ’icicles’:

    place triangular “icicles” at the baseline to indicate models whose performance is significantly greater than 0

    Tests are one-sided, use the global alpha threshold and are automatically Bonferroni-corrected for the number of models tested.

  • test_below_noise_ceil (Boolean or string) –

    False or None:

    do not plot results of statistical comparison of each model performance against the lower-bound estimate of the noise ceiling

    True (default):

    plot results of statistical comparison of each model performance against the lower-bound estimate of the noise ceiling using default settings (‘dewdrops’)

    ’dewdrops’:

    use circular “dewdrops” at the lower bound of the noise ceiling to indicate models whose performance is significantly below the lower-bound estimate of the noise ceiling

    ’icicles’:

    use triangular “icicles” at the lower bound of the noise ceiling to indicate models whose performance is significantly below the lower-bound estimate of the noise ceiling

    Tests are one-sided, use the global alpha threshold and are automatically Bonferroni-corrected for the number of models tested.

  • error_bars (Boolean or string) –

    False or None:

    do not plot error bars

    True (default) or ‘SEM’:

    plot the standard error of the mean

    ’CI’:

    plot 95%-confidence intervals (excluding 2.5% on each side)

    ’CI[x]’:

    plot x%-confidence intervals (excluding (100-x)/2% on each side); i.e. ‘CI’ has the same effect as ‘CI95’

    Confidence intervals are based on the bootstrap procedure, reflecting variability of the estimate across subjects and/or experimental conditions.

    ’dots’:

    Draws dots for each data point, i.e. the first dimension of the evaluation tensor. This is primarily sensible for fixed evaluation, where this dimension corresponds to the subjects in the experiment.

  • test_type (string) –

    which tests to perform:

    ’t-test’:

    performs a t-test based on the variance estimates in the result structs

    ’bootstrap’:

    performs a bootstrap test, i.e. evaluates significance based on the number of bootstrap samples defying H0

    ’ranksum’:

    performs Wilcoxon signed-rank tests

Returns:

(matplotlib.pyplot.Figure, matplotlib.pyplot.Axis, matplotlib.pyplot.Axis): the figure and axes the plots were made into. This allows further modification, saving and printing.
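The multiple_pair_testing schemes described above can be sketched on a flat list of pairwise p-values. This is an illustrative sketch of standard Bonferroni and Benjamini-Hochberg corrections, not rsatoolbox's implementation; the function name `significant_pairs` is hypothetical.

```python
def significant_pairs(pvals, alpha=0.01, method='fdr'):
    """Return a boolean significance decision per pairwise p-value."""
    m = len(pvals)
    if method == 'none':
        return [p < alpha for p in pvals]
    if method in ('fwer', 'bonferroni'):
        # Bonferroni: test each comparison at alpha / m
        return [p < alpha / m for p in pvals]
    if method == 'fdr':
        # Benjamini-Hochberg: find the largest p-value p_(k) with
        # p_(k) <= (k / m) * alpha; everything at or below it passes
        order = sorted(range(m), key=lambda i: pvals[i])
        thresh = 0.0
        for rank, i in enumerate(order, start=1):
            if pvals[i] <= rank / m * alpha:
                thresh = pvals[i]
        return [p <= thresh for p in pvals]
    raise ValueError(f"unknown method: {method!r}")
```

With the same p-values, FDR control typically admits more comparisons than the more conservative Bonferroni correction.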

rsatoolbox.vis.model_plot.plot_nili_bars(axbar, significant, version=1)[source]

Plots the results of the pairwise inferential model comparisons in the form of a set of black horizontal bars connecting significantly different models, as in the 2014 RSA Toolbox (Nili et al. 2014).

Parameters:
  • axbar – Matplotlib axes handle to plot in

  • significant – Boolean matrix of model comparisons

  • version

    • 1 (Normal Nili bars, indicating significant differences)

    • 2 (Negative Nili bars in gray, indicating nonsignificant comparison results)

Returns: