The analysis of brain-imaging data requires complex and often non-linear transformations to support findings on brain function or pathologies. Yet recent work has shown that variability in the analytical choices one makes when processing data can lead to quantitatively and qualitatively different results, undermining trust in conclusions. Even within a single method or analytical technique, numerical instabilities can compromise findings. We instrumented a structural-connectome estimation pipeline with Monte Carlo Arithmetic, a technique that introduces controlled random noise into floating-point computations, and evaluated the stability of the derived connectomes, their features, and the impact on a downstream analysis. The stability of results was highly dependent upon which features of the connectomes were evaluated, ranging from perfectly stable (i.e. no observed variability across executions) to highly unstable (i.e. the results contained no trustworthy significant information). While the extreme range and variability in results presented here could severely hamper our understanding of brain organization in brain-imaging studies, it also presents an opportunity to increase the reliability of derived datasets. This paper highlights the potential of leveraging the induced variance in estimates of brain connectivity to reduce bias in networks while increasing the robustness of their applications in the detection or classification of individual differences. It further demonstrates that stability evaluations are necessary for understanding the error and bias inherent to scientific computing, and that they should be a component of typical analytical workflows.
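To illustrate the core idea of Monte Carlo Arithmetic, the following is a minimal, self-contained sketch, not the instrumented pipeline used in the study. The functions `perturb` and `mca_dot`, the perturbation magnitude `t`, and the toy data are all hypothetical: each intermediate floating-point operation receives a small random relative perturbation, the computation is repeated many times, and the spread of the outputs indicates the numerical stability of the result.

```python
import random

def perturb(x, t=24):
    """Hypothetical MCA-style perturbation: apply a random relative
    error of magnitude ~2**(1 - t) to a floating-point value."""
    return x * (1.0 + random.uniform(-1.0, 1.0) * 2.0 ** (1 - t))

def mca_dot(a, b, t=24):
    """Dot product in which every intermediate multiply and add is
    perturbed, emulating noisy floating-point arithmetic."""
    acc = 0.0
    for x, y in zip(a, b):
        acc = perturb(acc + perturb(x * y, t), t)
    return acc

# Repeat the same computation under random perturbations and
# measure the variability across executions.
random.seed(0)
a = [0.1] * 1000
b = [0.3] * 1000
samples = [mca_dot(a, b) for _ in range(100)]
mean = sum(samples) / len(samples)
spread = max(samples) - min(samples)
```

A stable quantity yields a tight distribution across executions, while an unstable one shows a wide spread; applying the same repeated-execution logic to each derived connectome feature is what allows stability to be assessed feature by feature.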
bioRxiv Subject Collection: Neuroscience