ArborIO Technical Questions

Gap Junctions

  • Glomerular compartments for the IO (Starfish models). We must decide how best to implement the tiny ‘glomerular’ compartments that lodge both gap junctions and chemical synapses. Although the conductance would appear to be symmetrical (the proteins forming the gap junction are the same for both cells), gap junctions are known to ‘rectify’. It is possible that glomerular morphology may help explain this rectification. @Mario @Arbor @Pavia
  • Question to Arbor: what is the best way to define different functions for the gap junction conductances, so that they allow asymmetrical current flows (vs symmetrical)?
  • Help us determine the numerical issues with tiny compartments (stability, error control, tiny-signal aggregation in the soma, etc.). How will this affect Arbor's solvers? How will it affect signaling across modeled neurons/cluster nodes?
  • What is the plan for activity-dependent (calcium-dependent) plasticity?

Answers:

  • Arbor currently only supports linear gap junctions of the form `current = ggap * (v - vgap)`. We can add support for new rules, but we haven't yet discussed how best to enable that functionality.
  • For the current model we need arbitrary functions for `ggap`, in particular: sigmoids, Gaussians, and rectifying functions. Note: the parameters of these functions may vary per gap junction. Our MATLAB model implements a Gaussian function (fitted to real data). A sketch of such functions follows this list.
  • Do 'symmetric' gap junctions mean that charge is conserved between the two sides of the junction, i.e. that the current added to one side is the negative of the current added to the other?
  • Current may not be conserved, as in Eve Marder's paper below: [1] G. J. Gutierrez and E. Marder. Rectifying electrical synapses can affect the influence of synaptic modulation on output pattern robustness. The Journal of Neuroscience, 33(32):13238–13248, 2013.
  • What governs the conductance of a gap junction? Is it a function of the voltages `v` and `vgap`? Are there references that illustrate the different forms that the current and conductance calculation functions could have?
  • See ref. [1] above, and the paper on which our implementation is based: [2] N. Schweighofer, K. Doya, and M. Kawato. Electrophysiological properties of inferior olive neurons: a compartmental model. Journal of Neurophysiology, 82(2):804–817, 1999.
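For concreteness, below is a minimal Python sketch of the two kinds of non-linear conductance functions mentioned above: a Gaussian dependence on the voltage difference, loosely following the form used in inferior-olive models such as [2], and a rectifying sigmoid in the spirit of [1]. The function names and all parameter values are illustrative assumptions; this is not Arbor API.

```python
import numpy as np

def gap_current_gaussian(v, v_peer, g0=0.5, v0=10.0, frac=0.2):
    """Gap-junction current with Gaussian-modulated conductance:
    g(dv) = g0 * ((1 - frac) * exp(-(dv/v0)**2) + frac).
    Loosely follows the voltage dependence used in IO models [2];
    g0, v0, frac are illustrative values, not fitted parameters."""
    dv = v - v_peer
    g = g0 * ((1.0 - frac) * np.exp(-(dv / v0) ** 2) + frac)
    return g * dv

def gap_current_rectifying(v, v_peer, g0=0.5, v_half=0.0, k=5.0):
    """Rectifying gap junction: conductance is a sigmoid of dv, so
    current passes more easily in one direction (cf. [1]).
    All parameters are illustrative."""
    dv = v - v_peer
    g = g0 / (1.0 + np.exp(-(dv - v_half) / k))
    return g * dv

# Same |dv|, different current magnitudes -> rectification.
print(gap_current_rectifying(-50.0, -60.0))  # dv = +10 mV
print(gap_current_rectifying(-60.0, -50.0))  # dv = -10 mV
```

With the linear rule the two directions are symmetric in magnitude; with the sigmoidal conductance the same |dv| produces currents of different magnitude depending on its sign, which is one way to express rectification.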

Stochastic Sources

  1. We want to be able to generate conductance and current noise correctly and efficiently (Ornstein-Uhlenbeck is only one type). We also want to specify ‘shared components’ of the noise sources: for instance, one compartment can receive input from N noise sources, each with its own coefficient. Arbor's internals matter a great deal here. What are the overheads of this decision? (See the sketch after this list.) @Arbor, @Christos, @Michele
  2. Arbor is probably ready for parallel RNG, but we need to specify the necessary changes for running on multiple GPUs. @Christos, @Arbor
     • If possible, the implementation should use a seeding strategy that is invariant to technical details (multi-node layout, etc.) for a given model. @Christos, @Arbor
  3. Specify the quality of the randomness and how to find correlations. @Michele, @Amo, @Arbor
     • Run the validation protocol automatically. @Michele, @Amo, @Arbor
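To make these requirements concrete, below is a minimal numpy sketch, an illustrative assumption rather than Arbor code, of (a) the exact update rule for an Ornstein-Uhlenbeck process, (b) a compartment mixing N shared noise sources with per-source coefficients, and (c) a seeding scheme that derives each source's stream purely from (model seed, source id), so the realisation does not depend on node/GPU placement.

```python
import numpy as np

def ou_step(x, dt, tau, sigma, xi):
    """Exact one-step update of an Ornstein-Uhlenbeck process
    dx = -x/tau dt + sigma*sqrt(2/tau) dW, given xi ~ N(0, 1)."""
    a = np.exp(-dt / tau)
    return x * a + sigma * np.sqrt(1.0 - a * a) * xi

def source_rng(model_seed, source_id):
    """Independent, placement-invariant stream per noise source,
    derived from (model seed, source id) only, so the realisation
    is identical regardless of which rank/GPU owns the target."""
    return np.random.default_rng([model_seed, source_id])

# N shared sources; one compartment mixes them with coefficients c.
model_seed, n_sources, n_steps = 42, 3, 1000
dt, tau, sigma = 0.025, 5.0, 1.0
c = np.array([0.5, 0.3, 0.2])             # per-source coefficients

rngs = [source_rng(model_seed, i) for i in range(n_sources)]
x = np.zeros(n_sources)
for _ in range(n_steps):
    xi = np.array([r.standard_normal() for r in rngs])
    x = ou_step(x, dt, tau, sigma, xi)
    noise_input = c @ x                   # injected into the compartment
```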

Multi-GPU Port

  • We need to understand Arbor internals to expose the data representation and communication schemes (NEST style won’t do; scatter-gather is more promising).
  • To devise an optimal multi-GPU Arbor solution, we may need to (re)consider architectural aspects of Arbor, which may be beyond our expertise and/or the available time, especially in the case of gap junctions in the rest of the heterogeneous cerebellar network.

Answers:

  • Regarding this comment:

    NEST style won’t do; scatter-gather more promising

    Our current plan for implementing distributed gap junctions is to use waveform relaxation, which is similar to how NEST implements gap junctions. Why has this been ruled out? If scatter-gather means an MPI communication between nodes at every integration step, it will likely hurt performance severely.

I am not an expert here; I would like to invite @Christos to address this. (For context, a sketch of the waveform-relaxation idea follows below.)
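For readers unfamiliar with the method mentioned above, here is a minimal Python sketch, an illustrative assumption rather than Arbor or NEST code, of Jacobi waveform relaxation for two gap-junction-coupled leaky cells: each cell is integrated over a whole communication interval using the peer's voltage waveform from the previous iteration, and the waveforms are exchanged and re-iterated until convergence, so communication happens once per interval rather than at every time step.

```python
import numpy as np

# Two leaky cells coupled by a linear gap junction:
#   C dv_i/dt = -g_l * (v_i - e_l) + g_gap * (v_peer - v_i)
C, g_l, e_l, g_gap = 1.0, 0.1, -65.0, 0.2
dt, n_steps = 0.025, 400            # one communication interval

def integrate(v0, v_peer_wave):
    """Integrate one cell over the interval (explicit Euler), using
    the peer's voltage waveform from the previous iteration."""
    v, out = v0, np.empty(n_steps)
    for k in range(n_steps):
        dv = (-g_l * (v - e_l) + g_gap * (v_peer_wave[k] - v)) / C
        v += dt * dv
        out[k] = v
    return out

v0 = np.array([-70.0, -55.0])
# Initial guess: peer voltage held constant over the interval.
waves = [np.full(n_steps, v0[0]), np.full(n_steps, v0[1])]

for it in range(50):                       # relaxation iterations
    new = [integrate(v0[0], waves[1]),     # each cell sees the
           integrate(v0[1], waves[0])]     # other's previous wave
    err = max(np.max(np.abs(new[i] - waves[i])) for i in (0, 1))
    waves = new
    if err < 1e-9:                         # waveforms converged
        break
print(f"converged after {it + 1} iterations, error {err:.2e}")
```

The trade-off the answer alludes to is visible here: waveform relaxation exchanges whole waveforms a few times per interval, whereas a scatter-gather scheme would exchange voltages at every one of the `n_steps` integration steps.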

Platform Considerations

  • EBRAINS integration: as with the scaffold, we want to run this model in EBRAINS. We need to make this simulation suite available to the community; this is a deliverable. How do we deploy Arbor in EBRAINS with as much access as possible to powerful computational resources?
  • Model development / multi-GPU: can we use Docker to deploy Arbor? And on what (multi-GPU) cluster?

Answers

  • I will recommend a method based on my preferences and things other people have done in the past (Thorsten).
  • You will need access to an HPC installation that ties into EBRAINS and offers access via UNICORE.
  • This means applying for a project with GPU resources at JSC, CSCS, or similar. We do not offer these as a part of EBRAINS by default (i.e. you will need to make a scientific case for being granted access).
  • The best way is likely to go through ICEI.
    • Recommendations at JSC:
      • JUSUF (AMD + V100 + NVMe nodes)
      • JURECA-DC (AMD + A100)
      • JUWELS (GPU partition: Intel + V100; Booster: AMD + A100)
    • At CSCS, Piz Daint comes to mind.
    • Note that the A100 is the much more exciting option. ;)
  • Next, set up a Jupyter Notebook on the EBRAINS wiki collab that drives the HPC cluster via UNICORE. Examples exist; see the sketch after this list.
  • This can also be used to process results.
  • At JSC, Singularity is supported as a container service, and at CSCS Sarus is installed. Docker itself is not really a good option due to the hardware pass-through it requires. However, both options should be compatible with Dockerfiles.
  • We can (at least) help with setting up containers.
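To make the notebook-drives-HPC step concrete, here is a rough sketch using the pyunicore client library. The token, URL, script name, and job-description fields are placeholders, and the exact calls should be checked against the current pyunicore documentation; treat this as an assumption-laden outline, not a tested recipe.

```python
import pyunicore.client as unicore_client

# Placeholders (assumptions): substitute your EBRAINS auth token and
# the UNICORE REST endpoint of the HPC site you have access to.
token = "YOUR_EBRAINS_AUTH_TOKEN"
site_url = "https://unicore.example.org/SITE/rest/core"

transport = unicore_client.Transport(token)
client = unicore_client.Client(transport, site_url)

# Minimal UNICORE job description running a hypothetical Arbor script.
job_description = {
    "Executable": "python3",
    "Arguments": ["run_arbor_model.py"],
    "Resources": {"Nodes": "1", "Runtime": "30m"},
}
job = client.new_job(job_description, inputs=["run_arbor_model.py"])
job.poll()                        # block until the job finishes
print(job.properties["status"])   # outputs live in job.working_dir
```

The same client can then stage result files back into the notebook for post-processing, as mentioned above.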

Risks

  • We promised a cerebellum with an entire IO of 10,000 multicompartment cells (up to 20 compartments each, if using starfish morphologies) and a gap-junction-coupled network, with long simulations (up to 100 s of brain time). This adds to the scaffold.
  • The scaffold has two versions, a fast one and a slow (multicompartmental) one. The slow version currently has a total of 31,683 cells and, among them, a total of 3,394,024 compartments. To deliver on the promise above, we probably need a cerebellum scaffold that uses E-GLIF cells and binds to our inferior olivary multicompartmental cells. To Arbor: can we do this?
    • [Ben] These model sizes are modest.
      • The 10,000-cell multicompartment IO model will fit on a single GPU comfortably, and might not be enough work to fully utilize a GPU from 2016. We would have to aim for something larger (see the back-of-the-envelope arithmetic after this list).
      • Similarly for the slow scaffold: 30k cells will fit on a handful of GPUs, and comfortably on a single GPU available in 2021. I would encourage aiming for much larger models if that is desirable.
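For reference, the arithmetic behind these sizing claims in a short Python snippet; the per-GPU capacity is inferred from Ben's comment above, not from a benchmark:

```python
# Model sizes quoted above.
io_cells, comps_per_cell = 10_000, 20        # starfish IO model
io_comps = io_cells * comps_per_cell         # 200,000 compartments

scaffold_comps = 3_394_024                   # slow scaffold total

# Per the comment above, ~3.4M compartments fit comfortably on a
# single 2021-era GPU, so the IO model alone is a small fraction.
print(io_comps)                              # 200000
print(f"{io_comps / scaffold_comps:.1%}")    # ~5.9%
```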