
init

data_base.db_initializers.load_simrun_general.init(db, simresult_path, core=True, voltage_traces=True, synapse_activation=True, dendritic_voltage_traces=True, parameterfiles=True, spike_times=True, burst_times=False, repartition=True, scheduler=None, rewrite_in_optimized_format=True, dendritic_spike_times=True, dendritic_spike_times_threshold=-30.0, client=None, n_chunks=5000, dumper=None)

Initialize a database with simulation data.

Use this function to load simulation data generated with the simrun module into a DataBase.

Parameters:
  • core (bool, optional) – Parse and write the core data to the database: voltage traces, metadata, sim_trial_index and filelist. See also: _build_core()

  • voltage_traces (bool, optional) – Parse and write the somatic voltage traces to the database.

  • spike_times (bool, optional) – Parse and write the spike times into the database. See also: data_base.analyze.spike_detection.spike_detection()

  • dendritic_voltage_traces (bool, optional) – Parse and write the dendritic voltage traces to the database. See also: _build_dendritic_voltage_traces()

  • dendritic_spike_times (bool, optional) – Parse and write the dendritic spike times to the database. See also: add_dendritic_spike_times()

  • dendritic_spike_times_threshold (float, optional) – Threshold for the dendritic spike times in \(mV\). Default is \(-30 mV\). See also: add_dendritic_spike_times()

  • synapse_activation (bool, optional) – Parse and write the synapse activation data to the database. See also: _build_synapse_activation()

  • parameterfiles (bool, optional) – Parse and write the parameterfiles to the database. See also: _build_param_files()

  • rewrite_in_optimized_format (bool, optional) – If True (default), the data is converted to a high-performance binary format, which makes unpickling more robust against version changes of third-party libraries. It also makes the database self-contained, i.e. you can move it to another machine or subfolder and everything still works; deleting the original data folder then should not cause loss of data. If False, the database only contains links to the original simulation data folder and will not work if that folder is deleted, moved, or transferred to another machine where the same absolute paths are not valid.

  • repartition (bool, optional) – If True, the dask dataframe is repartitioned to 5000 partitions (only if it contains over \(10000\) entries).

  • n_chunks (int, optional) – Number of chunks to split the Synapse activation and Presynaptic spike times dataframes into. Default is 5000.

  • client (dask.distributed.Client, optional) – Distributed Client object for parallel parsing of anything that isn’t a dask dataframe.

  • scheduler (dask.distributed.Client, optional) – Scheduler to use for parallelized parsing of dask dataframes. This can, for example, simply be the distributed.Client.get method. Default is None.

  • dumper (module, optional, deprecated) – Dumper to use for saving pandas dataframes. Default is pandas_to_msgpack. This has been deprecated in favor of a central configuration for the dumpers.

Deprecated since version 0.2.0: The burst_times argument is deprecated and will be removed in a future version.

Deprecated since version 0.5.0: The dumper argument is deprecated and will be removed in a future version. Dumpers are configured in the centralized config module.
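
Example

A minimal usage sketch. The import path for the DataBase class, the file system paths, and the way the dask Client is wired into the client and scheduler arguments are illustrative assumptions; adjust them to your installation:

  from dask.distributed import Client

  from data_base.data_base import DataBase  # assumed import path for the DataBase class
  from data_base.db_initializers import load_simrun_general

  # Hypothetical paths; replace with your database location and simrun output folder.
  db = DataBase('/path/to/my_database')
  simresult_path = '/path/to/simrun_output'

  # Optional distributed client, used for parallel parsing (client / scheduler arguments).
  client = Client()

  load_simrun_general.init(
      db,
      simresult_path,
      core=True,
      voltage_traces=True,
      synapse_activation=True,
      dendritic_voltage_traces=True,
      spike_times=True,
      dendritic_spike_times=True,
      dendritic_spike_times_threshold=-30.0,  # mV
      parameterfiles=True,
      rewrite_in_optimized_format=True,       # self-contained, portable database
      repartition=True,
      n_chunks=5000,
      client=client,
      scheduler=client.get,                   # the Client.get method can act as the scheduler
  )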