load_simrun_general
Parse Raw simulation results generated with simrun and write them to a DataBase.
The output format of simrun is Raw simulation results: a nested folder structure with .csv and/or .npz files.
This module provides functions to gather and parse this data into pandas and dask dataframes, merging all trials into a single dataframe.
This saves IO time and disk space, and is strongly recommended for HPC systems and other shared filesystems in general, as it reduces the number of inodes required.
After running init(), a database is created containing
the following keys:
| Key | Description |
|---|---|
| | Filepath to the raw simulation output of simrun. |
| | List containing paths to all original somatic voltage trace files. |
| | The simulation trial indices as a pandas Series. |
| | A metadata dataframe constructed from sim_trial_indices. |
| voltage_traces | Dask dataframe containing the somatic voltage traces. |
| | A … |
| | A … |
| | A pandas dataframe containing the original paths of the parameter files and their hashes. |
| synapse_activation | Dask dataframe containing the parsed synapse activation data. |
| cell_activation | Dask dataframe containing the parsed presynaptic spike times. |
| | Subdatabase containing the membrane voltage at the recording sites specified in the Cell parameters, as a dask dataframe. |
| | Subdatabase containing the spike times at the recording sites specified in the Cell parameters, as a dask dataframe. |
| spike_times | Dask dataframe containing the spike times of the postsynaptic cell for all trials. |
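A minimal initialization sketch is shown below. The exact signature of init() is not reproduced here; the database object, the placeholder variable simresult_path for the raw simrun output folder, and the rewrite_in_optimized_format flag discussed further down are assumptions for illustration.
>>> from data_base.db_initializers import load_simrun_general
>>> # db: an existing, writable DataBase instance (assumed)
>>> # simresult_path: path to the raw simrun output folder (assumed placeholder)
>>> load_simrun_general.init(db, simresult_path, rewrite_in_optimized_format=True)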
After initialization, you can access the data from the database in the following manner:
>>> db['synapse_activation']
<synapse activation dataframe>
>>> db['cell_activation']
<cell activation dataframe>
>>> db['voltage_traces']
<voltage traces dataframe>
>>> db['spike_times']
<spike times dataframe>
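Note that these keys are dask dataframes and are evaluated lazily. Calling the standard dask method compute() materializes a key as an in-memory pandas dataframe, for example:
>>> df = db['spike_times']  # dask dataframe, no data loaded yet
>>> df.compute()            # materialize as an in-memory pandas dataframe
<pandas dataframe of spike times>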
If you initialize the database with rewrite_in_optimized_format=True (default), the keys are written as dask dataframes to whichever format is configured as the optimized format (see config).
If rewrite_in_optimized_format=False instead, these keys are stored as pickled dask dataframes, containing the instructions to build the dataframe rather than the data itself.
This is useful for fast intermediate analysis, but strongly discouraged for long-term storage, since these instructions contain absolute paths to the original data files, which become invalid once those files are moved or deleted.
Individual keys can afterwards be converted to permanent, self-contained, and efficient dask dataframes by calling optimize() on specific database keys, as sketched below.
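As a hedged sketch (continuing the import from the sketch above; the exact call signature of optimize() is an assumption here), this could look like:
>>> # Assumed call pattern: optimize() takes the database and the key to rewrite.
>>> load_simrun_general.optimize(db, 'synapse_activation')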
See also
Raw simulation results for more information on the raw output format of simrun.
See also
init() for the initialization of the database.
Functions

| Function | Description |
|---|---|
| init | Initialize a database with simulation data. |
| | Add dendritic voltage traces to the database. |
| | Add dendritic spike times to the database. |
| optimize | Rewrite existing data with a new dumper. |
| | Load and set up the cell and network from the database. |
Modules

| Module | Description |
|---|---|
| | Pipelines for building database keys containing results from simrun. |
| | Re-optimize a database with a new dumper. |