NEURON
nrncore_write.cpp
#include "nrnconf.h"
// A model built using NEURON is heavyweight in memory usage, and that
// prevents maximizing the number of cells that can be simulated on
// a process. On the other hand, a tiny version of NEURON that contains
// only the cache-efficient structures and minimal-memory arrays
// needed to do a simulation (no interpreter, hoc Objects, Section, etc.)
// lacks the model-building flexibility of NEURON.
// Ideally, the only arrays needed for a tiny-version simulation are those
// enumerated in the NrnThread structure in src/nrnoc/multicore.h up to,
// but not including, the Node** arrays. Also, tiny versions of POINT_PROCESS,
// PreSyn, NetCon, and SUFFIX mechanisms will be stripped down from
// their full NEURON definitions and, it seems certain, many of the
// double fields will be converted to other, less memory-hungry types.
// With this in mind, we envision that NEURON will incrementally construct
// cache-efficient whole-cell structures which can be saved and read with
// minimal processing into the tiny simulator. Note that this is a petabyte
// level of data volume. Consider, for example, 128K cores, each
// preparing model data for several thousand cells using full NEURON, where
// there is not enough space for the simultaneous existence of
// those several thousand cells --- but there is with the tiny version.

// Several assumptions are made with regard to the nrnbbcore_read reader.
// Since memory is filled with cells, whole-cell
// load balance should be adequate, so there is no provision for
// multisplit. A process gets a list of the gids owned by that process
// and allocates the needed
// memory based on size variables for each gid, i.e.
// the number of nodes, the number of instances of each mechanism type, and
// the number of NetCon instances. The offsets are also calculated for where
// each cell's information is to reside in the cache-efficient arrays.
// The rest of the cell information is then copied
// into memory with the proper offsets. Pointers to data used in the old
// NEURON world are converted to integer indices into a common data array.

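// A sketch of that pointer-to-index conversion (illustrative only; `data`
// stands for whichever common cache-efficient array the pointer targets):
//
//   double* p = /* pointer in the old NEURON world */;
//   ptrdiff_t ix = p - data;  // integer index that gets written out
//   double* q = data + ix;    // how the reader reconstitutes the pointer
//
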
// A good deal of conceptual confusion resulted in earlier implementations
// with regard to the ordering of synapses and
// artificial cells with and without gids. The ordering of the property
// data for those is defined by the order in the NrnThread.tml list, where
// every Memb_list.data has an easily found index relative to its 'nodecount'.
// (For artificial cells, since those are not ordered in a cache-efficient
// array, we get the index using int nrncore_art2index(double* param),
// which looks up the index in a hash table.) Earlier implementations
// handled 'artificial cells without gids' specially, which also
// necessitated special handling of their NetCons and disallowed artificial
// cells with gids. We now handle all artificial cells in a thread
// in the same way as any other synapse (the assumption still holds that
// any artificial cell without a gid in a thread can connect only to
// targets in the same thread). Thus, a single NrnThread.synapses now contains
// all synapses and all artificial cells belonging to that thread. All
// the synapses and artificial cells are in NrnThread.tml order, so there
// are no exceptions in filling Point_process pointers from the data indices
// on the coreneuron side. PreSyn ordering is a bit more delicate.
// From netpar.cpp, the gid2out_ hash table defines an output_gid
// ordering and gives us all the PreSyn
// associated with real and artificial cells having gids. But those are
// randomly ordered and interleaved with 'no gid instances'
// relative to the tml ordering.
// Since the number of output PreSyn >= the number of output_gid, it makes
// sense to order the PreSyn in the same way as defined by the tml ordering.
// Thus, even though artificial cells with and without gids are mixed,
// at least it is convenient to fill the PreSyn.psrc field.
// Synapses come first, but the artificial cells with and without gids are
// mixed. The problem that needs to
// be explicitly overcome is associating output gids with the proper PreSyn,
// and that can be done with a list, parallel to the acell part of the
// output_gid list, that specifies the PreSyn list indices (see the sketch
// below).
// Note that allocation of large arrays allows considerable space savings
// by eliminating the overhead involved in allocating many individual
// instances.
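
// Conceptual sketch of that parallel list (illustrative pseudo-C++; the
// names output_ps_index and presyn_index_in_tml_order are hypothetical,
// not the literal implementation):
//
//   for (auto& [gid, ps] : gid2out_) {  // output gids in arbitrary order
//       output_gid.push_back(gid);
//       output_ps_index.push_back(presyn_index_in_tml_order(ps));
//   }
//   // reader: the PreSyn for output_gid[k] is presyn_list[output_ps_index[k]]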
/*
Assumptions regarding the scope of possible models (incomplete list):
All real cells have gids (possibly multiple, but no more than one gid
per PreSyn instance).
Artificial cells without gids connect only to cells in the same thread.
No POINTER to data outside of NrnThread.
No POINTER to data in ARTIFICIAL_CELL (that data is not cache_efficient).
nt->tml->pdata is not cache_efficient.
*/
// See coreneuron/nrniv/nrn_setup.cpp for a description of
// the file format written by this file.

/*
Support direct transfer of the model to a dynamically loaded coreneuron
library. To do this we factored all major file-writing components into a
series of data-returning functions that can be called from the coreneuron
library. The file-writing functionality is kept by also calling those
functions here.
Direct transfer mode disables the error checking which requires every thread
to have a real cell with a gid. Of course, real and artificial cells without
gids do not have spike information in the output raster file. Trajectory
correctness has not been validated for cells without gids.
*/
#include <cstdlib>

#include "section.h"
#include "parse.hpp"
#include "nrnmpi.h"
#include "netcon.h"
#include "nrncvode.h"

#include "vrecitem.h"  // for nrnbbcore_vecplay_write
#include "nrnsection_mapping.h"

#include "nrncore_write.h"
#include "nrncore_write/utils/nrncore_utils.h"
#include "nrncore_write/io/nrncore_io.h"
#include "nrncore_write/callbacks/nrncore_callbacks.h"
#include <map>
#include <sstream>  // std::stringstream in find_datpath_in_arguments

#include "nrnwrap_dlfcn.h"


extern NetCvode* net_cvode_instance;

extern int* nrn_prop_dparam_size_;
int* bbcore_dparam_size;  // cvodeieq not present
extern double t;          // see nrncore_psolve

/* not NULL, need to write gap information */
extern void (*nrnthread_v_transfer_)(NrnThread*);
extern size_t nrnbbcore_gap_write(const char* path, int* group_ids);

extern size_t nrncore_netpar_bytes();
extern short* nrn_is_artificial_;

/** value of neuron.coreneuron.enable as 0, 1 (-1 if error) */
int (*nrnpy_nrncore_enable_value_p_)();
/** value of neuron.coreneuron.file_mode as 0, 1 (-1 if error) */
int (*nrnpy_nrncore_file_mode_value_p_)();

char* (*nrnpy_nrncore_arg_p_)(double tstop);

CellGroup* cellgroups_;
/** mapping information */
NrnMappingInfo mapinfo;


// direct transfer or via files? The latter makes use of group_gid for
// filename construction.
bool corenrn_direct;

// name of coreneuron mpi library to load
std::string corenrn_mpi_library;

struct part1_ret {
    std::size_t rankbytes{};
    neuron::model_sorted_token sorted_token;
};

static part1_ret part1();
static void part2(const char*);

/// dump neuron model to given directory path
size_t write_corenrn_model(const std::string& path) {
    // if writing to disk then in-memory mode is false
    corenrn_direct = false;

    // make sure model is ready to transfer
    model_ready();

    // directory to write model
    create_dir_path(path);

    // calculate size of the model
    auto const rankbytes = part1().rankbytes;

    // mechanism and global variables
    write_memb_mech_types(get_filename(path, "bbcore_mech.dat").c_str());
    write_globals(get_filename(path, "globals.dat").c_str());

    // write main model data
    part2(path.c_str());

    return rankbytes;
}
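
// Illustrative call (hypothetical output directory):
//   size_t nbytes = write_corenrn_model("./corenrn_data");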

// accessible from ParallelContext.total_bytes()
size_t nrncore_write() {
    const std::string& path = get_write_path();
    return write_corenrn_model(path);
}

static part1_ret part1() {
    // Need the NEURON model to be frozen and sorted in order to transfer it to
    // CoreNEURON
    auto sorted_token = nrn_ensure_model_data_are_sorted();

    size_t rankbytes = 0;
    static int bbcore_dparam_size_size = -1;

    // In nrn/test/pynrn, "python -m pytest ." calls this with
    // n_memb_func of 27 and then with 29. I don't see any explicit
    // intervening h.nrn_load_dll in that folder but ...
    if (bbcore_dparam_size_size != n_memb_func) {
        if (bbcore_dparam_size) {
            delete[] bbcore_dparam_size;
        }
        bbcore_dparam_size = new int[n_memb_func];
        bbcore_dparam_size_size = n_memb_func;  // remember the size so we only reallocate on change
    }

    for (int i = 0; i < n_memb_func; ++i) {
        int sz = nrn_prop_dparam_size_[i];
        bbcore_dparam_size[i] = sz;
        const Memb_func& mf = memb_func[i];
        if (mf.dparam_semantics && sz && mf.dparam_semantics[sz - 1] == -3) {
            // cvode_ieq in NEURON but not CoreNEURON
            bbcore_dparam_size[i] = sz - 1;
        }
    }
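    // Worked illustration (hypothetical mechanism): if nrn_prop_dparam_size_[i]
    // is 3 and dparam_semantics[2] == -3, that trailing datum is the cvode_ieq
    // slot, which CoreNEURON does not store, so bbcore_dparam_size[i] becomes 2.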

    cellgroups_ = new CellGroup[nrn_nthread];  // here because following needs mlwithart
    CellGroup::mk_tml_with_art(sorted_token, cellgroups_);

    rankbytes += CellGroup::get_mla_rankbytes(cellgroups_);
    rankbytes += nrncore_netpar_bytes();
    // printf("%d bytes %ld\n", nrnmpi_myid, rankbytes);
    CellGroup::mk_cellgroups(sorted_token, cellgroups_);

    CellGroup::datumtransform(cellgroups_);
    return {rankbytes, std::move(sorted_token)};
}

static void part2(const char* path) {
    CellGroup* cgs = cellgroups_;
    for (int i = 0; i < nrn_nthread; ++i) {
        chkpnt = 0;
        write_nrnthread(path, nrn_threads[i], cgs[i]);
    }

    /** write mapping information */
    if (mapinfo.size()) {
        int gid = cgs[0].group_id;
        nrn_write_mapping_info(path, gid, mapinfo);
        mapinfo.clear();
    }

    if (nrnthread_v_transfer_) {
        // see partrans.cpp. nrn_nthread files of path/icg_gap.dat
        int* group_ids = new int[nrn_nthread];
        for (int i = 0; i < nrn_nthread; ++i) {
            group_ids[i] = cgs[i].group_id;
        }
        nrnbbcore_gap_write(path, group_ids);
        delete[] group_ids;
    }

    // filename data might have to be collected at the hoc level, since
    // pc.nrncore_write may be called many times per rank when the model is
    // built as a series of submodels.
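    // Illustrative hoc usage (hypothetical values; this matches the argument
    // handling below):
    //   pc.nrncore_write(path, cgidvec)  // legacy: interpreter writes files.dat
    //   pc.nrncore_write(path, 1)        // append this rank's dataset ids to files.dat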
    if (ifarg(2) && hoc_is_object_arg(2) && is_vector_arg(2)) {
        // Legacy style. Interpreter collects groupgids and writes files.dat
        Vect* cgidvec = vector_arg(2);
        vector_resize(cgidvec, nrn_nthread);
        double* px = vector_vec(cgidvec);
        for (int i = 0; i < nrn_nthread; ++i) {
            px[i] = double(cgs[i].group_id);
        }
    } else {
        bool append = false;
        if (ifarg(2)) {
            if (hoc_is_double_arg(2)) {
                append = (*getarg(2) != 0);
            } else {
                hoc_execerror("Second arg must be Vector or double.", NULL);
            }
        }
        write_nrnthread_task(path, cgs, append);
    }

    part2_clean();
}


#if defined(HAVE_DLFCN_H)

/** Return neuron.coreneuron.enable */
int nrncore_is_enabled() {
    if (nrnpy_nrncore_enable_value_p_) {
        int result = (*nrnpy_nrncore_enable_value_p_)();
        return result;
    }
    return 0;
}

/** Return value of neuron.coreneuron.file_mode flag */
int nrncore_is_file_mode() {
    if (nrnpy_nrncore_file_mode_value_p_) {
        int result = (*nrnpy_nrncore_file_mode_value_p_)();
        return result;
    }
    return 0;
}

/** Launch CoreNEURON in direct memory mode */
int nrncore_run(const char* arg) {
    // using direct memory mode
    corenrn_direct = true;

    // If the "--skip-write-model-to-disk" argument is passed, the model has
    // already been dumped to disk and we just need to simulate it with
    // CoreNEURON. Avoid checking the NEURON model, passing any data between
    // them, and other bookkeeping actions.
    bool corenrn_skip_write_model_to_disk =
        static_cast<std::string>(arg).find("--skip-write-model-to-disk") != std::string::npos;

    // check that the model can be transferred, unless the
    // "--skip-write-model-to-disk" argument is passed, which means the model
    // has already been dumped to disk
    if (!corenrn_skip_write_model_to_disk) {
        model_ready();
    }

    // get coreneuron library handle
    void* handle = [] {
        try {
            return get_coreneuron_handle();
        } catch (std::runtime_error const& e) {
            hoc_execerror(e.what(), nullptr);
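            // hoc_execerror does not return normally (it longjmps back to the
            // interpreter), so execution never falls off the end of this lambda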
        }
    }();

    // make sure coreneuron & neuron are compatible
    check_coreneuron_compatibility(handle);

    // setup the callback functions between neuron & coreneuron
    map_coreneuron_callbacks(handle);

    // lookup symbol from coreneuron for launching
    using launcher_t = int (*)(int, int, int, int, const char*, const char*, int);
    auto* const coreneuron_launcher = reinterpret_cast<launcher_t>(
        dlsym(handle, "corenrn_embedded_run"));
    if (!coreneuron_launcher) {
        hoc_execerror("Could not get symbol corenrn_embedded_run from the CoreNEURON library",
                      NULL);
    }

    if (nrnmpi_numprocs > 1 && t > 0.0) {
        // In case t was reached by an fadvance on the NEURON side,
        // it may be the case that there are spikes generated on other
        // ranks that have not been enqueued on this rank.
        nrn_spike_exchange(nrn_threads);
    }

    // check that the model can be transferred, unless we only want to run the
    // CoreNEURON simulation with a prebuilt model
    if (!corenrn_skip_write_model_to_disk) {
        // prepare the model; the returned token keeps the NEURON-side copy of
        // the model frozen while it is prepared.
        auto sorted_token = part1().sorted_token;
    }

    int have_gap = nrnthread_v_transfer_ ? 1 : 0;
#if !NRNMPI
#define nrnmpi_use 0
#endif

    // launch coreneuron
    int result = coreneuron_launcher(nrn_nthread,
                                     have_gap,
                                     nrnmpi_use,
                                     nrn_use_fast_imem,
                                     corenrn_mpi_library.c_str(),
                                     arg,
                                     corenrn_skip_write_model_to_disk);

    // close handle and return result
    dlclose(handle);

    // Simulation has finished after calling coreneuron_launcher, so we can
    // return now; with a prebuilt on-disk model there is nothing deferred to
    // clean up.
    if (corenrn_skip_write_model_to_disk) {
        return result;
    }

    // Note: possibly non-empty only if nrn_nthread > 1
    CellGroup::clean_deferred_type2artml();

    // Huge memory waste
    CellGroup::clean_deferred_netcons();

    return result;
}

/** Find the folder set by the --datpath CLI option, where CoreNEURON dumps
 * its data. The option is expected in the form `--datpath <path>`, and the
 * caller must ensure "--datpath" is present in the argument string.
 * All the logic to find the proper folder to dump the coreneuron files in
 * file_mode is tightly coupled with the `coreneuron` Python class.
 */
std::string find_datpath_in_arguments(const std::string& coreneuron_arguments) {
    std::string arg;
    std::stringstream ss(coreneuron_arguments);
    // Split the coreneuron arguments on spaces and look for
    // `--datpath <argument>`
    getline(ss, arg, ' ');
    while (arg != "--datpath") {
        getline(ss, arg, ' ');
    }
    // Read the real path that follows `--datpath`
    getline(ss, arg, ' ');
    return arg;
}
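
// Example (follows from the space-splitting above; hypothetical argument
// string):
//   find_datpath_in_arguments("--datpath /tmp/coredat --tstop 100")
//   returns "/tmp/coredat".
// If "--datpath" were absent, the while-loop above would never terminate,
// so callers must check for its presence first (as nrncore_psolve does).
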
/** Run coreneuron with arg string from neuron.coreneuron.nrncore_arg(tstop)
 * Return 0 on success
 */
int nrncore_psolve(double tstop, int file_mode) {
    if (nrnpy_nrncore_arg_p_) {
        char* args = (*nrnpy_nrncore_arg_p_)(tstop);
        if (args) {
            auto args_as_str = static_cast<std::string>(args);
            // if file mode is requested then write model to a directory
            // note that the CORENRN_DATA_DIR name is also used in the module
            // file coreneuron.py
            auto corenrn_skip_write_model_to_disk =
                args_as_str.find("--skip-write-model-to-disk") != std::string::npos;
            if (file_mode && !corenrn_skip_write_model_to_disk) {
                std::string CORENRN_DATA_DIR = "corenrn_data";
                if (args_as_str.find("--datpath") != std::string::npos) {
                    CORENRN_DATA_DIR = find_datpath_in_arguments(args);
                }
                write_corenrn_model(CORENRN_DATA_DIR);
#if NRNMPI
                if (nrnmpi_numprocs > 1) {
                    nrnmpi_barrier();
                }
#endif
            }
            nrncore_run(args);
            // CoreNEURON has advanced the simulation, so copy nrn_threads[0]._t
            // back to the global t
            t = nrn_threads[0]._t;
            free(args);
            // Really just want to get NetParEvent back onto the queue.
            if (!corenrn_skip_write_model_to_disk) {
                nrn_spike_exchange_init();
            }
            return 0;
        }
    }
    return -1;
}

#else  // !HAVE_DLFCN_H

int nrncore_run(const char*) {
    return -1;
}

int nrncore_is_enabled() {
    return 0;
}

int nrncore_is_file_mode() {
    return 0;
}

int nrncore_psolve(double tstop, int file_mode) {
    return 0;
}

#endif  // !HAVE_DLFCN_H