bbsavestate.cpp
1 #include <../../nrnconf.h>
2 
3 /*
4 
5 The goal is to be able to save state to a file and restore state when the
6 save and restore contexts have different numbers of processors, different
7 distribution of gids, and different splitting. It needs to work efficiently
8 in the context of many GByte file sizes and many thousands of processors.
9 
10 The major assumption is that Point_process order is the same within a Node.
11 It is difficult to assert the correctness of this assumption unless the user
12 defines a global identifier for each Point_process and modifies the
13 BBSaveState implementation to explicitly check that parameter with
14 f->i(ppgid, 1).
15 This has been relaxed a bit to allow point processes to be marked IGNORE
16 which will skip them on save/restore. However, it still holds that
17 the order of the non-ignored point processes must be the same within a Node.
18 When a property is encountered the type is checked. However there is no
19 way to determine if two point processes of the same type are restored in
20 the saved order in a Node. Also note that when a point process is ignored,
21 that also implies that the NetCons for which it is a target are also
22 ignored. Finally, note that a non-ignored point process that is saved
23 cannot be marked IGNORE on restore. Similarly, if a point process is
24 marked IGNORE on save it cannot be non-ignored on restore.
25 
26 The user can mark a point process IGNORE by calling the method
27 bbss.ignore(point_process_object)
28 on all the point processes to be ignored.
29 The internal list of ignored point processes can be cleared by calling
30 bbss.ignore()
31 
32 Because a restore clears the event queue and because one cannot call
33 finitialize from hoc without vitiating the restore, Vector.play will
34 not work unless one calls BBSaveState.vector_play_init() after a
35 restore (similarly, frecord() must be called for Vector.record to work).
36 Note that it is necessary that Vector.play use a tvec argument with
37 a first element greater than or equal to the restore time.
38 
39 Because of many unimplemented aspects of the general problem,
40 this implementation is more or less limited to BlueBrain cortical models.
41 There are also conceptual difficulties with the general problem since
42 necessary information is embodied in the Hoc programs.
43 
44 Some model restrictions:
45 1) "Real" cells must have gids.
46 2) Artificial cells can have gids. If not they must be directly connected
47  to just one synapse (e.g. NetStim -> NetCon -> synapse).
48 3) There is only one spike output port per cell and that is associated
49  with a base gid.
50 4) NetCon.event in Hoc can be used only with NetCon's with a None source.
51 
52 Note: on reading, only the gids and sections that are needed must exist
53 in the file; the others are ignored. So a subset of a written model can
54 read the same file. However, again, all the Point_process objects that
55 were written to the file for each section have to exist (unless they
56 are explicitly marked IGNORE).
57 
58 We keep together, all the information associated with a section which includes
59 Point_processes and the NetCons, and SelfEvents targeting each Point_process.
60 Sometimes a NetStim stimulus is used to stimulate the Point_process.
61 For those cases, when the NetCon has no srcid, the source is also associated
62 with the Point_process.
63 
64 Originally, NetCon with an ARTIFICIAL_CELL source were ignored and did
65 not appear in the DEList of the point process. This allowed one to have
66 different numbers of NetStim with NetCon connected locally before a save
67 and restore. Note that outstanding events from these NetCon on the
68 queue were not saved. PreSyn with state are associated with the cell
69 (gid) that contains them. We required PreSyn to be associated with gids.
70 Although that convention had some nice properties (albeit, it involved some
71 extra programming complexity) it has been abandoned because in practice,
72 stimuli such as InhPoissonStim are complex and it is desirable that they
73 continue past a save/restore.
74 
75 We share more or less the same code for counting, writing, and reading by
76 using subclasses of BBSS_IO.
77 
78 The implementation is based on what was learned from a prototype
79 bbsavestate python module which turned out to be too slow. In particular,
80 the association of Point_Process and its list of NetCons and queue SelfEvents
81 needs to be kept in a hash map. NetCon order in the list is assumed to
82 be consistent for writing and reading but srcgids are checked for consistency
83 on reading. (Note that this is not a guarantee of correctness since it is possible
84 for two NetCons to have the same source and the same target. However the
85 Blue Brain project models at present have a one-to-one correspondence between
86 NetCon and synapse Point_process so even the idea of a list is overdoing it.)
87 
88 We assume only NetCon and SelfEvent queue events need to be saved. Others
89 that are under user control have to be managed by the user since reading
90 will clear the queue (except for an initialized NetParEvent). In particular
91 all cvode.event need to be re-issued. One efficiency we realize is to avoid
92 saving the queue with regard to NetCon events and instead save the
93 gid, spiketime information and re-issue the NetCon.send appropriately if not
94 already delivered.
95 
96 On further thought, the last sentence above needs some clarification.
97 Because of the variation in NetCon.delay, in general some spikes
98 for a given PreSyn have already been delivered and some spikes are
99 still on the queue. Nevertheless, it is only necessary to issue
100 the PreSyn.send with its proper initiation time on the source machine
101 and then do a spike exchange. On the target machine only the
102 NetCon spikes with delivery time >= t need to be re-issued from the
103 PreSyn.send. For this reason all the global NetCon events are
104 resolved into a set of source PreSyn events and associated with the
105 cell portion containing the spike generation site. This is very similar
106 to the problem solved by nrn_fake_fire when helping implement PatternStim
107 and to extend that for use here we only needed an extra mode
108 (fake_out = 2) with a one line change.
109 On the other hand, all the SelfEvents are associated with the cell
110 portion containing the target synapse.
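The delivery-time rule described above (re-issue from the PreSyn and keep only NetCon deliveries at or after the restore time) can be sketched in isolation. This is a toy illustration with hypothetical names, not the NEURON implementation:

```cpp
#include <vector>

// Hypothetical sketch: a PreSyn spike at time `tspike` fans out to NetCons
// with individual delays. On restore at time `trestore`, only deliveries
// that are still in the future are re-issued; already-delivered events
// (delivery time < trestore) are skipped.
std::vector<double> reissue_deliveries(double tspike,
                                       const std::vector<double>& delays,
                                       double trestore) {
    std::vector<double> out;
    for (double d: delays) {
        double tdeliver = tspike + d;  // delivery time for this NetCon
        if (tdeliver >= trestore) {    // keep only undelivered events
            out.push_back(tdeliver);
        }
    }
    return out;
}
```
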
111 
112 So, associated with a Section are the point processes and PreSyn, and
113 associated with point processes are NetCon and SelfEvent.
114 
115 To allow Random state to be saved for POINT_PROCESS, if a POINT_PROCESS
116 declares FUNCTION bbsavestate, that function is called when the
117 POINT_PROCESS instance is saved and restored.
118 Also now allow Random state to be saved for a SUFFIX (density mechanism)
119 if it declares FUNCTION bbsavestate. Same interface.
120 The API of FUNCTION bbsavestate has been modified to allow saving/restoring
121 several values. FUNCTION bbsavestate takes two pointers to double arrays,
122 xdir and xval.
123 The first double array, xdir, has length 1 and xdir[0] is -1.0, 0.0, or 1.0
124 If xdir[0] == -1.0, then replace the xdir[0] with the proper number of elements
125 of xval and return 0.0. If xdir[0] == 0.0, then save double values into
126 the xval array (which will be sized correctly from a previous call with
127 xdir[0] == -1.0). If xdir[0] == 1.0, then the saved double values are in
128 xval and should be restored to their original values.
129 The number of elements saved/restored has to be known a priori by the instance
130 since the size of the xval that was saved is not sent to the instance on
131 restore.
132 
133 For example
134  FUNCTION bbsavestate() {
135  bbsavestate = 0
136  VERBATIM
137  double *xdir, *xval; // , *hoc_pgetarg();
138  xdir = hoc_pgetarg(1);
139  if (*xdir == -1.) { *xdir = 2; return 0.0; }
140  xval = hoc_pgetarg(2);
141  if (*xdir == 0.) { xval[0] = 20.; xval[1] = 21.;}
142  if (*xdir == 1.) { printf("%d %d\n", xval[0]==20.0, xval[1] == 21.0); }
143  ENDVERBATIM
144  }
145 
146 When spike compression is used, there are only 256 time slots available
147 for spike exchange time within an integration interval. However, during
148 restore, we are sending the gid spike initiation time which can be
149 earlier than the current integration interval (and therefore may have
150 a negative slot index).
151 To avoid this problem, we must temporarily turn off both
152 spike and gid compression so that PreSyn::send steers to
153 nrn2ncs_outputevent(output_index_, tt) instead of nrn_outputevent(localgid,tt)
154 and so nrn_spike_exchange does a non-compressed exchange.
155 We do not need to worry about bin queueing since it is the delivery time that
156 is enqueued and that is always in the future.
157 
158 When bin queueing is used, a mechanism is needed to avoid the assertion
159 error in BinQ::enqueue (see nrncvode/tqueue.cpp) when the enqueued event
160 has a delivery time earlier than the binq current time. One possibility
161 is to turn off bin queueing and force all events on the standard queue
162 to be on binq boundaries. Another possibility is for bbsavestate to
163 take over bin queueing in this situation. The first possibility is
164 simpler but entails a slight loss of performance when dealing with the
165 normally binned events on the standard queue when simulation takes up again.
166 Let's try trapping the assertion error in BinQ::enqueue and executing a
167 callback to bbss_early when needed.
168 */
169 
170 #include "bbsavestate.h"
171 #include "cabcode.h"
172 #include "classreg.h"
173 #include "nrncvode.h"
174 #include "nrnoc2iv.h"
175 #include "nrnran123.h"
176 #include "ocfile.h"
177 #include <cmath>
178 #include <nrnmpiuse.h>
179 #include <stdio.h>
180 #include <stdlib.h>
181 #include <string>
182 #include <sys/stat.h>
183 #include <unordered_map>
184 #include <unordered_set>
185 
186 #include "netcon.h"
187 #include "nrniv_mf.h"
188 #include "tqueue.hpp"
189 #include "vrecitem.h"
190 
191 // on mingw, OUT became defined
192 #undef IN
193 #undef OUT
194 #undef CNT
195 
196 extern bool nrn_use_bin_queue_;
197 extern void (*nrn_binq_enqueue_error_handler)(double, TQItem*);
198 static void bbss_early(double td, TQItem* tq);
199 
200 typedef void (*ReceiveFunc)(Point_process*, double*, double);
201 
202 #include "membfunc.h"
203 extern int section_count;
204 extern "C" void nrn_shape_update();
205 extern Section** secorder;
206 extern ReceiveFunc* pnt_receive;
210 extern void nrn_netcon_event(NetCon*, double);
211 extern double t;
212 typedef void (*PFIO)(int, Object*);
213 extern void nrn_gidout_iter(PFIO);
214 extern short* nrn_is_artificial_;
215 extern Object* nrn_gid2obj(int gid);
216 extern PreSyn* nrn_gid2presyn(int gid);
217 extern int nrn_gid_exists(int gid);
218 
219 #if NRNMPI
220 extern void nrnmpi_barrier();
221 extern void nrnmpi_int_alltoallv(const int*, const int*, const int*, int*, int*, int*);
222 extern void nrnmpi_dbl_alltoallv(const double*, const int*, const int*, double*, int*, int*);
223 extern int nrnmpi_int_allmax(int);
224 extern void nrnmpi_int_allgather(int* s, int* r, int n);
225 extern void nrnmpi_int_allgatherv(int* s, int* r, int* n, int* dspl);
226 extern void nrnmpi_dbl_allgatherv(double* s, double* r, int* n, int* dspl);
227 #else
228 static void nrnmpi_barrier() {}
229 static void nrnmpi_int_alltoallv(const int* s,
230  const int* scnt,
231  const int* sdispl,
232  int* r,
233  int* rcnt,
234  int* rdispl) {
235  for (int i = 0; i < scnt[0]; ++i) {
236  r[i] = s[i];
237  }
238 }
239 static void nrnmpi_dbl_alltoallv(const double* s,
240  const int* scnt,
241  const int* sdispl,
242  double* r,
243  int* rcnt,
244  int* rdispl) {
245  for (int i = 0; i < scnt[0]; ++i) {
246  r[i] = s[i];
247  }
248 }
249 static int nrnmpi_int_allmax(int x) {
250  return x;
251 }
252 static void nrnmpi_int_allgather(int* s, int* r, int n) {
253  for (int i = 0; i < n; ++i) {
254  r[i] = s[i];
255  }
256 }
257 static void nrnmpi_int_allgatherv(int* s, int* r, int* n, int* dspl) {
258  for (int i = 0; i < n[0]; ++i) {
259  r[i] = s[i];
260  }
261 }
262 static void nrnmpi_dbl_allgatherv(double* s, double* r, int* n, int* dspl) {
263  for (int i = 0; i < n[0]; ++i) {
264  r[i] = s[i];
265  }
266 }
267 #endif // NRNMPI
268 
269 #if NRNMPI
270 extern bool use_multisend_;
271 #endif
272 
273 extern void nrn_play_init();
275 
276 // turn off compression to avoid problems with presyn deliver earlier than
277 // restore time.
278 #if NRNMPI
279 extern bool nrn_use_compress_;
280 extern bool nrn_use_localgid_;
281 #endif
282 static bool use_spikecompress_;
283 static bool use_gidcompress_;
284 
285 static int callback_mode;
286 static void tqcallback(const TQItem* tq, int i);
287 typedef std::vector<TQItem*> TQItemList;
288 static TQItemList* tq_presyn_fanout;
289 static TQItemList* tq_removal_list;
290 
291 #define QUEUECHECK 1
292 #if QUEUECHECK
293 static void bbss_queuecheck();
294 #endif
295 
296 // 0 no debug, 1 print to stdout, 2 read/write to IO file
297 #define DEBUG 0
298 static int debug = DEBUG;
299 static char dbuf[1024];
300 #if DEBUG == 1
301 #define PDEBUG printf("%s\n", dbuf)
302 #else
303 #define PDEBUG f->s(dbuf, 1)
304 #endif
305 
307 
309 
310 static int usebin_; // 1 if using Buffer, 0 if using TxtFile
311 
312 class BBSS_Cnt: public BBSS_IO {
313  public:
314  BBSS_Cnt();
315  virtual ~BBSS_Cnt();
316  virtual void i(int& j, int chk = 0) override;
317  virtual void d(int n, double& p) override;
318  virtual void d(int n, double* p) override;
319  virtual void d(int n, double** p) override;
320  virtual void d(int n, neuron::container::data_handle<double> h) override;
321  virtual void s(char* cp, int chk = 0) override;
322  virtual Type type() override;
323  int bytecnt();
324  int ni, nd, ns, nl;
325 
326  private:
327  int bytecntbin();
328  int bytecntasc();
329 };
330 BBSS_Cnt::BBSS_Cnt() {
331  ni = nd = ns = nl = 0;
332 }
333 BBSS_Cnt::~BBSS_Cnt() {}
334 void BBSS_Cnt::i(int& j, int chk) {
335  ++ni;
336  ++nl;
337 }
338 void BBSS_Cnt::d(int n, double& p) {
339  nd += n;
340  ++nl;
341 }
342 void BBSS_Cnt::d(int n, double* p) {
343  nd += n;
344  ++nl;
345 }
346 void BBSS_Cnt::d(int n, double**) {
347  nd += n;
348  ++nl;
349 }
350 void BBSS_Cnt::d(int n, neuron::container::data_handle<double> h) {
351  nd += n;
352  ++nl;
353 }
354 void BBSS_Cnt::s(char* cp, int chk) {
355  ns += strlen(cp) + 1;
356 }
357 BBSS_IO::Type BBSS_Cnt::type() {
358  return BBSS_IO::CNT;
359 }
360 int BBSS_Cnt::bytecnt() {
361  return usebin_ ? bytecntbin() : bytecntasc();
362 }
363 int BBSS_Cnt::bytecntbin() {
364  return ni * sizeof(int) + nd * sizeof(double) + ns;
365 }
366 int BBSS_Cnt::bytecntasc() {
367  return ni * 12 + nd * 23 + ns + nl;
368 }
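The ASCII estimate above (12 bytes per int, 23 per double, plus the string bytes and one newline per call) matches the fixed-width formats BBSS_TxtFileOut uses: "%12d" for ints and " %22.15g" for doubles. A standalone check of those field widths (a sketch, independent of the classes here; values wider than the field would lengthen the output):

```cpp
#include <cstdio>
#include <cstring>

// Width of an int printed with the "%12d" format used for ints.
int int_field_width(int v) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%12d", v);
    return (int) std::strlen(buf);
}

// Width of a double printed with the " %22.15g" format used for doubles.
int dbl_field_width(double v) {
    char buf[64];
    std::snprintf(buf, sizeof(buf), " %22.15g", v);
    return (int) std::strlen(buf);
}
```
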
369 
370 class BBSS_TxtFileOut: public BBSS_IO {
371  public:
372  BBSS_TxtFileOut(const char*);
373  virtual ~BBSS_TxtFileOut();
374  virtual void i(int& j, int chk = 0) override;
375  virtual void d(int n, double& p) override;
376  virtual void d(int n, double* p) override;
377  virtual void d(int n, double** p) override;
378  virtual void d(int n, neuron::container::data_handle<double> h) override;
379  virtual void s(char* cp, int chk = 0) override;
380  virtual Type type() override;
381  FILE* f;
382 };
383 BBSS_TxtFileOut::BBSS_TxtFileOut(const char* fname) {
384  f = fopen(fname, "w");
385  assert(f);
386 }
387 BBSS_TxtFileOut::~BBSS_TxtFileOut() {
388  fclose(f);
389 }
390 void BBSS_TxtFileOut::i(int& j, int chk) {
391  fprintf(f, "%12d\n", j);
392 }
393 void BBSS_TxtFileOut::d(int n, double& p) {
394  d(n, &p);
395 }
396 void BBSS_TxtFileOut::d(int n, double* p) {
397  for (int i = 0; i < n; ++i) {
398  fprintf(f, " %22.15g", p[i]);
399  }
400  fprintf(f, "\n");
401 }
402 void BBSS_TxtFileOut::d(int n, double** p) {
403  for (int i = 0; i < n; ++i) {
404  fprintf(f, " %22.15g", *p[i]);
405  }
406  fprintf(f, "\n");
407 }
408 void BBSS_TxtFileOut::d(int n, neuron::container::data_handle<double> h) {
409  assert(n == 1); // Cannot read n values "starting at" a data handle
410  assert(h);
411  fprintf(f, " %22.15g\n", *h);
412 }
413 void BBSS_TxtFileOut::s(char* cp, int chk) {
414  fprintf(f, "%s\n", cp);
415 }
416 BBSS_IO::Type BBSS_TxtFileOut::type() {
417  return BBSS_IO::OUT;
418 }
419 
420 class BBSS_TxtFileIn: public BBSS_IO {
421  public:
422  BBSS_TxtFileIn(const char*);
423  virtual ~BBSS_TxtFileIn();
424  virtual void i(int& j, int chk = 0) override;
425  virtual void d(int n, double& p) override {
426  d(n, &p);
427  }
428  virtual void d(int n, double* p) override;
429  virtual void d(int n, double** p) override;
430  virtual void d(int n, neuron::container::data_handle<double> h) override;
431  virtual void s(char* cp, int chk = 0) override;
432  virtual Type type() override {
433  return BBSS_IO::IN;
434  }
435  virtual void skip(int) override;
436  FILE* f;
437 };
438 BBSS_TxtFileIn::BBSS_TxtFileIn(const char* fname) {
439  f = fopen(fname, "r");
440  if (!f) {
441  hoc_execerr_ext("Could not open %s", fname);
442  }
443 }
444 BBSS_TxtFileIn::~BBSS_TxtFileIn() {
445  fclose(f);
446 }
447 void BBSS_TxtFileIn::i(int& j, int chk) {
448  int k;
449  int rval = fscanf(f, "%d\n", &k);
450  assert(rval == 1);
451  if (chk) {
452  assert(j == k);
453  }
454  j = k;
455 }
456 void BBSS_TxtFileIn::d(int n, double* p) {
457  for (int i = 0; i < n; ++i) {
458  nrn_assert(fscanf(f, " %lf", p + i) == 1);
459  }
460  nrn_assert(fscanf(f, "\n") == 0);
461 }
462 void BBSS_TxtFileIn::d(int n, double** p) {
463  for (int i = 0; i < n; ++i) {
464  nrn_assert(fscanf(f, " %lf", p[i]) == 1);
465  }
466  nrn_assert(fscanf(f, "\n") == 0);
467 }
468 void BBSS_TxtFileIn::d(int n, neuron::container::data_handle<double> h) {
469  assert(n == 1);
470  assert(h);
471  double v{};
472  nrn_assert(fscanf(f, " %lf\n", &v) == 1);
473  *h = v;
474 }
475 void BBSS_TxtFileIn::s(char* cp, int chk) {
476  char buf[100];
477  nrn_assert(fscanf(f, "%[^\n]\n", buf) == 1);
478  if (chk) {
479  assert(strcmp(buf, cp) == 0);
480  }
481  strcpy(cp, buf);
482 }
483 void BBSS_TxtFileIn::skip(int n) {
484  for (int i = 0; i < n; ++i) {
485  fgetc(f);
486  }
487 }
488 
489 class BBSS_BufferOut: public BBSS_IO {
490  public:
491  BBSS_BufferOut(char* buffer, int size);
492  virtual ~BBSS_BufferOut();
493  virtual void i(int& j, int chk = 0) override;
494  virtual void d(int n, double& p) override;
495  virtual void d(int n, double* p) override;
496  virtual void d(int n, double** p) override;
497  virtual void d(int n, neuron::container::data_handle<double> h) override;
498  virtual void s(char* cp, int chk = 0) override;
499  virtual Type type() override;
500  virtual void a(int);
501  virtual void cpy(int size, char* cp);
502  int sz;
503  char* b;
504  char* p;
505 };
506 BBSS_BufferOut::BBSS_BufferOut(char* buffer, int size) {
507  b = buffer;
508  p = b;
509  sz = size;
510 }
511 BBSS_BufferOut::~BBSS_BufferOut() {}
512 void BBSS_BufferOut::i(int& j, int chk) {
513  cpy(sizeof(int), (char*) (&j));
514 }
515 void BBSS_BufferOut::d(int n, double& d) {
516  cpy(sizeof(double), (char*) (&d));
517 }
518 void BBSS_BufferOut::d(int n, double* d) {
519  cpy(n * sizeof(double), (char*) d);
520 }
521 void BBSS_BufferOut::d(int n, double** d) {
522  for (auto i = 0; i < n; ++i) {
523  cpy(sizeof(double), reinterpret_cast<char*>(d[i]));
524  }
525 }
526 void BBSS_BufferOut::d(int n, neuron::container::data_handle<double> h) {
527  assert(n == 1);
528  cpy(sizeof(double), reinterpret_cast<char*>(static_cast<double*>(h)));
529 }
530 void BBSS_BufferOut::s(char* cp, int chk) {
531  cpy(strlen(cp) + 1, cp);
532 }
533 void BBSS_BufferOut::a(int i) {
534  assert((p - b) + i <= sz);
535 }
536 BBSS_IO::Type BBSS_BufferOut::type() {
537  return BBSS_IO::OUT;
538 }
539 void BBSS_BufferOut::cpy(int ns, char* cp) {
540  a(ns);
541  for (int ii = 0; ii < ns; ++ii) {
542  p[ii] = cp[ii];
543  }
544  p += ns;
545 }
546 class BBSS_BufferIn: public BBSS_BufferOut {
547  public:
548  BBSS_BufferIn(char* buffer, int size);
549  virtual ~BBSS_BufferIn();
550  virtual void i(int& j, int chk = 0);
551  virtual void s(char* cp, int chk = 0);
552  virtual void skip(int n) {
553  p += n;
554  }
555  virtual Type type();
556  virtual void cpy(int size, char* cp);
557 };
558 BBSS_BufferIn::BBSS_BufferIn(char* buffer, int size)
559  : BBSS_BufferOut(buffer, size) {}
560 BBSS_BufferIn::~BBSS_BufferIn() {}
561 void BBSS_BufferIn::i(int& j, int chk) {
562  int k;
563  cpy(sizeof(int), (char*) (&k));
564  if (chk) {
565  assert(j == k);
566  }
567  j = k;
568 }
569 void BBSS_BufferIn::s(char* cp, int chk) {
570  if (chk) {
571  assert(strcmp(p, cp) == 0);
572  }
573  cpy(strlen(p) + 1, cp);
574 }
575 BBSS_IO::Type BBSS_BufferIn::type() {
576  return BBSS_IO::IN;
577 }
578 void BBSS_BufferIn::cpy(int ns, char* cp) {
579  a(ns);
580  for (int ii = 0; ii < ns; ++ii) {
581  cp[ii] = p[ii];
582  }
583  p += ns;
584 }
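BBSS_BufferOut and BBSS_BufferIn form a plain byte-copy serializer: a cursor p walks the flat buffer b, and reads must replay the writes in the same order. A simplified, self-contained round-trip sketch of the same idea (not the classes above):

```cpp
#include <cassert>
#include <cstring>

// Simplified mirror of BBSS_BufferOut: cursor p advances through buffer b.
struct ByteWriter {
    char* b; char* p; int sz;
    ByteWriter(char* buf, int size): b(buf), p(buf), sz(size) {}
    void cpy(int ns, const char* cp) {
        assert((p - b) + ns <= sz);  // same bounds check as BBSS_BufferOut::a
        std::memcpy(p, cp, ns);
        p += ns;
    }
    void i(int j) { cpy(sizeof(int), (const char*) &j); }
    void d(double x) { cpy(sizeof(double), (const char*) &x); }
};

// Simplified mirror of BBSS_BufferIn: replay the same sequence of copies.
struct ByteReader {
    const char* b; const char* p; int sz;
    ByteReader(const char* buf, int size): b(buf), p(buf), sz(size) {}
    void cpy(int ns, char* cp) {
        assert((p - b) + ns <= sz);
        std::memcpy(cp, p, ns);
        p += ns;
    }
    int i() { int j; cpy(sizeof(int), (char*) &j); return j; }
    double d() { double x; cpy(sizeof(double), (char*) &x); return x; }
};
```
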
585 
586 static void* cons(Object*) {
587  BBSaveState* ss = new BBSaveState();
588  return (void*) ss;
589 }
590 
591 static void destruct(void* v) {
592  BBSaveState* ss = (BBSaveState*) v;
593  delete ss;
594 }
595 
596 static double save(void* v) {
597  usebin_ = 0;
598  BBSaveState* ss = (BBSaveState*) v;
599  BBSS_IO* io = new BBSS_TxtFileOut(gargstr(1));
600  io->d(1, nrn_threads->_t); // save time
601  ss->apply(io);
602  delete io;
603  return 1.;
604 }
605 
606 static void bbss_restore_begin() {
608 
609  // turn off compression. Will turn back on in bbss_restore_done.
610 #if NRNMPI
611  use_spikecompress_ = nrn_use_compress_;
612  use_gidcompress_ = nrn_use_localgid_;
613  nrn_use_compress_ = false;
614  nrn_use_localgid_ = false;
615 #endif
616 
617  if (nrn_use_bin_queue_) {
618  // Start the BinQ with the same time it had at save time.
620  tq->shift_bin(nrn_threads->_t - 0.5 * nrn_threads->_dt);
622  }
623 }
624 
625 static double restore(void* v) {
626  usebin_ = 0;
627  BBSaveState* ss = (BBSaveState*) v;
628  BBSS_IO* io = new BBSS_TxtFileIn(gargstr(1));
629  io->d(1, t);
630  nrn_threads->_t = t; // single thread assumption here
631 
632  bbss_restore_begin();
633  ss->apply(io);
634  delete io;
636  return 1.;
637 }
638 
639 #include <ivocvect.h>
640 static double save_request(void* v) {
641  int *gids, *sizes;
642  Vect* gidvec = vector_arg(1);
643  Vect* sizevec = vector_arg(2);
644  BBSaveState* ss = (BBSaveState*) v;
645  int len = ss->counts(&gids, &sizes);
646  gidvec->resize(len);
647  sizevec->resize(len);
648  for (int i = 0; i < len; ++i) {
649  gidvec->elem(i) = double(gids[i]);
650  sizevec->elem(i) = double(sizes[i]);
651  }
652  if (len) {
653  free(gids);
654  free(sizes);
655  }
656  return double(len);
657 }
658 
659 static double save_gid(void* v) {
660  printf("save_gid not implemented\n");
661  return 0.;
662 }
663 
664 static double restore_gid(void* v) {
665  printf("restore_gid not implemented\n");
666  return 0.;
667 }
668 
669 static void pycell_name2sec_maps_clear();
670 
671 static double save_test(void* v) {
672  int *gids, *sizes;
673  BBSaveState* ss = (BBSaveState*) v;
674  usebin_ = 0;
675  if (nrnmpi_myid == 0) { // save global time
676 #ifdef MINGW
677  mkdir("bbss_out");
678 #else
679  mkdir("bbss_out", 0770);
680 #endif
681  BBSS_IO* io = new BBSS_TxtFileOut("bbss_out/tmp");
682  io->d(1, nrn_threads->_t);
683  delete io;
684  }
685  nrnmpi_barrier();
686 
687  int len = ss->counts(&gids, &sizes);
688  for (int i = 0; i < len; ++i) {
689  char fn[200];
690  Sprintf(fn, "bbss_out/tmp.%d.%d", gids[i], nrnmpi_myid);
691  BBSS_IO* io = new BBSS_TxtFileOut(fn);
692  ss->f = io;
693  ss->gidobj(gids[i]);
694  delete io;
695  }
696  if (len) {
697  free(gids);
698  free(sizes);
699  }
700  return 0.;
701 }
702 
703 static double save_test_bin(void* v) { // only for whole cells
704  int len, *gids, *sizes, global_size;
705  char* buf;
706  char fname[100];
707  FILE* f;
708  usebin_ = 1;
709  void* ref = bbss_buffer_counts(&len, &gids, &sizes, &global_size);
710  if (nrnmpi_myid == 0) { // save global time
711  buf = new char[global_size];
712  bbss_save_global(ref, buf, global_size);
713  Sprintf(fname, "binbufout/global.%d", global_size);
714  nrn_assert(f = fopen(fname, "w"));
715  fwrite(buf, sizeof(char), global_size, f);
716  fclose(f);
717  delete[] buf;
718 
719  Sprintf(fname, "binbufout/global.size");
720  nrn_assert(f = fopen(fname, "w"));
721  fprintf(f, "%d\n", global_size);
722  fclose(f);
723  }
724  for (int i = 0; i < len; ++i) {
725  buf = new char[sizes[i]];
726  bbss_save(ref, gids[i], buf, sizes[i]);
727  Sprintf(fname, "binbufout/%d.%d", gids[i], sizes[i]);
728  nrn_assert(f = fopen(fname, "w"));
729  fwrite(buf, sizeof(char), sizes[i], f);
730  fclose(f);
731  delete[] buf;
732 
733  Sprintf(fname, "binbufout/%d.size", gids[i]);
734  nrn_assert(f = fopen(fname, "w"));
735  fprintf(f, "%d\n", sizes[i]);
736  fclose(f);
737  }
738  if (len) {
739  free(gids);
740  free(sizes);
741  }
743  return 0.;
744 }
745 
746 typedef std::unordered_map<Point_process*, int> PointProcessMap;
747 static std::unique_ptr<PointProcessMap> pp_ignore_map;
748 
749 static double ppignore(void* v) {
750  if (ifarg(1)) {
751  Point_process* pp = ob2pntproc(*hoc_objgetarg(1));
752  if (!pp_ignore_map) {
753  pp_ignore_map.reset(new PointProcessMap());
754  pp_ignore_map->reserve(100);
755  }
756  (*pp_ignore_map)[pp] = 0; // naive set instead of map
757  } else if (pp_ignore_map) { // clear
758  pp_ignore_map.reset();
759  }
760  return 0.;
761 }
762 
763 static int ignored(Prop* p) {
764  if (pp_ignore_map) {
765  auto* pp = p->dparam[1].get<Point_process*>();
766  if (pp_ignore_map->count(pp) > 0) {
767  return 1;
768  }
769  }
770  return 0;
771 }
772 
773 void* bbss_buffer_counts(int* len, int** gids, int** sizes, int* global_size) {
774  usebin_ = 1;
775  BBSaveState* ss = new BBSaveState();
776  *global_size = 0;
777  if (nrnmpi_myid == 0) { // save global time
778  BBSS_Cnt io{};
779  io.d(1, nrn_threads->_t);
780  *global_size = io.bytecnt();
781  }
782  *len = ss->counts(gids, sizes);
783  return ss;
784 }
785 void bbss_save_global(void* bbss, char* buffer,
786  int sz) { // call only on host 0
787  usebin_ = 1;
788  BBSS_BufferOut io(buffer, sz);
789  io.d(1, nrn_threads->_t);
790 }
791 void bbss_restore_global(void* bbss, char* buffer,
792  int sz) { // call on all hosts
793  usebin_ = 1;
794  BBSS_BufferIn io(buffer, sz);
795  io.d(1, nrn_threads->_t);
796  t = nrn_threads->_t;
798 }
799 void bbss_save(void* bbss, int gid, char* buffer, int sz) {
800  usebin_ = 1;
801  BBSaveState* ss = (BBSaveState*) bbss;
802  BBSS_IO* io = new BBSS_BufferOut(buffer, sz);
803  ss->f = io;
804  ss->gidobj(gid);
805  delete io;
806 }
807 void bbss_restore(void* bbss, int gid, int ngroup, char* buffer, int sz) {
808  usebin_ = 1;
809  BBSaveState* ss = (BBSaveState*) bbss;
810  BBSS_IO* io = new BBSS_BufferIn(buffer, sz);
811  ss->f = io;
812  for (int i = 0; i < ngroup; ++i) {
813  ss->gidobj(gid);
814  t = nrn_threads->_t;
815  }
816  delete io;
817 }
818 void bbss_save_done(void* bbss) {
819  BBSaveState* ss = (BBSaveState*) bbss;
820  delete ss;
821 }
822 
823 static void bbss_remove_delivered() {
825 
826  // PreSyn and NetCon spikes are on the queue. To determine the spikes
827  // that have already been delivered the PreSyn items that have
828  // NetCon delivery times < t need to get fanned out to NetCon items
829  // on the queue before checking the times.
830  tq_presyn_fanout = new TQItemList();
831  callback_mode = 2;
833  for (TQItem* qi: *tq_presyn_fanout) {
834  double td = qi->t_;
835  PreSyn* ps = (PreSyn*) qi->data_;
836  tq->remove(qi);
838  }
839  delete tq_presyn_fanout;
840 
841  // now everything on the queue which is subject to removal is a NetCon
842  tq_removal_list = new TQItemList();
843  callback_mode = 3;
845  for (TQItem* qi: *tq_removal_list) {
846  int type = ((DiscreteEvent*) qi->data_)->type();
847  if (type != NetConType) {
848  printf("%d type=%d\n", nrnmpi_myid, type);
849  }
850  assert(type == NetConType);
851  tq->remove(qi);
852  }
853  delete tq_removal_list;
854 }
855 
856 void bbss_restore_done(void* bbss) {
857  if (bbss) {
858  BBSaveState* ss = (BBSaveState*) bbss;
859  delete ss;
860  }
861  // We did not save the NetParEvent. In fact, during
862  // saving, the minimum delay might be different than
863  // what is needed with this distribution and nhost.
864  NetParEvent* npe = new NetParEvent();
865  npe->ithread_ = 0;
867  delete npe;
869 #if NRNMPI
870  // only necessary if multisend method is using two subintervals
871  if (use_multisend_) {
873  }
874 #endif
875  // The queue may now contain spikes which have already been
876  // delivered and so those must be removed if delivery time < t.
877  // Actually, the Presyn spikes may end up as NetCon spikes some
878  // of which may have been already delivered and some not (due to
879  // variations in NetCon.delay).
880  bbss_remove_delivered();
881 
882 #if NRNMPI
883  // turn spike compression back on
884  nrn_use_localgid_ = use_gidcompress_;
885  nrn_use_compress_ = use_spikecompress_;
886 #endif
887 
888  // compare the restored queue count for each presyn spike with the
889  // expected undelivered NetCon count that was read from the file
890  // for each PreSyn. This is optional due to the computational
891  // expense but is helpful to point toward a diagnosis of errors.
892 #if QUEUECHECK
893  bbss_queuecheck();
894 #endif
896 }
897 
898 static double restore_test(void* v) {
899  usebin_ = 0;
900  int *gids, *sizes;
901  BBSaveState* ss = (BBSaveState*) v;
902  // restore global time
903  BBSS_IO* io = new BBSS_TxtFileIn("in/tmp");
904  io->d(1, t);
905  nrn_threads->_t = t;
906  delete io;
907 
909  int len = ss->counts(&gids, &sizes);
910  for (int i = 0; i < len; ++i) {
911  char fn[200];
912  Sprintf(fn, "in/tmp.%d", gids[i]);
913  BBSS_IO* io = new BBSS_TxtFileIn(fn);
914  ss->f = io;
915  int ngroup;
916  ss->f->i(ngroup);
917  for (int j = 0; j < ngroup; ++j) {
918  ss->gidobj(gids[i]);
919  }
920  delete io;
921  }
922  if (len) {
923  free(gids);
924  free(sizes);
925  }
927  return 0.;
928 }
929 
930 static double restore_test_bin(void* v) { // assumes whole cells
931  usebin_ = 1;
932  int len, *gids, *sizes, global_size, npiece, sz;
933  void* ref;
934  char* buf;
935  char fname[100];
936  FILE* f;
937 
938  Sprintf(fname, "binbufin/global.size");
939  nrn_assert(f = fopen(fname, "r"));
940  nrn_assert(fscanf(f, "%d\n", &sz) == 1);
941  fclose(f);
942  global_size = sz;
943  buf = new char[sz];
944 
945  Sprintf(fname, "binbufin/global.%d", global_size);
946  f = fopen(fname, "r");
947  if (!f) {
948  printf("%d fail open for read %s\n", nrnmpi_myid, fname);
949  }
950  assert(f);
951  nrn_assert(fread(buf, sizeof(char), global_size, f) == global_size);
952  fclose(f);
953  bbss_restore_global(NULL, buf, global_size);
954  delete[] buf;
955 
956  ref = bbss_buffer_counts(&len, &gids, &sizes, &global_size);
957 
958  for (int i = 0; i < len; ++i) {
959  npiece = 1;
960 
961  Sprintf(fname, "binbufin/%d.size", gids[i]);
962  nrn_assert(f = fopen(fname, "r"));
963  nrn_assert(fscanf(f, "%d\n", &sz) == 1);
964  fclose(f);
965  // if (sz != sizes[i]) {
966  // printf("%d note sz=%d size=%d\n", nrnmpi_myid, sz, sizes[i]);
967  //}
968 
969  buf = new char[sz];
970  Sprintf(fname, "binbufin/%d.%d", gids[i], sz);
971  f = fopen(fname, "r");
972  if (!f) {
973  printf("%d fail open for read %s\n", nrnmpi_myid, fname);
974  }
975  assert(f);
976  nrn_assert(fread(buf, sizeof(char), sz, f) == sz);
977  fclose(f);
978  bbss_restore(ref, gids[i], npiece, buf, sz);
979  delete[] buf;
980  }
981 
982  if (len) {
983  free(gids);
984  free(sizes);
985  }
987  return 0.;
988 }
989 
990 static double vector_play_init(void* v) {
991  nrn_play_init();
992  return 0.;
993 }
994 
995 static Member_func members[] = {{"save", save},
996  {"restore", restore},
997  {"save_test", save_test},
998  {"restore_test", restore_test},
999  // binary test
1000  {"save_test_bin", save_test_bin},
1001  {"restore_test_bin", restore_test_bin},
1002  // binary save/restore interface to interpreter
1003  {"save_request", save_request},
1004  {"save_gid", save_gid},
1005  {"restore_gid", restore_gid},
1006  // indicate which point processes are to be ignored
1007  {"ignore", ppignore},
1008  // allow Vector.play to work
1009  {"vector_play_init", vector_play_init},
1010  {nullptr, nullptr}};
1011 
1012 void BBSaveState_reg() {
1013  class2oc("BBSaveState", cons, destruct, members, nullptr, nullptr);
1014 }
1015 
1016 // from savstate.cpp
1017 struct StateStructInfo {
1018  int offset{-1};
1019  int size{};
1020  Symbol* callback{nullptr};
1021 };
1022 static StateStructInfo* ssi;
1023 static cTemplate* nct;
1024 static void ssi_def() {
1025  assert(!ssi);
1026  if (nct) {
1027  return;
1028  }
1029  Symbol* s = hoc_lookup("NetCon");
1030  nct = s->u.ctemplate;
1031  ssi = new StateStructInfo[n_memb_func]{};
1032  for (int im = 0; im < n_memb_func; ++im) {
1033  if (!memb_func[im].sym) {
1034  continue;
1035  }
1036  // generally we only save STATE variables. However, for
1037  // models containing a NET_RECEIVE block, we also need to
1038  // save everything except the PARAMETERs,
1039  // because the ASSIGNED variables often contain
1040  // logic and analytic state values. Unfortunately, it is often
1041  // the case that the ASSIGNED variables are not declared as
1042  // RANGE variables, so to avoid missing state we save the whole
1043  // param array, including PARAMETERs.
1044  if (pnt_receive[im]) {
1045  ssi[im].offset = 0;
1046  ssi[im].size = nrn_prop_param_size_[im]; // sum over array dims
1047  } else {
1048  Symbol* msym = memb_func[im].sym;
1049  for (int i = 0; i < msym->s_varn; ++i) {
1050  Symbol* sym = msym->u.ppsym[i];
1051  int vartype = nrn_vartype(sym);
1052  if (vartype == STATE || vartype == _AMBIGUOUS) {
1053  if (ssi[im].offset < 0) {
1054  ssi[im].offset = sym->u.rng.index;
1055  } else {
1056  // assert what we assume: that after this code the variables we want are
1057  // `size` contiguous legacy indices starting at `offset`
1058  assert(ssi[im].offset + ssi[im].size == sym->u.rng.index);
1059  }
1060  ssi[im].size += hoc_total_array_data(sym, 0);
1061  }
1062  }
1063  }
1064  if (memb_func[im].is_point) {
1065  // check for callback named bbsavestate
1066  cTemplate* ts = nrn_pnt_template_[im];
1067  ssi[im].callback = hoc_table_lookup("bbsavestate", ts->symtable);
1068  // if (ssi[im].callback) {
1069  // printf("callback %s.%s\n", ts->sym->name,
1070  // ssi[im].callback->name);
1071  //}
1072  } else {
1073  // check for callback named bbsavestate in a density mechanism
1074  char name[256];
1075  Sprintf(name, "bbsavestate_%s", memb_func[im].sym->name);
1076  ssi[im].callback = hoc_lookup(name);
1077  // if (ssi[im].callback) {
1078  // printf("callback %s\n", ssi[im].callback->name);
1079  //}
1080  }
1081  }
1082 }
1083 
1084 // if we know the Point_process, we can find the NetCon
1085 // BB project never has more than one NetCon connected to a Synapse.
1086 // But that may not hold in general so extend to List of NetCon using DEList.
1087 // and assume the list is in the same order on save/restore.
1088 typedef struct DEList {
1089  DiscreteEvent* de;
1090  struct DEList* next;
1091 } DEList;
1092 typedef std::unordered_map<Point_process*, DEList*> PP2DE;
1093 static std::unique_ptr<PP2DE> pp2de;
1094 // NetCon.events
1095 typedef std::vector<double> DblList;
1096 typedef std::unordered_map<NetCon*, DblList*> NetCon2DblList;
1097 static std::unique_ptr<NetCon2DblList> nc2dblist;
1098 
1099 class SEWrap: public DiscreteEvent {
1100  public:
1101  SEWrap(const TQItem*, DEList*);
1102  ~SEWrap();
1103  int type() {
1104  return se->type();
1105  }
1106  SelfEvent* se;
1107  double tt;
1108  int ncindex; // in the DEList or -1 if no NetCon for self event.
1109 };
1110 SEWrap::SEWrap(const TQItem* tq, DEList* dl) {
1111  tt = tq->t_;
1112  se = (SelfEvent*) tq->data_;
1113  if (se->weight_) {
1114  ncindex = 0;
1115  for (; dl && dl->de && dl->de->type() == NetConType; dl = dl->next, ++ncindex) {
1116  if (se->weight_ == ((NetCon*) dl->de)->weight_) {
1117  return;
1118  }
1119  }
1120  // There used to be an assert(0) here.
1121  // There is no associated NetCon (or the NetCon is being
1122  // ignored, perhaps because it is used to reactivate a
1123  // BinReportHelper after a bbsavestate restore).
1124  // In either case, we should also ignore the
1125  // SelfEvent. So send back a special ncindex sentinel value.
1126 
1127  ncindex = -2;
1128  } else {
1129  ncindex = -1;
1130  }
1131 }
1132 SEWrap::~SEWrap() {}
1133 
1134 typedef std::vector<SEWrap*> SEWrapList;
1135 static SEWrapList* sewrap_list;
1136 
1137 typedef std::unordered_map<int, int> Int2Int;
1138 static std::unique_ptr<Int2Int> base2spgid{new Int2Int()}; // base gids are the host independent
1139  // key for a cell which was multisplit
1140 
1141 typedef std::unordered_map<int, DblList*> Int2DblList;
1142 static std::unique_ptr<Int2DblList> src2send{new Int2DblList()};
1143 // gid to presyn send time map
1144 static int src2send_cnt;
1145 // the DblList was needed in place of just a double because there might
1146 // be several spikes from a single PreSyn (interval between spikes less
1147 // than the maximum NetCon delay for that PreSyn).
1148 // Given a current bug regarding dropped spikes with bin queuing as well
1149 // as the question about the proper handling of a spike with delivery
1150 // time equal to the save state time, it is useful to provide a mechanism
1151 // that allows us to assert that the number of spikes on the queue
1152 // at a save (conceptually the number of NetCon fanned out from the
1153 // PreSyn that have not yet been delivered)
1154 // is identical to the number of spikes on the queue after
1155 // a restore. To help with this, we add a count of the "to be delivered"
1156 // NetCon spikes on the queue to each PreSyn item in the saved file.
1157 // For least effort, given the need to handle counts in the same
1158 // fashion as we handle the presyn send times in the DblList, we merely
1159 // represent (ts, cnt) pairs as adjacent items in the DblList.
1160 // For saving, the undelivered netcon count is always computed. After
1161 // restore this information can be checked against the actual
1162 // netcon count on the queue by defining QUEUECHECK to 1. Note that
1163 // computing the undelivered netcon count involves: 1) each machine
1164 // processing its queue's NetCon and PreSyn spikes using tqcallback
1165 // with callback_mode 1. 2) aggregating the PreSyn counts on some
1166 // machine using scatteritems(). 3) lastly sending the total count per
1167 // PreSyn to the machine that owns the PreSyn gid (see mk_presyn_info).
1168 // This 3-fold process is reused during restore to check the counts and
1169 // we have factored out the relevant code so it can be used for both
1170 // save and restore (for the latter see bbss_queuecheck()).
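The (ts, cnt) pairing described above can be exercised in isolation. The following is a minimal sketch, not NEURON's code: `record_spike` is a hypothetical stand-in for the save-side bookkeeping in `tqcallback` case 1, and the bare `std::vector<double>` plays the role of the `DblList`.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// (ts, cnt) pairs are kept as adjacent items of a std::vector<double>,
// as described for the DblList above.
using DblList = std::vector<double>;

// Record one event with initiation time ts contributing cntinc undelivered
// NetCon deliveries. A ts matching an existing initiation time (within
// 1e-12) increments that pair's count; a ts differing by more than 0.1 ms
// appends a new (ts, cnt) pair.
inline void record_spike(DblList& dl, double ts, int cntinc) {
    double m = 1e9;
    std::size_t im = 0;
    for (std::size_t i = 0; i < dl.size(); i += 2) {
        double x = std::fabs(dl[i] - ts);
        if (x < m) {
            m = x;
            im = i;
        }
    }
    if (dl.empty() || m > 0.1) {
        dl.push_back(ts);
        dl.push_back(cntinc);
    } else {
        // mirrors the sanity assumption that a PreSyn does not fire
        // more than once every 0.1 ms
        assert(m <= 1e-12);
        dl[im + 1] += cntinc;
    }
}
```

Two events with the same initiation time merge into one pair with a summed count; a later initiation time appends a second pair.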
1171 #if QUEUECHECK
1172 static std::unique_ptr<Int2DblList> queuecheck_gid2unc;
1173 #endif
1174 
1175 static double binq_time(double tt) {
1176  if (nrn_use_bin_queue_) {
1177  double dt = nrn_threads->_dt;
1178  // For a given event time, return a time that, on restore, puts it in
1179  // the same bin it came from on save.
1180  // Note that, before save, it was put on the queue via BinQ::enqueue
1181  // with int idt = (int)((td - tt_)/nt_dt + 1.e-10);
1182  // where tt_ is the early half step edge time of the current bin.
1183  double t1 = int(tt / dt + 0.5 + 1e-10) * dt;
1184  return t1;
1185  }
1186  return tt;
1187 }
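The bin-edge rounding in binq_time() can be checked on its own. This is an illustrative sketch: `bin_edge_time` is a hypothetical standalone name and the dt values below are arbitrary (chosen exactly representable in binary so the assertions compare exactly).

```cpp
#include <cassert>

// Standalone version of the rounding used by binq_time(): map an event
// time onto int(tt/dt + 0.5 + 1e-10) * dt so that re-enqueueing it after
// a restore lands it in the same BinQ bin it occupied at save time.
inline double bin_edge_time(double tt, double dt) {
    return static_cast<double>(static_cast<int>(tt / dt + 0.5 + 1e-10)) * dt;
}
```

Times anywhere within half a step of a bin edge map to that edge, and an already-rounded time is a fixed point of the mapping.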
1188 
1189 static void bbss_early(double td, TQItem* tq) {
1190  int type = ((DiscreteEvent*) tq->data_)->type();
1191  // If NetCon, discard. If PreSyn, fanout.
1192  if (type == NetConType) {
1193  return;
1194  } else if (type == PreSynType) {
1195  PreSyn* ps = (PreSyn*) tq->data_;
1196  ps->fanout(td, net_cvode_instance, nrn_threads);
1197  } else {
1198  assert(0);
1199  }
1200 }
1201 
1202 static void tqcallback(const TQItem* tq, int i) {
1203  int type = ((DiscreteEvent*) tq->data_)->type();
1204  switch (callback_mode) {
1205  case 0: { // the self events
1206  if (type == SelfEventType) {
1207  Point_process* pp = ((SelfEvent*) tq->data_)->target_;
1208  DEList* dl1 = NULL;
1209  SEWrap* sew = 0;
1210  const auto& dl1iter = pp2de->find(pp);
1211  if (dl1iter != pp2de->end()) {
1212  dl1 = dl1iter->second;
1213  sew = new SEWrap(tq, dl1);
1214  } else {
1215  sew = new SEWrap(tq, NULL);
1216  }
1217  if (sew->ncindex == -2) { // ignore the self event
1218  // printf("%d Ignoring a SelfEvent to %s\n", nrnmpi_myid,
1219  // memb_func[pp->prop->_type].sym->name);
1220  delete sew;
1221  sew = 0;
1222  }
1223  if (sew) {
1224  sewrap_list->push_back(sew);
1225  DEList* dl = new DEList;
1226  dl->next = 0;
1227  dl->de = sew;
1228  if (dl1) {
1229  while (dl1->next) {
1230  dl1 = dl1->next;
1231  }
1232  dl1->next = dl;
1233  } else {
1234  (*pp2de)[pp] = dl;
1235  }
1236  }
1237  }
1238  } break;
1239 
1240  case 1: { // the NetCon (and PreSyn) events
1241  int srcid, i;
1242  double ts;
1243  PreSyn* ps;
1244  int cntinc; // number on queue to be delivered due to this event
1245  NetCon* nc = 0;
1246  if (type == NetConType) {
1247  nc = (NetCon*) tq->data_;
1248  ps = nc->src_;
1249  ts = tq->t_ - nc->delay_;
1250  cntinc = 1;
1251  } else if (type == PreSynType) {
1252  ps = (PreSyn*) tq->data_;
1253  ts = tq->t_ - ps->delay_;
1254  cntinc = ps->dil_.size();
1255  } else {
1256  return;
1257  }
1258  if (ps) {
1259  if (ps->gid_ >= 0) { // better not be from NetCon.event
1260  srcid = ps->gid_;
1261  DblList* dl;
1262  const auto& dliter = src2send->find(srcid);
1263  if (dliter != src2send->end()) {
1264  dl = dliter->second;
1265  // If delay is long and spiking
1266  // is fast there may be
1267  // another source spike when
1268  // its previous spike is still on
1269  // the queue. So we might have
1270  // to add another initiation time.
1271  // Originally there was an assert error here under the assumption that
1272  // all spikes from a PreSyn were delivered before that PreSyn fired
1273  // again. The assumption did not hold for existing Blue Brain models.
1274  // Therefore we extend the algorithm to any number of spikes with
1275  // different initiation times from the same PreSyn. For sanity we
1276  // assume PreSyns do not fire more than once every 0.1 ms.
1277  // Unfortunately this possibility makes mpi exchange much more
1278  // difficult as the number of doubles exchanged can be greater than
1279  // the number of src2send gids. Add another initiation time if needed
1280  double m = 1e9; // > .1 add
1281  int im = -1; // otherwise assert < 1e-12
1282  for (i = 0; i < dl->size(); i += 2) {
1283  double x = fabs((*dl)[i] - ts);
1284  if (x < m) {
1285  m = x;
1286  im = i;
1287  }
1288  }
1289  if (m > 0.1) {
1290  dl->push_back(ts);
1291  dl->push_back(cntinc);
1292  } else if (m > 1e-12) {
1293  assert(0);
1294  } else {
1295  (*dl)[im + 1] += cntinc;
1296  }
1297  } else {
1298  dl = new DblList();
1299  dl->push_back(ts);
1300  dl->push_back(cntinc);
1301  (*src2send)[srcid] = dl;
1302  ++src2send_cnt; // distinct PreSyn
1303  }
1304 
1305  } else {
1306  // osrc state should already have been associated
1307  // with the target Point_process and we should
1308  // now associate this event as well
1309  if (ps->osrc_) { // NetStim possibly
1310  // does not matter if NetCon.event
1311  // or if the event was sent from
1312  // the local stimulus. Can't be from
1313  // a PreSyn event since we demand
1314  // that there be only one NetCon
1315  // from this stimulus.
1316  assert(nc);
1317  DblList* db = 0;
1318  const auto& dbiter = nc2dblist->find(nc);
1319  if (dbiter == nc2dblist->end()) {
1320  db = new DblList();
1321  (*nc2dblist)[nc] = db;
1322  } else {
1323  db = dbiter->second;
1324  }
1325  db->push_back(tq->t_);
1326  } else { // assume from NetCon.event
1327  // ps should be unused_presyn
1328  // printf("From NetCon.event\n");
1329  assert(nc);
1330  DblList* db = 0;
1331  const auto& dbiter = nc2dblist->find(nc);
1332  if (dbiter == nc2dblist->end()) {
1333  db = new DblList();
1334  (*nc2dblist)[nc] = db;
1335  } else {
1336  db = dbiter->second;
1337  }
1338  db->push_back(tq->t_);
1339  }
1340  }
1341  }
1342  } break;
1343 
1344  case 2: {
1345  // some PreSyns may get fanned out into NetCon, only
1346  // some of which have already been delivered. If this is the
1347  // case, add the q item to the fanout list. Do not put in
1348  // list if all or none have been delivered.
1349  // Actually, for simplicity, just put all the PreSyn items in
1350  // the list for fanout if PreSyn::deliver time < t.
1351  if (type == PreSynType) {
1352  if (tq->t_ < t) {
1353  tq_presyn_fanout->push_back((TQItem*) tq);
1354  }
1355  }
1356  } break;
1357  case 3: {
1358  // ??? have the ones at t been delivered or not?
1359  if (type != NetConType) {
1360  break;
1361  }
1362  if (tq->t_ == t) {
1363  DiscreteEvent* de = (DiscreteEvent*) tq->data_;
1364  de->pr("Don't know if this event has already been delivered", t, net_cvode_instance);
1365  // assert(0);
1366  }
1367  double td = tq->t_;
1368  double tt = t;
1369  // td should be on bin queue boundary
1370  if (nrn_use_bin_queue_) {
1371  // not discarded if in the current bin
1373  }
1374  if (td <= tt) {
1375  //((DiscreteEvent*)tq->data_)->pr("tq_removal_list", td,
1376  // net_cvode_instance);
1377  tq_removal_list->push_back((TQItem*) tq);
1378  }
1379  } break;
1380  }
1381 }
1382 static void mk_pp2de() {
1383  hoc_Item* q;
1384  assert(!pp2de); // one only or make it a field.
1385  int n = nct->count;
1386  pp2de.reset(new PP2DE);
1387  pp2de->reserve(n + 1);
1388  sewrap_list = new SEWrapList();
1389  ITERATE(q, nct->olist) {
1390  NetCon* nc = (NetCon*) OBJ(q)->u.this_pointer;
1391  // ignore NetCon with no PreSyn.
1392  // i.e. we do not save or restore information about
1393  // NetCon.event. We do save NetCons with local NetStim source.
1394  // But that NetStim can only be the source for one NetCon.
1395  // (because its state info will be attached to the
1396  // target synapse)
1397  if (!nc->src_) {
1398  continue;
1399  }
1400  // has a gid or else only one connection
1401  assert(nc->src_->gid_ >= 0 || nc->src_->dil_.size() == 1);
1402  Point_process* pp = nc->target_;
1403  DEList* dl = new DEList;
1404  dl->de = nc;
1405  dl->next = 0;
1406  DEList* dl1;
1407  const auto& delistiter = pp2de->find(pp);
1408  // NetCons first then SelfEvents
1409  if (delistiter != pp2de->end()) {
1410  dl1 = delistiter->second;
1411  while (dl1->next) {
1412  dl1 = dl1->next;
1413  }
1414  dl1->next = dl;
1415  } else {
1416  (*pp2de)[pp] = dl;
1417  }
1418  }
1420  callback_mode = 0;
1422 }
1423 
1424 static std::unique_ptr<Int2DblList> presyn_queue;
1425 
1426 static void del_presyn_info() {
1427  if (presyn_queue) {
1428  for (const auto& dl: *presyn_queue) {
1429  delete dl.second;
1430  }
1431  presyn_queue.reset();
1432  }
1433  if (nc2dblist) {
1434  for (const auto& dl: *nc2dblist) {
1435  delete dl.second;
1436  }
1437  nc2dblist.reset();
1438  }
1439 }
1440 
1441 static void del_pp2de() {
1442  DEList* dl1;
1443  if (!pp2de) {
1444  return;
1445  }
1446  for (const auto& dlpair: *pp2de) {
1447  auto dl = dlpair.second;
1448  for (; dl; dl = dl1) {
1449  dl1 = dl->next;
1450  // need to delete SEWrap items but dl->de that
1451  // are not SEWrap may already be deleted.
1452  // Hence the extra sewrap_list and items are
1453  // deleted below.
1454  delete dl;
1455  }
1456  }
1457  pp2de.reset();
1458  if (sewrap_list) {
1459  for (SEWrap* sewrap: *sewrap_list) {
1460  delete sewrap;
1461  }
1462  delete sewrap_list;
1463  sewrap_list = NULL;
1464  }
1465  del_presyn_info();
1466 }
1467 
1469 BBSaveState::BBSaveState() {
1470  if (!ssi) {
1471  ssi_def();
1472  }
1473 }
1474 BBSaveState::~BBSaveState() {
1475  if (pp2de) {
1476  del_pp2de();
1477  }
1479 }
1480 void BBSaveState::apply(BBSS_IO* io) {
1481  f = io;
1482  bbss = this;
1483  core();
1484 };
1485 
1486 void BBSaveState::core() {
1487  if (debug) {
1488  Sprintf(dbuf, "Enter core()");
1489  PDEBUG;
1490  }
1491  char buf[100];
1492  Sprintf(buf, "//core");
1493  f->s(buf, 1);
1494  init();
1495  gids();
1496  finish();
1497  if (debug) {
1498  Sprintf(dbuf, "Leave core()");
1499  PDEBUG;
1500  }
1501 }
1502 
1503 void BBSaveState::init() {
1504  mk_base2spgid();
1505  mk_pp2de();
1506  mk_presyn_info();
1507 }
1508 
1509 void BBSaveState::finish() {
1510  del_pp2de();
1511  del_presyn_info();
1512  base2spgid.reset();
1513  if (f->type() == BBSS_IO::IN) {
1515  }
1516 }
1517 
1518 static void base2spgid_item(int spgid, Object* obj) {
1519  int base = spgid % 10000000;
1520  if (spgid == base || !base2spgid->count(base)) {
1521  (*base2spgid)[base] = spgid;
1522  }
1523  if (obj && !obj->secelm_ && !is_point_process(obj)) {
1524  // must be Python Cell
1525  hoc_obj_unref(obj);
1526  }
1527 }
1528 
1529 void BBSaveState::mk_base2spgid() {
1530  base2spgid.reset(new Int2Int());
1531  base2spgid->reserve(1000);
1532  nrn_gidout_iter(&base2spgid_item);
1533 }
1534 
1535 // c++ blue brain write interface in two phases. First return to bb what is
1536 // needed for each gid and then get from bb a series of gid, buffer
1537 // pairs to write the buffer. The read interface only requires a single
1538 // phase where we receive from bb a series of gid, buffer pairs for
1539 // reading. However a final step is required to transfer PreSyn spikes by
1540 // calling BBSaveState:finish().
1541 
1542 // first phase for writing
1543 // the caller is responsible for free(*gids) and free(*cnts)
1544 // when finished with them.
1545 int BBSaveState::counts(int** gids, int** cnts) {
1546  f = new BBSS_Cnt();
1547  BBSS_Cnt* c = (BBSS_Cnt*) f;
1548  bbss = this;
1549  init();
1550  // how many
1551  int gidcnt = base2spgid->size();
1552  if (gidcnt) {
1553  // using malloc instead of new as we might need to
1554  // deallocate from c code (of mod files)
1555  *gids = (int*) malloc(gidcnt * sizeof(int));
1556  *cnts = (int*) malloc(gidcnt * sizeof(int));
1557 
1558  if (*gids == NULL || *cnts == NULL) {
1559  printf("Error : Memory allocation failure in BBSaveState\n");
1560 #if NRNMPI
1561  nrnmpi_abort(-1);
1562 #else
1563  abort();
1564 #endif
1565  }
1566  }
1567  gidcnt = 0;
1568  for (const auto& pair: *base2spgid) {
1569  auto base = pair.first;
1570  auto spgid = pair.second;
1571  (*gids)[gidcnt] = base;
1572  c->ni = c->nd = c->ns = c->nl = 0;
1573  Object* obj = nrn_gid2obj(spgid);
1574  gidobj(spgid, obj);
1575  if (obj && !obj->secelm_ && !is_point_process(obj)) {
1576  hoc_obj_unref(obj);
1577  }
1578  (*cnts)[gidcnt] = c->bytecnt();
1579  ++gidcnt;
1580  }
1581  delete f;
1582  return gidcnt;
1583 }
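The two-phase exchange that counts() and gid2buffer() implement can be sketched abstractly. This toy is not the NEURON API: `ToyWriter` and its string-valued per-gid "state" are invented for illustration; only the protocol shape (phase 1 reports a byte count per gid, phase 2 fills a caller-provided buffer of exactly that size) follows the interface described above.

```cpp
#include <cassert>
#include <cstring>
#include <map>
#include <string>
#include <vector>

// Toy two-phase writer: the caller first asks for (gid, byte-count) pairs,
// then allocates a buffer per gid and asks for it to be filled.
struct ToyWriter {
    std::map<int, std::string> state;  // gid -> serialized cell state

    // phase 1: report every gid and the buffer size it will need
    void counts(std::vector<int>& gids, std::vector<int>& cnts) const {
        for (const auto& kv: state) {
            gids.push_back(kv.first);
            cnts.push_back(static_cast<int>(kv.second.size()));
        }
    }

    // phase 2: fill the caller's buffer for one gid
    void gid2buffer(int gid, char* buf, int size) const {
        const std::string& s = state.at(gid);
        assert(size == static_cast<int>(s.size()));
        std::memcpy(buf, s.data(), s.size());
    }
};
```

Splitting the protocol this way lets the caller own all allocation, which is why counts() above sizes each gid's data with a BBSS_Cnt pass before any bytes are produced.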
1584 
1585 // note that a cell on a processor consists of any number of
1586 // pieces of a whole cell and each piece has its own spgid
1587 // (see nrn/share/lib/hoc/binfo.hoc) The piece with the output port
1588 // has spgid == basegid with a PreSyn.output_index_ = basegid
1589 // and the others have a PreSyn.output_index_ = -2
1590 // in all cases the PreSyn.gid_ = spgid. (see netpar.cpp BBS::cell())
1591 // Note that binfo.hoc provides base_gid(spgid) which is spgid%msgid
1592 // where msgid is typically 1e7 if the maximum basegid is less than that.
1593 // thishost_gid(basegid) returns the minimum spgid
1594 // of the pieces for the cell on thishost. It would be great if we
1595 // did not have to use the value of msgid. How could we safely derive it?
1596 // In general, different models invent different mappings between basegid
1597 // and msgid so ultimately (if not restricted to Blue Brain usage)
1598 // it is necessary to supply a user defined hoc (or python) callback
1599 // to compute base_gid(spgid). Then the writer and reader can create
1600 // a map of basegid to cell and use only basegid (never reading or writing
1601 // the spgid). If there is no basegid callback then we presume there
1602 // are no split cells.
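Assuming the typical msgid of 1e7 mentioned above, the spgid-to-basegid relation used by base2spgid_item() reduces to a remainder. `base_gid` here is a hypothetical standalone helper for illustration, not a function in this file.

```cpp
#include <cassert>

// All pieces of one multisplit cell share spgid % msgid as their
// host-independent base gid; the piece holding the output port has
// spgid == base. msgid of 1e7 is the typical binfo.hoc convention.
constexpr int msgid = 10000000;

constexpr int base_gid(int spgid) {
    return spgid % msgid;
}
```

So pieces 3, 10000003, and 30000003 all belong to the cell whose basegid is 3, and piece 3 is the one with the PreSyn output port.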
1603 
1604 // a gid item for second phase of writing
1605 void BBSaveState::gid2buffer(int gid, char* buffer, int size) {
1606  if (f) {
1607  delete f;
1608  }
1609  f = new BBSS_BufferOut(buffer, size);
1610  Object* obj = nrn_gid2obj(gid);
1611  gidobj(gid, obj);
1612  if (obj && !obj->secelm_ && !is_point_process(obj)) {
1613  hoc_obj_unref(obj);
1614  }
1615  delete f;
1616  f = 0;
1617 }
1618 
1619 // a gid item for first phase of reading
1620 void BBSaveState::buffer2gid(int gid, char* buffer, int size) {
1621  if (f) {
1622  delete f;
1623  }
1624  f = new BBSS_BufferIn(buffer, size);
1625  Object* obj = nrn_gid2obj(gid);
1626  gidobj(gid, obj);
1627  if (obj && !obj->secelm_ && !is_point_process(obj)) {
1628  hoc_obj_unref(obj);
1629  }
1630  delete f;
1631  f = 0;
1632 }
1633 
1634 static void cb_gidobj(int gid, Object* obj) {
1635  bbss->gidobj(gid, obj);
1636  if (obj && !obj->secelm_ && !is_point_process(obj)) {
1637  hoc_obj_unref(obj);
1638  }
1639 }
1640 
1641 void BBSaveState::gids() {
1642  if (debug) {
1643  Sprintf(dbuf, "Enter gids()");
1644  PDEBUG;
1645  }
1646  nrn_gidout_iter(&cb_gidobj);
1647  if (debug) {
1648  Sprintf(dbuf, "Leave gids()");
1649  PDEBUG;
1650  }
1651 }
1652 
1653 void BBSaveState::gidobj(int basegid) {
1654  int spgid;
1655  Object* obj;
1656  const auto& spgiditer = base2spgid->find(basegid);
1657  nrn_assert(spgiditer != base2spgid->end());
1658  spgid = spgiditer->second;
1659  obj = nrn_gid2obj(spgid);
1660  gidobj(spgid, obj);
1661  if (obj && !obj->secelm_ && !is_point_process(obj)) {
1662  hoc_obj_unref(obj);
1663  }
1664 }
1665 
1666 void BBSaveState::gidobj(int gid, Object* obj) {
1667  char buf[256];
1668  int rgid = gid;
1669  if (debug) {
1670  Sprintf(dbuf, "Enter gidobj(%d, %s)", gid, hoc_object_name(obj));
1671  PDEBUG;
1672  }
1673  Sprintf(buf, "begin cell");
1674  f->s(buf, 1);
1675  f->i(rgid); // on reading, we promptly ignore rgid from now on, stick with gid
1676  int size = cellsize(obj);
1677  f->i(size);
1678  cell(obj);
1679  possible_presyn(gid);
1680  Sprintf(buf, "end cell");
1681  f->s(buf, 1);
1682  if (debug) {
1683  Sprintf(dbuf, "Leave gidobj(%d, %s)", gid, hoc_object_name(obj));
1684  PDEBUG;
1685  }
1686 }
1687 
1688 int BBSaveState::cellsize(Object* c) {
1689  if (debug) {
1690  Sprintf(dbuf, "Enter cellsize(%s)", hoc_object_name(c));
1691  PDEBUG;
1692  }
1693  int cnt = -1;
1694  if (f->type() == BBSS_IO::OUT) {
1695  BBSS_IO* sav = f;
1696  f = new BBSS_Cnt();
1697  cell(c);
1698  cnt = ((BBSS_Cnt*) f)->bytecnt();
1699  delete f;
1700  f = sav;
1701  }
1702  if (debug) {
1703  Sprintf(dbuf, "Leave cellsize(%s)", hoc_object_name(c));
1704  PDEBUG;
1705  }
1706  return cnt;
1707 }
1708 
1709 // What is the section list for sections associated with a PythonObject?
1710 
1711 // Searching through the global section_list for each cell to create
1712 // a seclist for that cell has unacceptable quadratic
1713 // performance. So search once, on first request, and create a map from
1714 // PythonCell to SecName2Sec map. The SecName2Sec map associates the base
1715 // section name with the Section* wrapped by a Python Section. Using a
1716 // SecName2Sec map rather than a seclist allows similar use for save and restore.
1717 
1718 typedef std::unordered_map<std::string, Section*> SecName2Sec;
1719 static std::unordered_map<void*, SecName2Sec> pycell_name2sec_maps;
1720 
1721 // clean up after a restore
1723  pycell_name2sec_maps.clear();
1724 }
1725 
1728  hoc_Item* qsec;
1729  // ForAllSections(sec)
1730  ITERATE(qsec, section_list) {
1731  Section* sec = hocSEC(qsec);
1732  if (sec->prop && sec->prop->dparam[PROP_PY_INDEX].get<void*>()) { // PythonSection
1733  // Assume we can associate with a Python Cell
1734  // Sadly, cannot use nrn_sec2cell Object* as the key because it
1735  // is not unique and the map needs definite PyObject* keys.
1736  Object* ho = nrn_sec2cell(sec);
1737  if (ho) {
1738  void* pycell = nrn_opaque_obj2pyobj(ho);
1739  hoc_obj_unref(ho);
1740  if (pycell) {
1741  SecName2Sec& sn2s = pycell_name2sec_maps[pycell];
1742  std::string name = secname(sec);
1743  // basename is after the cell name component that ends in '.'.
1744  size_t last_dot = name.rfind(".");
1745  assert(last_dot != std::string::npos);
1746  assert(name.size() > (last_dot + 1));
1747  std::string basename = name.substr(last_dot + 1);
1748  if (sn2s.find(basename) != sn2s.end()) {
1749  hoc_execerr_ext("Python Section name, %s, is not unique in the Python cell",
1750  name.c_str());
1751  }
1752  sn2s[basename] = sec;
1753  continue;
1754  }
1755  }
1756  hoc_execerr_ext("Python Section, %s, not associated with Python Cell.", secname(sec));
1757  }
1758  }
1759 }
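The basename extraction used when filling a SecName2Sec map (everything after the last '.' of the full secname) can be isolated as below. This is a sketch: `sec_basename` is a hypothetical helper name, and it throws where the original simply asserts that a '.'-delimited cell prefix exists.

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

// Extract the base section name following the cell-name component, i.e.
// the substring after the last '.' in a name like "Cell[0].dend[3]".
inline std::string sec_basename(const std::string& name) {
    std::size_t last_dot = name.rfind('.');
    if (last_dot == std::string::npos || name.size() <= last_dot + 1) {
        throw std::runtime_error("section name has no cell prefix: " + name);
    }
    return name.substr(last_dot + 1);
}
```

Keying the map by basename rather than full secname is what lets save and restore agree even when the Python cell object prints differently in the two contexts.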
1760 
1761 static SecName2Sec& pycell_name2sec_map(Object* c) {
1762  if (pycell_name2sec_maps.empty()) {
1764  }
1765  void* pycell = nrn_opaque_obj2pyobj(c);
1766  auto search = pycell_name2sec_maps.find(pycell);
1767  assert(search != pycell_name2sec_maps.end());
1768  return search->second;
1769 }
1770 
1771 // Here is the major place where there is a symmetry break between writing
1772 // and reading. That is because of the possibility of splitting where
1773 // not only the pieces are distributed differently between saving and restoring
1774 // processors but the pieces themselves (as well as the piece gids)
1775 // are different. It is only the union of pieces that describes a complete cell.
1776 // Symmetry exists again at the section level
1777 // For writing the cell is defined as the existing set of pieces in the hoc
1778 // cell Object. Here in cell we write enough prefix info about the Section
1779 // to be able to determine if it exists before reading with section(sec);
1780 
1781 void BBSaveState::cell(Object* c) {
1782  if (debug) {
1783  Sprintf(dbuf, "Enter cell(%s)", hoc_object_name(c));
1784  PDEBUG;
1785  }
1786  char buf[256];
1787  Sprintf(buf, "%s", hoc_object_name(c));
1788  f->s(buf);
1789  if (!is_point_process(c)) { // must be cell object
1790  if (f->type() != BBSS_IO::IN) { // writing, counting
1791  // from forall_section in cabcode.cpp
1792  // count, and iterate from first to last
1793  hoc_Item *qsec, *first, *last;
1794  int cnt = 0;
1795  Section* sec;
1796  qsec = c->secelm_;
1797  if (qsec) { // Write HOC Cell
1798  for (first = qsec;
1799  first->itemtype && hocSEC(first)->prop->dparam[6].get<Object*>() == c;
1800  first = first->prev) {
1801  sec = hocSEC(first);
1802  if (sec->prop) {
1803  ++cnt;
1804  }
1805  }
1806  first = first->next;
1807  last = qsec->next;
1808  f->i(cnt);
1809  for (qsec = first; qsec != last; qsec = qsec->next) {
1810  Section* sec = hocSEC(qsec);
1811  if (sec->prop) {
1812  // the section exists
1813  Sprintf(buf, "begin section");
1814  f->s(buf);
1816  section(sec);
1817  Sprintf(buf, "end section");
1818  f->s(buf);
1819  }
1820  }
1821  } else { // Write Python Cell
1822  // secelm_ is NULL if c is a PythonObject. Need to get
1823  // the list of sections associated with c in some other way.
1825  int i = (int) (n2s.size());
1826  f->i(i);
1827  for (auto& iter: n2s) {
1828  const std::string& name = iter.first;
1829  Section* sec = iter.second;
1830  assert(sec->prop); // all exist because n2s derived from global
1831  // section_list.
1832  Sprintf(buf, "begin section");
1833  f->s(buf);
1834  strcpy(buf, name.c_str());
1835  f->s(buf);
1836  int indx = sec->prop->dparam[5].get<int>();
1837  f->i(indx);
1838  int size = sectionsize(sec);
1839  f->i(size, 1);
1840  section(sec);
1841  Sprintf(buf, "end section");
1842  f->s(buf);
1843  }
1844  }
1845  } else { // reading
1846  Section* sec;
1847  int i, cnt, indx, size;
1848 
1849  // Don't know a better idiom for the following.
1850  SecName2Sec* n2s = NULL;
1851  if (!c->secelm_) {
1852  n2s = &pycell_name2sec_map(c);
1853  }
1854  // Assert that all section names are unique for a Python Cell
1855  std::unordered_set<std::string> snames;
1856 
1857  f->i(cnt);
1858  for (i = 0; i < cnt; ++i) {
1859  Sprintf(buf, "begin section");
1860  f->s(buf, 1);
1861  f->s(buf); // the section name
1862  f->i(indx); // section array index
1863  f->i(size);
1864  if (c->secelm_) { // HOC cell
1866  } else {
1867  sec = NULL;
1868  if (snames.find(buf) != snames.end()) {
1869  hoc_execerr_ext("More than one section with name %s in cell %s",
1870  buf,
1871  hoc_object_name(c));
1872  }
1873  snames.emplace(buf);
1874  auto search = (*n2s).find(buf);
1875  if (search != (*n2s).end()) {
1876  sec = search->second;
1877  }
1878  }
1879  if (sec) {
1880  section(sec);
1881  } else { // skip size bytes
1882  f->skip(size);
1883  }
1884  Sprintf(buf, "end section");
1885  f->s(buf, 1);
1886  }
1887  }
1888  } else { // ARTIFICIAL_CELL
1889  Point_process* pnt = ob2pntproc(c);
1890  mech(pnt->prop);
1891  }
1892  if (debug) {
1893  Sprintf(dbuf, "Leave cell(%s)", hoc_object_name(c));
1894  PDEBUG;
1895  }
1896 }
1897 
1899  char buf[256];
1900  // not used for python sections
1901  assert(!(sec->prop->dparam[PROP_PY_INDEX]).get<void*>());
1902  auto sym = sec->prop->dparam[0].get<Symbol*>();
1903  if (sym) {
1904  Sprintf(buf, "%s", sym->name);
1905  f->s(buf);
1906  }
1907  int indx = sec->prop->dparam[5].get<int>();
1908  f->i(indx);
1909  int size = sectionsize(sec);
1910  f->i(size, 1);
1911 }
1912 
1913 void BBSaveState::section(Section* sec) {
1914  if (debug) {
1915  Sprintf(dbuf, "Enter section(%s)", sec->prop->dparam[0].get<Symbol*>()->name);
1916  PDEBUG;
1917  }
1918  seccontents(sec);
1919  if (debug) {
1920  Sprintf(dbuf, "Leave section(%s)", sec->prop->dparam[0].get<Symbol*>()->name);
1921  PDEBUG;
1922  }
1923 }
1924 
1925 int BBSaveState::sectionsize(Section* sec) {
1926  if (debug == 1) {
1927  Sprintf(dbuf, "Enter sectionsize(%s)", sec->prop->dparam[0].get<Symbol*>()->name);
1928  PDEBUG;
1929  }
1930  // should be same for both IN and OUT
1931  int cnt = -1;
1932  if (f->type() != BBSS_IO::CNT) {
1933  BBSS_IO* sav = f;
1934  f = new BBSS_Cnt();
1935  seccontents(sec);
1936  cnt = ((BBSS_Cnt*) f)->bytecnt();
1937  delete f;
1938  f = sav;
1939  }
1940  if (debug == 1) {
1941  Sprintf(dbuf, "Leave sectionsize(%s)", sec->prop->dparam[0].get<Symbol*>()->name);
1942  PDEBUG;
1943  }
1944  return cnt;
1945 }
1946 
1947 void BBSaveState::seccontents(Section* sec) {
1948  if (debug) {
1949  Sprintf(dbuf, "Enter seccontents(%s)", sec->prop->dparam[0].get<Symbol*>()->name);
1950  PDEBUG;
1951  }
1952  int i, nseg;
1953  char buf[100];
1954  Sprintf(buf, "//contents");
1955  f->s(buf);
1956  nseg = sec->nnode - 1;
1957  f->i(nseg, 1);
1958  for (i = 0; i < nseg; ++i) {
1959  node(sec->pnode[i]);
1960  }
1961  node01(sec, sec->parentnode);
1962  node01(sec, sec->pnode[nseg]);
1963  if (debug) {
1964  Sprintf(dbuf, "Leave seccontents(%s)", sec->prop->dparam[0].get<Symbol*>()->name);
1965  PDEBUG;
1966  }
1967 }
1968 
1969 // If extracellular, then save/restore not just v but also vext
1970 void BBSaveState::v_vext(Node* nd) {
1971  if (nd->extnode) {
1972  int n = 1 + nlayer;
1973  std::vector<double*> tmp{};
1974  tmp.reserve(n);
1975  tmp.push_back(static_cast<double*>(nd->v_handle()));
1976  for (int i = 0; i < nlayer; ++i) {
1977  tmp.push_back(&nd->extnode->v[i]);
1978  }
1979  f->d(n, tmp.data());
1980  } else {
1981  f->d(1, nd->v_handle());
1982  }
1983 }
1984 
1985 // all Point_process and mechanisms -- except IGNORE point process instances
1986 void BBSaveState::node(Node* nd) {
1987  if (debug) {
1988  Sprintf(dbuf, "Enter node(nd)");
1989  PDEBUG;
1990  }
1991  int i;
1992  Prop* p;
1993  v_vext(nd);
1994  // count
1995  // On restore, new point processes may have been inserted in
1996  // the section and marked IGNORE. So we need to count only the
1997  // non-ignored.
1998  for (i = 0, p = nd->prop; p; p = p->next) {
1999  if (p->_type > 3) {
2000  if (memb_func[p->_type].is_point) {
2001  if (!ignored(p)) {
2002  ++i;
2003  }
2004  } else { // density mechanism
2005  ++i;
2006  }
2007  }
2008  }
2009  f->i(i, 1);
2010  for (p = nd->prop; p; p = p->next) {
2011  if (p->_type > 3) {
2012  mech(p);
2013  }
2014  }
2015  if (debug) {
2016  Sprintf(dbuf, "Leave node(nd)");
2017  PDEBUG;
2018  }
2019 }
2020 
2021 // only Point_process that belong to Section
2022 void BBSaveState::node01(Section* sec, Node* nd) {
2023  if (debug) {
2024  Sprintf(dbuf, "Enter node01(sec, nd)");
2025  PDEBUG;
2026  }
2027  int i;
2028  Prop* p;
2029  // It is not clear why the zero area node voltages need to be saved.
2030  // Without them, we get correct simulations after a restore for
2031  // whole cells but not for split cells.
2032  // I don't know if there is duplicate saving of zero area node voltages.
2033  // or, if so, it can be avoided. There was mention above of split cells
2034  // and, with extracellular, there can be no such thing.
2035  v_vext(nd);
2036 
2037  // count
2038  for (i = 0, p = nd->prop; p; p = p->next) {
2039  if (memb_func[p->_type].is_point) {
2040  auto* pp = p->dparam[1].get<Point_process*>();
2041  if (pp->sec == sec) {
2042  if (!ignored(p)) {
2043  ++i;
2044  }
2045  }
2046  }
2047  }
2048  f->i(i, 1);
2049  for (p = nd->prop; p; p = p->next) {
2050  if (memb_func[p->_type].is_point) {
2051  auto* pp = p->dparam[1].get<Point_process*>();
2052  if (pp->sec == sec) {
2053  mech(p);
2054  }
2055  }
2056  }
2057  if (debug) {
2058  Sprintf(dbuf, "Leave node01(sec, nd)");
2059  PDEBUG;
2060  }
2061 }
2062 
2063 void BBSaveState::mech(Prop* p) {
2064  if (debug) {
2065  Sprintf(dbuf, "Enter mech(prop type %d)", p->_type);
2066  PDEBUG;
2067  }
2068  int type = p->_type;
2069  if (memb_func[type].is_point && ignored(p)) {
2070  return;
2071  }
2072  f->i(type, 1);
2073  char buf[100];
2074  Sprintf(buf, "//%s", memb_func[type].sym->name);
2075  f->s(buf, 1);
2076 
2077  if (type == EXTRACELL) {
2078  // skip - vext states saved by caller and we are not saving parameters.
2079  } else {
2080  auto const size = ssi[p->_type].size; // sum over array dimensions for range variables
2081  auto& random_indices = nrn_mech_random_indices(p->_type);
2082  auto size_random = random_indices.size();
2083  std::vector<double*> tmp{};
2084  tmp.reserve(size + size_random);
2085  for (auto i = 0; i < size; ++i) {
2086  tmp.push_back(static_cast<double*>(p->param_handle_legacy(ssi[p->_type].offset + i)));
2087  }
2088 
2089  // read or write the RANDOM 34 sequence values by pointing last
2090  // size_random tmp elements to seq34 double slots.
2091  std::vector<double> seq34(size_random, 0);
2092  for (auto i = 0; i < size_random; ++i) {
2093  tmp.push_back(static_cast<double*>(&seq34[i]));
2094  }
2095  // if writing, nrnran123_getseq into seq34
2096  if (f->type() == BBSS_IO::OUT) { // save
2097  for (auto i = 0; i < size_random; ++i) {
2098  uint32_t seq{};
2099  char which{};
2100  auto& datum = p->dparam[random_indices[i]];
2101  nrnran123_State* n123s = (nrnran123_State*) datum.get<void*>();
2102  nrnran123_getseq(n123s, &seq, &which);
2103  seq34[i] = 4.0 * double(seq) + double(which);
2104  }
2105  }
2106  f->d(size + size_random, tmp.data());
2107  // if reading, seq34 into nrnran123_setseq
2108  if (f->type() == BBSS_IO::IN) { // restore
2109  for (auto i = 0; i < size_random; ++i) {
2110  auto& datum = p->dparam[random_indices[i]];
2111  nrnran123_State* n123s = (nrnran123_State*) datum.get<void*>();
2112  nrnran123_setseq(n123s, seq34[i]);
2113  }
2114  }
2115  }
2116  Point_process* pp{};
2117  if (memb_func[p->_type].is_point) {
2118  pp = p->dparam[1].get<Point_process*>();
2119  if (pnt_receive[p->_type]) {
2120  // associated NetCon and queue SelfEvent
2121  // if the NetCon has a unique non-gid source (art cell)
2122  // that source is save/restored as well.
2123  netrecv_pp(pp);
2124  }
2125  }
2126  if (ssi[p->_type].callback) { // model author dependent info
2127  // the POINT_PROCESS or SUFFIX has a bbsavestate function
2128  Sprintf(buf, "callback");
2129  f->s(buf, 1);
2130  int narg = 2;
2131  double xdir = -1.0; // -1 size, 0 save, 1 restore
2132  double* xval = NULL; // returned for save, sent for restore.
2133 
2134  hoc_pushpx(&xdir);
2135  hoc_pushpx(xval);
2136  if (memb_func[p->_type].is_point) {
2137  hoc_call_ob_proc(pp->ob, ssi[p->_type].callback, narg);
2138  hoc_xpop();
2139  } else {
2140  nrn_call_mech_func(ssi[p->_type].callback, narg, p, p->_type);
2141  }
2142  int sz = int(xdir);
2143  if (sz > 0) {
2144  xval = new double[sz];
2145 
2146  hoc_pushpx(&xdir);
2147  hoc_pushpx(xval);
2148  if (f->type() == BBSS_IO::IN) { // restore
2149  xdir = 1.;
2150  f->d(sz, xval);
2151  if (memb_func[p->_type].is_point) {
2152  hoc_call_ob_proc(pp->ob, ssi[p->_type].callback, narg);
2153  hoc_xpop();
2154  } else {
2155  nrn_call_mech_func(ssi[p->_type].callback, narg, p, p->_type);
2156  }
2157  } else {
2158  xdir = 0.;
2159  if (memb_func[p->_type].is_point) {
2160  hoc_call_ob_proc(pp->ob, ssi[p->_type].callback, narg);
2161  hoc_xpop();
2162  } else {
2163  nrn_call_mech_func(ssi[p->_type].callback, narg, p, p->_type);
2164  }
2165  f->d(sz, xval);
2166  }
2167  delete[] xval;
2168  }
2169  }
2170  if (debug) {
2171  Sprintf(dbuf, "Leave mech(prop type %d)", p->_type);
2172  PDEBUG;
2173  }
2174 }
2175 
2176 void BBSaveState::netrecv_pp(Point_process* pp) {
2177  if (debug) {
2178  Sprintf(dbuf, "Enter netrecv_pp(pp prop type %d)", pp->prop->_type);
2179  PDEBUG;
2180  }
2181  char buf[1000];
2182  Sprintf(buf, "%s", hoc_object_name(pp->ob));
2183  f->s(buf);
2184 
2185  // associated NetCon, and queue SelfEvent
2186  DEList *dl, *dl1;
2187  const auto& dliter = pp2de->find(pp);
2188  if (dliter == pp2de->end()) {
2189  dl = 0;
2190  } else {
2191  dl = dliter->second;
2192  }
2193  int cnt = 0;
2194  // dl has the NetCons first then the SelfEvents
2195  // NetCon
2196  for (cnt = 0, dl1 = dl; dl1 && dl1->de->type() == NetConType; dl1 = dl1->next) {
2197  ++cnt;
2198  }
2199  f->i(cnt, 1);
2200  Sprintf(buf, "NetCon");
2201  f->s(buf, 1);
2202  for (; dl && dl->de->type() == NetConType; dl = dl->next) {
2203  NetCon* nc = (NetCon*) dl->de;
2204  f->d(nc->cnt_, nc->weight_);
2205  if (f->type() != BBSS_IO::IN) { // writing, counting
2206  DblList* db = 0;
2207  int j = 0;
2208  if (nc2dblist) {
2209  const auto& dbiter = nc2dblist->find(nc);
2210  if (dbiter != nc2dblist->end()) {
2211  db = dbiter->second;
2212  j = db->size();
2213  f->i(j);
2214  for (int i = 0; i < j; ++i) {
2215  double x = (*db)[i];
2216  f->d(1, x);
2217  }
2218  } else {
2219  f->i(j);
2220  }
2221  } else {
2222  f->i(j);
2223  }
2224  int has_stim = 0;
2225  if (nc->src_ && nc->src_->osrc_ && nc->src_->gid_ < 0) {
2226  // save the associated local stimulus
2227  has_stim = 1;
2228  f->i(has_stim, 1);
2229  Point_process* pp = ob2pntproc(nc->src_->osrc_);
2230  mech(pp->prop);
2231  } else {
2232  f->i(has_stim, 1);
2233  }
2234  } else { // reading
2235  int j = 0;
2236  f->i(j);
2237  for (int i = 0; i < j; ++i) {
2238  double x;
2239  f->d(1, x);
2240  x = binq_time(x);
2241  nrn_netcon_event(nc, x);
2242  }
2243  int has_stim = 0;
2244  f->i(has_stim);
2245  if (has_stim) {
2246  assert((nc->src_ && nc->src_->osrc_ && nc->src_->gid_ < 0) == has_stim);
2247  Point_process* pp = ob2pntproc(nc->src_->osrc_);
2248  mech(pp->prop);
2249  }
2250  }
2251  }
2252  // Count SelfEvents. At this point, for restore, there are no
2253  // SelfEvents (so cnt is 0) because the queue has been cleared.
2254  for (cnt = 0, dl1 = dl; dl1 && dl1->de->type() == SelfEventType; dl1 = dl1->next) {
2255  ++cnt;
2256  }
2257  f->i(cnt);
2258  // SelfEvent existence depends on state. For reading, the queue
2259  // is empty, so counting and saving differ from restoring.
2260 
2261  if (f->type() != BBSS_IO::IN) {
2262  for (; dl && dl->de->type() == SelfEventType; dl = dl->next) {
2263  SEWrap* sew = (SEWrap*) dl->de;
2264  Sprintf(buf, "SelfEvent");
2265  f->s(buf);
2266  f->d(1, sew->se->flag_);
2267  f->d(1, sew->tt);
2268  f->i(sew->ncindex);
2269  int moff = -1;
2270  if (sew->se->movable_) {
2271  moff = (Datum*) (sew->se->movable_) - pp->prop->dparam;
2272  }
2273  f->i(moff);
2274  }
2275  } else { // restore
2276  // For restore it is necessary to create the proper SelfEvents,
2277  // since the queue has been cleared.
2278  for (int i = 0; i < cnt; ++i) {
2279  int ncindex, moff;
2280  double flag, tt, *w;
2281  f->s(buf);
2282  f->d(1, flag);
2283  f->d(1, tt);
2284  f->i(ncindex);
2285  f->i(moff);
2286  Datum tqi_datum;
2287  // new SelfEvent item mostly filled in.
2288  // But starting out with NULL weight vector and
2289  // flag=1 so that tqi->data is the new SelfEvent
2290  nrn_net_send(&tqi_datum, nullptr, pp, tt, 1.0);
2291  auto* tqi = tqi_datum.get<TQItem*>();
2292  assert(tqi && tqi->data_ &&
2293  static_cast<DiscreteEvent*>(tqi->data_)->type() == SelfEventType);
2294  auto* se = static_cast<SelfEvent*>(tqi->data_);
2295  se->flag_ = flag;
2296  Datum* movable{};
2297  if (moff >= 0) {
2298  movable = pp->prop->dparam + moff;
2299  if (flag == 1) {
2300  *movable = tqi;
2301  }
2302  se->movable_ = movable;
2303  }
2304  if (ncindex == -1) {
2305  w = NULL;
2306  } else {
2307  int j;
2308  for (j = 0, dl1 = dliter->second; j < ncindex; ++j, dl1 = dl1->next) {
2309  ;
2310  }
2311  assert(dl1 && dl1->de->type() == NetConType);
2312  w = ((NetCon*) dl1->de)->weight_;
2313  }
2314  se->weight_ = w;
2315  }
2316  }
2317  if (debug) {
2318  Sprintf(dbuf, "Leave netrecv_pp(pp prop type %d)", pp->prop->_type);
2319  PDEBUG;
2320  }
2321 }
2322 
2323 static int giddest_size;
2324 static int* giddest;
2325 static int* tsdest_cnts;
2326 static double* tsdest;
2327 
2328 // input scnt, sdispl ; output rcnt, rdispl
2329 static void all2allv_helper(int* scnt, int* sdispl, int* rcnt, int* rdispl) {
2330  int i;
2331  int np = nrnmpi_numprocs;
2332  int* c = new int[np];
2333  rdispl[0] = 0;
2334  for (i = 0; i < np; ++i) {
2335  c[i] = 1;
2336  rdispl[i + 1] = rdispl[i] + c[i];
2337  }
2338  nrnmpi_int_alltoallv(scnt, c, rdispl, rcnt, c, rdispl);
2339  delete[] c;
2340  rdispl[0] = 0;
2341  for (i = 0; i < np; ++i) {
2342  rdispl[i + 1] = rdispl[i] + rcnt[i];
2343  }
2344 }
2345 
2346 static void all2allv_int2(int* scnt, int* sdispl, int* gidsrc, int* ndsrc) {
2347 #if NRNMPI
2348  int np = nrnmpi_numprocs;
2349  int* rcnt = new int[np];
2350  int* rdispl = new int[np + 1];
2351  all2allv_helper(scnt, sdispl, rcnt, rdispl);
2352 
2353  giddest = 0;
2354  tsdest_cnts = 0;
2355  giddest_size = rdispl[np];
2356  if (giddest_size) {
2357  giddest = new int[giddest_size];
2358  tsdest_cnts = new int[giddest_size];
2359  }
2360  nrnmpi_int_alltoallv(gidsrc, scnt, sdispl, giddest, rcnt, rdispl);
2361  nrnmpi_int_alltoallv(ndsrc, scnt, sdispl, tsdest_cnts, rcnt, rdispl);
2362 
2363  delete[] rcnt;
2364  delete[] rdispl;
2365 #else
2366  assert(0);
2367 #endif
2368 }
2369 
2370 static void all2allv_dbl1(int* scnt, int* sdispl, double* tssrc) {
2371 #if NRNMPI
2372  int size;
2373  int np = nrnmpi_numprocs;
2374  int* rcnt = new int[np];
2375  int* rdispl = new int[np + 1];
2376  all2allv_helper(scnt, sdispl, rcnt, rdispl);
2377 
2378  tsdest = 0;
2379  size = rdispl[np];
2380  if (size) {
2381  tsdest = new double[size];
2382  }
2383  nrnmpi_dbl_alltoallv(tssrc, scnt, sdispl, tsdest, rcnt, rdispl);
2384 
2385  delete[] rcnt;
2386  delete[] rdispl;
2387 #else
2388  assert(0);
2389 #endif
2390 }
2391 
2392 static void scatteritems() {
2393  // since queue items with the same gid are scattered across
2394  // many processors, we send all the gid,tscnt pairs (and tslists
2395  // with undelivered NetCon counts)
2396  // to the round-robin host (we do not know the gid owner host yet).
2397  int i, host;
2398  src2send_cnt = 0;
2399  src2send.reset(new Int2DblList());
2400  src2send->reserve(1000);
2402  // if an event is on the queue at t we will not be able to decide
2403  // whether or not it should be delivered during restore.
2404  // The assert was moved into mk_presyn_info since this function is
2405  // reused by bbss_queuecheck and the error in that context will
2406  // be analyzed there.
2407  // assert(tq->least_t() > nrn_threads->_t);
2408  callback_mode = 1;
2410  // space inefficient but simple support analogous to pc.all2all
2411  int* gidsrc = 0;
2412  int* ndsrc = 0; // count for each DblList, parallel to gidsrc
2413  double* tssrc = 0; // all the doubles (and undelivered NetCon counts) in every DblList
2414  if (src2send_cnt) {
2415  int ndsrctotal = 0;
2416  gidsrc = new int[src2send_cnt];
2417  ndsrc = new int[src2send_cnt];
2418  for (const auto& pair: *src2send) {
2419  ndsrctotal += pair.second->size();
2420  }
2421  tssrc = new double[ndsrctotal];
2422  }
2423  int* off = new int[nrnmpi_numprocs + 1]; // offsets for gidsrc and ndsrc
2424  int* cnts = new int[nrnmpi_numprocs]; // dest host counts for gidsrc and ndsrc
2425  int* doff = new int[nrnmpi_numprocs + 1]; // offsets for doubles
2426  int* dcnts = new int[nrnmpi_numprocs + 1]; // dest counts of the doubles
2427  for (i = 0; i < nrnmpi_numprocs; ++i) {
2428  cnts[i] = 0;
2429  dcnts[i] = 0;
2430  }
2431  // counts to send to each destination rank
2432  for (const auto& pair: *src2send) {
2433  int gid = pair.first;
2434  int host = gid % nrnmpi_numprocs;
2435  ++cnts[host];
2436  dcnts[host] += pair.second->size();
2437  }
2438  // offsets
2439  off[0] = 0;
2440  doff[0] = 0;
2441  for (i = 0; i < nrnmpi_numprocs; ++i) {
2442  off[i + 1] = off[i] + cnts[i];
2443  doff[i + 1] = doff[i] + dcnts[i];
2444  }
2445  // what to send to each destination. Note dcnts and ndsrc are NOT the same.
2446  for (i = 0; i < nrnmpi_numprocs; ++i) {
2447  cnts[i] = 0;
2448  dcnts[i] = 0;
2449  }
2450  for (const auto& pair: *src2send) {
2451  const auto dl = pair.second;
2452  int gid = pair.first;
2453  host = gid % nrnmpi_numprocs;
2454  gidsrc[off[host] + cnts[host]] = gid;
2455  ndsrc[off[host] + cnts[host]++] = int(dl->size());
2456  for (size_t i = 0; i < dl->size(); ++i) {
2457  tssrc[doff[host] + dcnts[host]++] = (*dl)[i];
2458  }
2459  }
2460  for (const auto& pair: *src2send) {
2461  delete pair.second;
2462  }
2463  src2send.reset();
2464 
2465  if (nrnmpi_numprocs > 1) {
2466  all2allv_int2(cnts, off, gidsrc, ndsrc);
2467  all2allv_dbl1(dcnts, doff, tssrc);
2468  if (gidsrc) {
2469  delete[] gidsrc;
2470  delete[] ndsrc;
2471  delete[] tssrc;
2472  }
2473  } else {
2474  giddest_size = cnts[0];
2475  giddest = gidsrc;
2476  tsdest_cnts = ndsrc;
2477  tsdest = tssrc;
2478  }
2479  delete[] cnts;
2480  delete[] off;
2481  delete[] dcnts;
2482  delete[] doff;
2483 }
2484 
2485 static void allgatherv_helper(int cnt, int* rcnt, int* rdspl) {
2486  nrnmpi_int_allgather(&cnt, rcnt, 1);
2487  rdspl[0] = 0;
2488  for (int i = 0; i < nrnmpi_numprocs; ++i) {
2489  rdspl[i + 1] = rdspl[i] + rcnt[i];
2490  }
2491 }
2492 
2493 static void spikes_on_correct_host(int cnt,
2494  int* g,
2495  int* dcnts,
2496  int tscnt,
2497  double* ts,
2498  Int2DblList* m) {
2499  // tscnt is the sum of the dcnts.
2500  // i.e. the send times and undelivered NetCon counts.
2501  int i, rsize, *rg = NULL, *rtscnts = NULL;
2502  double* rts = NULL;
2503  // not at all space efficient
2504  if (nrnmpi_numprocs > 1) {
2505  int* cnts = new int[nrnmpi_numprocs];
2506  int* dspl = new int[nrnmpi_numprocs + 1];
2507  allgatherv_helper(cnt, cnts, dspl);
2508  rsize = dspl[nrnmpi_numprocs];
2509  if (rsize) {
2510  rg = new int[rsize];
2511  rtscnts = new int[rsize];
2512  }
2513  nrnmpi_int_allgatherv(g, rg, cnts, dspl);
2514  nrnmpi_int_allgatherv(dcnts, rtscnts, cnts, dspl);
2515 
2516  allgatherv_helper(tscnt, cnts, dspl);
2517  rts = new double[dspl[nrnmpi_numprocs]];
2518  nrnmpi_dbl_allgatherv(ts, rts, cnts, dspl);
2519 
2520  delete[] cnts;
2521  delete[] dspl;
2522  } else {
2523  rsize = cnt;
2524  rg = g;
2525  rtscnts = dcnts;
2526  rts = ts;
2527  }
2528  int tsoff = 0;
2529  for (i = 0; i < rsize; ++i) {
2530  if (nrn_gid_exists(rg[i])) {
2531  DblList* dl = new DblList();
2532  dl->reserve(rtscnts[i]);
2533  (*m)[rg[i]] = dl;
2534  for (int j = 0; j < rtscnts[i]; ++j) {
2535  dl->push_back(rts[tsoff + j]);
2536  }
2537  }
2538  tsoff += rtscnts[i];
2539  }
2540  if (nrnmpi_numprocs > 1) {
2541  if (rg)
2542  delete[] rg;
2543  if (rtscnts)
2544  delete[] rtscnts;
2545  if (rts)
2546  delete[] rts;
2547  }
2548 }
2549 
2550 static void construct_presyn_queue() {
2551  // almost all the old mk_presyn_info factored into here since
2552  // a presyn_queue is optionally needed for check_queue()
2553 
2554  // note that undelivered netcon count is interleaved with tsend
2555  // in the DblList of the Int2DblList
2556  int tscnt, i;
2557  DblList* dl;
2558  if (presyn_queue) {
2559  del_presyn_info();
2560  }
2561  nc2dblist.reset(new NetCon2DblList());
2562  nc2dblist->reserve(20);
2563  scatteritems();
2564  int cnt = giddest_size;
2565  std::unique_ptr<Int2DblList> m{new Int2DblList()};
2566  m->reserve(cnt + 1);
2567  int mcnt = 0;
2568  int mdcnt = 0;
2569  int its = 0;
2570  for (i = 0; i < cnt; ++i) {
2571  int gid = giddest[i];
2572  tscnt = tsdest_cnts[i];
2573  const auto& dliter = m->find(gid);
2574  if (dliter != m->end()) {
2575  dl = dliter->second;
2576  } else {
2577  dl = new DblList();
2578  (*m)[gid] = dl;
2579  ++mcnt;
2580  }
2581  // add the initiation time(s) if unique (differ by more than .1),
2582  // accumulate the undelivered NetCon counts if identical (within 1e-12),
2583  // and assert if ambiguously close (between 1e-12 and .1 apart)
2584  for (int k = 0; k < tscnt; k += 2) {
2585  double t1 = tsdest[its++];
2586  int inccnt = tsdest[its++];
2587  int add = 1;
2588  // can't hurt to test the ones just added
2589  // no thought given to efficiency
2590  // Actually, need to test to accumulate
2591  for (int j = 0; j < dl->size(); j += 2) {
2592  double dt = fabs((*dl)[j] - t1);
2593  if (dt < 1e-12) {
2594  (*dl)[j + 1] += inccnt;
2595  add = 0;
2596  break;
2597  } else if (dt < .1) {
2598  assert(0);
2599  }
2600  }
2601  if (add) {
2602  dl->push_back(t1);
2603  dl->push_back(inccnt);
2604  mdcnt += 2;
2605  }
2606  }
2607  }
2608  if (giddest) {
2609  delete[] giddest;
2610  delete[] tsdest_cnts;
2611  delete[] tsdest;
2612  }
2613  // each host now has a portion of the (gid, ts) spike pairs
2614  // (Actually (gid, list of (ts, undelivered NetCon count)) map.)
2615  // but the pairs are not on the hosts that own the gid.
2616  // The major thing that is accomplished so far is that the
2617  // up to thousands of NetCon events on the queues of thousands
2618  // of hosts are now a single PreSyn event on a single host.
2619  // We could save them all in separate area and read them all
2620  // and send only when a host owns the gid, but we wish
2621  // to keep the spike sent info with the cell info for the gid.
2622  // We next need to do something analogous to the allgather send
2623  // so each host can examine every spike and pick out the ones
2624  // which were sent from that host. That becomes the true
2625  // presyn_queue map.
2626  int* gidsrc = 0;
2627  int* tssrc_cnt = 0;
2628  double* tssrc = 0;
2629  if (mcnt) {
2630  gidsrc = new int[mcnt];
2631  tssrc_cnt = new int[mcnt];
2632  tssrc = new double[mdcnt];
2633  }
2634  mcnt = 0;
2635  mdcnt = 0;
2636  for (const auto& pair: *m) {
2637  auto dl = pair.second;
2638  gidsrc[mcnt] = pair.first;
2639  tssrc_cnt[mcnt] = dl->size();
2640  for (int i = 0; i < dl->size(); ++i) {
2641  tssrc[mdcnt++] = (*dl)[i];
2642  }
2643  ++mcnt;
2644  delete dl;
2645  }
2646  presyn_queue.reset(new Int2DblList());
2647  presyn_queue->reserve(127);
2648  spikes_on_correct_host(mcnt, gidsrc, tssrc_cnt, mdcnt, tssrc, presyn_queue.get());
2649  if (gidsrc) {
2650  delete[] gidsrc;
2651  delete[] tssrc_cnt;
2652  delete[] tssrc;
2653  }
2654 }
2655 
2656 void BBSaveState::mk_presyn_info() { // also the NetCon* to tdelivery map
2657  if (f->type() != BBSS_IO::IN) { // only when saving or counting
2658  // if an event is on the queue at t we will not be able to decide
2659  // whether or not it should be delivered during restore.
2661 
2662  TQItem* tqi = tq->least();
2663  int dtype = tqi ? ((DiscreteEvent*) (tqi->data_))->type() : 0;
2664  assert(tq->least_t() > nrn_threads->_t || dtype == NetParEventType);
2666  }
2667 }
2668 
2669 void BBSaveState::possible_presyn(int gid) {
2670  if (debug) {
2671  Sprintf(dbuf, "Enter possible_presyn()");
2672  PDEBUG;
2673  }
2674  char buf[100];
2675  int i;
2676  double ts;
2677  // first the presyn state associated with this gid
2678  if (nrn_gid_exists(gid) < 2) {
2679  if (f->type() == BBSS_IO::IN) {
2680  // if reading then skip what was written
2681  i = 0;
2682  f->i(i);
2683  if (i == 1) { // skip state
2684  int j;
2685  double x;
2686  Sprintf(buf, "PreSyn");
2687  f->s(buf, 1);
2688  f->i(j);
2689  f->d(1, x);
2690  }
2691  } else {
2692  i = -1;
2693  f->i(i);
2694  }
2695  } else {
2696  PreSyn* ps = nrn_gid2presyn(gid);
2697  i = (ps->ssrc_ != 0 ? 1 : -1); // does it have state
2698  f->i(i, 1);
2699  int output_index = ps->output_index_;
2700  f->i(output_index);
2701  if (output_index >= 0 && i == 1) {
2702  Sprintf(buf, "PreSyn");
2703  f->s(buf, 1);
2704  int j = (ps->flag_ ? 1 : 0);
2705  double th = ps->valthresh_;
2706  f->i(j);
2707  f->d(1, th);
2708  if (ps->output_index_ >= 0) {
2709  ps->flag_ = j;
2710  ps->valthresh_ = th;
2711  }
2712 #if 0
2713  f->d(1, ps->valold_);
2714  f->d(1, ps->told_);
2715  // ps->qthresh_ handling unimplemented but would not be needed
2716  // unless cvode.condition_order(2) when cvode active.
2717 #endif
2718  }
2719  }
2720  // next the possibility that a spike derived from this presyn
2721  // is on the queue.
2722  if (f->type() != BBSS_IO::IN) { // save or count
2723  DblList* dl;
2724  const auto& dliter = presyn_queue->find(gid);
2725  if (dliter != presyn_queue->end()) {
2726  dl = dliter->second;
2727  f->i(gid);
2728  i = dl->size(); // ts and undelivered netcon counts
2729  f->i(i);
2730  for (i = 0; i < dl->size(); i += 2) {
2731  ts = (*dl)[i];
2732  f->d(1, ts);
2733  int unc = (*dl)[i + 1];
2734  f->i(unc);
2735  }
2736  } else {
2737  i = -1;
2738  f->i(i);
2739  }
2740  } else { // restore
2741  f->i(i); // gid
2742  if (i >= 0) {
2743  int cnt, unc;
2744  // assert (gid == i);
2745  if (gid == i) {
2746  f->i(cnt); // both the ts and undelivered netcon counts
2747 
2748  // while re-issuing the PreSyn send, we do not want
2749  // any spike recording to take place. So record the
2750  // current Vector sizes, if used, so we can put
2751  // them back. This presumes we are recording only
2752  // from PreSyn with an output_gid.
2753 #if 1 // if set to 0, also comment out the asserts below
2754  // The only reason we save here is to allow an
2755  // assertion test when restoring the original
2756  // record vector size.
2757  int sz1 = -1;
2758  int sz2 = -1;
2759  PreSyn* ps = nrn_gid2presyn(gid);
2760  if (ps->tvec_) {
2761  sz1 = ps->tvec_->size();
2762  }
2763  if (ps->idvec_) {
2764  sz2 = ps->idvec_->size();
2765  }
2766 #endif
2767 
2768 #if QUEUECHECK
2769  // map from gid to unc for later checking
2770  if (!queuecheck_gid2unc) {
2771  queuecheck_gid2unc.reset(new Int2DblList());
2772  queuecheck_gid2unc->reserve(1000);
2773  }
2774  DblList* dl = new DblList();
2775  (*queuecheck_gid2unc)[i] = dl;
2776 #endif
2777  for (int j = 0; j < cnt; j += 2) {
2778  f->d(1, ts);
2779  f->i(unc); // expected undelivered NetCon count
2780  nrn_fake_fire(gid, ts, 2); // only owned by this
2781 #if QUEUECHECK
2782  dl->push_back(ts);
2783  dl->push_back(unc);
2784 #endif
2785  }
2786 
2787  // restore spike record sizes.
2788  if (ps->tvec_) {
2789  int sz = ps->tvec_->size() - cnt / 2;
2790  assert(sz == sz1);
2791  ps->tvec_->resize(sz);
2792  }
2793  if (ps->idvec_) {
2794  int sz = ps->idvec_->size() - cnt / 2;
2795  assert(sz == sz2);
2796  ps->idvec_->resize(sz);
2797  }
2798  } else {
2799  // skip -> file has undelivered spikes, but this is not the cell that
2800  // deals with them
2801  f->i(cnt);
2802  for (int j = 0; j < cnt; j += 2) {
2803  f->d(1, ts);
2804  f->i(unc);
2805  }
2806  }
2807  }
2808  }
2809  if (debug) {
2810  Sprintf(dbuf, "Leave possible_presyn()");
2811  PDEBUG;
2812  }
2813 }
2814 
2815 #if QUEUECHECK
2816 static void bbss_queuecheck() {
2817  construct_presyn_queue();
2818  if (queuecheck_gid2unc)
2819  for (const auto& pair: *queuecheck_gid2unc) {
2820  auto gid = pair.first;
2821  auto dl = pair.second;
2822  DblList* dl2;
2823  const auto& dl2iter = presyn_queue->find(gid);
2824  if (dl2iter != presyn_queue->end()) {
2825  dl2 = dl2iter->second;
2826  if (dl->size() == dl2->size()) {
2827  for (int i = 0; i < dl->size(); i += 2) {
2828  if ((fabs((*dl)[i] - (*dl2)[i]) > 1e-12) || (*dl)[i + 1] != (*dl2)[i + 1]) {
2829  printf(
2830  "error: gid=%d expect t=%g %d but queue contains t=%g %d "
2831  "tdiff=%g\n",
2832  gid,
2833  (*dl)[i],
2834  int((*dl)[i + 1]),
2835  (*dl2)[i],
2836  int((*dl2)[i + 1]),
2837  (*dl)[i] - (*dl2)[i]);
2838  }
2839  }
2840  } else {
2841  printf("error: gid=%d distinct delivery times, expect %ld, actual %ld\n",
2842  gid,
2843  dl->size() / 2,
2844  dl2->size() / 2);
2845  }
2846  } else {
2847  printf("error: gid=%d expect spikes but none on queue\n", gid);
2848  for (int i = 0; i < dl->size() - 1; i += 2) {
2849  printf(" %g %d", (*dl)[i], int((*dl)[i + 1]));
2850  }
2851  printf("\n");
2852  }
2853  }
2854  for (const auto& pair: *presyn_queue) {
2855  auto gid = pair.first;
2856  auto dl2 = pair.second;
2857  // the lookup must be in the map of expected spikes
2858  // (queuecheck_gid2unc), not in presyn_queue itself
2859  if (!queuecheck_gid2unc ||
2860  queuecheck_gid2unc->find(gid) == queuecheck_gid2unc->end()) {
2861  printf("error: gid=%d expect no spikes but some on queue\n", gid);
2862  for (int i = 0; i < int(dl2->size()) - 1; i += 2) {
2863  printf(" %g %d", (*dl2)[i], int((*dl2)[i + 1]));
2864  }
2865  printf("\n");
2866  }
2867  }
2868 
2869  // cleanup
2870  if (queuecheck_gid2unc) {
2871  for (const auto& pair: *queuecheck_gid2unc) {
2872  delete pair.second;
2873  }
2874  queuecheck_gid2unc.reset();
2875  }
2876  del_presyn_info();
2877 }
2878 #endif /*QUEUECHECK*/
double hoc_xpop()
Definition: code.cpp:903
void * nrn_opaque_obj2pyobj(Object *ho)
Definition: hoc_oop.cpp:2119
void hoc_execerr_ext(const char *fmt,...)
printf style specification of hoc_execerror message.
Definition: fileio.cpp:828
size_t hoc_total_array_data(const Symbol *s, Objectdata *obd)
Definition: hoc_oop.cpp:95
void hoc_pushpx(double *d)
Definition: code.cpp:834
void hoc_call_ob_proc(Object *ob, Symbol *sym, int narg)
Definition: hoc_oop.cpp:648
IvocVect * vector_arg(int i)
Definition: ivocvect.cpp:265
char * hoc_object_name(Object *ob)
Definition: hoc_oop.cpp:73
Symbol * hoc_lookup(const char *)
Definition: symbol.cpp:59
void hoc_obj_unref(Object *obj)
Definition: hoc_oop.cpp:1881
static int c
Definition: hoc.cpp:169
#define assert(ex)
Definition: hocassrt.h:24
#define hocSEC(q)
Definition: hoclist.h:87
#define OBJ(q)
Definition: hoclist.h:88
Point_process * ob2pntproc(Object *ob)
Definition: hocmech.cpp:99
Object ** hoc_objgetarg(int)
Definition: code.cpp:1614
static int narg()
Definition: ivocvect.cpp:121
#define _AMBIGUOUS
Definition: membfunc.hpp:86
#define STATE
Definition: membfunc.hpp:65
#define EXTRACELL
Definition: membfunc.hpp:61
#define ITERATE(itm, lst)
Definition: model.h:18
printf
Definition: extdef.h:5
fabs
Definition: extdef.h:3
const char * name
Definition: init.cpp:16
Item * next(Item *item)
Definition: list.cpp:89
NrnThread * nrn_threads
Definition: multicore.cpp:56
void clear_event_queue()
Definition: cvodestb.cpp:47
static int np
Definition: mpispike.cpp:25
void nrn_spike_exchange(NrnThread *nt)
std::vector< int > & nrn_mech_random_indices(int type)
bool use_multisend_
Definition: multisend.cpp:53
bool nrn_use_localgid_
void nrn_fake_fire(int gid, double spiketime, int fake_out)
Definition: netpar.cpp:578
int Sprintf(char(&buf)[N], const char *fmt, Args &&... args)
Redirect sprintf to snprintf if the buffer size can be deduced.
Definition: wrap_sprintf.h:14
int ii
Definition: cellorder.cpp:631
if(ncell==0)
Definition: cellorder.cpp:785
#define NetParEventType
Definition: netcon.hpp:27
#define PreSynType
Definition: netcon.hpp:26
#define SelfEventType
Definition: netcon.hpp:25
#define NetConType
Definition: netcon.hpp:24
int is_point_process(Object *)
Definition: point.cpp:370
#define nrn_assert(x)
assert()-like macro, independent of NDEBUG status
Definition: nrn_assert.h:33
void nrn_net_send(Datum *v, double *weight, Point_process *pnt, double td, double flag)
Definition: netcvode.cpp:2257
int const size_t const size_t n
Definition: nrngsl.h:10
size_t q
size_t p
size_t j
s
Definition: multisend.cpp:521
int * nrn_prop_param_size_
Definition: init.cpp:162
Object * nrn_sec2cell(Section *)
Definition: cabcode.cpp:252
int ifarg(int)
Definition: code.cpp:1607
void nrnmpi_abort(int errcode)
Definition: nrnmpi.cpp:209
std::vector< Memb_func > memb_func
Definition: init.cpp:145
short type
Definition: cabvars.h:10
int nrn_vartype(const Symbol *sym)
Definition: eion.cpp:503
double nrn_call_mech_func(Symbol *s, int narg, Prop *p, int mechtype)
Definition: init.cpp:1259
hoc_List * section_list
Definition: init.cpp:113
int n_memb_func
Definition: init.cpp:448
int nrnmpi_myid
static double ref(void *v)
Definition: ocbox.cpp:381
#define PROP_PY_INDEX
Definition: section.h:230
#define nlayer
Definition: section_fwd.hpp:31
#define NULL
Definition: spdefs.h:105
DiscreteEvent * de
struct DEList * next
double * v
Definition: section_fwd.hpp:40
Definition: section.h:105
auto v_handle()
Definition: section.h:153
Extnode * extnode
Definition: section.h:199
Prop * prop
Definition: section.h:190
Represent main neuron object computed by single thread.
Definition: multicore.h:58
Definition: hocdec.h:173
hoc_Item * secelm_
Definition: hocdec.h:184
A point process is computed just like regular mechanisms.
Definition: section_fwd.hpp:77
Definition: section.h:231
Datum * dparam
Definition: section.h:247
short _type
Definition: section.h:244
Definition: model.h:47
union Symbol::@28 u
Symbol ** ppsym
Definition: hocdec.h:125
struct Symbol::@45::@46 rng
unsigned s_varn
Definition: hocdec.h:129
char * name
Definition: model.h:61
Definition: hocdec.h:75
Definition: tqitem.hpp:3
double t_
Definition: tqitem.hpp:8
void * data_
Definition: tqitem.hpp:7
Symlist * symtable
Definition: hocdec.h:148
hoc_List * olist
Definition: hocdec.h:155
int count
Definition: hocdec.h:154
short itemtype
Definition: hoclist.h:46
hoc_Item * next
Definition: hoclist.h:44
Non-template stable handle to a generic value.
T get() const
Explicit conversion to any T.