easy_tpp.model.torch_model.torch_thinning

Classes

EventSampler(num_sample, num_exp, ...)

Event Sequence Sampler based on the thinning algorithm, which corresponds to Algorithm 2 of The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process, https://arxiv.org/abs/1612.09328.

class easy_tpp.model.torch_model.torch_thinning.EventSampler(num_sample, num_exp, over_sample_rate, num_samples_boundary, dtime_max, patience_counter, device)[source]

Event Sequence Sampler based on the thinning algorithm, which corresponds to Algorithm 2 of The Neural Hawkes Process: A Neurally Self-Modulating Multivariate Point Process, https://arxiv.org/abs/1612.09328.

The implementation uses code from https://github.com/yangalan123/anhp-andtt/blob/master/anhp/esm/thinning.py.

__init__(num_sample, num_exp, over_sample_rate, num_samples_boundary, dtime_max, patience_counter, device)[source]

Initialize the event sampler.

Parameters:
  • num_sample (int) – number of next-event times sampled via the thinning algorithm for computing predictions.

  • num_exp (int) – number of i.i.d. Exp(intensity_bound) draws per iteration of the thinning algorithm.

  • over_sample_rate (float) – multiplier for the intensity upper bound.

  • num_samples_boundary (int) – number of sampled event times used to compute the upper bound of the intensity.

  • dtime_max (float) – maximum value of delta times in sampling.

  • patience_counter (int) – maximum number of iterations used in adaptive thinning.

  • device (torch.device) – torch device index to select.

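A minimal construction sketch; the parameter names follow the signature above, but the values below are illustrative, not library defaults:

```python
import torch
from easy_tpp.model.torch_model.torch_thinning import EventSampler

# Illustrative hyperparameters; tune them for your model and data.
event_sampler = EventSampler(
    num_sample=10,             # parallel next-event-time draws per position
    num_exp=500,               # Exp(bound) draws per thinning step
    over_sample_rate=5.0,      # safety multiplier on the intensity upper bound
    num_samples_boundary=5,    # probe points used to estimate the upper bound
    dtime_max=5.0,             # cap on sampled delta times
    patience_counter=5,        # max iterations of adaptive thinning
    device=torch.device('cuda' if torch.cuda.is_available() else 'cpu'),
)
```
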
compute_intensity_upper_bound(time_seq, time_delta_seq, event_seq, intensity_fn, compute_last_step_only)[source]

Compute the upper bound of intensity at each event timestamp.

Parameters:
  • time_seq (tensor) – [batch_size, seq_len], timestamp seqs.

  • time_delta_seq (tensor) – [batch_size, seq_len], time delta seqs.

  • event_seq (tensor) – [batch_size, seq_len], event type seqs.

  • intensity_fn (fn) – a function that computes the intensity.

  • compute_last_step_only (bool) – whether to compute the last time step only.

Returns:

[batch_size, seq_len], the intensity upper bound at each event timestamp.

Return type:

tensor

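For intuition, the standard thinning recipe for such a bound probes the intensity at a few candidate delta times after each event and over-samples the maximum. A minimal sketch of that recipe; the assumed intensity_fn signature and return shape are illustrative, not the library's exact API:

```python
import torch

def intensity_upper_bound_sketch(time_seq, time_delta_seq, event_seq, intensity_fn,
                                 dtime_max, num_samples_boundary, over_sample_rate):
    # Probe the intensity at evenly spaced fractions of dtime_max after each event.
    ratios = torch.linspace(0.0, 1.0, num_samples_boundary, device=time_seq.device)
    # Candidate delta times: [batch_size, seq_len, num_samples_boundary]
    sample_dtimes = ratios[None, None, :].expand(*time_seq.shape, -1) * dtime_max
    # Assumed: intensity_fn returns [batch_size, seq_len, num_samples_boundary, num_event_types]
    intensities = intensity_fn(time_seq, time_delta_seq, event_seq, sample_dtimes)
    total = intensities.sum(dim=-1)                   # sum over event types
    # Over-sample the max so the bound dominates the true intensity between probes.
    return total.max(dim=-1)[0] * over_sample_rate    # [batch_size, seq_len]
```
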
sample_exp_distribution(sample_rate)[source]

Sample from an exponential distribution.

Parameters:

sample_rate (tensor) – [batch_size, seq_len], intensity rate.

Returns:

[batch_size, seq_len, num_exp], exponential samples at each event timestamp.

Return type:

tensor

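A sketch of the underlying idea: draw unit-rate exponentials and rescale by the per-position rate. The shapes follow the documentation above; the exact tensor layout inside the library may differ:

```python
import torch

def sample_exp_sketch(sample_rate, num_exp):
    # sample_rate: [batch_size, seq_len] intensity rates (the upper bounds).
    batch_size, seq_len = sample_rate.shape
    # Unit-rate exponential draws: [batch_size, seq_len, num_exp]
    unit_exp = torch.empty(batch_size, seq_len, num_exp,
                           device=sample_rate.device).exponential_(1.0)
    # Exp(rate) samples are unit exponentials divided by the rate.
    return unit_exp / sample_rate[..., None]
```
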
sample_uniform_distribution(intensity_upper_bound)[source]

Sample from a uniform distribution.

Parameters:

intensity_upper_bound (tensor) – upper bound intensity computed in the previous step.

Returns:

[batch_size, seq_len, num_sample, num_exp], uniform random numbers.

Return type:

tensor

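The corresponding sketch: one U(0, 1) draw per candidate, laid out in the documented output shape (the num_sample and num_exp arguments here are assumptions based on that shape):

```python
import torch

def sample_unif_sketch(intensity_upper_bound, num_sample, num_exp):
    # intensity_upper_bound: [batch_size, seq_len]
    batch_size, seq_len = intensity_upper_bound.shape
    # [batch_size, seq_len, num_sample, num_exp] uniform random numbers in [0, 1)
    return torch.empty(batch_size, seq_len, num_sample, num_exp,
                       device=intensity_upper_bound.device).uniform_(0.0, 1.0)
```
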
sample_accept(unif_numbers, sample_rate, total_intensities)[source]

Do the sample-accept process.

For each parallel draw, find its minimum criterion: if that minimum is below 1.0, the first (i.e., smallest) sampled time whose criterion is below 1.0 is accepted; if no sample is accepted, the boundary (maximum sample time) is used for that draw.

Parameters:
  • unif_numbers (tensor) – [batch_size, max_len, num_sample, num_exp], sampled uniform random number.

  • sample_rate (tensor) – [batch_size, max_len], sample rate (intensity).

  • total_intensities (tensor) – [batch_size, seq_len, num_sample, num_exp], total intensities at the sampled times.

Returns:

two tensors: criterion, [batch_size, max_len, num_sample, num_exp], and who_has_accepted_times, [batch_size, max_len, num_sample].

Return type:

list

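A sketch of the acceptance test described above: the classic thinning criterion scales a uniform draw by the proposal rate and compares it with the true intensity, accepting a candidate when the ratio falls below 1. This mirrors the standard accept step and is not necessarily the library's exact code:

```python
import torch

def sample_accept_sketch(unif_numbers, sample_rate, total_intensities):
    # Classic thinning: accept candidate t when U * lambda_bound < lambda(t),
    # i.e. when criterion = U * lambda_bound / lambda(t) < 1.0.
    criterion = unif_numbers * sample_rate[:, :, None, None] / total_intensities
    # True where at least one of the num_exp candidates in a draw is accepted.
    who_has_accepted_times = (criterion < 1.0).any(dim=-1)
    return criterion, who_has_accepted_times
```
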
draw_next_time_one_step(time_seq, time_delta_seq, event_seq, dtime_boundary, intensity_fn, compute_last_step_only=False)[source]

Compute the next event time based on the thinning algorithm.

Parameters:
  • time_seq (tensor) – [batch_size, seq_len], timestamp seqs.

  • time_delta_seq (tensor) – [batch_size, seq_len], time delta seqs.

  • event_seq (tensor) – [batch_size, seq_len], event type seqs.

  • dtime_boundary (tensor) – [batch_size, seq_len], dtime upper bound.

  • intensity_fn (fn) – a function to compute the intensity.

  • compute_last_step_only (bool, optional) – whether to compute last event timestep only. Defaults to False.

Returns:

next event time prediction and weight.

Return type:

tuple
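
A hedged end-to-end usage sketch, reusing the event_sampler constructed above; model_intensity_fn is a placeholder for the model's intensity function (its name and exact signature are assumptions, as are the shapes of the returned tensors):

```python
import torch

# Toy padded history: 2 sequences of 8 events (shapes are illustrative).
batch_size, seq_len = 2, 8
time_seq = torch.sort(torch.rand(batch_size, seq_len) * 10.0, dim=-1)[0]
time_delta_seq = torch.diff(time_seq, dim=-1, prepend=torch.zeros(batch_size, 1))
event_seq = torch.randint(0, 3, (batch_size, seq_len))

# One common choice for the delta-time upper bound: observed deltas plus dtime_max.
dtime_boundary = time_delta_seq + 5.0  # 5.0 mirrors the dtime_max used above

# model_intensity_fn: placeholder for a closure over a trained neural TPP that
# evaluates intensities at candidate delta times.
accepted_dtimes, weights = event_sampler.draw_next_time_one_step(
    time_seq, time_delta_seq, event_seq, dtime_boundary,
    intensity_fn=model_intensity_fn,
    compute_last_step_only=True,
)

# A typical way to turn the draws into a point prediction: weight-average the
# sampled delta times (assumes weights are normalized over the sample dimension).
dtime_pred = (accepted_dtimes * weights).sum(dim=-1)
```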