easy_tpp.model.torch_model.torch_thp

Classes

THP(model_config)

Torch implementation of Transformer Hawkes Process, ICML 2020, https://arxiv.org/abs/2002.09291.

class easy_tpp.model.torch_model.torch_thp.THP(model_config)[source]

Torch implementation of Transformer Hawkes Process, ICML 2020, https://arxiv.org/abs/2002.09291. Note: part of the code is adapted from https://github.com/yangalan123/anhp-andtt/tree/master/thp.

__init__(model_config)[source]

Initialize the model

Parameters:

model_config (EasyTPP.ModelConfig) – config of model specs.
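
A minimal instantiation sketch, assuming model_config is an EasyTPP.ModelConfig that already carries the THP specs (hidden size, number of event types, etc.); how that config is built, e.g. from a YAML experiment file, is outside this page and treated as given here.

    # Sketch: construct THP from an existing ModelConfig (assumed to be built elsewhere).
    from easy_tpp.model.torch_model.torch_thp import THP

    model = THP(model_config)   # model_config: EasyTPP.ModelConfig with the THP specs
    model.eval()                # THP is a torch.nn.Module, so the usual train/eval API applies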

forward(time_seqs, type_seqs, attention_mask)[source]

Run the forward pass of the model over the event sequences.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], timestamp seqs.

  • type_seqs (tensor) – [batch_size, seq_len], event type seqs.

  • attention_mask (tensor) – [batch_size, seq_len, seq_len], attention masks.

Returns:

hidden states at event times.

Return type:

tensor
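
A hedged usage sketch for forward with toy tensors. The shapes follow the parameter list above; the causal mask built here, and the attribute model.num_event_types used to draw toy event types, are assumptions about the surrounding library rather than facts stated on this page.

    # Sketch: a forward pass over toy sequences.
    import torch

    batch_size, seq_len = 2, 5
    time_seqs = torch.cumsum(torch.rand(batch_size, seq_len), dim=-1)           # increasing timestamps
    type_seqs = torch.randint(0, model.num_event_types, (batch_size, seq_len))  # toy event types

    # Causal mask: position i cannot attend to future positions j > i.
    # The exact masking convention (True = masked here) is an assumption;
    # check the library's mask builder for the authoritative one.
    attention_mask = torch.triu(
        torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1
    ).unsqueeze(0).expand(batch_size, -1, -1)

    hidden = model.forward(time_seqs, type_seqs, attention_mask)
    # hidden: [batch_size, seq_len, hidden_size] hidden states at event times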

loglike_loss(batch)[source]

Compute the log-likelihood loss.

Parameters:

batch (tuple, list) – batch of input sequences.

Returns:

log-likelihood loss and the number of events.

Return type:

tuple
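
A hedged sketch of one optimization step built on loglike_loss. The exact composition and ordering of batch comes from the EasyTPP data loader and is treated as given; normalizing the loss by the event count is a common choice, not something this page prescribes.

    # Sketch: one training step; `batch` is assumed to come from the data loader
    # in the order the model expects.
    import torch

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    model.train()
    loss, num_events = model.loglike_loss(batch)   # loss to minimize and the event count
    (loss / num_events).backward()                 # per-event normalization (a common choice)
    optimizer.step()
    optimizer.zero_grad()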

compute_states_at_sample_times(event_states, sample_dtimes)[source]

Compute the hidden states at sampled times.

Parameters:
  • event_states (tensor) – [batch_size, seq_len, hidden_size].

  • sample_dtimes (tensor) – [batch_size, seq_len, num_samples].

Returns:

hidden states at each sampled time.

Return type:

tensor
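
A hedged sketch of the interpolation step: starting from the forward() output, evaluate the hidden states on a toy grid of sampled inter-event offsets. The grid itself is illustrative; in training, the sampled offsets would typically be drawn by the library's own sampling of integration points.

    # Sketch: states on a grid of sampled inter-event times.
    import torch

    num_samples = 10
    # toy offsets in (0, 1] per event position, broadcast over the batch
    sample_dtimes = torch.linspace(0.1, 1.0, num_samples).view(1, 1, num_samples)
    sample_dtimes = sample_dtimes.expand(hidden.size(0), hidden.size(1), num_samples)

    states = model.compute_states_at_sample_times(hidden, sample_dtimes)
    # states: one state per (event position, sampled offset)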

compute_intensities_at_sample_times(time_seqs, time_delta_seqs, type_seqs, sample_dtimes, **kwargs)[source]

Compute the intensities at sampled times.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], timestamp seqs.

  • time_delta_seqs (tensor) – [batch_size, seq_len], time delta seqs.

  • type_seqs (tensor) – [batch_size, seq_len], event type seqs.

  • sample_dtimes (tensor) – [batch_size, seq_len, num_samples], sampled inter-event times.

Returns:

[batch_size, seq_len, num_samples, num_event_types], intensity at all sampled times.

Return type:

tensor
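
A hedged end-to-end sketch that evaluates the per-type intensities at the sampled times, e.g. as a building block for Monte-Carlo integration of the non-event term of the log-likelihood. The toy tensors reuse the earlier examples; deriving time_delta_seqs from the timestamps, and omitting an explicit attention_mask from **kwargs, are assumptions made for brevity.

    # Sketch: intensities at sampled times for every event type.
    import torch

    # toy inter-event gaps derived from the timestamps (first gap set to zero)
    time_delta_seqs = torch.cat(
        [torch.zeros_like(time_seqs[:, :1]), time_seqs[:, 1:] - time_seqs[:, :-1]], dim=-1
    )

    lambdas = model.compute_intensities_at_sample_times(
        time_seqs, time_delta_seqs, type_seqs, sample_dtimes
    )
    # lambdas: [batch_size, seq_len, num_samples, num_event_types]
    total_intensity = lambdas.sum(dim=-1)   # total intensity at each sampled time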