easy_tpp.model.torch_model.torch_attnhp

Classes

AttNHP(model_config)

Torch implementation of the Attentive Neural Hawkes Process (ICLR 2022).

class easy_tpp.model.torch_model.torch_attnhp.AttNHP(model_config)[source]

Torch implementation of the Attentive Neural Hawkes Process (ICLR 2022). Paper: https://arxiv.org/abs/2201.00044. Source code: https://github.com/yangalan123/anhp-andtt/blob/master/anhp/model/xfmr_nhp_fast.py

__init__(model_config)[source]

Initialize the model.

Parameters:

model_config (EasyTPP.ModelConfig) – config of model specs.
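
A hedged construction sketch follows; the exact ModelConfig schema is not reproduced here, so the config fields named in the comments are illustrative assumptions, not the library's confirmed API.

# Illustrative only: consult the EasyTPP documentation for the real
# ModelConfig schema (hidden size, time-embedding size, number of
# layers/heads, number of event types, etc. are assumed fields).
from easy_tpp.model.torch_model.torch_attnhp import AttNHP

# model_config = ModelConfig(...)   # hypothetical construction
# model = AttNHP(model_config)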

compute_temporal_embedding(time)[source]

Compute the temporal embedding.

Parameters:

time (tensor) – [batch_size, seq_len].

Returns:

[batch_size, seq_len, emb_size].

Return type:

tensor
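
The temporal embedding in AttNHP is a sinusoidal (Transformer-style) encoding of absolute event times. A minimal self-contained sketch of that idea, assuming an even embedding size d_time; the function name and frequency recipe are illustrative:

import math

import torch

def temporal_embedding(time: torch.Tensor, d_time: int) -> torch.Tensor:
    """Sinusoidal encoding: [batch, seq_len] -> [batch, seq_len, d_time]."""
    # Frequencies follow the standard Transformer positional-encoding recipe.
    div_term = torch.exp(torch.arange(0, d_time, 2).float()
                         * -(math.log(10000.0) / d_time))
    angles = time.unsqueeze(-1) * div_term   # [batch, seq_len, d_time // 2]
    pe = torch.zeros(*time.shape, d_time)
    pe[..., 0::2] = torch.sin(angles)
    pe[..., 1::2] = torch.cos(angles)
    return pe

emb = temporal_embedding(torch.rand(4, 10) * 5.0, d_time=16)
print(emb.shape)  # torch.Size([4, 10, 16])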

forward_pass(init_cur_layer, time_emb, sample_time_emb, event_emb, combined_mask)[source]

Update the hidden states layer by layer: at each layer, the current states (shifted by the sample-time embedding) attend to the event embeddings under the combined mask.

Parameters:
  • init_cur_layer (tensor) – [batch_size, seq_len, hidden_size]

  • time_emb (tensor) – [batch_size, seq_len, hidden_size]

  • sample_time_emb (tensor) – [batch_size, seq_len, hidden_size]

  • event_emb (tensor) – [batch_size, seq_len, hidden_size]

  • combined_mask (tensor) – [batch_size, seq_len * 2, seq_len * 2], combined attention and layer mask (see make_combined_att_mask)

Returns:

[batch_size, seq_len, hidden_size*2]

Return type:

tensor
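
A hedged sketch of the sequential update, with torch.nn.MultiheadAttention standing in for the library's own encoder layer; the real implementation also iterates over multiple heads and updates the event embeddings, so this is a simplified single-stack view:

import torch

def forward_pass_sketch(init_cur_layer, sample_time_emb, event_emb,
                        combined_mask, layers):
    """init_cur_layer / sample_time_emb / event_emb: [B, L, H];
    combined_mask: [B, 2L, 2L] boolean, True = masked;
    layers: list of nn.MultiheadAttention(H, num_heads=1, batch_first=True)."""
    seq_len = event_emb.size(1)
    cur = init_cur_layer
    for attn in layers:
        # Stack [query copies shifted by the sample-time embedding ; events].
        stacked = torch.cat([cur + sample_time_emb, event_emb], dim=1)
        out, _ = attn(stacked, stacked, stacked, attn_mask=combined_mask)
        # Residual tanh update on the query half only.
        cur = torch.tanh(out[:, :seq_len, :]) + cur
    return cur

B, L, H = 2, 5, 16
layers = [torch.nn.MultiheadAttention(H, num_heads=1, batch_first=True)
          for _ in range(2)]
mask = torch.zeros(B, 2 * L, 2 * L, dtype=torch.bool)  # demo: nothing masked
x = torch.randn(B, L, H)
out = forward_pass_sketch(torch.zeros(B, L, H), x, x, mask, layers)
print(out.shape)  # torch.Size([2, 5, 16])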

seq_encoding(time_seqs, event_seqs)[source]

Encode the sequence.

Parameters:
  • time_seqs (tensor) – time seqs input, [batch_size, seq_len].

  • event_seqs (tensor) – event type seqs input, [batch_size, seq_len].

Returns:

event embedding, time embedding and type embedding.

Return type:

tuple
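
A hedged sketch of the encoding step: an event-type embedding table plus a sinusoidal temporal embedding, combined here by concatenation (one common choice; the library's exact combination may differ):

import math

import torch

num_event_types, d_type, d_time = 5, 16, 16
type_emb_layer = torch.nn.Embedding(num_event_types, d_type)

time_seqs = torch.rand(4, 10) * 5.0                      # [batch_size, seq_len]
event_seqs = torch.randint(0, num_event_types, (4, 10))  # [batch_size, seq_len]

# Sinusoidal temporal embedding (see the compute_temporal_embedding sketch).
div_term = torch.exp(torch.arange(0, d_time, 2).float()
                     * -(math.log(10000.0) / d_time))
angles = time_seqs.unsqueeze(-1) * div_term
time_emb = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)

type_emb = type_emb_layer(event_seqs)                    # [4, 10, d_type]
event_emb = torch.cat([type_emb, time_emb], dim=-1)      # [4, 10, d_type + d_time]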

make_layer_mask(attention_mask)[source]

Create a tensor for masking across layers.

Parameters:

attention_mask (tensor) – mask for attention operation, [batch_size, seq_len, seq_len]

Returns:

a diagonal matrix of the same size as the attention mask, intended to keep only the current layer's own position, [batch_size, seq_len, seq_len]

Return type:

tensor
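
A minimal sketch, assuming the Boolean convention "True = masked": everything off the diagonal is masked, so each position keeps only its own current-layer slot.

import torch

def make_layer_mask_sketch(attention_mask: torch.Tensor) -> torch.Tensor:
    """[batch_size, seq_len, seq_len] -> same-shape mask,
    unmasked only on the diagonal."""
    seq_len = attention_mask.size(1)
    eye = torch.eye(seq_len, dtype=torch.bool)
    # Mask every off-diagonal pair; keep each position's own slot.
    return (~eye).unsqueeze(0).expand_as(attention_mask)

mask = torch.zeros(2, 4, 4, dtype=torch.bool)
print(make_layer_mask_sketch(mask)[0].int())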

make_combined_att_mask(attention_mask, layer_mask)[source]

Combine the attention mask and the layer mask.

Parameters:
  • attention_mask (tensor) – mask for attention operation, [batch_size, seq_len, seq_len]

  • layer_mask (tensor) – mask for other layers, [batch_size, seq_len, seq_len]

Returns:

[batch_size, seq_len * 2, seq_len * 2]

Return type:

tensor
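
The combined mask is block-structured over the stacked [events ; queries] sequence of length 2 * seq_len. A sketch of one plausible layout (the exact block arrangement in the library may differ): event rows attend only to past events, while query rows attend to past events plus their own diagonal slot.

import torch

def make_combined_att_mask_sketch(attention_mask, layer_mask):
    """attention_mask, layer_mask: [B, L, L], True = masked -> [B, 2L, 2L]."""
    # Event rows: past events only; never attend to the query copies.
    event_rows = torch.cat([attention_mask, torch.ones_like(layer_mask)], dim=-1)
    # Query rows: past events plus the query's own diagonal slot.
    query_rows = torch.cat([attention_mask, layer_mask], dim=-1)
    return torch.cat([event_rows, query_rows], dim=1)

att = torch.triu(torch.ones(1, 4, 4, dtype=torch.bool), diagonal=0)
layer = ~torch.eye(4, dtype=torch.bool).unsqueeze(0)
print(make_combined_att_mask_sketch(att, layer).shape)  # torch.Size([1, 8, 8])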

forward(time_seqs, event_seqs, attention_mask, sample_times=None)[source]

Call the model.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], sequences of timestamps.

  • event_seqs (tensor) – [batch_size, seq_len], sequences of event types.

  • attention_mask (tensor) – [batch_size, seq_len, seq_len], masks for event sequences.

  • sample_times (tensor, optional) – [batch_size, seq_len, num_samples]. Defaults to None.

Returns:

states at sampling times, [batch_size, seq_len, num_samples].

Return type:

tensor
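
A hedged call sketch showing the expected input shapes; the masking convention below (Boolean, strictly-upper-triangular = future masked) is an assumption for illustration, and the model construction is elided (see the constructor note above):

import torch

batch_size, seq_len, num_samples = 2, 8, 10
time_seqs = torch.sort(torch.rand(batch_size, seq_len) * 5.0, dim=-1).values
event_seqs = torch.randint(0, 5, (batch_size, seq_len))
# Demo convention only: mask strictly-future positions.
attention_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool),
                            diagonal=1).unsqueeze(0).expand(batch_size, -1, -1)
sample_times = time_seqs.unsqueeze(-1) + torch.rand(batch_size, seq_len, num_samples)

# encodings = model(time_seqs, event_seqs, attention_mask, sample_times)
# Per the docstring, encodings holds the states at the sampling times.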

loglike_loss(batch)[source]

Compute the log-likelihood loss.

Parameters:

batch (list) – batch input.

Returns:

log-likelihood loss and the number of events.

Return type:

list
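
For context, the point-process log-likelihood is the sum of log-intensities at the observed events minus the compensator (the integral of the total intensity over the observation window), which is typically approximated by Monte Carlo over times sampled between events. A generic self-contained sketch of that computation, not the exact EasyTPP routine:

import torch

def tpp_loglike_sketch(event_intensity, sample_intensities, time_delta_seqs, mask):
    """event_intensity: [B, L] intensity of the observed type at each event;
    sample_intensities: [B, L, S, K] intensities at S sampled times, K types;
    time_delta_seqs: [B, L] inter-event gaps; mask: [B, L] valid-event mask."""
    event_ll = (torch.log(event_intensity + 1e-12) * mask).sum()
    # Monte Carlo estimate of the compensator: mean total intensity x gap.
    total = sample_intensities.sum(dim=-1)                  # [B, L, S]
    non_event_ll = (total.mean(dim=-1) * time_delta_seqs * mask).sum()
    num_events = mask.sum()
    return -(event_ll - non_event_ll), num_events           # negative log-likelihood

B, L, S, K = 2, 8, 10, 5
loss, n = tpp_loglike_sketch(torch.rand(B, L) + 0.1, torch.rand(B, L, S, K),
                             torch.rand(B, L), torch.ones(B, L))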

compute_states_at_sample_times(time_seqs, type_seqs, attention_mask, sample_times)[source]

Compute the states at sampling times.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], sequences of timestamps.

  • type_seqs (tensor) – [batch_size, seq_len], sequences of event types.

  • attention_mask (tensor) – [batch_size, seq_len, seq_len], masks for event sequences.

  • sample_times (tensor) – [batch_size, seq_len, num_samples], times at which to evaluate the hidden states.

Returns:

hidden states at sampling times.

Return type:

tensor
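
One simple (non-vectorized) way to realize this, assuming a callable that returns [batch_size, seq_len, hidden_size] states for a single column of query times; the real implementation batches the samples, so this is purely illustrative:

import torch

def states_at_sample_times_sketch(encode_at, time_seqs, type_seqs,
                                  attention_mask, sample_times):
    """sample_times: [B, L, S] -> states: [B, L, S, H]."""
    states = [encode_at(time_seqs, type_seqs, attention_mask, sample_times[..., i])
              for i in range(sample_times.size(-1))]
    return torch.stack(states, dim=2)

# Dummy encoder standing in for the model's forward pass; hypothetical.
dummy = lambda t, y, m, s: torch.randn(t.size(0), t.size(1), 16)
out = states_at_sample_times_sketch(dummy, torch.rand(2, 8),
                                    torch.zeros(2, 8, dtype=torch.long),
                                    torch.zeros(2, 8, 8, dtype=torch.bool),
                                    torch.rand(2, 8, 10))
print(out.shape)  # torch.Size([2, 8, 10, 16])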

compute_intensities_at_sample_times(time_seqs, time_delta_seqs, type_seqs, sample_times, **kwargs)[source]

Compute the intensities at sampled times.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], sequences of timestamps.

  • time_delta_seqs (tensor) – [batch_size, seq_len], sequences of delta times.

  • type_seqs (tensor) – [batch_size, seq_len], sequences of event types.

  • sample_times (tensor) – [batch_size, seq_len, num_samples], sampled times at which to evaluate intensities.

Returns:

intensities at the sampled times, [batch_size, seq_len, num_samples, event_num].

Return type:

tensor
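
Given the hidden states at the sampled times, per-type intensities are obtained by a linear projection onto the event types followed by a softplus to keep them positive, the standard construction for attention-based TPPs; the head below is illustrative, not the library's exact layer.

import torch

hidden_size, num_event_types = 32, 5
intensity_head = torch.nn.Sequential(
    torch.nn.Linear(hidden_size, num_event_types, bias=False),
    torch.nn.Softplus(),
)

states = torch.randn(2, 8, 10, hidden_size)  # [batch, seq_len, num_samples, hidden]
intensities = intensity_head(states)         # [batch, seq_len, num_samples, event_num]
print(intensities.shape)                     # torch.Size([2, 8, 10, 5])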