easy_tpp.model.torch_model.torch_basemodel

Base model with common functionality

Classes

TorchBaseModel(model_config)

class easy_tpp.model.torch_model.torch_basemodel.TorchBaseModel(model_config)[source]
__init__(model_config)[source]

Initialize the BaseModel

Parameters:

model_config (EasyTPP.ModelConfig) – configuration specifying the model.

static generate_model_from_config(model_config)[source]

Generate a model instance of the appropriate derived class based on the model config.

Parameters:

model_config (EasyTPP.ModelConfig) – config of model specs.
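
Example (illustrative only): assuming model_config is an already-built EasyTPP.ModelConfig, e.g. produced by the EasyTPP config pipeline from an experiment config file, the factory call looks like:

    from easy_tpp.model.torch_model.torch_basemodel import TorchBaseModel

    # model_config is assumed to exist already; how it is constructed depends
    # on the EasyTPP config pipeline and is not shown here.
    model = TorchBaseModel.generate_model_from_config(model_config)
    print(type(model).__name__)  # the concrete derived class selected by the config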

static get_logits_at_last_step(logits, batch_non_pad_mask, sample_len=None)[source]

Retrieve the hidden states (logits) at the last non-pad event of each sequence.

Parameters:
  • logits (tensor) – [batch_size, seq_len, hidden_dim], a sequence of logits

  • batch_non_pad_mask (tensor) – [batch_size, seq_len], a sequence of masks

  • sample_len (tensor) – defaults to None, in which case batch_non_pad_mask is used to find the last non-pad position.

Reference: https://medium.com/analytics-vidhya/understanding-indexing-with-pytorch-gather-33717a84ebc4

Returns:

the logits at the last non-pad (EOS) event

Return type:

tensor
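
The reference above explains the torch.gather indexing pattern this helper is based on. A self-contained sketch of that pattern (not the library's exact implementation; it assumes padding sits at the end of each sequence):

    import torch

    def last_nonpad_logits(logits, batch_non_pad_mask):
        """Select the logits at the last non-pad position of each sequence.

        logits: [batch_size, seq_len, hidden_dim]
        batch_non_pad_mask: [batch_size, seq_len], 1 for real events, 0 for padding
        """
        # Index of the last non-pad event per sequence: (#non-pad events) - 1.
        last_idx = batch_non_pad_mask.sum(dim=1).long() - 1              # [batch_size]
        # Reshape to [batch_size, 1, hidden_dim] so gather picks one step per sequence.
        gather_idx = last_idx.view(-1, 1, 1).expand(-1, 1, logits.size(-1))
        return logits.gather(dim=1, index=gather_idx).squeeze(1)         # [batch_size, hidden_dim]

    logits = torch.randn(2, 5, 8)
    mask = torch.tensor([[1, 1, 1, 0, 0], [1, 1, 1, 1, 1]])
    print(last_nonpad_logits(logits, mask).shape)                        # torch.Size([2, 8])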

compute_loglikelihood(time_delta_seq, lambda_at_event, lambdas_loss_samples, seq_mask, lambda_type_mask)[source]

Compute the log-likelihood of the event sequence based on Equation (8) of the NHP paper.

Parameters:
  • time_delta_seq (tensor) – [batch_size, seq_len], time_delta_seq from the model input.

  • lambda_at_event (tensor) – [batch_size, seq_len, num_event_types], unmasked intensity right after each event.

  • lambdas_loss_samples (tensor) – [batch_size, seq_len, num_sample, num_event_types], intensity at the sampled times.

  • seq_mask (tensor) – [batch_size, seq_len], sequence mask vector used to mask the padded events.

  • lambda_type_mask (tensor) – [batch_size, seq_len, num_event_types], type mask matrix used to mask the padded event types.

Returns:

event log-likelihood, non-event log-likelihood, and the intensity at events with padded events masked

Return type:

tuple
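
For orientation, here is a hedged sketch of the Monte-Carlo log-likelihood that Equation (8) of the NHP paper prescribes: the event term sums the log-intensity of the observed event type at each event, and the non-event term approximates the integral of the total intensity over each interval by averaging the sampled intensities and scaling by the interval length. Tensor names follow the parameters above; the library's exact masking, clamping, and reduction may differ:

    import torch

    def nhp_loglik_sketch(time_delta_seq, lambda_at_event, lambdas_loss_samples,
                          seq_mask, lambda_type_mask, eps=1e-12):
        # Event term: intensity of the observed type at each event, obtained by
        # zeroing the other types with the type mask and summing over types.
        event_lambdas = (lambda_at_event * lambda_type_mask).sum(dim=-1)   # [batch, seq_len]
        event_ll = (event_lambdas + eps).log() * seq_mask                  # mask padded events

        # Non-event term: Monte-Carlo estimate of the integral of the total
        # intensity over each interval, i.e. dt * mean_over_samples(total_intensity).
        total_lambda_samples = lambdas_loss_samples.sum(dim=-1)            # [batch, seq_len, num_sample]
        non_event_ll = total_lambda_samples.mean(dim=-1) * time_delta_seq * seq_mask

        # Log-likelihood of the batch: event term minus the integral term.
        return event_ll.sum() - non_event_ll.sum()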

make_dtime_loss_samples(time_delta_seq)[source]

Generate time-point samples within every inter-event interval.

Parameters:

time_delta_seq (tensor) – [batch_size, seq_len], inter-event times from the model input.

Returns:

time-point samples within each interval, [batch_size, seq_len, n_samples]

Return type:

tensor
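
One common way to draw such samples, sketched below, is to take evenly spaced ratios in [0, 1] and scale them by each inter-event time; whether the library uses evenly spaced or random ratios, and where n_samples comes from, is configuration-dependent, so treat this as illustrative:

    import torch

    def dtime_samples_sketch(time_delta_seq, n_samples=20):
        # Evenly spaced ratios in [0, 1], shared across all intervals.
        ratios = torch.linspace(0.0, 1.0, n_samples, device=time_delta_seq.device)  # [n_samples]
        # Scale each interval length by the ratios: [batch_size, seq_len, n_samples].
        return time_delta_seq.unsqueeze(-1) * ratios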

predict_one_step_at_every_event(batch)[source]

One-step prediction for every event in the sequence.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], event timestamps (taken from batch).

  • time_delta_seqs (tensor) – [batch_size, seq_len], inter-event times.

  • type_seqs (tensor) – [batch_size, seq_len], event type indices.

Returns:

tensors of the dtime and type predictions, each [batch_size, seq_len].

Return type:

tuple
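
An illustrative call; the layout of batch (padded time, delta-time and type tensors plus masks, as produced by the EasyTPP data loader) is an assumption about the surrounding pipeline, not part of this method's signature:

    # `model` is an instance of a concrete subclass and `batch` a batch from the
    # EasyTPP data loader (assumed layout; see the parameter list above).
    dtime_pred, type_pred = model.predict_one_step_at_every_event(batch)
    print(dtime_pred.shape, type_pred.shape)   # both [batch_size, seq_len] per the docs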

predict_multi_step_since_last_event(batch, forward=False)[source]

Multi-step prediction starting from the last event in the sequence.

Parameters:
  • time_seqs (tensor) – [batch_size, seq_len], event timestamps (taken from batch).

  • time_delta_seqs (tensor) – [batch_size, seq_len], inter-event times.

  • type_seqs (tensor) – [batch_size, seq_len], event type indices.

  • num_step (int) – number of prediction steps.

Returns:

tensors of the dtime and type predictions, [batch_size, seq_len].

Return type:

tuple
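
And a matching sketch for the multi-step case; num_step is documented above but does not appear in the call signature, so it is presumably supplied through the model or sampler configuration (an assumption here):

    # Roll out predictions starting from the last observed event of each sequence.
    # `forward=False` mirrors the default shown in the signature above.
    dtime_pred, type_pred = model.predict_multi_step_since_last_event(batch, forward=False)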