easy_tpp.model.torch_model.torch_baselayer

Functions

activation_layer(act_name)

Construct an activation layer from its name (str) or an nn.Module.

attention(query, key, value[, mask, dropout])

Compute the attention output from query, key and value tensors, with optional masking and dropout.

Classes

DNN(inputs_dim, hidden_size[, activation, ...])

The Multi-Layer Perceptron (MLP); see the full class entry below for input/output shapes and arguments.

EncoderLayer(d_model, self_attn[, ...])

GELU(*args, **kwargs)

GELU activation function

Identity(*args, **kwargs)

MultiHeadAttention(n_head, d_input, d_model)

SublayerConnection(d_model, dropout)

TimePositionalEncoding(d_model[, max_len, ...])

Temporal encoding in THP, ICML 2020

TimeShiftedPositionalEncoding(d_model[, ...])

Time shifted positional encoding in SAHP, ICML 2020

class easy_tpp.model.torch_model.torch_baselayer.MultiHeadAttention(n_head, d_input, d_model, dropout=0.1, output_linear=False)[source]
__init__(n_head, d_input, d_model, dropout=0.1, output_linear=False)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(query, key, value, mask, output_weight=False)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
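Example (a minimal usage sketch; the tensor shapes, the self-attention setup, and the boolean mask convention are illustrative assumptions, not part of the documented API):

    import torch
    from easy_tpp.model.torch_model.torch_baselayer import MultiHeadAttention

    batch_size, seq_len, d_input, d_model, n_head = 4, 10, 32, 64, 4
    attn = MultiHeadAttention(n_head=n_head, d_input=d_input, d_model=d_model)

    x = torch.randn(batch_size, seq_len, d_input)
    # Causal mask so each position only attends to itself and earlier positions;
    # whether masked entries are encoded as 0 or 1 should be checked in the source.
    mask = torch.tril(torch.ones(seq_len, seq_len)).bool().unsqueeze(0).expand(batch_size, -1, -1)

    out = attn(x, x, x, mask)  # self-attention; expected shape [batch_size, seq_len, d_model]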

class easy_tpp.model.torch_model.torch_baselayer.SublayerConnection(d_model, dropout)[source]
__init__(d_model, dropout)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, sublayer)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
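Example (a minimal sketch; treating sublayer as an arbitrary callable that gets wrapped with a residual path, plus normalization and dropout, is an assumption based on the class name and the standard Transformer pattern):

    import torch
    import torch.nn as nn
    from easy_tpp.model.torch_model.torch_baselayer import SublayerConnection

    d_model = 64
    wrapper = SublayerConnection(d_model=d_model, dropout=0.1)

    x = torch.randn(4, 10, d_model)
    feed_forward = nn.Linear(d_model, d_model)  # any module mapping d_model -> d_model

    out = wrapper(x, feed_forward)  # feed_forward applied inside the sublayer connection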

class easy_tpp.model.torch_model.torch_baselayer.EncoderLayer(d_model, self_attn, feed_forward=None, use_residual=False, dropout=0.1)[source]
__init__(d_model, self_attn, feed_forward=None, use_residual=False, dropout=0.1)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, mask)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
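Example (a minimal sketch; the choice of MultiHeadAttention as self_attn, the feed-forward shape, and the mask convention are illustrative assumptions):

    import torch
    import torch.nn as nn
    from easy_tpp.model.torch_model.torch_baselayer import EncoderLayer, MultiHeadAttention

    d_model, n_head, batch_size, seq_len = 64, 4, 4, 10
    self_attn = MultiHeadAttention(n_head=n_head, d_input=d_model, d_model=d_model)
    feed_forward = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.ReLU(), nn.Linear(4 * d_model, d_model))
    layer = EncoderLayer(d_model=d_model, self_attn=self_attn, feed_forward=feed_forward, use_residual=True)

    x = torch.randn(batch_size, seq_len, d_model)
    mask = torch.tril(torch.ones(seq_len, seq_len)).bool().unsqueeze(0).expand(batch_size, -1, -1)
    out = layer(x, mask)  # expected shape [batch_size, seq_len, d_model]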

class easy_tpp.model.torch_model.torch_baselayer.TimePositionalEncoding(d_model, max_len=5000, device='cpu')[source]

Temporal encoding in THP, ICML 2020

__init__(d_model, max_len=5000, device='cpu')[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x)[source]

Compute the time positional encoding defined in Equation (2) of the THP model.

Parameters:

x (tensor) – time_seqs, [batch_size, seq_len]

Returns:

temporal encoding vector, [batch_size, seq_len, model_dim]
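Example (a minimal sketch based on the documented parameter and return shapes; the event times are random placeholders):

    import torch
    from easy_tpp.model.torch_model.torch_baselayer import TimePositionalEncoding

    d_model = 32
    enc = TimePositionalEncoding(d_model=d_model)

    # time_seqs: event timestamps, [batch_size, seq_len]
    time_seqs = torch.cumsum(torch.rand(4, 10), dim=-1)
    temporal_emb = enc(time_seqs)  # expected shape [4, 10, d_model]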

class easy_tpp.model.torch_model.torch_baselayer.TimeShiftedPositionalEncoding(d_model, max_len=5000, device='cpu')[source]

Time shifted positional encoding in SAHP, ICML 2020

__init__(d_model, max_len=5000, device='cpu')[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(x, interval)[source]
Parameters:
  • x – time_seq, [batch_size, seq_len]

  • interval – time_delta_seq, [batch_size, seq_len]

Returns:

Time shifted positional encoding defined in Equation (8) of the SAHP model
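Example (a minimal sketch based on the documented parameters; the timestamps and inter-event times are random placeholders):

    import torch
    from easy_tpp.model.torch_model.torch_baselayer import TimeShiftedPositionalEncoding

    enc = TimeShiftedPositionalEncoding(d_model=32)

    time_delta_seq = torch.rand(4, 10)               # inter-event times, [batch_size, seq_len]
    time_seq = torch.cumsum(time_delta_seq, dim=-1)  # event timestamps, [batch_size, seq_len]
    emb = enc(time_seq, time_delta_seq)              # time shifted positional encoding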

class easy_tpp.model.torch_model.torch_baselayer.GELU(*args, **kwargs)[source]

GELU activation function

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
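Example (a minimal sketch; GELU is an element-wise activation and can be used wherever an nn.Module activation is expected):

    import torch
    from easy_tpp.model.torch_model.torch_baselayer import GELU

    act = GELU()
    y = act(torch.linspace(-3.0, 3.0, steps=7))  # element-wise GELU activation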

class easy_tpp.model.torch_model.torch_baselayer.Identity(*args, **kwargs)[source]
forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
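Example (a minimal sketch; that Identity simply passes its input through unchanged, e.g. as a placeholder where no transformation is wanted, is an assumption based on the class name):

    import torch
    from easy_tpp.model.torch_model.torch_baselayer import Identity

    layer = Identity()
    x = torch.randn(2, 3)
    y = layer(x)  # expected to return x unchanged (assumption from the class name)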

easy_tpp.model.torch_model.torch_baselayer.activation_layer(act_name)[source]

Construct an activation layer.

Parameters:

act_name – str or nn.Module, name of the activation function

Returns:

activation layer

Return type:

nn.Module
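Example (a minimal sketch; 'relu' is grounded by the default activation of DNN below, while the full set of accepted names should be checked in the source):

    from easy_tpp.model.torch_model.torch_baselayer import activation_layer

    act = activation_layer('relu')  # 'relu' is the default activation used by DNN below
    print(act)                      # the constructed activation layer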

class easy_tpp.model.torch_model.torch_baselayer.DNN(inputs_dim, hidden_size, activation='relu', l2_reg=0, dropout_rate=0, use_bn=False, init_std=0.0001)[source]

The Multi-Layer Perceptron.

Input shape
  • nD tensor with shape: (batch_size, ..., input_dim). The most common situation would be a 2D input with shape (batch_size, input_dim).

Output shape
  • nD tensor with shape: (batch_size, ..., hidden_size[-1]). For instance, for a 2D input with shape (batch_size, input_dim), the output would have shape (batch_size, hidden_size[-1]).

Arguments
  • inputs_dim: input feature dimension.

  • hidden_size: list of positive integers; its length gives the number of layers and each entry the number of units in that layer.

  • activation: activation function to use.

  • l2_reg: float between 0 and 1. L2 regularizer strength applied to the kernel weights matrix.

  • dropout_rate: float in [0, 1). Fraction of the units to drop out.

  • use_bn: bool. Whether to apply BatchNormalization before the activation.

  • init_std: float. Standard deviation used to initialize the layer weights.

__init__(inputs_dim, hidden_size, activation='relu', l2_reg=0, dropout_rate=0, use_bn=False, init_std=0.0001)[source]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

forward(inputs)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
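Example (a minimal sketch following the documented input/output shapes; the dimensions are illustrative):

    import torch
    from easy_tpp.model.torch_model.torch_baselayer import DNN

    mlp = DNN(inputs_dim=16, hidden_size=[64, 32], activation='relu', dropout_rate=0.1)

    x = torch.randn(8, 16)  # (batch_size, input_dim)
    y = mlp(x)              # expected shape (8, 32), i.e. (batch_size, hidden_size[-1])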