# EasyTPP Documentation

EasyTPP is an easy-to-use development and application toolkit for Neural Temporal Point Processes (Neural TPPs), with key features in configurability, compatibility and reproducibility. We hope this project benefits both researchers and practitioners by making customized development and open benchmarking easy.
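To ground the table of contents below, here is a minimal sketch of a typical EasyTPP training run, following the config-then-runner pattern the toolkit is built around. The entry points `Config.build_from_yaml_file` and `Runner.build_from_config` are assumed here, and the config path and experiment id are placeholders; treat this as an illustrative sketch rather than a definitive recipe.

```python
import argparse

from easy_tpp.config_factory import Config
from easy_tpp.runner import Runner


def main():
    parser = argparse.ArgumentParser()
    # Placeholder paths/ids: point these at your own experiment config.
    parser.add_argument('--config_dir', type=str, default='configs/experiment_config.yaml')
    parser.add_argument('--experiment_id', type=str, default='NHP_train')
    args = parser.parse_args()

    # Build the full experiment config (data, model, trainer) from YAML.
    config = Config.build_from_yaml_file(args.config_dir, experiment_id=args.experiment_id)

    # The Runner wires together preprocessing, the model and the training loop.
    model_runner = Runner.build_from_config(config)
    model_runner.run()


if __name__ == '__main__':
    main()
```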
- Thinning Algorithm (a generic sketch of Ogata's thinning appears after this list)
- TensorBoard
- Performance Benchmarks
- Implementation Details
- Basic Structure
- Neural Hawkes Process (NHP)
- Attentive Neural Hawkes Process (AttNHP)
- Transformer Hawkes Process (THP)
- Self-Attentive Hawkes Process (SAHP)
- Recurrent Marked Temporal Point Processes (RMTPP)
- Intensity-Free Learning of Temporal Point Processes (IntensityFree)
- Fully Neural Network based Model for General Temporal Point Processes (FullyNN)
- ODE-based Temporal Point Process (ODETPP)
- Attentive Neural Hawkes Network (ANHN)
- Config
- Preprocess
- Model
- Runner
- Hyper-parameter Optimization
- TensorFlow and PyTorch Wrappers
- Utilities
  - py_assert()
  - make_config_string()
  - create_folder()
  - save_yaml_config()
  - load_yaml_config()
  - RunnerPhase
  - LogConst
  - load_pickle()
  - has_key()
  - array_pad_cols()
  - MetricsHelper
  - MetricsTracker
  - set_device()
  - set_optimizer()
  - set_seed()
  - save_pickle()
  - count_model_params()
  - Registrable
  - Timer
  - concat_element()
  - get_stage()
  - to_dict()
  - parse_uri_to_protocol_and_path()
  - is_master_process()
  - is_local_master_process()
  - dict_deep_update()
  - DefaultRunnerConfig
  - rk4_step_method() (a generic RK4 step is sketched after this list)
  - is_tf_available()
  - is_tensorflow_probability_available()
  - is_torchvision_available()
  - is_torch_cuda_available()
  - is_tf_gpu_available()
  - is_torch_gpu_available()
  - is_torch_available()
  - requires_backends()
  - PaddingStrategy
  - ExplicitEnum
  - TruncationStrategy
  - is_torch_device()
  - is_numpy_array()
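The Thinning Algorithm section above covers how event times are sampled from a learned intensity. As a reference point, below is a generic sketch of Ogata's thinning for a one-dimensional point process. It illustrates the algorithm itself, not the library's implementation: `intensity_fn` and `intensity_upper_bound` are hypothetical inputs supplied by the caller, and for a history-dependent intensity (e.g. Hawkes) the upper bound would need to be refreshed after every accepted event.

```python
import numpy as np


def ogata_thinning(intensity_fn, intensity_upper_bound, t_start, t_end, rng=None):
    """Sample event times on [t_start, t_end) via Ogata's thinning.

    intensity_fn: callable t -> lambda(t), the intensity function.
    intensity_upper_bound: constant M with lambda(t) <= M on [t_start, t_end).
    """
    rng = np.random.default_rng() if rng is None else rng
    events = []
    t = t_start
    while True:
        # Propose the next candidate from a homogeneous Poisson process of rate M.
        t += rng.exponential(1.0 / intensity_upper_bound)
        if t >= t_end:
            break
        # Accept the candidate with probability lambda(t) / M.
        if rng.uniform() * intensity_upper_bound <= intensity_fn(t):
            events.append(t)
    return np.asarray(events)


# Example: exponentially decaying intensity, bounded above by its value at t=0.
samples = ogata_thinning(lambda t: 2.0 * np.exp(-0.5 * t), 2.0, 0.0, 10.0)
```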
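Similarly, the `rk4_step_method()` utility name points at a classical fourth-order Runge-Kutta step, the kind of solver step that ODE-based models such as ODETPP use to evolve a hidden state between events. The sketch below shows the generic method under that assumption; it is not the library's signature.

```python
import numpy as np


def rk4_step(f, t, y, dt):
    """One classical Runge-Kutta 4th-order step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + 0.5 * dt, y + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, y + 0.5 * dt * k2)
    k4 = f(t + dt, y + dt * k3)
    # Weighted average of the four slope estimates.
    return y + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)


# Example: integrate dy/dt = -y from y(0) = 1 over a single step of size 0.1.
y1 = rk4_step(lambda t, y: -y, 0.0, np.array([1.0]), 0.1)
```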