save_load
Utilities for saving and loading model checkpoints and embeddings.
- the_utils.save_load.save_model(model_filename: str, model: Module, optimizer: Optimizer, current_epoch: int, loss: float) → None [source]
Save the model, optimizer state, current epoch, and loss to
.checkpoints/${model_filename}.pt
- Parameters:
model_filename (str) – filename to save the checkpoint under.
model (torch.nn.Module) – model to save.
optimizer (torch.optim.Optimizer) – optimizer whose state is saved.
current_epoch (int) – current training epoch.
loss (float) – current loss value.
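The four arguments above are presumably bundled into a single checkpoint dictionary before being written with torch.save. A stdlib-only sketch of that round trip (the key names and dict layout are assumptions, not confirmed by the source):

```python
import os
import pickle
import tempfile

# Hypothetical checkpoint layout: one dict bundling everything save_model receives.
checkpoint = {
    "model_state": {"layer.weight": [0.1, 0.2]},  # stand-in for model.state_dict()
    "optimizer_state": {"lr": 0.01},              # stand-in for optimizer.state_dict()
    "epoch": 5,
    "loss": 0.342,
}

# torch.save is pickle-based, so plain pickle illustrates the same round trip.
path = os.path.join(tempfile.mkdtemp(), "demo.pt")
with open(path, "wb") as f:
    pickle.dump(checkpoint, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored["epoch"], restored["loss"])  # 5 0.342
```

Bundling everything in one file keeps the model, optimizer state, and bookkeeping in sync, which is what makes resuming training from a single filename possible.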
- the_utils.save_load.load_model(model_filename: str, model: Module, optimizer: Optimizer | None = None, device: device = device(type='cpu')) → Tuple[Module, Optimizer, int, float] [source]
Load the model, optimizer state, epoch, and loss from
.checkpoints/${model_filename}.pt
- Parameters:
model_filename (str) – filename of the checkpoint to load.
model (torch.nn.Module) – model to load the saved state into.
optimizer (torch.optim.Optimizer, optional) – optimizer to restore. Defaults to None.
device (torch.device, optional) – device to map the checkpoint to. Defaults to device(type='cpu').
- Returns:
(model, optimizer, epoch, loss)
- Return type:
Tuple[torch.nn.Module, torch.optim.Optimizer, int, float]
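Callers typically unpack the returned 4-tuple to resume training where the checkpoint left off. A minimal sketch, with a stub standing in for load_model (the stub and its values are illustrative only, not the library's actual behavior):

```python
def load_model_stub(model_filename: str):
    # Stand-in for the_utils.save_load.load_model, which reads
    # .checkpoints/<model_filename>.pt and returns (model, optimizer, epoch, loss).
    return "model", "optimizer", 7, 0.25

model, optimizer, start_epoch, last_loss = load_model_stub("demo")

# Resume the training loop after the saved epoch.
total_epochs = 10
remaining = list(range(start_epoch + 1, total_epochs + 1))
print(remaining)  # [8, 9, 10]
```

Returning the epoch alongside the model is what lets the training loop pick up at `start_epoch + 1` rather than restarting from zero.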
- the_utils.save_load.save_embedding(node_embeddings: Tensor, dataset_name: str, model_name: str, params: dict, save_dir: str = './save', verbose: bool | int = True) [source]
Save node embeddings to disk.
- Parameters:
node_embeddings (torch.Tensor) – node embeddings to save.
dataset_name (str) – dataset name.
model_name (str) – model name.
params (dict) – parameter dict.
save_dir (str, optional) – directory to save embeddings into. Defaults to "./save".
verbose (Union[bool, int], optional) – whether to print debug info. Defaults to True.
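How the saved file is named is not documented here; a plausible reading is that save_embedding derives the path from dataset_name, model_name, and params under save_dir. The naming scheme below is purely a hypothetical sketch of that idea, not the library's actual convention:

```python
import os

def embedding_path(dataset_name: str, model_name: str, params: dict,
                   save_dir: str = "./save") -> str:
    # Hypothetical naming scheme: dataset, model, then sorted key=value pairs,
    # so the same (dataset, model, params) triple always maps to the same file.
    tag = "_".join(f"{k}={v}" for k, v in sorted(params.items()))
    return os.path.join(save_dir, f"{dataset_name}_{model_name}_{tag}.pt")

print(embedding_path("cora", "gcn", {"lr": 0.01, "dim": 64}))
```

Encoding the parameters into the filename makes embeddings from different hyperparameter runs coexist in one directory without overwriting each other.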