.. _ValidationTrainer:

Trainer and validation module
-----------------------------

In this module, the fixed dataset provided (usually by ``study.add_many``) is split into
training and testing subsets according to the ``split_ratio`` parameter. The ensemble is
trained on the training subset by minimizing the corresponding loss, while the
generalization performance is assessed via the loss induced on the testing subset. The
ensemble weights yielding the smallest loss on the testing subset are saved, effectively
reducing the chances of overfitting. The worst-performing ensemble members, as measured
by the loss on the testing subset, can be expelled by setting the ``num_expel_NNs``
parameter, which yields a reduced ensemble with better generalization. A conceptual
sketch of this procedure is given at the end of this section.

.. _ActiveLearning.NN.validation_trainer.num_epochs:

num_epochs (int)
""""""""""""""""

Number of epochs used in the training of all ensemble members.

Default: ``800``

.. _ActiveLearning.NN.validation_trainer.num_expel_NNs:

num_expel_NNs (int)
"""""""""""""""""""

The ensemble members are sorted according to the final loss induced on the testing
dataset, and the ``num_expel_NNs`` worst-performing ones are removed.

Default: ``0``

.. _ActiveLearning.NN.validation_trainer.learning_rate:

learning_rate (float)
"""""""""""""""""""""

Learning rate hyperparameter used in the loss minimization when cold starting.

Default: ``0.006``

.. _ActiveLearning.NN.validation_trainer.scale_grad_loss:

scale_grad_loss (float)
"""""""""""""""""""""""

Hyperparameter used to scale the loss induced by the gradients with respect to the
input parameters.

Default: ``0.5``

.. _ActiveLearning.NN.validation_trainer.optimizer:

optimizer (str)
"""""""""""""""

Optimizer used for the minimization of the loss during the ensemble training process.
The AdamW optimizer provides the correct (decoupled) implementation of ``weight_decay``
regularization.

Default: ``'Adam'``

Choices: ``'AdamW'``, ``'Adam'``.

.. _ActiveLearning.NN.validation_trainer.weight_decay:

weight_decay (float)
""""""""""""""""""""

Weight decay hyperparameter used in optimization with the AdamW optimizer.

Default: ``0.0``

.. _ActiveLearning.NN.validation_trainer.loss_function:

loss_function (str)
"""""""""""""""""""

Loss function minimized during ensemble training.

Default: ``'MSE'``

Choices: ``'MSE'``, ``'L1'``.

.. _ActiveLearning.NN.validation_trainer.batch_size_train:

batch_size_train (int)
""""""""""""""""""""""

Batch size used for training the ensemble. If not explicitly set, it is determined
during the optimization run so that all training data is processed in a single batch.

Default: ``None``

.. _ActiveLearning.NN.validation_trainer.save_history_path:

save_history_path (str)
"""""""""""""""""""""""

Absolute path where the training/evaluation history is stored.

Default: ``None``

.. _ActiveLearning.NN.validation_trainer.split_ratio:

split_ratio (float)
"""""""""""""""""""

Ratio of the number of testing data points to the number of training data points.

Default: ``0.15``

.. _ActiveLearning.NN.validation_trainer.batch_size_test:

batch_size_test (int)
"""""""""""""""""""""

Batch size used for testing the ensemble. If not explicitly set, it is determined
during the optimization run so that all testing data is processed in a single batch.

Default: ``None``
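
Example
"""""""

The following is a minimal, self-contained sketch of the procedure described above,
assuming a PyTorch backend. The model architecture, the toy data, the ensemble size and
the helper ``make_member`` are illustrative assumptions and do not reflect the actual
implementation of the module; the sketch only shows how ``split_ratio``, ``num_epochs``,
``learning_rate``, ``loss_function`` and ``num_expel_NNs`` interact conceptually.

.. code-block:: python

    # Conceptual sketch only -- model, data and ensemble size are assumptions,
    # not the actual implementation of the validation trainer.
    import copy
    import torch

    torch.manual_seed(0)

    # Toy dataset standing in for the fixed dataset provided to the module.
    x = torch.linspace(-1.0, 1.0, 200).unsqueeze(1)
    y = torch.sin(3.0 * x) + 0.05 * torch.randn_like(x)

    # Split into training/testing subsets; split_ratio is the test/train ratio.
    split_ratio = 0.15
    n_test = round(len(x) * split_ratio / (1.0 + split_ratio))
    perm = torch.randperm(len(x))
    test_idx, train_idx = perm[:n_test], perm[n_test:]
    x_train, y_train = x[train_idx], y[train_idx]
    x_test, y_test = x[test_idx], y[test_idx]


    def make_member():
        """Build one (assumed) ensemble member."""
        return torch.nn.Sequential(
            torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1)
        )


    num_epochs, learning_rate, num_expel_NNs = 800, 0.006, 2
    loss_fn = torch.nn.MSELoss()  # corresponds to loss_function='MSE'

    members, final_test_losses = [], []
    for _ in range(5):  # small ensemble for illustration
        model = make_member()
        opt = torch.optim.Adam(model.parameters(), lr=learning_rate)
        best_state, best_test_loss = None, float("inf")
        for _ in range(num_epochs):
            opt.zero_grad()
            loss = loss_fn(model(x_train), y_train)  # all training data in one batch
            loss.backward()
            opt.step()
            with torch.no_grad():
                test_loss = loss_fn(model(x_test), y_test).item()
            # Keep the weights that give the smallest loss on the testing subset.
            if test_loss < best_test_loss:
                best_test_loss = test_loss
                best_state = copy.deepcopy(model.state_dict())
        model.load_state_dict(best_state)
        members.append(model)
        final_test_losses.append(best_test_loss)

    # Expel the num_expel_NNs worst-performing members according to the test loss.
    order = sorted(range(len(members)), key=lambda i: final_test_losses[i])
    kept = [members[i] for i in order[: len(members) - num_expel_NNs]]
    print(f"kept {len(kept)} of {len(members)} ensemble members")

In the actual module, the same loop is driven by the parameters documented above; for
instance, ``batch_size_train`` and ``batch_size_test`` control the batching instead of
the single full batch used in this sketch.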