Training Module¶
- class TrainingConfig(nr_of_steps: int, nr_of_episodes: int, model_blue_print: BluePrintBase, data: DataFrame, initial_budget: float, max_amount_of_trades: int, window_size: int, learning_strategy_handler: LearningStrategyHandlerBase, testing_strategy_handler: TestingStrategyHandlerBase, sell_stop_loss: float = 0.8, sell_take_profit: float = 1.2, buy_stop_loss: float = 0.8, buy_take_profit: float = 1.2, penalty_starts: int = 0, penalty_stops: int = 10, static_reward_adjustment: float = 1, repeat_test: int = 10, test_ratio: float = 0.2, validator: Optional[RewardValidatorBase] = None, label_annotator: Optional[LabelAnnotatorBase] = None, labeled_data_balancer: Optional[LabeledDataBalancer] = None, meta_data: Optional[dict[str, Any]] = None)¶
Bases:
object
Implements a configuration class for training agents in a trading environment. It encapsulates all parameters required for training, including the number of steps and episodes, the model blueprint, the training data, the initial budget, the maximum number of trades, the window size, and various reward parameters. It also provides methods for instantiating an agent handler and printing the configuration.
- instantiate_agent_handler() AgentHandler ¶
Instantiates the agent handler with the configured environment and strategies.
- Returns:
An instance of the agent handler configured with the model blueprint, trading environment, learning strategy handler and testing strategy handler.
- Return type:
AgentHandler
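A minimal construction sketch, assuming concrete subclasses of BluePrintBase, LearningStrategyHandlerBase, and TestingStrategyHandlerBase are available in your project (the names MyBluePrint, MyLearningStrategyHandler, MyTestingStrategyHandler, and the data file below are hypothetical placeholders, not part of this module):

```python
import pandas as pd

# Hypothetical user-defined subclasses of the documented base classes.
blue_print = MyBluePrint()
learning_handler = MyLearningStrategyHandler()
testing_handler = MyTestingStrategyHandler()

# Load market data into a DataFrame (file name is illustrative only).
data = pd.read_csv("market_data.csv")

config = TrainingConfig(
    nr_of_steps=1_000,
    nr_of_episodes=50,
    model_blue_print=blue_print,
    data=data,
    initial_budget=10_000.0,
    max_amount_of_trades=5,
    window_size=32,
    learning_strategy_handler=learning_handler,
    testing_strategy_handler=testing_handler,
)

# Build an AgentHandler wired to the configured environment and strategies.
agent_handler = config.instantiate_agent_handler()
```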
- class TrainingHandler(config: TrainingConfig, page_width: int = 612.0, page_height: int = 792.0, heading_spacing: int = 20, caption_font_size: int = 14, text_font_size: int = 8, font_name: str = 'Courier', margins: dict[str, int] = {'bottom': 30, 'left': 30, 'right': 30, 'top': 30}, exclude_from_logs: list[str] = ['ETA'])¶
Bases:
object
Responsible for orchestrating the training process and report generation.
This class manages the complete training workflow, from initializing the environment and agent, running training and testing sessions, to generating PDF reports with performance visualizations and logs. It serves as the main entry point for executing and documenting trading agent training.
- generate_report(path_to_pdf: str) None ¶
Generates a comprehensive PDF report of training and testing results.
Creates a multi-page report with logs, training history plots, and test results visualizations based on the data collected during training.
- Parameters:
path_to_pdf (str) – File path where the PDF report should be saved.
- run_training(callbacks: list[tensorflow.python.keras.callbacks.Callback] = [], weights_load_path: Optional[str] = None, weights_save_path: Optional[str] = None) None ¶
Executes the training and testing process for the trading agent.
This method orchestrates the complete training workflow, capturing logs, training the agent, and testing its performance. It populates internal data structures with results that can later be used for report generation.
- Parameters:
callbacks (list[Callback]) – Keras callbacks to use during training.
weights_load_path (str, optional) – Path to load pre-trained weights from.
weights_save_path (str, optional) – Path to save trained weights to.
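A minimal end-to-end sketch of the training workflow, assuming `config` is a TrainingConfig built as shown above; the weight and report file paths are illustrative only:

```python
handler = TrainingHandler(config)

# Train and test the agent, saving the learned weights afterwards.
handler.run_training(
    weights_load_path=None,                # or a path to pre-trained weights
    weights_save_path="agent_weights.h5",  # illustrative output path
)

# Write the PDF report with logs, training history, and test results.
handler.generate_report("training_report.pdf")
```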