RMSprop

  • namespace: Rindow\NeuralNetworks\Optimizer
  • classname: RMSprop

RMSprop optimizer. It maintains a moving (discounted) average of the square of gradients and divides the gradient by the root of this average.
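As a minimal sketch of the textbook RMSprop update for a single scalar parameter (the function name, variable names, and the 1e-7 epsilon default here are illustrative only; the actual optimizer operates on NDArray tensors through the backend):

// Illustrative sketch of one standard RMSprop step for a scalar parameter.
function rmspropStep(
    float $w, float $g, float $v,
    float $lr=0.001, float $rho=0.9, float $epsilon=1e-7) : array
{
    // Discounted moving average of the squared gradient.
    $v = $rho * $v + (1.0 - $rho) * $g * $g;
    // Divide the gradient by the root of this average.
    $w = $w - $lr * $g / (sqrt($v) + $epsilon);
    return [$w, $v];   // updated parameter and optimizer state
}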

Methods

constructor

$builder->RMSprop(
    float $lr=0.001,
    float $rho=0.9,
    float $decay=0.0,
    ?float $epsilon=null,
)

You can create an RMSprop optimizer instance with the Optimizer Builder.

Options

  • lr: learning rate.
    • float >= 0
    • default is 0.001
  • rho: Discounting factor for the moving average of squared gradients.
    • float >= 0
    • default is 0.9
  • decay: Learning rate decay over each update.
    • float >= 0
    • default is 0.0
  • epsilon: A small constant for numerical stability; it avoids division by zero when scaling the gradient.
    • float >= 0
    • default is the epsilon defined in the backend (see backend).

Examples

$model->compile(
    optimizer:$nn->optimizers()->RMSprop(
        lr:      0.001,
    )
);
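
All options can also be passed explicitly as named arguments. The following sketch spells out every constructor option; the values shown are the documented defaults, except epsilon, whose real default comes from the backend (1e-7 here is only illustrative):

$model->compile(
    optimizer:$nn->optimizers()->RMSprop(
        lr:      0.001,
        rho:     0.9,
        decay:   0.0,
        epsilon: 1e-7,
    )
);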