# Optimizers

This package provides an interface to the set of optimizers presented in "Stochastic optimization algorithms for quantum applications" (Gidi et al., 2022).

In the publication, two groups of gain parameters are mentioned. The default group is
`ComplexSPSA.gains` — Constant

```julia
gains = Dict(
    :a => 3.0,
    :b => 0.1,
    :A => 0.0,
    :s => 0.602,
    :t => 0.101,
)
```

Contains the gain parameters used for the optimizers defined within the `ComplexSPSA` module. By default, the standard gains are used.
and the asymptotic gains are also provided as
`ComplexSPSA.gains_asymptotic` — Constant

```julia
gains_asymptotic = Dict(
    :a => 3.0,
    :b => 0.1,
    :A => 0.0,
    :s => 1.0,
    :t => 0.166,
)
```

Dictionary containing the set of asymptotic gain parameters. By default, the standard set of gain parameters is used.
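Since the gain parameters are accepted as keyword arguments by the optimizers (see the options below), either dictionary can be splatted into a call. A hypothetical sketch, assuming the package is installed and the optimizers are exported; `f` and `guess` stand for a user-supplied objective and seed:

```julia
using ComplexSPSA

f(z) = sum(abs2, z)     # toy objective
guess = [0.5, -0.3]

# Run with the default gains, then with the asymptotic set
# by splatting the dictionary as keyword arguments.
SPSA(f, guess, 100)
SPSA(f, guess, 100; ComplexSPSA.gains_asymptotic...)
```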
The optimizers are subdivided into two categories:
## First-order optimizers

### Options
Optimizers of this category accept the arguments:
- `sign`: Specifies whether the objective function should be maximized (`sign=1`) or minimized (`sign=-1`). Default value is `-1`.
- `initial_iter`: Determines the initial value of the iteration index `k`.
- `a`, `b`, `A`, `s`, and `t`: The gain parameters. Default values are contained in the dictionary `ComplexSPSA.gains`.
- `learning_rate_constant`: Specifies whether the learning rate should decay with the iteration number as `a / (k + A)^s` (`learning_rate_constant=false`) or be fixed to `a` across all iterations (`learning_rate_constant=true`). Default value is `false`.
- `learning_rate_Ncalibrate`: Integer indicating how many samples to evaluate from the objective function to calibrate the learning rate `a`, as proposed by Kandala et al. (2017). Default value is `0` (no calibration).
- `perturbation_constant`: Specifies whether the perturbation step for the finite-difference approximation should decay with the iteration number as `b / k^t` (`perturbation_constant=false`) or be fixed to `b` across all iterations (`perturbation_constant=true`). Default value is `false`.
- `blocking`: Accepts only those variable updates which improve the value of the function up to a certain tolerance. Default value is `false`.
- `blocking_tol`: The tolerance used for blocking. Default value is `0.0`.
- `blocking_Ncalibrate`: An integer representing how many evaluations of the function on the seed value should be used to estimate its standard deviation. If `blocking_Ncalibrate > 1`, then `blocking_tol` is overridden with twice the standard deviation. The default value of `blocking_Ncalibrate` is `0`.
- `resamplings`: Dictionary containing the number of samples of the gradient estimator to average at each iteration. It must contain the key `"default"`, whose value is used for iterations without an explicit specification. By default, `resamplings = Dict("default" => 1)`.
- `postprocess`: A function which takes the array of variables `z` at the end of each iteration and returns a postprocessed version of it. The default value is `identity`, which returns its argument unchanged.
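When the `*_constant` flags are left at `false`, the decaying schedules described above amount to the following pure functions (a sketch for reference only; the actual implementation lives inside the package):

```julia
# Decaying learning rate and perturbation size at iteration k,
# using the default gain parameters from ComplexSPSA.gains.
a_k(k; a = 3.0, A = 0.0, s = 0.602) = a / (k + A)^s
b_k(k; b = 0.1, t = 0.101)          = b / k^t

a_k(1)    # == 3.0, since A == 0
b_k(100)  # ≈ 0.063, the perturbation shrinks slowly
```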
### Optimizers

The optimizers are:
`ComplexSPSA.SPSA` — Function

```julia
SPSA(f::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)
```

First-order optimizer taking real variables.

`ComplexSPSA.SPSA_on_complex` — Function

```julia
SPSA_on_complex(f::Function, guess::AbstractVector, Niters; kwargs...)
```

First-order optimizer taking complex variables. Splits each complex variable into a pair of real values and uses the equivalent real optimizer on them.

`ComplexSPSA.CSPSA` — Function

```julia
CSPSA(f::Function, guess::AbstractVector, Niters; kwargs...)
```

First-order optimizer taking complex variables.
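A minimal usage sketch combining some of the options above (assuming the package is installed and the optimizers are exported; the toy objective is illustrative, not part of the API):

```julia
using ComplexSPSA

f(z) = sum(abs2, z .- (1 + 2im))   # toy objective, minimized at z = [1+2im, 1+2im]
guess = zeros(ComplexF64, 2)

# Complex first-order optimizer with default options (minimization)
CSPSA(f, guess, 200)

# The same problem through the real-variable path, with a few options set
SPSA_on_complex(f, guess, 200;
                resamplings = Dict("default" => 2),  # average 2 gradient samples
                blocking    = true)                  # reject worsening updates
```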
## Preconditioned optimizers

### Options

All of the options for first-order optimizers, except `learning_rate_Ncalibrate`, may also be provided. Additionally, preconditioned optimizers take the following options:
- `initial_hessian`: Allows passing a guess for the initial value of the Hessian estimator. If not given, an identity matrix is used.
- `resamplings`: Dictionary containing the number of samples of the gradient estimator to average at each iteration. It must contain the key `"default"`, whose value is used for iterations without an explicit specification. If `resamplings[0]` is defined, it is used to compute an estimator as the initial Hessian, overwriting any value provided with `initial_hessian`. By default, `resamplings = Dict("default" => 1)`.
- `hessian_delay`: An integer indicating how many iterations should be performed using a first-order update rule (while collecting information for the Hessian estimator) before using the Hessian estimator to precondition the gradient estimator. The default value is `0`.
- `a2`: Mimics the first-order gain parameter `a`, but for preconditioned iterations. Default value is `1.0`. As in the first-order case, the keyword `learning_rate_constant` may be used to control whether the learning rate should be constant or decay with the iteration number.
- `regularization`: A real number indicating the perturbation used in the Hessian regularization. The default value is `0.001`.
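A hedged sketch of a preconditioned call (the exact type expected for `initial_hessian` may differ; an identity matrix matches the stated default):

```julia
using ComplexSPSA, LinearAlgebra

f(z) = sum(abs2, z .- (1 - 1im))   # toy objective
guess = zeros(ComplexF64, 2)

CSPSA2(f, guess, 300;
       initial_hessian = Matrix{ComplexF64}(I, 2, 2), # explicit identity guess
       hessian_delay   = 10,    # 10 first-order warm-up iterations
       a2              = 1.0,   # gain for preconditioned iterations
       regularization  = 1e-3)  # Hessian regularization perturbation
```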
### Optimizers

Two categories are mixed within the preconditioned algorithms: second-order and Quantum Natural methods.

#### Second order

Second-order methods use additional measurements of the objective function to estimate its Hessian matrix. These methods are:
`ComplexSPSA.SPSA2` — Function

```julia
SPSA2(f::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)
```

Second-order optimizer taking real variables.

`ComplexSPSA.SPSA2_on_complex` — Function

```julia
SPSA2_on_complex(f::Function, guess::AbstractVector, Niters; kwargs...)
```

Second-order optimizer taking complex variables. Splits each complex variable into a pair of real values and uses the equivalent real optimizer on them.

`ComplexSPSA.CSPSA2` — Function

```julia
CSPSA2(f::Function, guess::AbstractVector, Niters; kwargs...)
```

Second-order optimizer taking complex variables.

`ComplexSPSA.CSPSA2_full` — Function

```julia
CSPSA2_full(f::Function, guess::AbstractVector, Niters; kwargs...)
```

Second-order optimizer taking complex variables. Does not use the block-diagonal approximation for the complex Hessian estimator.

`ComplexSPSA.SPSA2_scalar` — Function

```julia
SPSA2_scalar(f::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)
```

Second-order optimizer taking real variables. Uses a scalar approximation of the Hessian estimator.

`ComplexSPSA.SPSA2_scalar_on_complex` — Function

```julia
SPSA2_scalar_on_complex(f::Function, guess::AbstractVector, Niters; kwargs...)
```

Second-order optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator. Splits each complex variable into a pair of real values and uses the equivalent real optimizer on them.

`ComplexSPSA.CSPSA2_scalar` — Function

```julia
CSPSA2_scalar(f::Function, guess::AbstractVector, Niters; kwargs...)
```

Second-order optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator.
#### Quantum Natural

Quantum Natural methods, while following a first-order update rule, use the metric of the problem to precondition the gradient estimate. In particular, the following optimizers require the fidelity $\mathrm{fidelity}(z, z')$ to estimate the Fubini-Study metric:
`ComplexSPSA.SPSA_QN` — Function

```julia
SPSA_QN(f::Function, fidelity::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)
```

Quantum Natural optimizer taking real variables.

`ComplexSPSA.SPSA_QN_on_complex` — Function

```julia
SPSA_QN_on_complex(f::Function, fidelity::Function, guess::AbstractVector, Niters; kwargs...)
```

Quantum Natural optimizer taking complex variables. Splits each complex variable into a pair of real values and uses the equivalent real optimizer on them.

`ComplexSPSA.CSPSA_QN` — Function

```julia
CSPSA_QN(f::Function, fidelity::Function, guess::AbstractVector, Niters; kwargs...)
```

Quantum Natural optimizer taking complex variables.

`ComplexSPSA.CSPSA_QN_full` — Function

```julia
CSPSA_QN_full(f::Function, fidelity::Function, guess::AbstractVector, Niters; kwargs...)
```

Quantum Natural optimizer taking complex variables. Does not use the block-diagonal approximation for the complex Hessian estimator.

`ComplexSPSA.SPSA_QN_scalar` — Function

```julia
SPSA_QN_scalar(f::Function, fidelity::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)
```

Quantum Natural optimizer taking real variables. Uses a scalar approximation of the Hessian estimator.

`ComplexSPSA.SPSA_QN_scalar_on_complex` — Function

```julia
SPSA_QN_scalar_on_complex(f::Function, fidelity::Function, guess::AbstractVector, Niters; kwargs...)
```

Quantum Natural optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator. Splits each complex variable into a pair of real values and uses the equivalent real optimizer on them.

`ComplexSPSA.CSPSA_QN_scalar` — Function

```julia
CSPSA_QN_scalar(f::Function, fidelity::Function, guess::AbstractVector, Niters; kwargs...)
```

Quantum Natural optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator.
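A sketch of a Quantum Natural call, assuming the package is installed and the optimizers are exported; the parametrized state and its fidelity here are toy stand-ins for an actual quantum circuit:

```julia
using ComplexSPSA, LinearAlgebra

state(z) = normalize(z)                           # stand-in for a parametrized state
f(z) = -abs2(first(state(z)))                     # toy cost to minimize
fidelity(z, zp) = abs2(dot(state(z), state(zp)))  # squared overlap of the two states

guess = ComplexF64[0.7, 0.7]
CSPSA_QN(f, fidelity, guess, 200)
```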