Optimizers

This package provides an interface to the set of optimizers presented in "Stochastic optimization algorithms for quantum applications" (Gidi et al., 2022).

In the publication two groups of gain parameters are mentioned:

The default group is

ComplexSPSA.gains (Constant)
gains = Dict(
    :a => 3.0,
    :b => 0.1,
    :A => 0.0,
    :s => 0.602,
    :t => 0.101,
)

Dictionary containing the gain parameters used by the optimizers defined within the ComplexSPSA module. By default, this standard set is used.


and the asymptotic gains are also provided as

ComplexSPSA.gains_asymptotic (Constant)
gains_asymptotic = Dict(
    :a => 3.0,
    :b => 0.1,
    :A => 0.0,
    :s => 1.0,
    :t => 0.166,
)

Dictionary containing the set of asymptotic gain parameters. Note that, by default, the optimizers use the standard set of gain parameters.

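As a sketch of how these gain sets might be used (the toy objective and the call below are illustrative assumptions, not part of the package API), individual gain parameters can be passed as keywords to an optimizer:

```julia
using ComplexSPSA

# Toy objective (assumed for illustration): minimized at z = [1, 1].
f(z) = sum(abs2, z .- 1)

guess = zeros(2)
Niters = 100

# Run SPSA with the asymptotic decay exponents instead of the defaults.
zs = SPSA(f, guess, Niters;
          s = ComplexSPSA.gains_asymptotic[:s],   # 1.0 instead of 0.602
          t = ComplexSPSA.gains_asymptotic[:t])   # 0.166 instead of 0.101
```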

The optimizers are subdivided into two categories:

First-order optimizers

Options

Optimizers of this category accept the arguments:

  • sign: Specifies whether the objective function should be maximized (sign=1) or minimized (sign=-1). Default value is -1.
  • initial_iter: Determines the initial value of the iteration index k.
  • a, b, A, s and t: The gain parameters. Default values are contained in the dictionary ComplexSPSA.gains.
  • learning_rate_constant: Specifies whether the learning rate should decay with the iteration number as a / (k + A)^s (learning_rate_constant=false) or remain fixed to a across all iterations (learning_rate_constant=true). Default value is false.
  • learning_rate_Ncalibrate: Integer indicating how many samples of the objective function to evaluate in order to calibrate the learning rate a, as proposed by Kandala et al. (2017). Default value is 0 (no calibration).
  • perturbation_constant: Specifies whether the perturbation step for the finite-difference approximation should decay with the iteration number as b / k^t (perturbation_constant=false) or remain fixed to b across all iterations (perturbation_constant=true). Default value is false.
  • blocking: If true, only variable updates which improve the value of the function up to a certain tolerance are accepted. Default value is false.
  • blocking_tol: The tolerance used for blocking. Default value is 0.0.
  • blocking_Ncalibrate: An integer indicating how many evaluations of the function at the seed value should be used to estimate its standard deviation. If blocking_Ncalibrate > 1, then blocking_tol is overridden with twice the estimated standard deviation. Default value is 0.
  • resamplings: Dictionary containing the number of samples of the gradient estimator to average at each iteration. It must contain the key "default", whose value is used for iterations without an explicit specification. By default, resamplings = Dict("default" => 1).
  • postprocess: A function which takes the array of variables z at the end of each iteration and returns a postprocessed version of it. The default value is identity, which returns its argument unchanged.
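A minimal sketch combining several of these options (the objective function and the clamping bounds are illustrative assumptions):

```julia
using ComplexSPSA

f(z) = sum(abs2, z .- 1)   # toy objective, minimized by default (sign = -1)
guess = zeros(4)

zs = SPSA(f, guess, 200;
          learning_rate_Ncalibrate = 25,          # calibrate a from 25 extra evaluations
          blocking = true,
          blocking_Ncalibrate = 10,               # blocking_tol becomes 2x the estimated std
          resamplings = Dict("default" => 2),     # average 2 gradient samples per iteration
          postprocess = z -> clamp.(z, -pi, pi))  # keep variables inside [-π, π]
```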

Optimizers

The optimizers are

ComplexSPSA.SPSA (Function)
SPSA(f::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)

First order optimizer taking real variables.

ComplexSPSA.SPSA_on_complex (Function)
SPSA_on_complex(f::Function, guess::AbstractVector, Niters; kwargs...)

First order optimizer taking complex variables. Splits each variable into a pair of real values and uses the equivalent real optimizer on them.

ComplexSPSA.CSPSA (Function)
CSPSA(f::Function, guess::AbstractVector, Niters; kwargs...)

First order optimizer taking complex variables.


Preconditioned optimizers

Options

All of the options for first-order optimizers, except learning_rate_Ncalibrate, may also be provided. Additionally, preconditioned optimizers take the following options:

  • initial_hessian: Allows passing a guess for the initial value of the Hessian estimator. If not given, an identity matrix is used.
  • resamplings: Dictionary containing the number of samples of the gradient estimator to average at each iteration. It must contain the key "default", whose value is used for iterations without an explicit specification. If resamplings[0] is defined, it is used to compute an estimator for the initial Hessian, overriding any value provided with initial_hessian. By default, resamplings = Dict("default" => 1).
  • hessian_delay: An integer indicating how many iterations should be performed using a first-order optimization rule (while collecting information for the Hessian estimator) before using the Hessian estimator to precondition the gradient estimator. The default value is 0.
  • a2: Mimics the first-order gain parameter a, but for preconditioned iterations. Default value is 1.0. As in the first-order case, the keyword learning_rate_constant may be used to control whether the learning rate should be constant or decay with the iteration number.
  • regularization: A real number indicating the perturbation used on the Hessian regularization. The default value is 0.001.
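A sketch of a preconditioned call using these options (the objective function is an illustrative assumption):

```julia
using ComplexSPSA

f(z) = sum(abs2, z .- 1)   # toy objective (assumed for illustration)
guess = zeros(4)

zs = SPSA2(f, guess, 200;
           hessian_delay = 20,      # 20 first-order iterations before preconditioning
           a2 = 1.0,                # learning rate for preconditioned iterations
           regularization = 1e-3,   # perturbation used to regularize the Hessian
           # average 10 gradient samples at iteration 0 to build the initial Hessian:
           resamplings = Dict("default" => 1, 0 => 10))
```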

Optimizers

Two categories are mixed within the preconditioned algorithms: Second order and Quantum Natural methods.

Second order

Second order methods use additional measurements of the objective function to estimate its Hessian matrix. These methods are:

ComplexSPSA.SPSA2 (Function)
SPSA2(f::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)

Second order optimizer taking real variables.

ComplexSPSA.SPSA2_on_complex (Function)
SPSA2_on_complex(f::Function, guess::AbstractVector, Niters; kwargs...)

Second order optimizer taking complex variables. Splits each variable into a pair of real values and uses the equivalent real optimizer on them.

ComplexSPSA.CSPSA2 (Function)
CSPSA2(f::Function, guess::AbstractVector, Niters; kwargs...)

Second order optimizer taking complex variables.

ComplexSPSA.CSPSA2_full (Function)
CSPSA2_full(f::Function, guess::AbstractVector, Niters; kwargs...)

Second order optimizer taking complex variables. Does not consider the block-diagonal approximation for the complex Hessian estimator.

ComplexSPSA.SPSA2_scalar (Function)
SPSA2_scalar(f::Function, guess::AbstractVector{<:Real}, Niters; kwargs...)

Second order optimizer taking real variables. Uses a scalar approximation of the Hessian estimator.

ComplexSPSA.SPSA2_scalar_on_complex (Function)
SPSA2_scalar_on_complex(f::Function, guess::AbstractVector, Niters; kwargs...)

Second order optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator. Splits each variable into a pair of real values and uses the equivalent real optimizer on them.

ComplexSPSA.CSPSA2_scalar (Function)
CSPSA2_scalar(f::Function, guess::AbstractVector, Niters; kwargs...)

Second order optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator.


Quantum Natural

Quantum Natural methods, while following a first-order update rule, use the metric of the problem to precondition the gradient estimate. In particular, the following optimizers require the fidelity $F(z, z')$ between the states generated by the parameters $z$ and $z'$ in order to estimate the Fubini-Study metric:
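As a sketch, a fidelity function for a toy one-variable state can be written by hand. The state ψ(z), the target, and the objective below are illustrative assumptions; in practice both f and fidelity would come from measurements of a parameterized quantum circuit:

```julia
using ComplexSPSA, LinearAlgebra

# Toy single-qubit state parameterized by one complex variable (an assumption).
ψ(z) = normalize([1.0 + 0im, z[1]])

target = [0.0 + 0im, 1.0 + 0im]
f(z) = 1 - abs2(dot(target, ψ(z)))          # infidelity with the target state
fidelity(z, zp) = abs2(dot(ψ(z), ψ(zp)))    # |⟨ψ(z)|ψ(z')⟩|²

guess = [0.1 + 0.1im]
zs = CSPSA_QN(f, fidelity, guess, 100)
```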

ComplexSPSA.SPSA_QN (Function)
SPSA_QN(f::Function, fidelity::Function,
        guess::AbstractVector{<:Real}, Niters; kwargs...)

Quantum Natural optimizer taking real variables.

ComplexSPSA.SPSA_QN_on_complex (Function)
SPSA_QN_on_complex(f::Function, fidelity::Function,
                   guess::AbstractVector, Niters; kwargs...)

Quantum Natural optimizer taking complex variables. Splits each variable into a pair of real values and uses the equivalent real optimizer on them.

ComplexSPSA.CSPSA_QN (Function)
CSPSA_QN(f::Function, fidelity::Function,
         guess::AbstractVector, Niters; kwargs...)

Quantum Natural optimizer taking complex variables.

ComplexSPSA.CSPSA_QN_full (Function)
CSPSA_QN_full(f::Function, fidelity::Function,
              guess::AbstractVector, Niters; kwargs...)

Quantum Natural optimizer taking complex variables. Does not consider the block-diagonal approximation for the complex Hessian estimator.

ComplexSPSA.SPSA_QN_scalar (Function)
SPSA_QN_scalar(f::Function, fidelity::Function,
               guess::AbstractVector{<:Real}, Niters; kwargs...)

Quantum Natural optimizer taking real variables. Uses a scalar approximation of the Hessian estimator.

ComplexSPSA.SPSA_QN_scalar_on_complex (Function)
SPSA_QN_scalar_on_complex(f::Function, fidelity::Function,
                          guess::AbstractVector, Niters; kwargs...)

Quantum Natural optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator. Splits each variable into a pair of real values and uses the equivalent real optimizer on them.

ComplexSPSA.CSPSA_QN_scalar (Function)
CSPSA_QN_scalar(f::Function, fidelity::Function,
                guess::AbstractVector, Niters; kwargs...)

Quantum Natural optimizer taking complex variables. Uses a scalar approximation of the Hessian estimator.
