# Backend

## Tensor Library Backends

QuantumFlow is designed to use a modern tensor library as a backend. The current options are numpy (the default), tensorflow, eager, and torch.

• numpy (Default)
Classic Python numerics. Relatively fast on a single CPU, but offers no GPU acceleration and no backpropagation.
• eager
Tensorflow eager mode. Tensorflow can automatically compute back-propagated gradients, so we can efficiently optimize quantum networks using stochastic gradient descent.
• tensorflow
Regular tensorflow. Eager mode is recommended.
• torch (Experimental)
Experimental prototype. Fast on CPU and GPU. Unfortunately, stochastic gradient descent is not available due to pytorch's lack of support for complex math. Pytorch is not installed by default; see the pytorch website for installation instructions.

## Configuration

The default backend can be set in the configuration file, and can be overridden with the QUANTUMFLOW_BACKEND environment variable, e.g.

> QUANTUMFLOW_BACKEND=numpy pytest tests/test_flow.py


Options are tensorflow, eager, numpy, and torch.

You can also set the environment variable in python before quantumflow is imported.

>>> import os
>>> os.environ["QUANTUMFLOW_BACKEND"] = "numpy"
>>> import quantumflow as qf


## GPU

Unfortunately, tensorflow does not fully support complex numbers, so we cannot run the eager or tensorflow backends on GPUs at present. The numpy backend does not have GPU acceleration either.

The torch backend can run with GPU acceleration, which can lead to significant speedups when simulating large quantum states. Note that the main limiting factor is GPU memory. A single state uses 16 × 2^N bytes, and we need to be able to place two states (plus a number of smaller tensors) on a single GPU. Thus a 16 GiB GPU can simulate a 28-qubit system.
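The memory arithmetic above can be spelled out in a few lines (the 16 bytes per amplitude correspond to complex128, the backend's CTYPE):

```python
# The 16 bytes per amplitude come from complex128 (two float64s).
def state_bytes(n_qubits: int) -> int:
    """Memory needed for one N-qubit state vector, in bytes."""
    return 16 * 2 ** n_qubits

GIB = 2 ** 30

# Two 28-qubit states need 8 GiB; two 29-qubit states already need
# 16 GiB, the whole card, leaving no room for workspace tensors.
for n in (28, 29):
    print(f"{n} qubits: {2 * state_bytes(n) / GIB:.0f} GiB for two states")
```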

> QUANTUMFLOW_DEVICE=gpu QUANTUMFLOW_BACKEND=torch ./benchmark.py 24

> QUANTUMFLOW_DEVICE=cpu QUANTUMFLOW_BACKEND=torch ./benchmark.py 24

## Backend API

quantumflow.backend.tensormul(tensor0: Any, tensor1: Any, indices: List[int]) → BKTensor

Generalization of matrix multiplication to product tensors.

A state vector in product tensor representation has N dimensions, one for each contravariant index, e.g. $$B^{b_0,b_1,b_2}$$ for a 3-qubit state. An operator has K dimensions: K/2 contravariant (ket) indices and K/2 covariant (bra) indices, e.g. $$A^{a_0,a_1}_{a_2,a_3}$$ for a 2-qubit gate. The covariant indices of tensor0 are contracted against the given indices of tensor1, and tensor0's contravariant indices take over those positions in the result.

E.g. tensormul(A, B, [0,2]) is equivalent to

$C^{a_0,b_1,a_1} =\sum_{i_0,i_1} A^{a_0,a_1}_{i_0,i_1} B^{i_0,b_1,i_1}$
Parameters:
• tensor0 – A tensor product representation of a gate
• tensor1 – A tensor product representation of a gate or state
• indices – List of indices of tensor1 on which to act

Returns: Resultant state or gate tensor
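The contraction above can be sketched in plain numpy with einsum. This is a hypothetical mini-implementation of the semantics, not the library's actual code:

```python
import numpy as np

SUBSCRIPTS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def tensormul_sketch(tensor0, tensor1, indices):
    """Contract the gate tensor0 onto tensor1 at the given positions.

    A sketch of the semantics only: tensor0 has 2K dimensions (K ket,
    then K bra); its K bra indices are contracted against the listed
    indices of tensor1, and its K ket indices take over those positions.
    """
    K = tensor0.ndim // 2
    N = tensor1.ndim
    t1 = list(SUBSCRIPTS[:N])              # letters for tensor1's indices
    fresh = list(SUBSCRIPTS[N:N + K])      # letters for tensor0's ket indices
    t0 = fresh + [t1[i] for i in indices]  # ket letters, then bra letters
    out = list(t1)
    for pos, letter in zip(indices, fresh):
        out[pos] = letter                  # ket indices replace contracted ones
    equation = "".join(t0) + "," + "".join(t1) + "->" + "".join(out)
    return np.einsum(equation, tensor0, tensor1)

# The worked example: a 2-qubit gate A on indices [0, 2] of a 3-qubit
# state B, i.e. C^{a0,b1,a1} = sum_{i0,i1} A^{a0,a1}_{i0,i1} B^{i0,b1,i1}.
rng = np.random.default_rng(42)
A = rng.standard_normal((2, 2, 2, 2)) + 1j * rng.standard_normal((2, 2, 2, 2))
B = rng.standard_normal((2, 2, 2)) + 1j * rng.standard_normal((2, 2, 2))
C = tensormul_sketch(A, B, [0, 2])
assert C.shape == (2, 2, 2)
```

Applying the identity gate (np.eye(4) reshaped to [2]*4) with this sketch leaves the state unchanged, which is a quick sanity check of the index bookkeeping.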

Each backend is expected to implement the following methods, with semantics that match numpy. (For instance, tensorflow's acos() is adapted to match numpy's arccos().)

• absolute
• arccos
• conj
• cos
• diag
• exp
• matmul
• minimum
• real
• reshape
• sin
• sum
• transpose
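A hypothetical sketch of that name adaptation (not QuantumFlow's actual code), using Python's math module as a stand-in for a library whose names differ from numpy's:

```python
import math

import numpy as np

# Hypothetical adapter: the underlying library spells the function
# `acos`, but every backend must expose numpy's name `arccos` with
# numpy-matching semantics.
def arccos(x):
    return math.acos(x)

# The adapted name agrees with numpy.
assert np.isclose(arccos(0.5), np.arccos(0.5))
```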

In addition, each backend implements the following methods and variables.

QuantumFlow numpy backend

quantumflow.backend.numpybk.BKTensor = typing.Any

Type hint for backend tensors

quantumflow.backend.numpybk.CTYPE

alias of numpy.complex128

quantumflow.backend.numpybk.DEVICE = 'cpu'

Current device

quantumflow.backend.numpybk.EINSUM_SUBSCRIPTS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ'

A string of all characters that can be used in einsum subscripts in sorted order
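Such a string is what lets a backend build einsum equations programmatically. A small sketch (assumed usage, built on the same 52-letter alphabet) that permutes a tensor's axes via einsum:

```python
import string

import numpy as np

# Same 52 letters as EINSUM_SUBSCRIPTS; einsum equations are therefore
# limited to 52 distinct index letters.
SUBSCRIPTS = string.ascii_lowercase + string.ascii_uppercase
assert len(SUBSCRIPTS) == 52

def permute(tensor, perm):
    """Permute a tensor's axes by building an einsum equation (a sketch)."""
    src = SUBSCRIPTS[:tensor.ndim]
    dst = "".join(src[i] for i in perm)
    return np.einsum(src + "->" + dst, tensor)

t = np.arange(8).reshape(2, 2, 2)
assert np.array_equal(permute(t, [2, 0, 1]), np.transpose(t, [2, 0, 1]))
```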

quantumflow.backend.numpybk.FTYPE

alias of numpy.float64

quantumflow.backend.numpybk.MAX_QUBITS = 32

Maximum number of qubits supported by this backend. Numpy arrays can't have more than 32 dimensions, which limits us to no more than 32 qubits. Pytorch has a similar problem, leading to a maximum of 24 qubits.

quantumflow.backend.numpybk.TENSOR

alias of numpy.ndarray

quantumflow.backend.numpybk.TL = <module 'numpy'>

‘TensorLibrary’. The actual imported backend python package

quantumflow.backend.numpybk.TensorLike = typing.Any

Any python object that can be converted into a backend tensor

quantumflow.backend.numpybk.astensor(array: Any) → BKTensor

Converts a numpy array to the backend’s tensor object

quantumflow.backend.numpybk.astensorproduct(array: Any) → BKTensor

Converts a numpy array to the backend's tensor object and reshapes it to [2]*N, so the number of elements must be a power of 2.
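The reshape can be sketched in plain numpy (a hypothetical helper assuming complex128, the backend's CTYPE — not the library's code):

```python
import numpy as np

def astensorproduct_sketch(array):
    """Reshape a flat array into product-tensor form [2]*N (a sketch)."""
    tensor = np.asarray(array, dtype=np.complex128)
    size = tensor.size
    N = size.bit_length() - 1          # exact log2 for powers of two
    if 2 ** N != size:
        raise ValueError("number of elements must be a power of 2")
    return tensor.reshape([2] * N)

# An 8-element vector becomes a 3-qubit product tensor of shape (2, 2, 2).
state = astensorproduct_sketch(np.ones(8) / np.sqrt(8))
assert state.shape == (2, 2, 2)
```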

quantumflow.backend.numpybk.ccast(value: complex) → BKTensor

Cast value to complex tensor (if necessary)

quantumflow.backend.numpybk.cis(theta: float) → BKTensor

Returns: the complex exponential

$\text{cis}(\theta) = \cos(\theta) + i \sin(\theta) = \exp(i \theta)$
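The identity is easy to verify numerically with plain numpy:

```python
import numpy as np

# cis(theta) = exp(i*theta); check it equals cos(theta) + i*sin(theta).
def cis(theta):
    return np.exp(1j * theta)

for theta in (0.0, np.pi / 3, np.pi):
    assert np.isclose(cis(theta), np.cos(theta) + 1j * np.sin(theta))
```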
quantumflow.backend.numpybk.evaluate(tensor: TensorLike) → BKTensor

Returns: the value of a tensor as an ordinary python object
quantumflow.backend.numpybk.fcast(value: float) → BKTensor

Cast value to float tensor (if necessary)

quantumflow.backend.numpybk.getitem(tensor: TensorLike, key: Any) → BKTensor

Get item from tensor

quantumflow.backend.numpybk.gpu_available() → bool

Does the backend support GPU acceleration on current hardware?

quantumflow.backend.numpybk.inner(tensor0: Any, tensor1: Any) → BKTensor

Return the inner product between two tensors
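A numpy sketch, assuming vdot-like semantics (first argument conjugated, all indices summed) — an illustration, not necessarily the library's exact implementation:

```python
import numpy as np

def inner_sketch(tensor0, tensor1):
    """Inner product of two product tensors (a sketch): conj(t0) . t1."""
    return np.vdot(tensor0, tensor1)   # flattens both, conjugates tensor0

# A normalized 2-qubit state has unit inner product with itself.
psi = np.full((2, 2), 0.5, dtype=np.complex128)
assert np.isclose(inner_sketch(psi, psi), 1.0)
```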

quantumflow.backend.numpybk.productdiag(tensor: TensorLike) → BKTensor

Returns the matrix diagonal of the product tensor

quantumflow.backend.numpybk.rank(tensor: TensorLike) → int

Return the number of dimensions of a tensor

quantumflow.backend.numpybk.set_random_seed(seed: int) → None

Reinitialize the random number generator

quantumflow.backend.numpybk.tensormul(tensor0: Any, tensor1: Any, indices: List[int]) → BKTensor

Generalization of matrix multiplication to product tensors. See quantumflow.backend.tensormul above for the full description, worked example, and parameters.