Math
Note: Functions taking Tensor arguments can also take anything accepted by tf.convert_to_tensor.
[TOC]
Arithmetic Operators
TensorFlow provides several operations that you can use to add basic arithmetic operators to your graph.
tf.add(x, y, name=None)
Returns x + y element-wise.
NOTE: Add supports broadcasting. AddN does not.
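For example, broadcasting lets tf.add combine a matrix and a vector, while tf.add_n requires identical shapes. A minimal sketch (using the graph-and-Session execution style of this release; expected values shown in comments):
import tensorflow as tf

# tf.add broadcasts: a [2, 3] matrix plus a [3] vector adds the vector to each row.
x = tf.constant([[1, 2, 3], [4, 5, 6]])
y = tf.constant([10, 20, 30])
broadcast_sum = tf.add(x, y)          # shape [2, 3]

# tf.add_n does not broadcast: every input must have identical shape and type.
same_shape_sum = tf.add_n([x, x, x])  # shape [2, 3]

with tf.Session() as sess:
    print(sess.run(broadcast_sum))    # [[11 22 33] [14 25 36]]
    print(sess.run(same_shape_sum))   # [[ 3  6  9] [12 15 18]]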
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.sub(x, y, name=None)
Returns x - y element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.mul(x, y, name=None)
Returns x * y element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.div(x, y, name=None)
Returns x / y element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.truediv(x, y, name=None)
Divides x / y elementwise, always producing floating point results.
The same as tf.div for floating point arguments, but casts integer arguments to floating point before dividing so that the result is always floating point. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.floordiv.
x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).
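A minimal Session-style sketch showing the int32 case being promoted to float64, per the cast rule above:
import tensorflow as tf

# int32 inputs: truediv casts both to float64 before dividing.
a = tf.constant([7, 8], dtype=tf.int32)
b = tf.constant([2, 5], dtype=tf.int32)
q = tf.truediv(a, b)

with tf.Session() as sess:
    print(q.dtype)       # float64
    print(sess.run(q))   # [ 3.5  1.6]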
Args:
  x: Tensor numerator of numeric type.
  y: Tensor denominator of numeric type.
  name: A name for the operation (optional).
Returns:
  x / y evaluated in floating point.
Raises:
  TypeError: If x and y have different dtypes.
tf.floordiv(x, y, name=None)
Divides x / y elementwise, rounding down for floating point.
The same as tf.div(x,y) for integers, but uses tf.floor(tf.div(x,y)) for floating point arguments so that the result is always an integer (though possibly an integer represented as floating point). This op is generated by x // y floor division in Python 3 and in Python 2.7 with from __future__ import division.
Note that for efficiency, floordiv uses C semantics for negative numbers (unlike Python and Numpy).
x and y must have the same type, and the result will have the same type as well.
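The following Session-style sketch contrasts the float case (always rounds down) with the integer case, where the C-style semantics noted above truncate toward zero:
import tensorflow as tf

# Floating point: floor(tf.div(x, y)), so -7.0 / 2.0 rounds down to -4.0.
f = tf.floordiv(tf.constant(-7.0), tf.constant(2.0))

# Integers: C semantics truncate toward zero, so -7 // 2 gives -3 here
# (Python and NumPy would give -4).
i = tf.floordiv(tf.constant(-7), tf.constant(2))

with tf.Session() as sess:
    print(sess.run(f))  # -4.0
    print(sess.run(i))  # -3 (C-style truncation, per the note above)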
Args:
  x: Tensor numerator of real numeric type.
  y: Tensor denominator of real numeric type.
  name: A name for the operation (optional).
Returns:
  x / y rounded down (except possibly towards zero for negative integers).
Raises:
  TypeError: If the inputs are complex.
tf.mod(x, y, name=None)
Returns element-wise remainder of division.
Args:
  x: A Tensor. Must be one of the following types: int32, int64, float32, float64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.cross(a, b, name=None)
Compute the pairwise cross product.
a and b must be the same shape; they can either be simple 3-element vectors, or any shape where the innermost dimension is 3. In the latter case, each pair of corresponding 3-element vectors is cross-multiplied independently.
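A minimal Session-style sketch crossing two batches of 3-vectors:
import tensorflow as tf

# Two [2, 3] tensors: each row is a 3-vector, crossed independently.
a = tf.constant([[1., 0., 0.], [0., 1., 0.]])
b = tf.constant([[0., 1., 0.], [0., 0., 1.]])
c = tf.cross(a, b)

with tf.Session() as sess:
    print(sess.run(c))  # [[0. 0. 1.]
                        #  [1. 0. 0.]]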
Args:
  a: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half. A tensor containing 3-element vectors.
  b: A Tensor. Must have the same type as a. Another tensor, of same type and shape as a.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as a. Pairwise cross product of the vectors in a and b.
Basic Math Functions
TensorFlow provides several operations that you can use to add basic mathematical functions to your graph.
tf.add_n(inputs, name=None)
Add all input tensors element-wise.
Args:
  inputs: A list of at least 1 Tensor objects of the same type in: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Must all be the same size and shape.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as inputs.
tf.abs(x, name=None)
Computes the absolute value of a tensor.
Given a tensor of real numbers x, this operation returns a tensor containing the absolute value of each element in x. For example, if x is an input element and y is an output element, this operation computes \(y = |x|\).
See tf.complex_abs() to compute the absolute value of a complex number.
Args:
  x: A Tensor of type float, double, int32, or int64.
  name: A name for the operation (optional).
Returns:
  A Tensor the same size and type as x with absolute values.
tf.neg(x, name=None)
Computes numerical negative value element-wise.
I.e., \(y = -x\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.sign(x, name=None)
Returns an element-wise indication of the sign of a number.
y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.
For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.
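For example (a minimal Session-style sketch):
import tensorflow as tf

x = tf.constant([-3.5, 0.0, 2.0])
s = tf.sign(x)

with tf.Session() as sess:
    print(sess.run(s))  # [-1.  0.  1.]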
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.inv(x, name=None)
Computes the reciprocal of x element-wise.
I.e., \(y = 1 / x\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.square(x, name=None)
Computes square of x element-wise.
I.e., \(y = x * x = x^2\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.round(x, name=None)
Rounds the values of a tensor to the nearest integer, element-wise.
For example:
# 'a' is [0.9, 2.5, 2.3, -4.4]
tf.round(a) ==> [ 1.0, 3.0, 2.0, -4.0 ]
Args:
  x: A Tensor of type float or double.
  name: A name for the operation (optional).
Returns:
  A Tensor of same shape and type as x.
tf.sqrt(x, name=None)
Computes square root of x element-wise.
I.e., \(y = \sqrt{x} = x^{1/2}\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.rsqrt(x, name=None)
Computes reciprocal of square root of x element-wise.
I.e., \(y = 1 / \sqrt{x}\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.pow(x, y, name=None)
Computes the power of one value to another.
Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:
# tensor 'x' is [[2, 2], [3, 3]]
# tensor 'y' is [[8, 16], [2, 3]]
tf.pow(x, y) ==> [[256, 65536], [9, 27]]
Args:
  x: A Tensor of type float, double, int32, int64, complex64, or complex128.
  y: A Tensor of type float, double, int32, int64, complex64, or complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor.
tf.exp(x, name=None)
Computes exponential of x element-wise. \(y = e^x\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.log(x, name=None)
Computes natural logarithm of x element-wise.
I.e., \(y = \log_e x\).
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.ceil(x, name=None)
Returns element-wise smallest integer not less than x.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.floor(x, name=None)
Returns element-wise largest integer not greater than x.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.maximum(x, y, name=None)
Returns the max of x and y (i.e. x > y ? x : y) element-wise, broadcasts.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.minimum(x, y, name=None)
Returns the min of x and y (i.e. x < y ? x : y) element-wise, broadcasts.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.cos(x, name=None)
Computes cos of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.sin(x, name=None)
Computes sin of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.lbeta(x, name='lbeta')
Computes ln(|Beta(x)|), reducing along the last dimension.
Given one-dimensional z = [z_0,...,z_{K-1}], we define
Beta(z) = \prod_j Gamma(z_j) / Gamma(\sum_j z_j)
And for n + 1 dimensional x with shape [N1, ..., Nn, K], we define
lbeta(x)[i1, ..., in] = Log(|Beta(x[i1, ..., in, :])|).
In other words, the last dimension is treated as the z vector.
Note that if z = [u, v], then Beta(z) = int_0^1 t^{u-1} (1 - t)^{v-1} dt, which defines the traditional bivariate beta function.
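As a quick check (Session-style sketch), Beta([1, 2]) = Gamma(1)Gamma(2)/Gamma(3) = 0.5, so lbeta returns ln(0.5):
import tensorflow as tf

x = tf.constant([1.0, 2.0])
result = tf.lbeta(x)

with tf.Session() as sess:
    print(sess.run(result))  # about -0.6931 = ln(0.5)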
Args:
  x: A rank n + 1 Tensor with type float, or double.
  name: A name for the operation (optional).
Returns:
  The logarithm of |Beta(x)| reducing along the last dimension.
Raises:
  ValueError: If x is empty with rank one or less.
tf.tan(x, name=None)
Computes tan of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.acos(x, name=None)
Computes acos of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.asin(x, name=None)
Computes asin of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.atan(x, name=None)
Computes atan of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.lgamma(x, name=None)
Computes the log of the absolute value of Gamma(x) element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.digamma(x, name=None)
Computes Psi, the derivative of Lgamma (the log of the absolute value of Gamma(x)), element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.erf(x, name=None)
Computes the Gauss error function of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.erfc(x, name=None)
Computes the complementary error function of x element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.squared_difference(x, y, name=None)
Returns (x - y)(x - y) element-wise.
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, int64, complex64, complex128.
  y: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.igamma(a, x, name=None)
Compute the lower regularized incomplete Gamma function P(a, x).
The lower regularized incomplete Gamma function is defined as:
P(a, x) = gamma(a, x) / Gamma(a) = 1 - Q(a, x)
where
gamma(a, x) = int_{0}^{x} t^{a-1} exp(-t) dt
is the lower incomplete Gamma function.
Note, above Q(a, x) (Igammac) is the upper regularized incomplete Gamma function.
Args:
  a: A Tensor. Must be one of the following types: float32, float64.
  x: A Tensor. Must have the same type as a.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as a.
tf.igammac(a, x, name=None)
Compute the upper regularized incomplete Gamma function Q(a, x).
The upper regularized incomplete Gamma function is defined as:
Q(a, x) = Gamma(a, x) / Gamma(a) = 1 - P(a, x)
where
Gamma(a, x) = int_{x}^{\infty} t^{a-1} exp(-t) dt
is the upper incomplete Gamma function.
Note, above P(a, x) (Igamma) is the lower regularized incomplete Gamma function.
Args:
  a: A Tensor. Must be one of the following types: float32, float64.
  x: A Tensor. Must have the same type as a.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as a.
tf.zeta(x, q, name=None)
Compute the Hurwitz zeta function \(\zeta(x, q)\).
The Hurwitz zeta function is defined as:
\zeta(x, q) = \sum_{n=0}^{\infty} (q + n)^{-x}
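For instance, zeta(2, 1) is the ordinary Riemann zeta value zeta(2) = pi^2/6; a minimal Session-style sketch:
import tensorflow as tf

x = tf.constant(2.0)
q = tf.constant(1.0)
z = tf.zeta(x, q)

with tf.Session() as sess:
    print(sess.run(z))  # about 1.6449 (= pi**2 / 6)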
Args:
  x: A Tensor. Must be one of the following types: float32, float64.
  q: A Tensor. Must have the same type as x.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
tf.polygamma(a, x, name=None)
Compute the polygamma function \(\psi^{(n)}(x)\).
The polygamma function is defined as:
\psi^{(n)}(x) = \frac{d^n}{dx^n} \psi(x)
where \(\psi(x)\) is the digamma function.
Args:
  a: A Tensor. Must be one of the following types: float32, float64.
  x: A Tensor. Must have the same type as a.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as a.
Matrix Math Functions
TensorFlow provides several operations that you can use to add linear algebra functions on matrices to your graph.
tf.batch_matrix_diag(diagonal, name=None)
Returns a batched diagonal tensor with given batched diagonal values.
Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:
Assume diagonal has k dimensions [I, J, K, ..., N], then the output is a tensor of rank k+1 with dimensions [I, J, K, ..., N, N] where:
output[i, j, k, ..., m, n] = 1{m=n} * diagonal[i, j, k, ..., n].
For example:
# 'diagonal' is [[1, 2, 3, 4], [5, 6, 7, 8]] and diagonal.shape = (2, 4)
tf.batch_matrix_diag(diagonal) ==> [[[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
[0, 0, 0, 4]],
[[5, 0, 0, 0]
[0, 6, 0, 0]
[0, 0, 7, 0]
[0, 0, 0, 8]]]
which has shape (2, 4, 4)
Args:
  diagonal: A Tensor. Rank k, where k >= 1.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as diagonal.
  Rank k+1, with output.shape = diagonal.shape + [diagonal.shape[-1]].
tf.batch_matrix_diag_part(input, name=None)
Returns the batched diagonal part of a batched tensor.
This operation returns a tensor with the diagonal part of the batched input. The diagonal part is computed as follows:
Assume input has k dimensions [I, J, K, ..., N, N], then the output is a tensor of rank k - 1 with dimensions [I, J, K, ..., N] where:
diagonal[i, j, k, ..., n] = input[i, j, k, ..., n, n].
The input must be at least a matrix.
For example:
# 'input' is [[[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
[0, 0, 0, 4]],
[[5, 0, 0, 0]
[0, 6, 0, 0]
[0, 0, 7, 0]
[0, 0, 0, 8]]]
# and input.shape = (2, 4, 4)
tf.batch_matrix_diag_part(input) ==> [[1, 2, 3, 4], [5, 6, 7, 8]]
which has shape (2, 4)
Args:
  input: A Tensor. Rank k tensor where k >= 2 and the last two dimensions are equal.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input.
  The extracted diagonal(s) having shape diagonal.shape = input.shape[:-1].
tf.batch_matrix_band_part(input, num_lower, num_upper, name=None)
Copy a tensor setting everything outside a central band in each innermost matrix
to zero.
The band part is computed as follows:
Assume input has k dimensions [I, J, K, ..., M, N], then the output is a tensor with the same shape where
band[i, j, k, ..., m, n] = in_band(m, n) * input[i, j, k, ..., m, n].
The indicator function in_band(m, n) is one if (num_lower < 0 || (m-n) <= num_lower) && (num_upper < 0 || (n-m) <= num_upper), and zero otherwise.
For example:
# if 'input' is [[ 0, 1, 2, 3]
[-1, 0, 1, 2]
[-2, -1, 0, 1]
[-3, -2, -1, 0]],
tf.batch_matrix_band_part(input, 1, -1) ==> [[ 0, 1, 2, 3]
[-1, 0, 1, 2]
[ 0, -1, 0, 1]
[ 0, 0, -1, 0]],
tf.batch_matrix_band_part(input, 2, 1) ==> [[ 0, 1, 0, 0]
[-1, 0, 1, 0]
[-2, -1, 0, 1]
[ 0, -2, -1, 0]]
Useful special cases:
tf.batch_matrix_band_part(input, 0, -1) ==> Upper triangular part.
tf.batch_matrix_band_part(input, -1, 0) ==> Lower triangular part.
tf.batch_matrix_band_part(input, 0, 0) ==> Diagonal.
Args:
  input: A Tensor. Rank k tensor.
  num_lower: A Tensor of type int64. 0-D tensor. Number of subdiagonals to keep. If negative, keep entire lower triangle.
  num_upper: A Tensor of type int64. 0-D tensor. Number of superdiagonals to keep. If negative, keep entire upper triangle.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input.
  Rank k tensor of the same shape as input. The extracted banded tensor.
tf.diag(diagonal, name=None)
Returns a diagonal tensor with given diagonal values.
Given a diagonal, this operation returns a tensor with the diagonal and everything else padded with zeros. The diagonal is computed as follows:
Assume diagonal has dimensions [D1,..., Dk], then the output is a tensor of rank 2k with dimensions [D1,..., Dk, D1,..., Dk] where:
output[i1,..., ik, i1,..., ik] = diagonal[i1, ..., ik] and 0 everywhere else.
For example:
# 'diagonal' is [1, 2, 3, 4]
tf.diag(diagonal) ==> [[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
[0, 0, 0, 4]]
Args:
diagonal
: ATensor
. Must be one of the following types:float32
,float64
,int32
,int64
,complex64
. Rank k tensor where k is at most 3.name
: A name for the operation (optional).
Returns:
A Tensor
. Has the same type as diagonal
.
tf.diag_part(input, name=None)
Returns the diagonal part of the tensor.
This operation returns a tensor with the diagonal part of the input. The diagonal part is computed as follows:
Assume input has dimensions [D1,..., Dk, D1,..., Dk], then the output is a tensor of rank k with dimensions [D1,..., Dk] where:
diagonal[i1,..., ik] = input[i1, ..., ik, i1,..., ik].
For example:
# 'input' is [[1, 0, 0, 0]
[0, 2, 0, 0]
[0, 0, 3, 0]
[0, 0, 0, 4]]
tf.diag_part(input) ==> [1, 2, 3, 4]
Args:
input
: ATensor
. Must be one of the following types:float32
,float64
,int32
,int64
,complex64
. Rank k tensor where k is 2, 4, or 6.name
: A name for the operation (optional).
Returns:
A Tensor
. Has the same type as input
. The extracted diagonal.
tf.trace(x, name=None)
Compute the trace of a tensor x.
trace(x) returns the sum along the diagonal.
For example:
# 'x' is [[1, 1],
# [1, 1]]
tf.trace(x) ==> 2
# 'x' is [[1,2,3],
# [4,5,6],
# [7,8,9]]
tf.trace(x) ==> 15
Args:
  x: 2-D tensor.
  name: A name for the operation (optional).
Returns:
  The trace of input tensor.
tf.transpose(a, perm=None, name='transpose')
Transposes a. Permutes the dimensions according to perm.
The returned tensor's dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1...0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.
For example:
# 'x' is [[1 2 3]
# [4 5 6]]
tf.transpose(x) ==> [[1 4]
[2 5]
[3 6]]
# Equivalently
tf.transpose(x, perm=[1, 0]) ==> [[1 4]
[2 5]
[3 6]]
# 'perm' is more useful for n-dimensional tensors, for n > 2
# 'x' is [[[1 2 3]
# [4 5 6]]
# [[7 8 9]
# [10 11 12]]]
# Take the transpose of the matrices in dimension-0
tf.transpose(x, perm=[0, 2, 1]) ==> [[[1 4]
[2 5]
[3 6]]
[[7 10]
[8 11]
[9 12]]]
Args:
a
: ATensor
.perm
: A permutation of the dimensions ofa
.name
: A name for the operation (optional).
Returns:
A transposed Tensor
.
tf.matmul(a, b, transpose_a=False, transpose_b=False, a_is_sparse=False, b_is_sparse=False, name=None)
Multiplies matrix a by matrix b, producing a * b.
The inputs must be two-dimensional matrices, with matching inner dimensions, possibly after transposition.
Both matrices must be of the same type. The supported types are: float, double, int32, complex64.
Either matrix can be transposed on the fly by setting the corresponding flag to True. This is False by default.
If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default.
For example:
# 2-D tensor `a`
a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3]) => [[1. 2. 3.]
[4. 5. 6.]]
# 2-D tensor `b`
b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2]) => [[7. 8.]
[9. 10.]
[11. 12.]]
c = tf.matmul(a, b) => [[58 64]
[139 154]]
Args:
  a: Tensor of type float, double, int32 or complex64.
  b: Tensor with same type as a.
  transpose_a: If True, a is transposed before multiplication.
  transpose_b: If True, b is transposed before multiplication.
  a_is_sparse: If True, a is treated as a sparse matrix.
  b_is_sparse: If True, b is treated as a sparse matrix.
  name: Name for the operation (optional).
Returns:
  A Tensor of the same type as a.
tf.batch_matmul(x, y, adj_x=None, adj_y=None, name=None)
Multiplies slices of two tensors in batches.
Multiplies all slices of Tensor x and y (each slice can be viewed as an element of a batch), and arranges the individual results in a single output tensor of the same batch size. Each of the individual slices can optionally be adjointed (to adjoint a matrix means to transpose and conjugate it) before multiplication by setting the adj_x or adj_y flag to True, which are by default False.
The input tensors x and y are 3-D or higher with shape [..., r_x, c_x] and [..., r_y, c_y].
The output tensor is 3-D or higher with shape [..., r_o, c_o], where:
r_o = c_x if adj_x else r_x
c_o = r_y if adj_y else c_y
It is computed as:
output[..., :, :] = matrix(x[..., :, :]) * matrix(y[..., :, :])
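A minimal Session-style sketch multiplying a batch of two 2x3 matrices by a batch of two 3x2 matrices (values are illustrative):
import tensorflow as tf
import numpy as np

# Batch of 2 matrices, each 2x3, times a batch of 2 matrices, each 3x2.
x = tf.constant(np.arange(12, dtype=np.float32).reshape(2, 2, 3))
y = tf.constant(np.ones((2, 3, 2), dtype=np.float32))
z = tf.batch_matmul(x, y)  # shape [2, 2, 2]

with tf.Session() as sess:
    print(sess.run(z).shape)  # (2, 2, 2)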
Args:
  x: A Tensor. Must be one of the following types: half, float32, float64, int32, complex64, complex128. 3-D or higher with shape [..., r_x, c_x].
  y: A Tensor. Must have the same type as x. 3-D or higher with shape [..., r_y, c_y].
  adj_x: An optional bool. Defaults to False. If True, adjoint the slices of x.
  adj_y: An optional bool. Defaults to False. If True, adjoint the slices of y.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as x.
  3-D or higher with shape [..., r_o, c_o]
tf.matrix_determinant(input, name=None)
Calculates the determinant of a square matrix.
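For example (Session-style sketch), the determinant of [[1, 2], [3, 4]] is -2:
import tensorflow as tf

m = tf.constant([[1., 2.], [3., 4.]])
det = tf.matrix_determinant(m)

with tf.Session() as sess:
    print(sess.run(det))  # -2.0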
Args:
  input: A Tensor. Must be one of the following types: float32, float64. A tensor of shape [M, M].
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input.
  A scalar, equal to the determinant of the input.
tf.batch_matrix_determinant(input, name=None)
Calculates the determinants for a batch of square matrices.
The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a 1-D tensor containing the determinants for all input submatrices [..., :, :].
Args:
  input: A Tensor. Must be one of the following types: float32, float64. Shape is [..., M, M].
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input. Shape is [...].
tf.matrix_inverse(input, adjoint=None, name=None)
Calculates the inverse of a square invertible matrix or its adjoint (conjugate
transpose).
The op uses LU decomposition with partial pivoting to compute the inverse.
If the matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.
Args:
  input: A Tensor. Must be one of the following types: float64, float32. Shape is [M, M].
  adjoint: An optional bool. Defaults to False.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input.
  Shape is [M, M]. If adjoint is False then output contains the matrix inverse of input. If adjoint is True then output contains the matrix inverse of the adjoint of input.
tf.batch_matrix_inverse(input, adjoint=None, name=None)
Calculates the inverse of square invertible matrices or their adjoints
(conjugate transposes).
The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the inverse for all input submatrices [..., :, :].
The op uses LU decomposition with partial pivoting to compute the inverses.
If a matrix is not invertible there is no guarantee what the op does. It may detect the condition and raise an exception or it may simply return a garbage result.
Args:
  input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  adjoint: An optional bool. Defaults to False.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input. Shape is [..., M, M].
tf.cholesky(input, name=None)
Calculates the Cholesky decomposition of a square matrix.
The input has to be symmetric and positive definite. Only the lower-triangular part of the input will be used for this operation. The upper-triangular part will not be read.
The result is the lower-triangular matrix of the Cholesky decomposition of the input, L, so that input = L L^*.
Args:
  input: A Tensor. Must be one of the following types: float64, float32. Shape is [M, M].
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input. Shape is [M, M].
tf.batch_cholesky(input, name=None)
Calculates the Cholesky decomposition of a batch of square matrices.
The input is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices, with the same constraints as the single matrix Cholesky decomposition above. The output is a tensor of the same shape as the input containing the Cholesky decompositions for all input submatrices [..., :, :].
Args:
  input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input. Shape is [..., M, M].
tf.cholesky_solve(chol, rhs, name=None)
Solve linear equations A X = RHS, given Cholesky factorization of A.
# Solve one system of linear equations (K = 1).
A = [[3, 1], [1, 3]]
RHS = [[2], [22]] # shape 2 x 1
chol = tf.cholesky(A)
X = tf.cholesky_solve(chol, RHS)
# tf.matmul(A, X) ~ RHS
X[:, 0] # Solution to the linear system A x = RHS[:, 0]
# Solve five systems of linear equations (K = 5).
A = [[3, 1], [1, 3]]
RHS = [[1, 2, 3, 4, 5], [11, 22, 33, 44, 55]] # shape 2 x 5
...
X[:, 2] # Solution to the linear system A x = RHS[:, 2]
Args:
  chol: A Tensor. Must be float32 or float64, shape is [M, M]. Cholesky factorization of A, e.g. chol = tf.cholesky(A). For that reason, only the lower triangular part (including the diagonal) of chol is used. The strictly upper part is assumed to be zero and not accessed.
  rhs: A Tensor, same type as chol, shape is [M, K], designating K systems of linear equations.
  name: A name to give this Op. Defaults to cholesky_solve.
Returns:
  Solution to A X = RHS, shape [M, K]. The solutions to the K systems.
tf.batch_cholesky_solve(chol, rhs, name=None)
Solve batches of linear eqns A X = RHS, given Cholesky factorizations.
# Solve one linear system (K = 1) for every member of the length 10 batch.
A = ... # shape 10 x 2 x 2
RHS = ... # shape 10 x 2 x 1
chol = tf.batch_cholesky(A) # shape 10 x 2 x 2
X = tf.batch_cholesky_solve(chol, RHS) # shape 10 x 2 x 1
# tf.matmul(A, X) ~ RHS
X[3, :, 0] # Solution to the linear system A[3, :, :] x = RHS[3, :, 0]
# Solve five linear systems (K = 5) for every member of the length 10 batch.
A = ... # shape 10 x 2 x 2
RHS = ... # shape 10 x 2 x 5
...
X[3, :, 2] # Solution to the linear system A[3, :, :] x = RHS[3, :, 2]
Args:
  chol: A Tensor. Must be float32 or float64, shape is [..., M, M]. Cholesky factorization of A, e.g. chol = tf.batch_cholesky(A). For that reason, only the lower triangular parts (including the diagonal) of the last two dimensions of chol are used. The strictly upper part is assumed to be zero and not accessed.
  rhs: A Tensor, same type as chol, shape is [..., M, K].
  name: A name to give this Op. Defaults to batch_cholesky_solve.
Returns:
  Solution to A x = rhs, shape [..., M, K].
tf.self_adjoint_eig(input, name=None)
Calculates the Eigen Decomposition of a square Self-Adjoint matrix.
Only the lower-triangular part of the input will be used in this case. The upper-triangular part will not be read.
The result is an M+1 x M matrix whose first row is the eigenvalues, and subsequent rows are eigenvectors.
Args:
  input: A Tensor. Must be one of the following types: float64, float32. Shape is [M, M].
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input. Shape is [M+1, M].
tf.batch_self_adjoint_eig(input, name=None)
Calculates the Eigen Decomposition of a batch of square self-adjoint matrices.
The input is a tensor of shape [..., M, M]
whose inner-most 2 dimensions
form square matrices, with the same constraints as the single matrix
SelfAdjointEig.
The result is a [..., M+1, M] matrix with [..., 0, :] containing the eigenvalues, and subsequent [..., 1:, :] containing the eigenvectors.
Args:
  input: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as input. Shape is [..., M+1, M].
tf.matrix_solve(matrix, rhs, adjoint=None, name=None)
Solves a system of linear equations. Checks for invertibility.
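A minimal Session-style sketch solving a 2x2 system A x = b:
import tensorflow as tf

# Solve A x = b for x, where A is 2x2 and b is 2x1.
A = tf.constant([[3., 1.], [1., 2.]])
b = tf.constant([[9.], [8.]])
x = tf.matrix_solve(A, b)

with tf.Session() as sess:
    print(sess.run(x))  # approximately [[2.], [3.]]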
Args:
  matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [M, M].
  rhs: A Tensor. Must have the same type as matrix. Shape is [M, K].
  adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its adjoint.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as matrix.
  Shape is [M, K]. If adjoint is False then output solves matrix * output = rhs. If adjoint is True then output solves adjoint(matrix) * output = rhs.
tf.batch_matrix_solve(matrix, rhs, adjoint=None, name=None)
Solves systems of linear equations. Checks for invertibility.
Matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. Rhs is a tensor of shape [..., M, K]. The output is a tensor of shape [..., M, K]. If adjoint is False then each output matrix satisfies matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then each output matrix satisfies adjoint(matrix[..., :, :]) * output[..., :, :] = rhs[..., :, :].
Args:
  matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K].
  adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as matrix. Shape is [..., M, K].
tf.matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)
Solves a system of linear equations with an upper or lower triangular matrix by
backsubstitution.
matrix is a matrix of shape [M, M]. If lower is True then the strictly upper triangular part of matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of matrix is assumed to be zero and not accessed.
rhs is a matrix of shape [M, K].
The output is a matrix of shape [M, K]. If adjoint is False then output satisfies the matrix equation matrix * output = rhs. If adjoint is True then output satisfies the matrix equation adjoint(matrix) * output = rhs.
Args:
  matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [M, M].
  rhs: A Tensor. Must have the same type as matrix. Shape is [M, K].
  lower: An optional bool. Defaults to True. Boolean indicating whether matrix is lower or upper triangular.
  adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its adjoint.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as matrix. Shape is [M, K].
tf.batch_matrix_triangular_solve(matrix, rhs, lower=None, adjoint=None, name=None)
Solves systems of linear equations with upper or lower triangular matrices by
backsubstitution.
matrix is a tensor of shape [..., M, M] whose inner-most 2 dimensions form square matrices. If lower is True then the strictly upper triangular part of each inner-most matrix is assumed to be zero and not accessed. If lower is False then the strictly lower triangular part of each inner-most matrix is assumed to be zero and not accessed.
rhs is a tensor of shape [..., M, K].
The output is a tensor of shape [..., M, K]. If adjoint is False then the innermost matrices in output satisfy the matrix equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :]. If adjoint is True then the innermost matrices in output satisfy the matrix equations adjoint(matrix[..., i, k]) * output[..., k, j] = rhs[..., i, j].
Args:
  matrix: A Tensor. Must be one of the following types: float64, float32. Shape is [..., M, M].
  rhs: A Tensor. Must have the same type as matrix. Shape is [..., M, K].
  lower: An optional bool. Defaults to True. Boolean indicating whether the innermost matrices in matrix are lower or upper triangular.
  adjoint: An optional bool. Defaults to False. Boolean indicating whether to solve with matrix or its (block-wise) adjoint.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as matrix. Shape is [..., M, K].
tf.matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)
Solves a linear least-squares problem.
Below we will use the following notation:
matrix = \(A \in \Re^{m \times n}\),
rhs = \(B \in \Re^{m \times k}\),
output = \(X \in \Re^{n \times k}\),
l2_regularizer = \(\lambda\).
If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the regularized least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2\), subject to \(A Z = B\).
Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.
If fast is False then the solution is computed using the rank revealing QR decomposition with column pivoting. This will always compute a least-squares solution that minimizes the residual norm \(||A X - B||_F^2\), even when \(A\) is rank deficient or ill-conditioned. Notice: The current version does not compute a minimum norm solution. If fast is False then l2_regularizer is ignored.
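A minimal Session-style sketch fitting an over-determined system in the least-squares sense (illustrative values; the fast Cholesky path is used by default):
import tensorflow as tf

# Over-determined system: 3 equations, 2 unknowns. The exact solution
# of A x = b here is x = [1, 2], which least squares recovers.
A = tf.constant([[1., 0.], [0., 1.], [1., 1.]])
b = tf.constant([[1.], [2.], [3.]])
x = tf.matrix_solve_ls(A, b, l2_regularizer=0.0, fast=True)

with tf.Session() as sess:
    print(sess.run(x))  # approximately [[1.], [2.]]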
Args:
  matrix: 2-D Tensor of shape [M, N].
  rhs: 2-D Tensor of shape [M, K].
  l2_regularizer: 0-D double Tensor. Ignored if fast=False.
  fast: bool. Defaults to True.
  name: string, optional name of the operation.
Returns:
  output: Matrix of shape [N, K] containing the matrix that solves matrix * output = rhs in the least-squares sense.
tf.batch_matrix_solve_ls(matrix, rhs, l2_regularizer=0.0, fast=True, name=None)
Solves multiple linear least-squares problems.
matrix is a tensor of shape [..., M, N] whose inner-most 2 dimensions form M-by-N matrices. Rhs is a tensor of shape [..., M, K] whose inner-most 2 dimensions form M-by-K matrices. The computed output is a Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.
Below we will use the following notation for each pair of matrix and right-hand sides in the batch:
matrix = \(A \in \Re^{m \times n}\),
rhs = \(B \in \Re^{m \times k}\),
output = \(X \in \Re^{n \times k}\),
l2_regularizer = \(\lambda\).
If fast is True, then the solution is computed by solving the normal equations using Cholesky decomposition. Specifically, if \(m \ge n\) then \(X = (A^T A + \lambda I)^{-1} A^T B\), which solves the least-squares problem \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||A Z - B||_F^2 + \lambda ||Z||_F^2\). If \(m \lt n\) then output is computed as \(X = A^T (A A^T + \lambda I)^{-1} B\), which (for \(\lambda = 0\)) is the minimum-norm solution to the under-determined linear system, i.e. \(X = \mathrm{argmin}_{Z \in \Re^{n \times k}} ||Z||_F^2\), subject to \(A Z = B\). Notice that the fast path is only numerically stable when \(A\) is numerically full rank and has a condition number \(\mathrm{cond}(A) \lt \frac{1}{\sqrt{\epsilon_{mach}}}\) or \(\lambda\) is sufficiently large.
If fast is False an algorithm based on the numerically robust complete orthogonal decomposition is used. This computes the minimum-norm least-squares solution, even when \(A\) is rank deficient. This path is typically 6-7 times slower than the fast path. If fast is False then l2_regularizer is ignored.
Args:
  matrix: Tensor of shape [..., M, N].
  rhs: Tensor of shape [..., M, K].
  l2_regularizer: 0-D double Tensor. Ignored if fast=False.
  fast: bool. Defaults to True.
  name: string, optional name of the operation.
Returns:
  output: Tensor of shape [..., N, K] whose inner-most 2 dimensions form N-by-K matrices that solve the equations matrix[..., :, :] * output[..., :, :] = rhs[..., :, :] in the least squares sense.
Complex Number Functions
TensorFlow provides several operations that you can use to add complex number functions to your graph.
tf.complex(real, imag, name=None)
Converts two real numbers to a complex number.
Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form (a + bj), where a represents the real part and b represents the imag part.
The input tensors real and imag must have the same shape.
For example:
# tensor 'real' is [2.25, 3.25]
# tensor `imag` is [4.75, 5.75]
tf.complex(real, imag) ==> [[2.25 + 4.75j], [3.25 + 5.75j]]
Args:
real
: ATensor
. Must be one of the following types:float32
,float64
.imag
: ATensor
. Must have the same type asreal
.name
: A name for the operation (optional).
Returns:
A Tensor
of type complex64
or complex128
.
tf.complex_abs(x, name=None)
Computes the complex absolute value of a tensor.
Given a tensor x of complex numbers, this operation returns a tensor of type float or double that is the absolute value of each element in x. All elements in x must be complex numbers of the form \(a + bj\). The absolute value is computed as \(\sqrt{a^2 + b^2}\).
For example:
# tensor 'x' is [[-2.25 + 4.75j], [-3.25 + 5.75j]]
tf.complex_abs(x) ==> [5.25594902, 6.60492229]
Args:
x
: ATensor
of typecomplex64
orcomplex128
.name
: A name for the operation (optional).
Returns:
A Tensor
of type float32
or float64
.
tf.conj(input, name=None)
Returns the complex conjugate of a complex number.
Given a tensor input of complex numbers, this operation returns a tensor of complex numbers that are the complex conjugate of each element in input. The complex numbers in input must be of the form \(a + bj\), where a is the real part and b is the imaginary part.
The complex conjugate returned by this operation is of the form \(a - bj\).
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.conj(input) ==> [-2.25 - 4.75j, 3.25 - 5.75j]
Args:
input
: ATensor
. Must be one of the following types:complex64
,complex128
.name
: A name for the operation (optional).
Returns:
A Tensor
. Has the same type as input
.
tf.imag(input, name=None)
Returns the imaginary part of a complex number.
Given a tensor input of complex numbers, this operation returns a tensor of type float or double that is the imaginary part of each element in input. All elements in input must be complex numbers of the form (a + bj), where a is the real part and b is the imaginary part returned by this operation.
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.imag(input) ==> [4.75, 5.75]
Args:
input
: ATensor
. Must be one of the following types:complex64
,complex128
.name
: A name for the operation (optional).
Returns:
A Tensor
of type float
or double
.
tf.real(input, name=None)
Returns the real part of a complex number.
Given a tensor input of complex numbers, this operation returns a tensor of type float or double that is the real part of each element in input. All elements in input must be complex numbers of the form (a + bj), where a is the real part returned by this operation and b is the imaginary part.
For example:
# tensor 'input' is [-2.25 + 4.75j, 3.25 + 5.75j]
tf.real(input) ==> [-2.25, 3.25]
Args:
input
: ATensor
. Must be one of the following types:complex64
,`complex128`.
name
: A name for the operation (optional).
Returns:
A Tensor
of type float
or double
.
tf.fft(input, name=None)
Compute the 1-dimensional discrete Fourier Transform.
Args:
  input: A Tensor of type complex64. A complex64 vector.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64. The 1D Fourier Transform of input.
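A minimal Session-style sketch transforming a 4-point complex vector; the expected values follow the standard unnormalized DFT:
import tensorflow as tf
import numpy as np

signal = tf.constant(np.array([1, 2, 3, 4], dtype=np.complex64))
spectrum = tf.fft(signal)

with tf.Session() as sess:
    print(sess.run(spectrum))
    # approximately [10.+0.j  -2.+2.j  -2.+0.j  -2.-2.j]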
tf.ifft(input, name=None)
Compute the inverse 1-dimensional discrete Fourier Transform.
Args:
  input: A Tensor of type complex64. A complex64 vector.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64. The inverse 1D Fourier Transform of input.
tf.fft2d(input, name=None)
Compute the 2-dimensional discrete Fourier Transform.
Args:
  input: A Tensor of type complex64. A complex64 matrix.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64. The 2D Fourier Transform of input.
tf.ifft2d(input, name=None)
Compute the inverse 2-dimensional discrete Fourier Transform.
Args:
  input: A Tensor of type complex64. A complex64 matrix.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64. The inverse 2D Fourier Transform of input.
tf.fft3d(input, name=None)
Compute the 3-dimensional discrete Fourier Transform.
Args:
  input: A Tensor of type complex64. A complex64 3-D tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64. The 3D Fourier Transform of input.
tf.ifft3d(input, name=None)
Compute the inverse 3-dimensional discrete Fourier Transform.
Args:
  input: A Tensor of type complex64. A complex64 3-D tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64. The inverse 3D Fourier Transform of input.
tf.batch_fft(input, name=None)
Compute the 1-dimensional discrete Fourier Transform over the inner-most dimension of input.
Args:
  input: A Tensor of type complex64. A complex64 tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64.
  A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its 1D Fourier Transform.
tf.batch_ifft(input, name=None)
Compute the inverse 1-dimensional discrete Fourier Transform over the inner-most dimension of input.
Args:
  input: A Tensor of type complex64. A complex64 tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64.
  A complex64 tensor of the same shape as input. The inner-most dimension of input is replaced with its inverse 1D Fourier Transform.
tf.batch_fft2d(input, name=None)
Compute the 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.
Args:
  input: A Tensor of type complex64. A complex64 tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64.
  A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their 2D Fourier Transform.
tf.batch_ifft2d(input, name=None)
Compute the inverse 2-dimensional discrete Fourier Transform over the inner-most 2 dimensions of input.
Args:
  input: A Tensor of type complex64. A complex64 tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64.
  A complex64 tensor of the same shape as input. The inner-most 2 dimensions of input are replaced with their inverse 2D Fourier Transform.
tf.batch_fft3d(input, name=None)
Compute the 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of input.
Args:
  input: A Tensor of type complex64. A complex64 tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64.
  A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their 3D Fourier Transform.
tf.batch_ifft3d(input, name=None)
Compute the inverse 3-dimensional discrete Fourier Transform over the inner-most 3 dimensions of input.
Args:
  input: A Tensor of type complex64. A complex64 tensor.
  name: A name for the operation (optional).
Returns:
  A Tensor of type complex64.
  A complex64 tensor of the same shape as input. The inner-most 3 dimensions of input are replaced with their inverse 3D Fourier Transform.
Reduction
TensorFlow provides several operations that you can use to perform common math computations that reduce various dimensions of a tensor.
tf.reduce_sum(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the sum of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[1, 1, 1]
# [1, 1, 1]]
tf.reduce_sum(x) ==> 6
tf.reduce_sum(x, 0) ==> [2, 2, 2]
tf.reduce_sum(x, 1) ==> [3, 3]
tf.reduce_sum(x, 1, keep_dims=True) ==> [[3], [3]]
tf.reduce_sum(x, [0, 1]) ==> 6
Args:
  input_tensor: The tensor to reduce. Should have numeric type.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.reduce_prod(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the product of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
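For example (a minimal Session-style sketch):
import tensorflow as tf

# 'x' is [[1., 2.], [3., 4.]]
x = tf.constant([[1., 2.], [3., 4.]])

with tf.Session() as sess:
    print(sess.run(tf.reduce_prod(x)))     # 24.0
    print(sess.run(tf.reduce_prod(x, 0)))  # [3. 8.]
    print(sess.run(tf.reduce_prod(x, 1)))  # [ 2. 12.]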
Args:
  input_tensor: The tensor to reduce. Should have numeric type.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.reduce_min(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the minimum of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
Args:
  input_tensor: The tensor to reduce. Should have numeric type.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.reduce_max(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the maximum of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
Args:
  input_tensor: The tensor to reduce. Should have numeric type.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.reduce_mean(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the mean of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[1., 1.]
# [2., 2.]]
tf.reduce_mean(x) ==> 1.5
tf.reduce_mean(x, 0) ==> [1.5, 1.5]
tf.reduce_mean(x, 1) ==> [1., 2.]
Args:
  input_tensor: The tensor to reduce. Should have numeric type.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.reduce_all(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the "logical and" of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[True, True]
# [False, False]]
tf.reduce_all(x) ==> False
tf.reduce_all(x, 0) ==> [False, False]
tf.reduce_all(x, 1) ==> [True, False]
Args:
  input_tensor: The boolean tensor to reduce.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.reduce_any(input_tensor, reduction_indices=None, keep_dims=False, name=None)
Computes the "logical or" of elements across dimensions of a tensor.
Reduces input_tensor along the dimensions given in reduction_indices. Unless keep_dims is true, the rank of the tensor is reduced by 1 for each entry in reduction_indices. If keep_dims is true, the reduced dimensions are retained with length 1.
If reduction_indices has no entries, all dimensions are reduced, and a tensor with a single element is returned.
For example:
# 'x' is [[True, True]
# [False, False]]
tf.reduce_any(x) ==> True
tf.reduce_any(x, 0) ==> [True, True]
tf.reduce_any(x, 1) ==> [True, False]
Args:
  input_tensor: The boolean tensor to reduce.
  reduction_indices: The dimensions to reduce. If None (the default), reduces all dimensions.
  keep_dims: If true, retains reduced dimensions with length 1.
  name: A name for the operation (optional).
Returns:
  The reduced tensor.
tf.accumulate_n(inputs, shape=None, tensor_dtype=None, name=None)
Returns the element-wise sum of a list of tensors.
Optionally, pass shape and tensor_dtype for shape and type checking; otherwise, these are inferred.
For example:
# tensor 'a' is [[1, 2], [3, 4]]
# tensor `b` is [[5, 0], [0, 6]]
tf.accumulate_n([a, b, a]) ==> [[7, 4], [6, 14]]
# Explicitly pass shape and type
tf.accumulate_n([a, b, a], shape=[2, 2], tensor_dtype=tf.int32)
==> [[7, 4], [6, 14]]
Args:
  inputs: A list of Tensor objects, each with same shape and type.
  shape: Shape of elements of inputs.
  tensor_dtype: The type of inputs.
  name: A name for the operation (optional).
Returns:
  A Tensor of same shape and type as the elements of inputs.
Raises:
  ValueError: If inputs don't all have same shape and dtype or the shape cannot be inferred.
Segmentation
TensorFlow provides several operations that you can use to perform common
math computations on tensor segments.
Here a segmentation is a partitioning of a tensor along the first dimension, i.e. it defines a mapping from the first dimension onto segment_ids. The segment_ids tensor should be the size of the first dimension, d0, with consecutive IDs in the range 0 to k, where k<d0.
In particular, a segmentation of a matrix tensor is a mapping of rows to
segments.
For example:
c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
tf.segment_sum(c, tf.constant([0, 0, 1]))
==> [[0 0 0 0]
[5 6 7 8]]
tf.segment_sum(data, segment_ids, name=None)
Computes the sum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \sum_j data_j\) where the sum is over j such that segment_ids[j] == i.
Args:
  data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as data.
  Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.segment_prod(data, segment_ids, name=None)
Computes the product along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \prod_j data_j\) where the product is over j such that segment_ids[j] == i.
Args:
  data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as data.
  Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.segment_min(data, segment_ids, name=None)
Computes the minimum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that \(output_i = \min_j(data_j)\) where min is over j such that segment_ids[j] == i.
Args:
  data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
  segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
  name: A name for the operation (optional).
Returns:
  A Tensor. Has the same type as data.
  Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.segment_max(data, segment_ids, name=None)
Computes the maximum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that
\(output_i = \max_j(data_j)\) where max is over j such that segment_ids[j] == i.
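For example (an illustrative sketch; the expected values follow from the definition above):
# tensor 'c' is [[1, 2, 3, 4], [-1, -2, -3, -4], [5, 6, 7, 8]]
tf.segment_max(c, tf.constant([0, 0, 1]))
==> [[1 2 3 4]
     [5 6 7 8]]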
Args:
data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as data.
Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.segment_mean(data, segment_ids, name=None)
Computes the mean along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that
\(output_i = \frac{\sum_j data_j}{N}\) where the mean is over j such that segment_ids[j] == i and N is the total number of values summed.
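For example, with floating-point data (an illustrative sketch; the expected values follow from the definition above):
# tensor 'c' is [[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], [4.0, 3.0, 2.0, 1.0]]
# Rows 0 and 1 are averaged into segment 0; row 2 forms segment 1.
tf.segment_mean(c, tf.constant([0, 0, 1]))
==> [[3. 4. 5. 6.]
     [4. 3. 2. 1.]]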
Args:
data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension. Values should be sorted and can be repeated.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as data.
Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.unsorted_segment_sum(data, segment_ids, num_segments, name=None)
Computes the sum along segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Computes a tensor such that
\(output_i = \sum_j data_j\) where the sum is over j such that segment_ids[j] == i. Unlike SegmentSum, segment_ids need not be sorted and need not cover all values in the full range of valid values.
If the sum is empty for a given segment ID i, output[i] = 0.
num_segments should equal the number of distinct segment IDs.
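For example, with unsorted segment IDs (an illustrative sketch; the expected values follow from the definition above):
# tensor 'c' is [[1, 2, 3, 4], [5, 6, 7, 8], [4, 3, 2, 1]]
# Rows 0 and 2 map to segment 0; row 1 maps to segment 1.
tf.unsorted_segment_sum(c, tf.constant([0, 1, 0]), num_segments=2)
==> [[5 5 5 5]
     [5 6 7 8]]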
Args:
data: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
segment_ids: A Tensor. Must be one of the following types: int32, int64. A 1-D tensor whose rank is equal to the rank of data's first dimension.
num_segments: A Tensor of type int32.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as data.
Has same shape as data, except for dimension 0 which has size num_segments.
tf.sparse_segment_sum(data, indices, segment_ids, name=None)
Computes the sum along sparse segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Like SegmentSum, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.
For example:
c = tf.constant([[1,2,3,4], [-1,-2,-3,-4], [5,6,7,8]])
# Select two rows, one segment.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 0]))
==> [[0 0 0 0]]
# Select two rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1]), tf.constant([0, 1]))
==> [[ 1 2 3 4]
[-1 -2 -3 -4]]
# Select all rows, two segments.
tf.sparse_segment_sum(c, tf.constant([0, 1, 2]), tf.constant([0, 0, 1]))
==> [[0 0 0 0]
[5 6 7 8]]
# Which is equivalent to:
tf.segment_sum(c, tf.constant([0, 0, 1]))
Args:
data: A Tensor. Must be one of the following types: float32, float64, int32, int64, uint8, int16, int8, uint16, half.
indices: A Tensor of type int32. A 1-D tensor. Has same rank as segment_ids.
segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as data.
Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.sparse_segment_mean(data, indices, segment_ids, name=None)
Computes the mean along sparse segments of a tensor.
Read the section on Segmentation for an explanation of segments.
Like SegmentMean, but segment_ids can have rank less than data's first dimension, selecting a subset of dimension 0, specified by indices.
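For example (an illustrative sketch; the expected values follow from the definition above):
c = tf.constant([[1.0, 2.0, 3.0, 4.0], [5.0, 6.0, 7.0, 8.0], [9.0, 10.0, 11.0, 12.0]])
# Select rows 0 and 2 and average them into a single segment.
tf.sparse_segment_mean(c, tf.constant([0, 2]), tf.constant([0, 0]))
==> [[5. 6. 7. 8.]]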
Args:
data: A Tensor. Must be one of the following types: float32, float64.
indices: A Tensor of type int32. A 1-D tensor. Has same rank as segment_ids.
segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as data.
Has same shape as data, except for dimension 0 which has size k, the number of segments.
tf.sparse_segment_sqrt_n(data, indices, segment_ids, name=None)
Computes the sum along sparse segments of a tensor divided by the sqrt of N.
N is the size of the segment being reduced.
Read the section on Segmentation for an explanation of segments.
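For example (an illustrative sketch; the output shown is rounded and follows from the definition above):
c = tf.constant([[2.0, 4.0], [6.0, 8.0]])
# Both rows fall in segment 0, so the segment sum [8, 12] is divided by sqrt(2).
tf.sparse_segment_sqrt_n(c, tf.constant([0, 1]), tf.constant([0, 0]))
==> [[5.657 8.485]]  # rounded; [8, 12] / sqrt(2)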
Args:
data: A Tensor. Must be one of the following types: float32, float64.
indices: A Tensor of type int32. A 1-D tensor. Has same rank as segment_ids.
segment_ids: A Tensor of type int32. A 1-D tensor. Values should be sorted and can be repeated.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as data.
Has same shape as data, except for dimension 0 which has size k, the number of segments.
Sequence Comparison and Indexing
TensorFlow provides several operations that you can use to add sequence comparison and index extraction to your graph. You can use these operations to determine sequence differences and determine the indexes of specific values in a tensor.
tf.argmin(input, dimension, name=None)
Returns the index with the smallest value across dimensions of a tensor.
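For example (an illustrative sketch; the expected values follow from the description above):
# tensor 'a' is [[3, 10, 1],
#                [4,  9, 2]]
tf.argmin(a, 1)  # index of the smallest value in each row
==> [2, 2]
tf.argmin(a, 0)  # index of the smallest value in each column
==> [0, 1, 0]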
Args:
input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
dimension: A Tensor of type int32. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
name: A name for the operation (optional).
Returns:
A Tensor of type int64.
tf.argmax(input, dimension, name=None)
Returns the index with the largest value across dimensions of a tensor.
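For example (an illustrative sketch; the expected values follow from the description above):
# tensor 'a' is [[3, 10, 1],
#                [4,  9, 2]]
tf.argmax(a, 1)  # index of the largest value in each row
==> [1, 1]
tf.argmax(a, 0)  # index of the largest value in each column
==> [1, 0, 1]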
Args:
input: A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
dimension: A Tensor of type int32. int32, 0 <= dimension < rank(input). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
name: A name for the operation (optional).
Returns:
A Tensor of type int64.
tf.listdiff(x, y, name=None)
Computes the difference between two lists of numbers or strings.
Given a list x and a list y, this operation returns a list out that represents all values that are in x but not in y. The returned list out is sorted in the same order that the numbers appear in x (duplicates are preserved). This operation also returns a list idx that represents the position of each out element in x. In other words:
out[i] = x[idx[i]] for i in [0, 1, ..., len(out) - 1]
For example, given this input:
x = [1, 2, 3, 4, 5, 6]
y = [1, 3, 5]
This operation would return:
out ==> [2, 4, 6]
idx ==> [1, 3, 5]
Args:
x: A Tensor. 1-D. Values to keep.
y: A Tensor. Must have the same type as x. 1-D. Values to remove.
name: A name for the operation (optional).
Returns:
A tuple of Tensor objects (out, idx).
out: A Tensor. Has the same type as x. 1-D. Values present in x but not in y.
idx: A Tensor of type int32. 1-D. Positions of x values preserved in out.
tf.where(input, name=None)
Returns locations of true values in a boolean tensor.
This operation returns the coordinates of true elements in input. The coordinates are returned in a 2-D tensor where the first dimension (rows) represents the number of true elements, and the second dimension (columns) represents the coordinates of the true elements. Keep in mind, the shape of the output tensor can vary depending on how many true values there are in input. Indices are output in row-major order.
For example:
# 'input' tensor is [[True, False]
# [True, False]]
# 'input' has two true values, so output has two coordinates.
# 'input' has rank of 2, so coordinates have two indices.
where(input) ==> [[0, 0],
[1, 0]]
# `input` tensor is [[[True, False]
# [True, False]]
# [[False, True]
# [False, True]]
# [[False, False]
# [False, True]]]
# 'input' has 5 true values, so output has 5 coordinates.
# 'input' has rank of 3, so coordinates have three indices.
where(input) ==> [[0, 0, 0],
[0, 1, 0],
[1, 0, 1],
[1, 1, 1],
[2, 1, 1]]
Args:
input: A Tensor of type bool.
name: A name for the operation (optional).
Returns:
A Tensor of type int64.
tf.unique(x, name=None)
Finds unique elements in a 1-D tensor.
This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words:
y[idx[i]] = x[i] for i in [0, 1, ..., len(x) - 1]
For example:
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
Args:
x: A Tensor. 1-D.
name: A name for the operation (optional).
Returns:
A tuple of Tensor objects (y, idx).
y: A Tensor. Has the same type as x. 1-D.
idx: A Tensor of type int32. 1-D.
tf.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')
Computes the Levenshtein distance between sequences.
This operation takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance. You can normalize the edit distance by length of truth by setting normalize to true.
For example, given the following input:
# 'hypothesis' is a tensor of shape `[2, 1]` with variable-length values:
# (0,0) = ["a"]
# (1,0) = ["b"]
hypothesis = tf.SparseTensor(
[[0, 0, 0],
[1, 0, 0]],
["a", "b"]
(2, 1, 1))
# 'truth' is a tensor of shape `[2, 2]` with variable-length values:
# (0,0) = []
# (0,1) = ["a"]
# (1,0) = ["b", "c"]
# (1,1) = ["a"]
truth = tf.SparseTensor(
[[0, 1, 0],
[1, 0, 0],
[1, 0, 1],
[1, 1, 0]],
["a", "b", "c", "a"],
(2, 2, 2))
normalize = True
This operation would return the following:
# 'output' is a tensor of shape `[2, 2]` with edit distances normalized
# by 'truth' lengths.
output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis
[0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis
Args:
hypothesis
: ASparseTensor
containing hypothesis sequences.truth
: ASparseTensor
containing truth sequences.normalize
: Abool
. IfTrue
, normalizes the Levenshtein distance by length oftruth.
name
: A name for the operation (optional).
Returns:
A dense Tensor with rank R - 1, where R is the rank of the SparseTensor inputs hypothesis and truth.
Raises:
TypeError: If either hypothesis or truth are not a SparseTensor.
tf.invert_permutation(x, name=None)
Computes the inverse permutation of a tensor.
This operation computes the inverse of an index permutation. It takes a 1-D integer tensor x, which represents the indices of a zero-based array, and swaps each value with its index position. In other words, for an output tensor y and an input tensor x, this operation computes the following:
y[x[i]] = i for i in [0, 1, ..., len(x) - 1]
The values must include 0. There can be no duplicate values or negative values.
For example:
# tensor `x` is [3, 4, 0, 2, 1]
invert_permutation(x) ==> [2, 4, 3, 0, 1]
Args:
x
: ATensor
of typeint32
. 1-D.name
: A name for the operation (optional).
Returns:
A Tensor
of type int32
. 1-D.
Other Functions and Classes
tf.scalar_mul(scalar, x)
Multiplies a scalar times a Tensor or IndexedSlices object.
Intended for use in gradient code which might deal with IndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors.
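For example (an illustrative sketch; the expected values follow from the description above):
two = tf.constant(2.0)  # a 0-D scalar tensor with known shape
x = tf.constant([1.0, 2.0, 3.0])
tf.scalar_mul(two, x)
==> [2. 4. 6.]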
Args:
scalar: A 0-D scalar Tensor. Must have known shape.
x: A Tensor or IndexedSlices to be scaled.
Returns:
scalar * x of the same type (Tensor or IndexedSlices) as x.
Raises:
ValueError: if scalar is not a 0-D scalar.
tf.sparse_segment_sqrt_n_grad(grad, indices, segment_ids, output_dim0, name=None)
Computes gradients for SparseSegmentSqrtN.
Returns tensor "output" with same shape as grad, except for dimension 0 whose value is output_dim0.
Args:
grad: A Tensor. Must be one of the following types: float32, float64. gradient propagated to the SparseSegmentSqrtN op.
indices: A Tensor of type int32. indices passed to the corresponding SparseSegmentSqrtN op.
segment_ids: A Tensor of type int32. segment_ids passed to the corresponding SparseSegmentSqrtN op.
output_dim0: A Tensor of type int32. dimension 0 of "data" passed to SparseSegmentSqrtN op.
name: A name for the operation (optional).
Returns:
A Tensor. Has the same type as grad.