tx.ops
matrix_indices
.matrix_indices(
index_tensor, dtype = tf.int64, sort_indices = True, name = 'matrix_indices'
)
Transforms a batch of column indices into a batch of matrix indices.
Args
- index_tensor (`Tensor`) : a tensor with shape `(b,n)` with a batch of `n` column indices.
- dtype (`DType`) : the output dtype for the indices. Defaults to `tf.int64`.
- sort_indices (`bool`) : if `True`, output indices are sorted in canonical row-major order.
- name (`str`) : name for this op.
Returns
- tensor (`Tensor`) : a tensor with shape `[b,2]` with the corresponding `(row, column)` matrix indices for each index in the input tensor.
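A minimal usage sketch (assuming the library is imported as `tx`, as in the doc's other examples); the expected output is inferred from the description above, not taken from the library's tests:
```python
import tensorflow as tf
import tensorx as tx

# a batch of 2 rows, each with 2 column indices
cols = tf.constant([[0, 2], [1, 3]])

indices = tx.matrix_indices(cols)
# expected (row, column) pairs: [[0, 0], [0, 2], [1, 1], [1, 3]]
```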
empty_sparse_tensor
.empty_sparse_tensor(
dense_shape, dtype = tf.float32, name = 'empty_sp_tensor'
)
Creates an empty `SparseTensor`.
Args
- dense_shape (`TensorShape`) : a 1-D tensor, python list, or numpy array with the output shape for the sparse tensor.
- dtype (`DType`) : the dtype of the values for the empty `tf.SparseTensor`.
- name (`str`) : a name for this operation.
Returns
- sp_tensor (`SparseTensor`) : an empty sparse tensor with the given shape.
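A minimal usage sketch, assuming the result behaves like a regular `tf.SparseTensor` with no stored entries:
```python
import tensorflow as tf
import tensorx as tx

sp = tx.empty_sparse_tensor([3, 4])
# converting to dense should give a 3x4 matrix of zeros
dense = tf.sparse.to_dense(sp)
```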
sparse_ones
.sparse_ones(
indices, dense_shape, dtype = tf.float32, name = 'sparse_ones'
)
Creates a new `SparseTensor` with the given indices having value 1.
Args
- indices (`Tensor`) : a rank 2 tensor with the `(row, column)` indices for the resulting sparse tensor.
- dense_shape (`Tensor` or `TensorShape`) : the output dense shape.
- dtype (`tf.DType`) : the tensor type for the values.
- name (`str`) : name for the sparse_ones op.
Returns
- sp_tensor (`SparseTensor`) : a new sparse tensor with values set to 1.
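A minimal usage sketch; the dense equivalent shown in the comment is inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

indices = tf.constant([[0, 0], [1, 2]], dtype=tf.int64)
sp = tx.sparse_ones(indices, dense_shape=[2, 3])
# dense equivalent: [[1., 0., 0.],
#                    [0., 0., 1.]]
```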
sparse_zeros
.sparse_zeros(
indices, dense_shape, dtype = tf.float32, name = 'sparse_zeros'
)
Creates a new `SparseTensor` with the given indices having value 0.
Args
- indices (`Tensor`) : a rank 2 tensor with the `(row, column)` indices for the resulting sparse tensor.
- dense_shape (`Tensor` or `TensorShape`) : the output dense shape.
- dtype (`tf.DType`) : the tensor type for the values.
- name (`str`) : name for the sparse_zeros op.
Returns
- sp_tensor (`SparseTensor`) : a new sparse tensor with values set to 0.
sparse_overlap
.sparse_overlap(
sp_tensor1, sp_tensor2, name = 'sparse_overlap'
)
sparse overlap
Returns a `SparseTensor` containing the indices present in both input sparse tensors, with the values taken from the first one.
Args
- sp_tensor1 (`SparseTensor`) : a sparse tensor.
- sp_tensor2 (`SparseTensor`) : another sparse tensor.
- name (`str`) : name for the sparse_overlap op.
Returns
- sp_tensor (`SparseTensor`) : a sparse tensor with the overlapping indices and the values of `sp_tensor1`.
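A minimal usage sketch; the expected result is inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

sp1 = tf.SparseTensor(indices=[[0, 0], [1, 1]], values=[1., 2.], dense_shape=[2, 2])
sp2 = tf.SparseTensor(indices=[[0, 1], [1, 1]], values=[7., 8.], dense_shape=[2, 2])

overlap = tx.sparse_overlap(sp1, sp2)
# expected: a single entry at index [1, 1] with value 2. (the value from sp1)
```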
apply_gate
.apply_gate(
tensor, gate
)
Applies a gate tensor to the given input.
If the input tensor's outer dimension is a multiple of the gate's outer dimension, broadcasting is used to apply the gate evenly across the input tensor.
Example
```python
tx.apply_gate(tf.ones([1, 4]), [1., 0.])
# [[1., 1., 0., 0.]]
```
Args
- tensor (`Tensor`) : an input tensor.
- gate (`Tensor`) : a float tensor that is multiplied by the input tensor. The outer dimension of the input tensor should either match the gate tensor's outer dimension or be a multiple of it.
Returns
- gated (`Tensor`) : input tensor gated using the given gate weights.
sparse_indices
.sparse_indices(
sp_values, name = 'sparse_indices'
)
Returns a `SparseTensor` whose values contain the column indices of the active entries of a given `SparseTensor`.
Use Case
To be used with `embedding_lookup_sparse` when we need two `SparseTensor` objects, one with the indices and one with the values.
Args
- sp_values (`SparseTensor`) : a sparse tensor from which we extract the active indices.
- name (`str`) : name for the sparse_indices op.
Returns
- sp_indices (`SparseTensor`) : a sparse tensor with the column indices as values.
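A minimal usage sketch; the expected result is inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

sp_values = tf.SparseTensor(indices=[[0, 1], [1, 3]],
                            values=[0.5, 2.0],
                            dense_shape=[2, 4])

sp_ids = tx.sparse_indices(sp_values)
# expected: the same indices [[0, 1], [1, 3]], but with the column indices [1, 3] as values
```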
sparse_matrix_indices
.sparse_matrix_indices(
column_indices, num_cols, dtype = tf.float32, name = 'sparse_one_hot'
)
Transforms a batch of column indices into a one-hot encoded `SparseTensor`.
Example
```python
indices = [[0, 1, 4],
           [1, 2, 6]]
sp_one_hot = tx.sparse_matrix_indices(indices, num_cols=10)
# expected:
# tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 4],
#                          [1, 1], [1, 2], [1, 6]],
#                 values=[1., 1., 1., 1., 1., 1.],
#                 dense_shape=[2, 10])
```
Args
- column_indices (`Tensor`) : a dense tensor with the indices to be active for each sample (row).
- num_cols (`int`) : number of columns for the one-hot encoding.
- dtype (`tf.DType`) : the type for the output values.
- name (`str`) : name for this op.
Returns
- sp_tensor (`SparseTensor`) : a sparse tensor with the one-hot encoding for the given indices.
dropout
.dropout(
tensor, noise_shape = None, random_mask = None, probability = 0.1, scale = True,
seed = None, return_mask = False, name = 'dropout'
)
With probability `probability`, outputs 0; otherwise, outputs the input element. If `scale` is `True`, the input elements are scaled up by `1 / (1 - probability)` so that the expected sum of the activations is unchanged.
By default, each element is kept or dropped independently. If `noise_shape` is specified, it must be broadcastable to the shape of `x`, and only dimensions with `noise_shape[i] == shape(x)[i]` will make independent decisions. For example, if `shape(x) = [k, l, m, n]` and `noise_shape = [k, 1, 1, n]`, each batch and channel component will be kept independently and each row and column will be kept or not kept together.
Args
- tensor (`Tensor`) : an input tensor.
- noise_shape (`Tensor`) : a 1-D `Tensor` of type `int32`, representing the shape for randomly generated drop flags.
- return_mask (`bool`) : if `True`, returns the random mask used.
- random_mask (`Tensor`) : a tensor used to create the random bernoulli mask.
- probability (`float` or `Tensor`) : a scalar `Tensor` with the same type as `x`. The probability that each element is dropped.
- scale (`bool`) : if `True`, rescales the non-zero elements to `1 / (1 - drop_probability)`.
- seed (`int`) : a Python integer with the random number generator seed.
- name (`str`) : a name for this operation.
Returns
- tensor (`Tensor`) : output tensor with the same `DType` as the input.
Raises
- ValueError : if `probability` is not in `[0, 1]` or if `x` is not a floating point tensor.
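A minimal usage sketch, treating `probability` as the drop probability as described above:
```python
import tensorflow as tf
import tensorx as tx

x = tf.ones([2, 4])

# drop each element with probability 0.5 and rescale the survivors by 1 / (1 - 0.5)
y = tx.dropout(x, probability=0.5, scale=True, seed=42)
# surviving entries are expected to be 2., dropped entries 0.
```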
alpha_dropout
.alpha_dropout(
tensor, noise_shape = None, random_mask = None, probability = 0.1, seed = None,
return_mask = False, name = 'dropout'
)
Alpha Dropout keeps the mean and variance of the inputs in order to preserve self-normalization after dropout. Alpha dropout was proposed for Scaled Exponential Linear Units (SELUs) because it randomly sets activations to the negative saturation value rather than 0.
The multiplicative noise has standard deviation $\sqrt{\frac{probability}{1 - probability}}$.
References
- [Self-Normalizing Neural Networks (Klambauer et al., 2017)](https://arxiv.org/abs/1706.02515)
Args
- tensor (`Tensor`) : a floating point tensor.
- noise_shape (`Tensor`) : a 1-D `Tensor` of type `int32`, representing the shape for randomly generated drop flags.
- return_mask (`bool`) : if `True`, returns the random mask used.
- random_mask (`Tensor`) : a tensor used to create the random bernoulli mask.
- probability (`float` or `Tensor`) : a scalar `Tensor` with the same type as `x`. The probability that each element is dropped.
- seed (`int`) : a Python integer with the random number generator seed.
- name (`str`) : a name for this operation (optional).
Returns
- result (`Tensor`) : a tensor with the same shape as the input, with the dropped units set to the negative saturation value.
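A minimal usage sketch; pairing alpha dropout with a SELU activation is the use case described above:
```python
import tensorflow as tf
import tensorx as tx

x = tf.random.normal([2, 8])
h = tf.nn.selu(x)

# alpha dropout preserves the mean and variance of the SELU activations
y = tx.alpha_dropout(h, probability=0.1, seed=42)
```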
sparse_dropout
.sparse_dropout(
sp_tensor, probability = 0.2, scale = True, seed = None, mask = None,
return_mask = False, alpha = False, name = 'sparse_dropout'
)
Performs dropout on a `SparseTensor`.
With probability `probability`, each stored value is dropped (set to 0); if `scale` is `True`, the remaining values are scaled up by `1 / (1 - probability)` so that the expected sum is unchanged.
Args
- sp_tensor (`SparseTensor`) : a sparse tensor on which the dropout is performed.
- mask (`Tensor`) : a binary random mask to be applied to the values of this tensor.
- return_mask (`bool`) : if `True`, returns the random mask used to perform dropout as `(result, random_mask)`.
- probability (`float`, `Tensor`) : a scalar tensor with the same type as `x`. The probability that each element is dropped.
- scale (`bool`) : if `True`, rescales the kept values by `1 / (1 - probability)`, else simply drops without rescaling.
- seed (`int`) : a Python integer used as seed. (See the TensorFlow documentation for `tf.set_random_seed` for behavior.)
- alpha (`bool`) : if `True`, uses `alpha_dropout` instead of `dropout` on the inputs.
- name (`str`) : a name for this operation (optional).
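A minimal usage sketch:
```python
import tensorflow as tf
import tensorx as tx

sp = tf.SparseTensor(indices=[[0, 0], [0, 2], [1, 1]],
                     values=[1., 2., 3.],
                     dense_shape=[2, 3])

# drop each stored value with probability 0.2 and rescale the survivors
sp_dropped = tx.sparse_dropout(sp, probability=0.2, scale=True, seed=42)
```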
binary_random_mask
.binary_random_mask(
tensor, mask_probability = 0.0, seed = None
)
Creates a binary mask with the same shape as the given tensor, randomly generated from the given mask probability.
Args
- tensor (`Tensor`) : tensor for which we would like to create a mask.
- mask_probability (`float`, `Tensor`) : scalar tensor or float with the probability of masking a given value.
- seed (`int`) : seed for the random number generator.
Returns
- binary_mask (`Tensor`) : a tensor with values 0 or 1 with the same shape as the input tensor.
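A minimal usage sketch:
```python
import tensorflow as tf
import tensorx as tx

x = tf.zeros([2, 4])

mask = tx.binary_random_mask(x, mask_probability=0.5, seed=42)
# a tensor of 0s and 1s with shape [2, 4]; mask_probability controls the fraction of masked entries
```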
to_sparse
.to_sparse(
tensor, name = 'to_sparse'
)
Converts a given `Tensor` into a `SparseTensor`.
Example
For a dense `Tensor` such as:
```python
tensor = [[1, 0],
          [2, 3]]
```
this returns an op that creates the following `SparseTensor`:
```python
tf.SparseTensor(indices=[[0, 0],
                         [1, 0],
                         [1, 1]],
                values=[1, 2, 3],
                dense_shape=[2, 2])
```
Args
- tensor (`Tensor`) : a dense tensor.
- name (`str`) : name for the to_sparse op.
Returns
- sp_tensor (`SparseTensor`) : a sparse tensor with sparse index and value tensors with the non-zero entries of the given input.
embedding_lookup_sparse
.embedding_lookup_sparse(
params, sp_tensor, combiner = None, max_norm = None,
name = 'embedding_lookup_sparse'
)
Computes embeddings for the given ids and weights.
Info
Assumes that there is at least one id for each row in the dense tensor represented by sp_ids (i.e. there are no rows with empty features), and that all the indices of sp_ids are in canonical row-major order. It also assumes that all id values lie in the range [0, p0), where p0 is the sum of the size of params along dimension 0.
Note
In TensorFlow's implementation, sparse gradients do not propagate through gather.
Args
- params : a single tensor representing the complete embedding tensor, or a list of P tensors, all of the same shape except for the first dimension, representing sharded embedding tensors. Alternatively, a `PartitionedVariable`, created by partitioning along dimension 0. Each element must be appropriately sized for the given `partition_strategy`.
- sp_tensor (`SparseTensor`) : an N x M `SparseTensor` with the ids and weights, where N is typically batch size and M is arbitrary.
- combiner : a string specifying the reduction op. Currently "mean", "sqrtn" and "sum" are supported. "sum" computes the weighted sum of the embedding results for each row. "mean" is the weighted sum divided by the total weight. "sqrtn" is the weighted sum divided by the square root of the sum of the squares of the weights.
- max_norm : if not `None`, each embedding is clipped if its l2-norm is larger than this value, before combining.
- name (`str`) : op name.
Returns
- tensor (`Tensor`) : dense tensor representing the combined embeddings for the sparse ids. For each row in the dense tensor represented by `sp_ids`, the op looks up the embeddings for all ids in that row, multiplies them by the corresponding weight, and combines these embeddings as specified.
Raises
- TypeError : if `sp_ids` is not a `SparseTensor`, or if `sp_weights` is neither `None` nor a `SparseTensor`.
- ValueError : if `combiner` is not one of {"mean", "sqrtn", "sum"}.
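A minimal usage sketch; it assumes, as suggested by the `sparse_indices` use case above, that the column indices of `sp_tensor` act as the embedding ids and its values as the weights:
```python
import tensorflow as tf
import tensorx as tx

embeddings = tf.random.uniform([10, 4])  # 10 ids, embedding dimension 4

# a batch of 2 rows; column index = id, value = weight
sp_tensor = tf.SparseTensor(indices=[[0, 1], [0, 3], [1, 2]],
                            values=[0.5, 0.5, 1.0],
                            dense_shape=[2, 10])

out = tx.embedding_lookup_sparse(embeddings, sp_tensor, combiner="sum")
# out has shape [2, 4]: one combined embedding per row
```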
sort_by_first
.sort_by_first(
tensor1, tensor2, ascending = True, name = 'sort_by_first'
)
sort_by_first
Sorts two tensors: the first is sorted by value, and the second is reordered according to the permutation produced by sorting the first.
Args
- tensor1 (`Tensor`) : tensor that determines the order by which the second is sorted.
- tensor2 (`Tensor`) : tensor to be sorted according to the sorting of the first.
- ascending (`bool`) : if `True`, sorts by ascending order of value.
- name (`str`) : name of the op.
Returns
- tensors (`Tensor`, `Tensor`) : the sorted first tensor, and the second tensor sorted according to the sorting indices of the first.
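A minimal usage sketch; the expected output is inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

keys = tf.constant([3, 1, 2])
values = tf.constant([30, 10, 20])

sorted_keys, sorted_values = tx.sort_by_first(keys, values)
# expected: sorted_keys = [1, 2, 3], sorted_values = [10, 20, 30]
```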
ranges
.ranges(
range_sizes, name = 'ranges'
)
ranges
Similar to concatenating multiple `tf.range` calls, one for each element of a given 1D tensor of range sizes.
Example
```python
ranges([1, 2, 4])
# [0, 0, 1, 0, 1, 2, 3]
```
The individual ranges are `[0]`, `[0,1]`, `[0,1,2,3]`.
Args
- range_sizes (`Tensor`) : 1D tensor with the range sizes.
- name (`str`) : ranges op name.
Returns
- ranges (`Tensor`) : a 1D `Tensor` with `tf.reduce_sum(range_sizes)` elements.
grid_2d
.grid_2d(
shape, name = 'grid_2d'
)
Creates a tensor with 2D grid coordinates.
Args
- shape (`Tensor`) : a `Tensor` of `tf.int32` with the 2D shape of the grid.
- name (`str`) : grid_2d op name.
Returns
- grid_coordinates (`Tensor`) : 2D tensor with the grid coordinates.
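A minimal usage sketch; the expected output assumes row-major enumeration of the grid cells:
```python
import tensorflow as tf
import tensorx as tx

coords = tx.grid_2d([2, 3])
# expected: one (row, col) pair per grid cell
# [[0, 0], [0, 1], [0, 2],
#  [1, 0], [1, 1], [1, 2]]
```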
gather_sparse
.gather_sparse(
sp_tensor, ids, name = 'gather_sparse'
)
gather_sparse
Gathers rows from a sparse tensor by the given ids and returns a sparse tensor.
Warning
Gathering from a `SparseTensor` is inefficient.
Example
```python
gather_sparse(sp_tensor, [1, 1, 4])
# returns a [3, sp_tensor.dense_shape[-1]] SparseTensor
```
Args
- sp_tensor (`SparseTensor`) : a sparse tensor.
- ids (`Tensor`) : an int tensor with the ids of the rows to be returned.
- name (`str`) : op name.
Returns
- sp_gathered (`SparseTensor`) : a sparse tensor with the gathered rows.
sparse_tile
.sparse_tile(
sp_tensor, num, name = 'sparse_tile'
)
Constructs a `SparseTensor` by replicating the input sparse tensor `num` times.
Args
- sp_tensor (`SparseTensor`) : a sparse input tensor to be tiled.
- num (`int`) : number of repetitions.
- name (`str`) : name for the op.
Returns
- sp_tile (`SparseTensor`) : the resulting sparse tensor.
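A minimal usage sketch, assuming the copies are stacked along the first dimension:
```python
import tensorflow as tf
import tensorx as tx

sp = tf.SparseTensor(indices=[[0, 1]], values=[1.], dense_shape=[1, 3])

sp_tiled = tx.sparse_tile(sp, num=2)
# expected: the single row replicated twice, with dense_shape [2, 3]
```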
pairs
.pairs(
tensor1, tensor2, name = 'pairs'
)
Pairwise combination of elements from the two tensors.
Example
```python
t1 = [[0], [1]]
t2 = [2, 3, 4]
t12 = [[0, 2], [1, 2], [0, 3], [1, 3], [0, 4], [1, 4]]
p12 = tx.pairs(t1, t2)
tf.reduce_all(tf.equal(p12, t12))  # True
```
Args
- tensor1 (`Tensor`) : a tensor, python list, or numpy array.
- tensor2 (`Tensor`) : a tensor, python list, or numpy array.
- name (`str`) : name for the pairs op.
Returns
- tensor (`Tensor`) : a tensor with the pairwise combination of the input tensors.
sparse_put
.sparse_put(
sp_tensor, sp_updates, name = 'sparse_put'
)
sparse_put
Changes a given `tf.SparseTensor` according to the updates specified in another `tf.SparseTensor`.
Creates a new tensor where the values of the updates override the values in the original tensor. The input tensors must have the same `dense_shape`.
Args
- sp_tensor (`SparseTensor`) : a sparse tensor for which we wish to set some indices to given values.
- sp_updates (`SparseTensor`) : a `SparseTensor` with the indices to be changed and the respective values.
- name (`str`) : sparse_put op name.
Returns
- sparse_tensor (`SparseTensor`) : a sparse tensor with the updated values.
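A minimal usage sketch; the expected entries are inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

sp = tf.SparseTensor(indices=[[0, 0], [1, 1]], values=[1., 2.], dense_shape=[2, 2])
updates = tf.SparseTensor(indices=[[0, 0]], values=[5.], dense_shape=[2, 2])

sp_new = tx.sparse_put(sp, updates)
# expected entries: 5. at [0, 0] (overridden by the update) and 2. at [1, 1]
```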
put
.put(
tensor, sp_updates, name = 'put'
)
put
Changes a given dense `Tensor` according to the updates specified in a `SparseTensor`.
Creates a new `Tensor` where the values of the updates override the values in the original tensor. The tensor shape must be the same as the updates `dense_shape`.
Args
- tensor (`Tensor`) : tensor to be updated.
- sp_updates (`SparseTensor`) : sparse tensor with the indices to be changed and the respective values.
- name (`str`) : put op name.
Returns
- tensor (`Tensor`) : a tensor with the updated values.
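A minimal usage sketch; the expected result is inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

x = tf.zeros([2, 2])
updates = tf.SparseTensor(indices=[[0, 1]], values=[3.], dense_shape=[2, 2])

y = tx.put(x, updates)
# expected: [[0., 3.],
#            [0., 0.]]
```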
filter_nd
.filter_nd(
condition, params, name = 'filter_nd'
)
filter_nd
Filters a given tensor based on a condition tensor. `condition` and `params` must have the same shape.
Args
- condition (`Tensor`) : a `bool` tensor used to filter params.
- params (`Tensor`) : the tensor to be filtered.
- name (`str`) : name for the filter_nd op.
Returns
- sp_tensor (`SparseTensor`) : a sparse tensor with the values in params filtered according to condition.
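A minimal usage sketch; the expected result (a `SparseTensor` keeping only the entries where the condition holds) is inferred from the description above:
```python
import tensorflow as tf
import tensorx as tx

params = tf.constant([[1., 2.], [3., 4.]])
condition = params > 2.

sp = tx.filter_nd(condition, params)
# expected: a SparseTensor with values [3., 4.] at indices [[1, 0], [1, 1]]
```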