tx.activation

Activation functions are mainly used with the Activation layer, but they need not be TensorX functions. Any TensorFlow function, or any generic function that takes tensors as input and returns a Tensor or SparseTensor object, can be used. This namespace is included for convenience and for future additional activation functions.
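
For instance, the sketch below shows a generic callable that satisfies this contract; scaled_relu is a hypothetical example, not part of TensorX.

    # Minimal sketch of the contract described above: any callable that takes a
    # tensor and returns a Tensor (or SparseTensor) can serve as an activation.
    # `scaled_relu` is a hypothetical example, not part of TensorX.
    import tensorflow as tf

    def scaled_relu(x):
        # a generic TensorFlow-based function usable as an activation
        return 0.5 * tf.nn.relu(x)

    x = tf.constant([-1.0, 0.0, 2.0])
    y = scaled_relu(x)  # Tensor in, Tensor out: [0.0, 0.0, 1.0]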

identity

source

.identity(
   x, name: str = None
)

Identity function

Returns a tensor with the same content as the input tensor.

Args

  • x (Tensor) : The input tensor.
  • name (str) : name for this op

Returns

  • tensor (Tensor) : A tensor with the same shape, type, and content as the input tensor.
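
A minimal usage sketch; the import alias tensorx as tx is an assumption and is not shown on this page:

    import tensorflow as tf
    import tensorx as tx  # assumed import alias; adjust to your installation

    x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    y = tx.activation.identity(x, name="identity_op")
    # y has the same shape, dtype and values as x, analogous to tf.identity(x)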

sigmoid

source

.sigmoid(
   x
)

Sigmoid function

Element-wise sigmoid function, defined as:

f(x) = \frac{1}{1 + \exp(-x)}

Args

  • x (Tensor) : A tensor or variable.

Returns

  • tensor (Tensor) : with the result of applying the sigmoid function to the input tensor.
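
The definition above written out with plain TensorFlow ops, as an illustrative check (the input values are arbitrary):

    import tensorflow as tf

    x = tf.constant([-2.0, 0.0, 2.0])
    # element-wise sigmoid: 1 / (1 + exp(-x))
    reference = 1.0 / (1.0 + tf.exp(-x))
    # matches TensorFlow's built-in element-wise sigmoid
    assert tf.reduce_all(tf.abs(reference - tf.math.sigmoid(x)) < 1e-6)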

tanh

source

.tanh(
   x
)

Hyperbolic tangent (tanh) function.

The element-wise hyperbolic tangent function is essentially a rescaled sigmoid function. The sigmoid function with range [0,1] is defined as follows:

f(x) = \frac{1}{1 + \exp(-x)}

The hyperbolic tangent is a rescaled version whose outputs range over [-1, 1]:

tanh(x) = 2f(2x) - 1

which leads to the standard definition of the hyperbolic tangent:

tanh(x)=\frac{e^{x}-e^{-x}}{e^{x}+e^{-x}}

Args

  • x (Tensor) : an input tensor

Returns

  • tensor (Tensor) : a tensor with the result of applying the element-wise hyperbolic tangent to the input
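
The rescaling tanh(x) = 2f(2x) - 1 can be verified directly with plain TensorFlow ops:

    import tensorflow as tf

    x = tf.constant([-1.0, 0.0, 1.0])
    sigmoid = lambda t: 1.0 / (1.0 + tf.exp(-t))
    rescaled = 2.0 * sigmoid(2.0 * x) - 1.0                          # 2 f(2x) - 1
    standard = (tf.exp(x) - tf.exp(-x)) / (tf.exp(x) + tf.exp(-x))   # tanh(x)
    assert tf.reduce_all(tf.abs(rescaled - standard) < 1e-6)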

relu

source

.relu(
   x
)

relu activation

A Rectified Linear Unit (ReLU) [1] is defined as:

f(x)= \max(0, x)

Args

  • x (Tensor) : input tensor

Returns

  • tensor (Tensor) : the result of applying the element-wise rectifier to x.
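
The definition is an element-wise maximum with zero; a plain TensorFlow illustration:

    import tensorflow as tf

    x = tf.constant([-3.0, 0.0, 2.5])
    reference = tf.maximum(0.0, x)       # f(x) = max(0, x) -> [0.0, 0.0, 2.5]
    assert tf.reduce_all(tf.abs(reference - tf.nn.relu(x)) < 1e-6)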


elu

source

.elu(
   x, alpha = 1.0
)

elu activation

An Exponential Linear Unit (ELU) is defined as:

f(x)=\begin{cases} x & x > 0 \\ \alpha \cdot \left(e^{x}-1\right) & x \leq 0 \end{cases}

Args

  • x (Tensor) : an input tensor
  • alpha (float) : A scalar, scale for the negative section.

Returns

  • tensor (Tensor) : resulting from the application of the elu activation to the input tensor.
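
The piecewise definition written with tf.where, as a plain TensorFlow illustration of how alpha scales the negative branch:

    import tensorflow as tf

    def elu_reference(x, alpha=1.0):
        # x for x > 0, alpha * (exp(x) - 1) otherwise
        return tf.where(x > 0, x, alpha * (tf.exp(x) - 1.0))

    x = tf.constant([-1.0, 0.0, 2.0])
    print(elu_reference(x, alpha=1.0))   # ~[-0.632, 0.0, 2.0]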

gelu

source

.gelu(
   x, approximate: bool = True
)

Gaussian Error Linear Unit.

Computes the Gaussian error linear unit: 0.5 * x * (1 + tanh(sqrt(2 / pi) * (x + 0.044715 * x^3))) when approximate is True, or the exact form x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2))), where P(X) ~ N(0, 1), when approximate is False.

Args

  • x (Tensor) : Must be one of the following types: float16, float32, float64.
  • approximate (bool) : whether to use the tanh approximation (True) or the exact erf-based form (False).

Returns

  • tensor (Tensor) : with the same type as x
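
Both forms written out with basic TensorFlow ops; the documented approximate flag selects between them:

    import numpy as np
    import tensorflow as tf

    x = tf.constant([-1.0, 0.0, 1.0])

    # approximate=True: 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    approx = 0.5 * x * (1.0 + tf.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * tf.pow(x, 3))))

    # approximate=False: x * P(X <= x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    exact = 0.5 * x * (1.0 + tf.math.erf(x / tf.sqrt(2.0)))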

softmax

source

.softmax(
   x, axis = None, name = None
)

softmax activation

The softmax activation function is equivalent to softmax = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis) and is defined as:

\sigma(\mathbf{z})_{i}=\frac{e^{z_{i}}}{\sum_{j=1}^{K} e^{z_{j}}}

Args

  • x (Tensor) : input tensor
  • axis (int) : the dimension on which softmax is performed. Defaults to the last dimension.
  • name (str) : name for this op

Returns

  • tensor (Tensor) : output resulting from the application of the softmax function to the input tensor
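
The equivalence quoted above, written out explicitly (keepdims=True is added so the division broadcasts for any shape):

    import tensorflow as tf

    logits = tf.constant([[1.0, 2.0, 3.0],
                          [1.0, 1.0, 1.0]])
    reference = tf.exp(logits) / tf.reduce_sum(tf.exp(logits), axis=-1, keepdims=True)
    assert tf.reduce_all(tf.abs(reference - tf.nn.softmax(logits, axis=-1)) < 1e-6)
    # each row sums to 1: ~[[0.09, 0.24, 0.67], [0.33, 0.33, 0.33]]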

sparsemax

source

.sparsemax(
   logits, name: str = None
)

Computes the sparsemax activation function [1].

For each batch i and class j, sparsemax[i, j] = max(logits[i, j] - tau(logits[i, :]), 0), where tau is the threshold that makes each output row a sparse probability distribution.

References

  • https://arxiv.org/abs/1602.02068

Args

  • logits (Tensor) : a tensor with dtype half, float32, or float64.
  • name (str) : A name for the operation (optional).

Returns

  • tensor (Tensor) : with the same type as the input logits.
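
A reference sketch of this projection for a single row of logits, written in NumPy for illustration only; it follows the max(z - tau(z), 0) rule from [1] and is not the TensorX implementation:

    import numpy as np

    def sparsemax_1d(z):
        # sort logits in descending order and find the support size k(z)
        z_sorted = np.sort(z)[::-1]
        k = np.arange(1, len(z) + 1)
        cumsum = np.cumsum(z_sorted)
        support = 1 + k * z_sorted > cumsum
        k_z = k[support][-1]
        # threshold tau(z) chosen so that the output sums to 1
        tau = (cumsum[support][-1] - 1.0) / k_z
        return np.maximum(z - tau, 0.0)

    print(sparsemax_1d(np.array([0.5, 1.1, 1.0])))  # sparse probability vector: [0.0, 0.55, 0.45]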