Lookup

Lookup(
   input_layer, seq_size, embedding_shape, weight_init = glorot_uniform_init(),
   batch_size = None, add_bias = False, bias_init = tf.initializers.zeros(),
   bias = None, weights = None, shape = None, dtype = tf.float32, name = 'lookup',
   share_state_with = None, batch_padding = True
)

A Lookup (embedding) layer that gathers rows of a parameter table according to integer indices.

Similar to the embedding_lookup operation from TensorFlow or the Embedding layer from Keras, with added functionality.
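The core operation is a row gather. A minimal NumPy sketch of the semantics (not this library's implementation):

```python
import numpy as np

# A parameter table of 4 embeddings, each of size 3.
table = np.arange(12, dtype=np.float32).reshape(4, 3)

# Integer indices to look up (e.g. token ids).
indices = np.array([1, 3, 1])

# The lookup gathers one row of the table per index.
embeddings = table[indices]
print(embeddings.shape)  # (3, 3)
```

Each output row is a copy of the table row selected by the corresponding index; repeated indices yield repeated rows.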

Note

If a SparseTensor is passed as input, Lookup outputs one vector per row of the SparseTensor. If an exact batch_size is given, aggregation and padding are done based on that batch_size.

To look up a batch of 2 sequences of 4 elements encoded in a SparseTensor, the input should have shape (4*batch_size, d), where batch_size=2 and d is the input n_units.
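The shape bookkeeping above can be checked with a NumPy sketch, using the hypothetical values from the note (batch_size=2, seq_size=4, and an assumed embedding dimension d=3):

```python
import numpy as np

batch_size, seq_size, d = 2, 4, 3

# Flat lookup output: one vector per SparseTensor row, shape (4*batch_size, d).
flat = np.arange(batch_size * seq_size * d, dtype=np.float32).reshape(
    seq_size * batch_size, d
)

# Viewed as a batch of sequences: (batch_size, seq_size, d).
batched = flat.reshape(batch_size, seq_size, d)
print(batched.shape)  # (2, 4, 3)
```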

Args

  • input_layer (Layer) : an Input or other Layer representing indices for the lookup
  • seq_size (int) : size of the sequence to be looked-up
  • embedding_shape (tf.TensorShape) : lookup table shape
  • weight_init (Callable[tf.Tensor]) : embedding table initializer
  • batch_size (int or None) : number of sequences to be looked up; if not None, forces padding up to the specified batch_size.
  • add_bias (bool) : if True adds a bias to the lookup output.
  • bias (tf.Tensor or tf.Variable) : optionally pass bias value to the lookup operator
  • weights (tf.Tensor or tf.Variable) : optional lookup table value
  • shape (tf.TensorShape) : expected output shape for the lookup; overrides lookup shape inference
  • dtype (tf.DType) : output data type
  • name (str) : layer name
  • share_state_with (Lookup) : a Lookup layer with which this layer shares its state
  • batch_padding (bool) : if True, pads the output according to seq_size and given (or inferred) batch_size

Returns

  • embeddings (Tensor) : output tensor

Methods:

.compute_shape

.compute_shape()

.init_state

.init_state()

.compute

.compute(
   input_tensor
)

.as_concat

.as_concat()

Concatenates the sequence produced by the lookup and returns the current lookup viewed as a concatenated-sequence layer.

Returns

  • seq_concat (Wrap) : a SeqConcat layer as a view for the Lookup layer
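A NumPy sketch of what such a concatenation view amounts to, assuming a lookup output of shape (batch, seq_size, d) flattened to (batch, seq_size * d) (illustrative only, not the library's code):

```python
import numpy as np

batch, seq_size, d = 2, 4, 3

# A batch of looked-up sequences: (batch, seq_size, d).
out = np.arange(batch * seq_size * d, dtype=np.float32).reshape(batch, seq_size, d)

# Concatenate each sequence's embeddings into one vector per batch element.
concat = out.reshape(batch, seq_size * d)
print(concat.shape)  # (2, 12)
```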

.permute_batch_time

.permute_batch_time()
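Judging by the name, this swaps the batch and time (sequence) axes, i.e. batch-major to time-major. A NumPy sketch of that permutation (an assumption about the semantics, not the library's implementation):

```python
import numpy as np

# Batch-major lookup output: (batch, seq_size, d).
out = np.zeros((2, 4, 3), dtype=np.float32)

# Swap the first two axes to get time-major layout: (seq_size, batch, d).
time_major = np.transpose(out, (1, 0, 2))
print(time_major.shape)  # (4, 2, 3)
```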

.reuse_with

.reuse_with(
   input_layer, name = None
)

Reuses the current layer on a different input.

Uses the variables of this layer to create a new Layer instance with a different input_layer.

Args

  • input_layer (Layer) : the input layer for the new Lookup
  • name : name for the new Layer

Returns

  • layer (Layer) : a new layer that shares its variables with the current layer