Table of contents
- Introduction
- Extract tensor slices
- Insert data into tensors
- Further reading and resources
When working on ML applications such as object detection and NLP, you sometimes need to work with sub-sections (slices) of tensors. For example, if your model architecture includes routing, one layer might control which training example gets routed to the next layer. In this case, you can use tensor slicing ops to split the tensors up and put them back together in the right order.

In NLP applications, you can use tensor slicing to perform word masking while training. For example, you can generate training data from a list of sentences by choosing a word index to mask in each sentence, taking the word out as a label, and then replacing the chosen word with a mask token.
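As a minimal sketch of that masking step (the sentence, mask token, and chosen index below are illustrative assumptions, not part of the original guide):

# Sketch: mask one word of a sentence using slicing; values are assumed
sentence = tf.constant(['the', 'cat', 'sat', 'on', 'the', 'mat'])
mask_index = 2  # word index chosen to mask (assumed)
label = sentence[mask_index]  # the word taken out becomes the label
masked = tf.concat([sentence[:mask_index],
                    tf.constant(['[MASK]']),
                    sentence[mask_index + 1:]], axis=0)
print(label.numpy(), masked.numpy())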
In this guide, you will learn how to use the TensorFlow APIs to:
- Extract slices from a tensor
- Insert data at specific indices in a tensor

This guide assumes familiarity with tensor indexing. Read the indexing sections of the Tensor and TensorFlow NumPy guides before getting started with this guide.
Setup
import tensorflow as tf
import numpy as np
Extract tensor slices
Perform NumPy-like tensor slicing using tf.slice.
t1 = tf.constant([0, 1, 2, 3, 4, 5, 6, 7])

print(tf.slice(t1,
               begin=[1],
               size=[3]))
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
Alternatively, you can use a more Pythonic syntax. Note that tensor slices are evenly spaced over a start-stop range.
print(t1[1:4])
tf.Tensor([1 2 3], shape=(3,), dtype=int32)
print(t1[-3:])
tf.Tensor([5 6 7], shape=(3,), dtype=int32)
For 2-dimensional tensors, you can use something like:
t2 = tf.constant([[0, 1, 2, 3, 4],
                  [5, 6, 7, 8, 9],
                  [10, 11, 12, 13, 14],
                  [15, 16, 17, 18, 19]])
print(t2[:-1, 1:3])
tf.Tensor(
[[ 1  2]
 [ 6  7]
 [11 12]], shape=(3, 2), dtype=int32)
You can use tf.slice on higher dimensional tensors as well.
t3 = tf.constant([[[1, 3, 5, 7],
                   [9, 11, 13, 15]],
                  [[17, 19, 21, 23],
                   [25, 27, 29, 31]]])
print(tf.slice(t3,
               begin=[1, 1, 0],
               size=[1, 1, 2]))
tf.Tensor([[[25 27]]], shape=(1, 1, 2), dtype=int32)
You can also use tf.strided_slice to extract slices of tensors by 'striding' over the tensor dimensions.
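For instance, here is a minimal sketch that strides over t1 (defined above) in steps of two; the begin, end, and stride values are chosen purely for illustration:

print(tf.strided_slice(t1,
                       begin=[1],
                       end=[7],
                       strides=[2]))
# -> tf.Tensor([1 3 5], shape=(3,), dtype=int32)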
Use tf.gather to extract specific indices from a single axis of a tensor.
print(tf.gather(t1,
                indices=[0, 3, 6]))

# This is similar to doing
t1[::3]
tf.Tensor([0 3 6], shape=(3,), dtype=int32)
tf.gather does not require indices to be evenly spaced.
alphabet = tf.constant(list('abcdefghijklmnopqrstuvwxyz'))
print(tf.gather(alphabet,
                indices=[2, 0, 19, 18]))
tf.Tensor([b'c' b'a' b't' b's'], shape=(4,), dtype=string)
To extract slices from multiple axes of a tensor, use tf.gather_nd. This is useful when you want to gather the elements of a matrix as opposed to just its rows or columns.
t4 = tf.constant([[0, 5],
                  [1, 6],
                  [2, 7],
                  [3, 8],
                  [4, 9]])
print(tf.gather_nd(t4,
                   indices=[[2], [3], [0]]))
tf.Tensor(
[[2 7]
 [3 8]
 [0 5]], shape=(3, 2), dtype=int32)
t5 = np.reshape(np.arange(18), [2, 3, 3])
print(tf.gather_nd(t5,
                   indices=[[0, 0, 0], [1, 2, 1]]))
tf.Tensor([ 0 16], shape=(2,), dtype=int64)
# Return a list of two matrices
print(tf.gather_nd(t5,
                   indices=[[[0, 0], [0, 2]], [[1, 0], [1, 2]]]))
tf.Tensor(
[[[ 0  1  2]
  [ 6  7  8]]

 [[ 9 10 11]
  [15 16 17]]], shape=(2, 2, 3), dtype=int64)
# Return one matrix
print(tf.gather_nd(t5,
                   indices=[[0, 0], [0, 2], [1, 0], [1, 2]]))
tf.Tensor(
[[ 0  1  2]
 [ 6  7  8]
 [ 9 10 11]
 [15 16 17]], shape=(4, 3), dtype=int64)
Insert data into tensors
Use tf.scatter_nd to insert data at specific slices/indices of a tensor. Note that the tensor into which you insert values is zero-initialized.
t6 = tf.constant([10])
indices = tf.constant([[1], [3], [5], [7], [9]])
data = tf.constant([2, 4, 6, 8, 10])

print(tf.scatter_nd(indices=indices,
                    updates=data,
                    shape=t6))
tf.Tensor([ 0  2  0  4  0  6  0  8  0 10], shape=(10,), dtype=int32)
Methods like tf.scatter_nd which require zero-initialized tensors are similar to sparse tensor initializers. You can use tf.gather_nd and tf.scatter_nd to mimic the behavior of sparse tensor ops.

Consider an example where you construct a sparse tensor using these two methods in conjunction.
# Gather values from one tensor by specifying indices
new_indices = tf.constant([[0, 2], [2, 1], [3, 3]])
t7 = tf.gather_nd(t2, indices=new_indices)

# Add these values into a new tensor
t8 = tf.scatter_nd(indices=new_indices, updates=t7, shape=tf.constant([4, 5]))
print(t8)
tf.Tensor(
[[ 0  0  2  0  0]
 [ 0  0  0  0  0]
 [ 0 11  0  0  0]
 [ 0  0  0 18  0]], shape=(4, 5), dtype=int32)
This is similar to:
t9 = tf.SparseTensor(indices=[[0, 2], [2, 1], [3, 3]],
                     values=[2, 11, 18],
                     dense_shape=[4, 5])
print(t9)
SparseTensor(indices=tf.Tensor(
[[0 2]
 [2 1]
 [3 3]], shape=(3, 2), dtype=int64), values=tf.Tensor([ 2 11 18], shape=(3,), dtype=int32), dense_shape=tf.Tensor([4 5], shape=(2,), dtype=int64))
# Convert the sparse tensor into a dense tensor
t10 = tf.sparse.to_dense(t9)
print(t10)
tf.Tensor(
[[ 0  0  2  0  0]
 [ 0  0  0  0  0]
 [ 0 11  0  0  0]
 [ 0  0  0 18  0]], shape=(4, 5), dtype=int32)
To insert data into a tensor with pre-existing values, use tf.tensor_scatter_nd_add.
t11 = tf.constant([[2, 7, 0],
                   [9, 0, 1],
                   [0, 3, 8]])

# Convert the tensor into a magic square by inserting numbers at appropriate indices
t12 = tf.tensor_scatter_nd_add(t11,
                               indices=[[0, 2], [1, 1], [2, 0]],
                               updates=[6, 5, 4])
print(t12)
tf.Tensor(
[[2 7 6]
 [9 5 1]
 [4 3 8]], shape=(3, 3), dtype=int32)
Similarly, use tf.tensor_scatter_nd_sub to subtract values from a tensor with pre-existing values.
# Convert the tensor into an identity matrix
t13 = tf.tensor_scatter_nd_sub(t11,
                               indices=[[0, 0], [0, 1], [1, 0], [1, 1], [1, 2], [2, 1], [2, 2]],
                               updates=[1, 7, 9, -1, 1, 3, 7])
print(t13)
tf.Tensor(
[[1 0 0]
 [0 1 0]
 [0 0 1]], shape=(3, 3), dtype=int32)
Use tf.tensor_scatter_nd_min to copy element-wise minimum values from one tensor to another.
t14 = tf.constant([[-2, -7, 0],
                   [-9, 0, 1],
                   [0, -3, -8]])
t15 = tf.tensor_scatter_nd_min(t14,
                               indices=[[0, 2], [1, 1], [2, 0]],
                               updates=[-6, -5, -4])
print(t15)
tf.Tensor(
[[-2 -7 -6]
 [-9 -5  1]
 [-4 -3 -8]], shape=(3, 3), dtype=int32)
Similarly, use tf.tensor_scatter_nd_max to copy element-wise maximum values from one tensor to another.
t16 = tf.tensor_scatter_nd_max(t14,
                               indices=[[0, 2], [1, 1], [2, 0]],
                               updates=[6, 5, 4])
print(t16)
tf.Tensor(
[[-2 -7  6]
 [-9  5  1]
 [ 4 -3 -8]], shape=(3, 3), dtype=int32)
Further reading and resources
In this guide, you learned how to use the tensor slicing ops available with TensorFlow to exert finer control over the elements in your tensors.
- Check out the slicing ops available with TensorFlow NumPy, such as tf.experimental.numpy.take_along_axis and tf.experimental.numpy.take (a brief sketch follows this list).
- Also check out the Tensor guide and the Variable guide.
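As a hedged sketch of those two TensorFlow NumPy ops (the array values and indices below are illustrative assumptions):

x = tf.experimental.numpy.asarray([[10, 20], [30, 40]])
# Flattened take, following NumPy semantics: picks elements 0 and 3
print(tf.experimental.numpy.take(x, [0, 3]))  # -> [10 40]
# Take along axis 1: row 0 picks column 1, row 1 picks column 0
idx = tf.experimental.numpy.asarray([[1], [0]])
print(tf.experimental.numpy.take_along_axis(x, idx, axis=1))  # -> [[20] [30]]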