Slicing tensors using argmax in TensorFlow

I want to make a dynamic loss function in TensorFlow. I want to calculate the energy of the FFT of a signal, or rather, only a window of size 3 around the most dominant peak. I cannot get this to work in TF, as it generates errors such as this one from the strided slice: InvalidArgumentError (see above for traceback): Expected begin, end, and strides to be 1D equal size tensors, but got shapes [1,64], [1,64], and [1] instead.

My code is:

self.spec = tf.fft(self.signal)                                        # FFT of each signal in the batch
self.spec_mag = tf.complex_abs(self.spec[:, 1:33])                     # magnitude of the AC half of the spectrum
self.argm = tf.cast(tf.argmax(self.spec_mag, 1), dtype=tf.int32)       # peak bin per signal
self.frac = tf.reduce_sum(self.spec_mag[self.argm-1:self.argm+2], 1)   # energy of the 3-bin window (this line fails)

Since the batch size is 64 and the data size is 64, the shape of self.signal is (64, 64). I only want to compute the AC components of the FFT. Since the signal is real-valued, only half of the spectrum is needed. Consequently, the shape of self.spec_mag is (64, 32).
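
As an aside, for a real-valued signal the non-redundant half of the spectrum can also be computed directly. A minimal sketch, assuming a TensorFlow version that provides tf.spectral.rfft (my code above does not use it):

import tensorflow as tf
import numpy as np

# Real-valued batch of 64 signals of length 64, same shapes as above.
x = np.random.rand(64, 64).astype(np.float32)

signal = tf.constant(x)               # real input, shape (64, 64)
spec = tf.spectral.rfft(signal)       # shape (64, 33): DC bin + 32 positive-frequency bins
spec_mag = tf.abs(spec[:, 1:])        # drop the DC bin -> shape (64, 32)

with tf.Session() as sess:
    print(sess.run(spec_mag).shape)   # (64, 32)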

The index of the maximum of this FFT is self.argm, which has shape (64, 1).

Now I want to calculate the energy of the three elements around the maximum peak via self.spec_mag[self.argm-1:self.argm+2].

However, when I run the code and try to evaluate self.frac, I get the errors shown above.
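
To make the intent concrete, here is the computation I am after written as a plain NumPy reference (no TensorFlow, just to illustrate the desired result):

import numpy as np

def window_energy(spec_mag):
    """For each row, sum the 3 magnitudes centred on that row's peak."""
    frac = np.empty(spec_mag.shape[0])
    for i, row in enumerate(spec_mag):
        k = int(np.argmax(row))
        frac[i] = row[max(k - 1, 0):k + 2].sum()   # 3-element window around the peak
    return frac

spec_mag = np.abs(np.fft.fft(np.random.rand(64, 64), axis=1))[:, 1:33]
print(window_energy(spec_mag).shape)   # (64,)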

2 answers

It seems that you were missing an index when accessing argm. Here is a fixed version for input of shape (1, 64).

import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()   # needed so the .eval() calls below have a default session

x = np.random.rand(1, 64)
xt = tf.constant(value=x, dtype=tf.complex64)

signal = xt
print('signal', signal.shape)
print('signal', signal.eval())

spec = tf.fft(signal)
print('spec', spec.shape)
print('spec', spec.eval())

spec_mag = tf.abs(spec[:, 1:33])
print('spec_mag', spec_mag.shape)
print('spec_mag', spec_mag.eval())

argm = tf.cast(tf.argmax(spec_mag, 1), dtype=tf.int32)
print('argm', argm.shape)
print('argm', argm.eval())

# Index row 0 explicitly, then slice with the scalar peak index.
frac = tf.reduce_sum(spec_mag[0][(argm[0] - 1):(argm[0] + 2)], 0)
print('frac', frac.shape)
print('frac', frac.eval())

And here is the extended version for inputs of shape (batch, m, n):

import tensorflow as tf
import numpy as np

sess = tf.InteractiveSession()   # needed so the .eval() calls below have a default session

x = np.random.rand(1, 1, 64)
xt = tf.constant(value=x, dtype=tf.complex64)

signal = xt
print('signal', signal.shape)
print('signal', signal.eval())

spec = tf.fft(signal)
print('spec', spec.shape)
print('spec', spec.eval())

spec_mag = tf.abs(spec[:, :, 1:33])
print('spec_mag', spec_mag.shape)
print('spec_mag', spec_mag.eval())

argm = tf.cast(tf.argmax(spec_mag, 2), dtype=tf.int32)
print('argm', argm.shape)
print('argm', argm.eval())

# Same idea, with one more level of indexing for the extra batch dimension.
frac = tf.reduce_sum(spec_mag[0][0][(argm[0][0] - 1):(argm[0][0] + 2)], 0)
print('frac', frac.shape)
print('frac', frac.eval())

You may need to correct the function names, since I am editing this code on a newer version of TensorFlow.
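
If you are on a recent TensorFlow, the same idea looks roughly like this. A sketch assuming TF 2.x with eager execution, where tf.fft / tf.complex_abs have become tf.signal.fft / tf.abs and no session is needed:

import tensorflow as tf
import numpy as np

x = tf.cast(np.random.rand(1, 64), tf.complex64)   # cast real noise to complex for tf.signal.fft

spec = tf.signal.fft(x)                             # replaces tf.fft
spec_mag = tf.abs(spec[:, 1:33])                    # replaces tf.complex_abs
argm = tf.cast(tf.argmax(spec_mag, 1), tf.int32)

# Same scalar-index trick as above, evaluated eagerly (peak assumed not at the boundary).
frac = tf.reduce_sum(spec_mag[0][argm[0] - 1:argm[0] + 2], 0)
print(frac.numpy())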


Indexing with [ ] uses tf.Tensor.__getitem__:

It mimics NumPy slicing, but the begin and end of a slice must be scalars (or 1-D tensors of matching size), not one index per row, which is why the slice built from argm fails.

Under the hood it calls tf.slice / tf.strided_slice.

To pick different elements per row you need tf.gather, whose indices Tensor selects along a single axis, or tf.gather_nd, whose indices Tensor indexes into the first N dimensions, with N = indices.shape[-1].
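
A minimal sketch of the difference, with made-up values:

import tensorflow as tf

params = tf.constant([[10, 20, 30],
                      [40, 50, 60]])

# tf.gather indexes along one axis: here it picks whole rows.
rows = tf.gather(params, [1, 0])                  # -> [[40, 50, 60], [10, 20, 30]]

# tf.gather_nd indexes the first N dims, N = indices.shape[-1]:
# each [row, col] pair picks a single element.
elems = tf.gather_nd(params, [[0, 2], [1, 1]])    # -> [30, 50]

with tf.Session() as sess:
    print(sess.run(rows))
    print(sess.run(elems))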

To get the 3 elements around the max for every row, you can gather them one offset at a time and combine the results with tf.stack, as in the code below:

import tensorflow as tf

signal = tf.placeholder(shape=(64, 64), dtype=tf.complex64)
spec = tf.fft(signal)
spec_mag = tf.abs(spec[:, 1:33])
argm = tf.cast(tf.argmax(spec_mag, 1), dtype=tf.int32)

# For each offset i, build (row, argm + i) index pairs and gather one
# magnitude per row; stacking the three gathers gives shape (3, 64).
frac = tf.stack([tf.gather_nd(spec_mag, tf.transpose(tf.stack(
             [tf.range(64), argm + i]))) for i in [-1, 0, 1]])

# Sum over the three offsets to get the window energy per example, shape (64,).
frac = tf.reduce_sum(frac, 0)

Note that you may need to handle the edge cases where argm sits at the boundary of the spectrum, since argm - 1 or argm + 1 would then index outside the valid range.
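
One simple way to guard against that, which is my own addition rather than part of the answer above, is to clip the peak index so the window always stays inside the 32 retained bins:

# Clip the peak index so the [-1, +1] window stays within columns 0..31.
argm_safe = tf.clip_by_value(argm, 1, 30)

frac = tf.stack([tf.gather_nd(spec_mag, tf.transpose(tf.stack(
             [tf.range(64), argm_safe + i]))) for i in [-1, 0, 1]])
frac = tf.reduce_sum(frac, 0)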

