This is probably a little late for your class, but hopefully it helps someone.
If your goal is simply to produce a len(input) x len(input) array, you can matrix-multiply your input (expanded to a len(input) x 1 column) by a 1 x len(input) variable tensor:
    import tensorflow as tf

    input_ = tf.placeholder(tf.float32, [len(input)])
    input_shape = input_.get_shape().as_list()
    tfvar = tf.Variable(tf.random_normal([1, input_shape[0]],
                                         mean=0.0, stddev=0.01, dtype=tf.float32))

    def function(input_):
        x = tf.expand_dims(input_, axis=1)  # shape: len(input) x 1
        return tf.matmul(x, tfvar)          # matrix product: len(input) x len(input), e.g. 3x3
This function generalizes to any 1-D input_ tensor and produces the square len(input_) x len(input_) tensor.
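To see the shape logic outside TensorFlow, the same expand-and-multiply step can be sketched in plain NumPy (the variable names here are illustrative, not from the original code):

```python
import numpy as np

inp = np.array([1.0, 2.0, 3.0], dtype=np.float32)  # stand-in for the 1-D input
w = np.random.normal(0.0, 0.01, size=(1, len(inp))).astype(np.float32)  # 1 x len(inp) weights

x = inp[:, np.newaxis]  # expand to len(inp) x 1, like tf.expand_dims(..., axis=1)
result = x @ w          # matrix product: len(inp) x len(inp)
print(result.shape)     # (3, 3)
```

The outer-product shape (N, 1) @ (1, N) -> (N, N) is exactly what makes the output square.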
If your goal is to train the TensorFlow variable to accurately reproduce the provided output, you can train tfvar with a loss function and an optimizer:
    desired_output = tf.constant([[0.09003057, 0.24472847, 0.66524096],
                                  [0.26894142, 0.73105858, 0.0],
                                  [1.0,        0.0,        0.0]],
                                 dtype=tf.float32)
    actual_output = function(input_)
    loss = tf.reduce_mean(tf.square(actual_output - desired_output))
    optimizer = tf.train.AdamOptimizer().minimize(loss)
    init = tf.global_variables_initializer()

    with tf.Session() as sess:
        sess.run(init)
        cost, opt = sess.run([loss, optimizer], feed_dict={input_: input})
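For reference, tf.reduce_mean(tf.square(...)) is just the mean squared error over all elements; a quick NumPy equivalent (with made-up arrays) looks like this:

```python
import numpy as np

actual = np.array([[0.1, 0.2], [0.3, 0.4]])
desired = np.array([[0.0, 0.2], [0.3, 0.6]])

# mean of element-wise squared differences, same as the TF loss above
mse = np.mean(np.square(actual - desired))
print(mse)  # 0.0125
```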
Please note: if you want a more robust model, add a bias term (offset), a non-linearity, and additional layers.
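As a sketch of what "bias and non-linearity" could mean here (NumPy is used for illustration; the softmax choice is my assumption, picked because each row of the desired output sums to 1):

```python
import numpy as np

def layer(x, w, b):
    # affine step x @ w + b, followed by a row-wise softmax non-linearity
    z = x @ w + b
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract row max for numerical stability
    return e / e.sum(axis=1, keepdims=True)

x = np.array([[1.0], [2.0], [3.0]])  # len(input) x 1 column, as before
w = np.zeros((1, 3))                 # 1 x 3 weights (would be trained)
b = np.zeros((1, 3))                 # bias, broadcast across rows
out = layer(x, w, b)
print(out.shape)  # (3, 3); each row sums to 1
```

In TensorFlow this corresponds to adding a bias variable to the matmul result and wrapping it in tf.nn.softmax before computing the loss.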
saetch_g