I can find two main differences between one and the other:

First, tf.Variable always creates a new variable, whereas tf.get_variable looks up an existing variable with the given parameters in the graph and only creates a new one if none exists.

Second, tf.Variable requires an initial value to be specified.
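To illustrate the second point, here is a minimal TF 1.x sketch (the variable names are just for illustration): tf.get_variable can be called with only a name and a shape, falling back to a default initializer, while tf.Variable must be handed an initial value and infers the shape from it.

    import tensorflow as tf

    # tf.get_variable needs only a name and shape; a default
    # initializer (glorot_uniform in TF 1.x) fills in the values.
    w = tf.get_variable("w", shape=[2, 3])

    # tf.Variable has no such fallback: the initial value is
    # mandatory, and the shape is inferred from it.
    v = tf.Variable(tf.zeros([2, 3]), name="v")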
It is important to clarify that tf.get_variable prefixes the name with the current variable scope in order to perform these reuse checks. For example:
    with tf.variable_scope("one"):
        a = tf.get_variable("v", [1])  # a.name == "one/v:0"
    with tf.variable_scope("one"):
        b = tf.get_variable("v", [1])  # ValueError: Variable one/v already exists
    with tf.variable_scope("one", reuse=True):
        c = tf.get_variable("v", [1])  # c.name == "one/v:0"

    with tf.variable_scope("two"):
        d = tf.get_variable("v", [1])  # d.name == "two/v:0"
        e = tf.Variable(1, name="v", expected_shape=[1])  # e.name == "two/v_1:0"

    assert(a is c)  # Assertion is true, they refer to the same object.
    assert(a is d)  # AssertionError: they are different objects
    assert(d is e)  # AssertionError: they are different objects
The last assertion error is interesting: two variables with the same name in the same scope are supposed to be the same variable. But if you check the names of the variables d and e, you will realize that TensorFlow renamed the variable e:
    d.name  # d.name == "two/v:0"
    e.name  # e.name == "two/v_1:0"
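This reuse check is what makes tf.get_variable useful for sharing variables across calls. As a sketch (assuming TF >= 1.4, where tf.AUTO_REUSE is available; dense_layer is a hypothetical helper, not part of the API):

    import tensorflow as tf

    def dense_layer(x):
        # With reuse=tf.AUTO_REUSE, the first call creates "shared/w"
        # and every later call retrieves that same variable.
        with tf.variable_scope("shared", reuse=tf.AUTO_REUSE):
            w = tf.get_variable("w", shape=[3, 3])
            return tf.matmul(x, w)

    x1 = tf.placeholder(tf.float32, [None, 3])
    x2 = tf.placeholder(tf.float32, [None, 3])
    y1 = dense_layer(x1)
    y2 = dense_layer(x2)  # reuses "shared/w" instead of raising or duplicating

Had dense_layer used tf.Variable instead, the second call would have silently created a second, independent weight matrix.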
Jadiel de Armas, May 19 '17 at 17:58