What do the options in ConfigProto mean, such as allow_soft_placement and log_device_placement?

We see this quite often in many TensorFlow tutorials:

sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)) 

What do allow_soft_placement and log_device_placement do?

+28
tensorflow
4 answers

If you look at the ConfigProto definition (tensorflow/core/protobuf/config.proto), around line 278 you will see the following:

  // Whether soft placement is allowed. If allow_soft_placement is true,
  // an op will be placed on CPU if
  //   1. there's no GPU implementation for the OP
  // or
  //   2. no GPU devices are known or registered
  // or
  //   3. need to co-locate with reftype input(s) which are from CPU.
  bool allow_soft_placement = 7;

In practice, this means that if you do something like the following without allow_soft_placement=True, TensorFlow will throw an error:

  with tf.device('/gpu:0'):
      # some op that does not have a GPU implementation
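
A minimal sketch of the two behaviours (TF 1.x API assumed; on a machine with no visible GPU the explicit /gpu:0 placement cannot be satisfied, so even ops that normally have GPU kernels trigger the error or the fallback):

  import tensorflow as tf  # TF 1.x API assumed

  # Pin ops to the GPU explicitly. If the placement cannot be satisfied
  # (no GPU visible, or no GPU kernel for the op), the two configs below
  # behave differently.
  with tf.device('/gpu:0'):
      a = tf.constant([1.0, 2.0, 3.0], name='a')
      b = tf.constant([4.0, 5.0, 6.0], name='b')
      c = tf.add(a, b, name='c')

  # allow_soft_placement=False (the default): session.run raises
  # InvalidArgumentError ("Cannot assign a device for operation ...").
  # sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=False))

  # allow_soft_placement=True: the ops silently fall back to the CPU.
  sess = tf.Session(config=tf.ConfigProto(allow_soft_placement=True))
  print(sess.run(c))  # [5. 7. 9.]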

Right below it you will see line 281:

  // Whether device placements should be logged.
  bool log_device_placement = 8;

When log_device_placement=True, you will get verbose output like this:

  2017-07-03 01:13:59.466748: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder_1: (Placeholder)/job:localhost/replica:0/task:0/cpu:0
  Placeholder: (Placeholder): /job:localhost/replica:0/task:0/cpu:0
  2017-07-03 01:13:59.466765: I tensorflow/core/common_runtime/simple_placer.cc:841] Placeholder: (Placeholder)/job:localhost/replica:0/task:0/cpu:0
  Variable/initial_value: (Const): /job:localhost/replica:0/task:0/cpu:0
  2017-07-03 01:13:59.466783: I tensorflow/core/common_runtime/simple_placer.cc:841] Variable/initial_value: (Const)/job:localhost/replica:0/task:0/cpu:0

You can see where each operation is placed. In this case they are all mapped to /cpu:0, but in a distributed configuration there would be many more devices.
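
If you want to reproduce output like the above yourself, here is a small sketch (TF 1.x API assumed; node names and timestamps will differ on your machine):

  import tensorflow as tf  # TF 1.x API assumed

  x = tf.placeholder(tf.float32, name='x')
  w = tf.Variable(2.0, name='w')
  y = tf.multiply(w, x, name='y')

  # log_device_placement=True makes the runtime print one placement line
  # per node the first time the graph is run.
  sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
  sess.run(tf.global_variables_initializer())
  print(sess.run(y, feed_dict={x: 3.0}))  # 6.0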

+30

In addition to the comments in tensorflow/core/protobuf/config.proto, allow_soft_placement and log_device_placement are also explained in the TF "Using GPUs" tutorial:

To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True.

This is useful for debugging. For each of the nodes in your graph, you will see the device to which it was assigned.


If you would like TensorFlow to automatically choose an existing and supported device to run the operations in case the specified one does not exist, you can set allow_soft_placement to True in the configuration option when creating the session.

This helps if you accidentally hand-specified the wrong device, or a device that does not support a particular op. It is also useful when writing code that may run in environments you don't know in advance: you can still provide sensible device defaults, with a graceful fallback in case they fail.
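
A sketch of that pattern (TF 1.x API assumed; the /gpu:0 preference is just an illustrative default):

  import tensorflow as tf  # TF 1.x API assumed

  # Prefer the GPU by default; allow_soft_placement supplies the graceful
  # fallback if this environment has no GPU or an op has no GPU kernel.
  with tf.device('/gpu:0'):
      logits = tf.random_normal([8, 10], name='logits')
      probs = tf.nn.softmax(logits, name='probs')

  config = tf.ConfigProto(allow_soft_placement=True,
                          log_device_placement=True)
  with tf.Session(config=config) as sess:
      print(sess.run(probs).shape)  # (8, 10)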

+10

allow_soft_placement

This option allows flexible (soft) device assignment, but it mostly matters when the requested device is not actually usable. If your TensorFlow build is GPU-enabled and a GPU is present, ops with GPU kernels are placed on the GPU by default, regardless of whether allow_soft_placement is set. But if you leave it as false, pin an op to the GPU, and no GPU can be found on your machine, you will get an error.

log_device_placement

This option logs which device each operation is assigned to when the graph is run. The placer picks what it considers the best available device for each op, so the log reflects its actual choices rather than what you might have expected.
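
A quick way to check this on your own machine (TF 1.x API assumed; tf.test.is_gpu_available() reports whether a GPU device is registered in your build):

  import tensorflow as tf  # TF 1.x API assumed

  # On a GPU build with a visible GPU, ops that have GPU kernels are
  # placed on the GPU by default; the placement log shows where each op
  # actually ended up.
  print('GPU available:', tf.test.is_gpu_available())

  m = tf.matmul(tf.ones([2, 2]), tf.ones([2, 2]), name='m')
  with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
      print(sess.run(m))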

+3

In simple words:

allow_soft_placement lets TensorFlow fall back to a supported device (typically the CPU) when the requested one is unavailable or unsupported for an op,

log_device_placement prints which device each operation is placed on.

+2