I am new to TensorFlow and am looking at custom op development right now. I have already read the official tutorial, but I feel that a lot happens behind the scenes, and I don't always want to put my custom ops in the user_ops directory.
So I reviewed the word2vec example, which uses the custom "Skipgram" op. Its registration is defined here:
/word2vec_ops.cc
and the kernel implementation of which is here:
/word2vec_kernels.cc
By looking at the BUILD file, I tried to build the targets separately:
1) bazel build -c opt tensorflow/models/embedding:word2vec_ops
This generates a bunch of object files as expected.
2) bazel build -c opt tensorflow/models/embedding:word2vec_kernels
Likewise, this produces object files as expected.
3) bazel build -c opt tensorflow/models/embedding:gen_word2vec
This last build uses a custom rule, namely tf_op_gen_wrapper_py: https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tensorflow.bzl#L197-L231
Interestingly, this depends only on the op registration, not on the kernel itself.
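To make sure I am reading the dependency graph correctly, here is my rough mental model of the BUILD wiring. The target and parameter names below are my own paraphrase, not copied from the real BUILD file or from tensorflow.bzl, so the exact signature of the rule may differ:

```python
# Hypothetical BUILD excerpt (Starlark) -- names and parameters are illustrative.
load("//tensorflow:tensorflow.bzl", "tf_op_gen_wrapper_py")

# Op registration only: declares the op's interface (inputs/outputs/attrs).
cc_library(
    name = "word2vec_ops",
    srcs = ["word2vec_ops.cc"],
)

# Kernel implementation: the actual Compute() code for the op.
cc_library(
    name = "word2vec_kernels",
    srcs = ["word2vec_kernels.cc"],
)

# Generates the Python wrapper from the registration alone --
# note that it does not depend on :word2vec_kernels.
tf_op_gen_wrapper_py(
    name = "gen_word2vec",
    deps = [":word2vec_ops"],
)
```

If this picture is right, it would explain why building gen_word2vec never touches the kernel sources.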
In the end, if I build the py_binary using
bazel build -c opt tensorflow/models/embedding:word2vec
it works fine, but I don't see where or how the C++ kernel code gets pulled in.
I would also like to understand the tf_op_gen_wrapper_py rule and the whole compilation/linking procedure that happens behind the scenes when registering ops.
Thanks.