Why it doesn't work
Currently, the template parameter Value is deduced in two different places when calling Apply : from the pointer-to-member-function argument and from the last argument. From &Foo::MyFunc , Value is deduced as int const& . From f.GetValue() , Value is deduced as int . This is because references and top-level cv-qualifiers are dropped during template argument deduction. Since the two deductions for Value disagree, deduction fails - which removes Apply() from the overload set, and as a result there is no viable overload.
How to fix it
The problem is that Value is deduced in two different places, so let's just prevent that. One way is to wrap one of the uses in a non-deduced context:
template <class T> struct non_deduced { using type = T; };
template <class T> using non_deduced_t = typename non_deduced<T>::type;

template <class T, class Value>
void Apply(void (T::*cb)(Value), T* obj, non_deduced_t<Value> v) {
    (obj->*cb)(v);
}
The last parameter, v, is of type non_deduced_t<Value> , which, as the name implies, is a non-deduced context. Therefore, during template argument deduction, Value is deduced as int const& from the pointer to member function (as before), and that type is then simply substituted in as the type of v.
Alternatively, you can make cb its own template parameter. At that point, Apply() just boils down to std::invoke() .
Barry