The behavior of the language with respect to implicit conversions is specified quite clearly:
if you call a method m on an object o of class C, and class C does not support method m, then Scala will look for an implicit conversion from C to something that does support m.
http://docs.scala-lang.org/tutorials/FAQ/finding-implicits.html
In other words, an implicit conversion will never be applied to heyMan in the expression heyMan.hello if the (statically known) class or trait of heyMan already defines a hello method - implicit conversions kick in only when you call a method that the type does not already define.
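For illustration, a minimal sketch (Greeter and RichGreeter are made-up names, not from the original question) showing that an existing member always wins over an extension method supplied by an implicit class:

    class Greeter {
      def hello: Unit = println("hello from Greeter") // real member
    }

    object Demo extends App {
      implicit class RichGreeter(g: Greeter) {
        def hello: Unit = println("hello from the implicit") // never selected below
      }

      // Prints "hello from Greeter": Greeter already defines hello, so the
      // implicit conversion to RichGreeter is not even considered.
      (new Greeter).hello
    }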
To me, the correct behavior would be for the implicit definition to always shadow the real one, so that user code is protected against new methods appearing in the APIs it calls.
Or would it? If the implicit conversion really did take precedence, users would risk having methods they have relied on for five years suddenly shadowed by a new implicit conversion introduced in an upgraded library dependency.
That case seems far more insidious and harder to debug than the case where a newly added real method takes precedence over a user-defined extension.
Is there a way to make this safe or should we stop using implicits this way?
If it is really important that you get implicit behavior, perhaps you should force a conversion with an explicit type:
    // Assuming HeyMan is the (library) class being extended:
    class HeyMan

    object Main extends App {
      val heyMan = new HeyMan

      implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def hello = println("What up ?")
      }

      heyMan.hello // compiles only because ImplicitHeyMan provides hello
    }
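To actually force the conversion even if HeyMan later gains a hello method of its own, the call site can require the wrapper type explicitly. This is an illustrative sketch, not from the original answer; the lines below would go inside Main, after the implicit class definition:

    // Ascribing the wrapper type forces the implicit conversion to run,
    // so ImplicitHeyMan.hello is called even if HeyMan defines hello itself.
    (heyMan: ImplicitHeyMan).hello

    // Fully explicit equivalent, with no implicit resolution at all:
    new ImplicitHeyMan(heyMan).hello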
From our (extended) conversation in the comments, it sounds like you need a way to verify that the base class does not define the method you are adding via the implicit conversion.
I think Łukasz's comment below is right - this is something you should catch in your tests. In particular, ScalaTest's assertTypeError can be used for this: just try calling the method outside of your implicit scope, and it should fail to type-check (which makes the test pass):
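A minimal sketch of such a test, assuming ScalaTest 3.x and the HeyMan class from the example above (the suite and test names are made up):

    import org.scalatest.funsuite.AnyFunSuite

    class HeyManSpec extends AnyFunSuite {
      test("HeyMan does not define hello on its own") {
        // The snippet is compiled here, where ImplicitHeyMan is NOT in scope,
        // so it must fail to type-check. If a future version of HeyMan adds a
        // real hello method, the snippet starts compiling and this test fails,
        // flagging that the extension method would now be shadowed.
        assertTypeError("(new HeyMan).hello")
      }
    }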