How bad are implicit definitions?

I like implicit definitions. They make code pleasant to work with: methods feel naturally available on a class even though they actually come from an implicit definition. However, I was thinking about JS prototypes, where you can effectively add a method to a class that you did not write. But if the next version of that class defines a method with the same signature, and you have made assumptions about its behavior, you are screwed.

Scala implicits let you do almost the same thing, with one significant difference: implicit definitions are scoped, so there is no risk for the class author of having code injected into their class by an implicit definition in some user's code. But what about the user's code? Is it protected against changes to the class it adds methods to?

Consider this code:

    class HeyMan {
      def hello = println("Hello")
    }

    object Main extends App {
      val heyMan = new HeyMan

      implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def hello = println("What up ?")
      }

      heyMan.hello // prints Hello
    }

Pretty bad, isn't it? To my mind, the correct behavior would be for the implicit definition to always shadow the real definition, so that user code is protected from new methods appearing in the API it calls.

What do you think? Is there a way to make this safe or should we stop using implicits this way?

1 answer

The behavior of the language with respect to implicit conversions is very clearly defined:

If one calls a method m on an object o of a class C, and that class does not support method m, then Scala will look for an implicit conversion from C to something that does support m.

http://docs.scala-lang.org/tutorials/FAQ/finding-implicits.html

In other words, an implicit conversion will never be applied to heyMan in the expression heyMan.hello if the (statically known) class or trait of heyMan already defines a hello method. Implicit conversions kick in only when you call a method that the type does not already define.
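
To see this rule in action, here is a minimal sketch (the howdy method is invented for the illustration): the class's own hello always wins, while howdy, which HeyMan does not define, is resolved through the implicit conversion.

    class HeyMan {
      def hello = println("Hello")
    }

    object Demo extends App {
      implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def hello = println("What up ?") // never reached via heyMan.hello below
        def howdy = println("Howdy")     // HeyMan lacks howdy, so this one is used
      }

      val heyMan = new HeyMan
      heyMan.hello // prints Hello: HeyMan already defines hello
      heyMan.howdy // prints Howdy: resolved through the implicit conversion
    }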


To my mind, the correct behavior would be for the implicit definition to always shadow the real definition, so that user code is protected from new methods appearing in the API it calls.

Is it, though? If the implicit conversion really took precedence, the user would be at risk of methods that have been around for five years suddenly being shadowed by a new implicit conversion in a new version of a library dependency.

That case seems far more insidious and harder to debug than the case where the class's own, newly added definition of a method takes precedence.


Is there a way to make this safe or should we stop using implicits this way?

If it is really important that you get the implicit's behavior, you can force the conversion with an explicit type annotation:

    object Main extends App {
      val heyMan = new HeyMan

      implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def hello = println("What up ?")
      }

      heyMan.hello // prints Hello

      val iHeyMan: ImplicitHeyMan = heyMan // forces the conversion via the implicit
      iHeyMan.hello // prints What up ?
    }
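
Equivalently, since an implicit class is also an ordinary class, you can construct the wrapper explicitly inside Main and skip the implicit lookup altogether:

    new ImplicitHeyMan(heyMan).hello // prints What up ?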

From our (extended) conversation in the comments, it seems you want a way to verify that the underlying class does not define the method you are adding via the implicit conversion.
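
To make the risk concrete, here is a sketch of that failure mode (the goodbye method and the version comments are invented for illustration): a library upgrade silently changes which definition a call site resolves to.

    // Version 1 of the library: HeyMan has no goodbye method.
    // Version 2 adds one:
    class HeyMan {
      def hello = println("Hello")
      def goodbye = println("Bye") // newly added by the library author
    }

    object Demo extends App {
      implicit class ImplicitHeyMan(heyMan: HeyMan) {
        def goodbye = println("See ya") // won under version 1, now ignored
      }

      // Under version 1 this printed "See ya"; under version 2 it prints
      // "Bye", with no warning and no compile error.
      (new HeyMan).goodbye
    }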

I think Łukasz's comment below is right: this is something you should catch with a test. In particular, ScalaTest's assertTypeError can be used for this. Just try calling the method outside the scope of your implicit; the snippet should fail to type-check (and the test should pass):

    // Should pass only if your implicit isn't in scope
    // and the underlying class doesn't define the hello method
    assertTypeError("(new HeyMan).hello")
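
For context, a minimal sketch of how such a test might look in ScalaTest (the suite name is an assumption, and it applies only when the underlying class does not yet define the method, unlike the toy HeyMan above):

    import org.scalatest.funsuite.AnyFunSuite

    class HeyManImplicitSpec extends AnyFunSuite {
      test("HeyMan does not itself define hello") {
        // The implicit class must NOT be in scope here. If a future
        // version of HeyMan adds its own hello, this snippet starts to
        // type-check and the test fails, flagging the conflict.
        assertTypeError("(new HeyMan).hello")
      }
    }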