What happens is that Hadoop registers a JMX bean for monitoring. The first time Hadoop starts, it registers the bean, but the second time it starts, a bean with that name is already registered, which leads to the error above.
Either you are not closing the MiniDFSCluster correctly, you are starting it more than once, or there is a bug in MiniDFSCluster that prevents it from cleaning up properly.
Call cluster.shutdown() in the teardown method, as shown here.
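A minimal sketch of that teardown pattern, assuming a JUnit 4 test and the MiniDFSCluster.Builder API from hadoop-hdfs test jars (older Hadoop versions use a MiniDFSCluster constructor instead; the class name HdfsTest is illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.MiniDFSCluster;
import org.junit.After;
import org.junit.Before;

public class HdfsTest {
    private MiniDFSCluster cluster;

    @Before
    public void setUp() throws Exception {
        Configuration conf = new Configuration();
        cluster = new MiniDFSCluster.Builder(conf).build();
    }

    @After
    public void tearDown() {
        // Shut the cluster down after every test so its JMX beans are
        // unregistered; otherwise the next startup finds a bean with the
        // same name already registered and fails.
        if (cluster != null) {
            cluster.shutdown();
            cluster = null;
        }
    }
}
```

Guarding the shutdown with a null check keeps tearDown safe even when setUp fails partway through.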
sbridges