Is this the expected behavior for a Set of arrays in Ruby?

We have some Ruby 1.8.7 code that needs to traverse and partition an undirected graph, and it was failing in production. When I reduce the failing code to its simplest components, I get this strange failure:

    it 'should be able to clear a ruby set of arrays' do
      a = ["2", "b", "d"]
      b = ["1", "a", "c", "e", "f"]
      set = Set.new([a, b])
      a.concat(b)
      p "before clear: #{set.inspect}"
      set.clear
      p "after clear: #{set.inspect}"
      set.size.should == 0
    end

The test fails with this output:

 "before clear: #<Set: {[\"1\", \"a\", \"c\", \"e\", \"f\"], [\"2\", \"b\", \"d\", \"1\", \"a\", \"c\", \"e\", \"f\"]}>" "after clear: #<Set: {[\"2\", \"b\", \"d\", \"1\", \"a\", \"c\", \"e\", \"f\"]}>" expected: 0 got: 1 (using ==) 

Attempts to delete elements from the set also behave strangely. I assume Ruby is tripping over the hash values of the array keys changing under concat(), but surely I should still be able to clear the Set regardless. Right?
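For illustration, here is a minimal sketch (not from the spec above, names are mine) of what I suspect is happening: mutating an array that is already a member of the Set changes its hash value, so the Set's internal Hash can no longer find the entry:

    require 'set'

    a = ["2", "b", "d"]
    set = Set.new([a])

    before = a.hash      # hash value of the array as it was inserted
    a.concat(["x"])      # mutate the array in place
    after = a.hash       # the hash value is now different

    p before == after    # => false
    p set.include?(a)    # => false -- the entry is still filed under the old hash value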

2 answers

There is a workaround for this: if you duplicate the set after mutating the keys, the new set is keyed on the updated hash values and clears correctly. So assigning set = set.dup gets around the problem.
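A rough sketch with the arrays from the question (assuming 1.8.7 behavior, where the duplicate ends up keyed on the elements' current hash values, as described above):

    require 'set'

    a = ["2", "b", "d"]
    b = ["1", "a", "c", "e", "f"]
    set = Set.new([a, b])

    a.concat(b)      # mutates a key that is already inside the set

    set = set.dup    # the copy is keyed on the arrays' current hash values
    set.clear
    p set.size       # => 0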


The .dup approach was indeed my first workaround and did the job.

I ended up adding the following monkey patch to Set:

    class Set
      # Rebuild the internal Hash index so lookups use the elements'
      # current hash values.
      def rehash
        @hash.rehash
      end
    end

which lets me rehash a Set's keys after any operation that changes their hash values.
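For example, with the arrays from the test above, usage looks roughly like this:

    require 'set'

    a = ["2", "b", "d"]
    b = ["1", "a", "c", "e", "f"]
    set = Set.new([a, b])

    a.concat(b)    # a's hash value changes while it is a member of set

    set.rehash     # rebuild the internal index with the new hash values

    set.delete(a)  # deletions and lookups work again
    set.clear
    p set.size     # => 0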

It all seems to be fixed in Ruby 1.9.

