Is variable assignment really that expensive in Ruby, or is Ruby optimizing the chained string method calls?

I have the following method I wrote for Project Euler problem 36. All it does is find all the numbers less than 1,000,000 that are palindromes in both base 10 and base 2.

    def problem_36
      (1...1_000_000).select do |n|
        n.to_s == n.to_s.reverse && n.to_s(2) == n.to_s(2).reverse
      end
    end

Now, it works and gives the correct result in about 1 second. I wanted to get it under 1 second, so I decided to reduce the number of number-to-string conversions. So I made the following change:

    def problem_36
      (1...1_000_000).select do |n|
        base10 = n.to_s
        base2 = n.to_s(2)
        base10 == base10.reverse && base2 == base2.reverse
      end
    end

The problem is that this version actually runs about 50% slower than the original. So the question is: is assigning those two variables really that expensive, or does Ruby optimize chained method calls?
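For reference, here is roughly how one could time the two versions with Ruby's standard Benchmark module (a sketch: it assumes the two definitions above are renamed problem_36_chained and problem_36_vars so they can coexist):

    require 'benchmark'

    # Assumes problem_36_chained / problem_36_vars are the two definitions
    # above, renamed so both can be loaded at once.
    Benchmark.bm(10) do |x|
      x.report('chained')   { problem_36_chained }
      x.report('variables') { problem_36_vars }
    end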

+7
2 answers

In this line

 n.to_s == n.to_s.reverse && n.to_s(2) == n.to_s(2).reverse 

the second half is never evaluated when the first half is false (Ruby's && operator short-circuits, unlike its &). That skips a lot of to_s(2) calls, whereas your second version builds base2 for every n whether it is needed or not.
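To make the effect concrete, here is a small sketch (not from the original answer) that counts how many base-2 conversions each style performs; the chained version only reaches them for numbers that already pass the base-10 check:

    # Hypothetical counter to compare the two styles; to_s2 stands in for n.to_s(2).
    base2_conversions = 0
    to_s2 = lambda do |n|
      base2_conversions += 1
      n.to_s(2)
    end

    # Chained style: the right side of && is skipped whenever the base-10 check fails.
    (1...1_000_000).select do |n|
      n.to_s == n.to_s.reverse && to_s2.call(n) == to_s2.call(n).reverse
    end
    puts "chained:   #{base2_conversions} base-2 conversions"

    base2_conversions = 0

    # Variable style: base2 is built for every single n, palindrome or not.
    (1...1_000_000).select do |n|
      base10 = n.to_s
      base2  = to_s2.call(n)
      base10 == base10.reverse && base2 == base2.reverse
    end
    puts "variables: #{base2_conversions} base-2 conversions"

Only about two thousand numbers below 1,000,000 are base-10 palindromes, so the chained version does a few thousand base-2 conversions in total (two per base-10 palindrome), while the variable version does one for every single n.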

+7

Interesting.

A common rule of Ruby performance is that if a program does its work in built-in operations, it will be fast. If it does its work in plain Ruby code, it will be slow.

Although your second version may or may not do fewer conversions, it executes three lines of Ruby per iteration instead of one.

I once wrote a program that read a large log file. In my first version, I kept an efficient linked list of lines, implemented in Ruby code.

1. Time complexity: O(1).

Then I changed it to just use << and append each new line to the end of an array.

2. Time complexity: O(N²), or at least O(N^(1+ε)).

The second version (1) was faster.


1. Obviously, the Array implementation grows its storage in chunks, so it was not completely quadratic, but it still does much more work than an O(1) linked-list append.
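As a rough illustration (the original program is not shown, so the names and structure here are assumed), the two approaches look something like this:

    # Hypothetical sketch of the two approaches described above.
    Node = Struct.new(:value, :next_node)

    # Version 1: hand-rolled linked list, O(1) append, all in interpreted Ruby.
    def collect_with_linked_list(lines)
      head = tail = nil
      lines.each do |line|
        node = Node.new(line, nil)
        if tail
          tail.next_node = node
        else
          head = node
        end
        tail = node
      end
      head
    end

    # Version 2: plain Array append; << is a single call into the built-in Array code.
    def collect_with_array(lines)
      result = []
      lines.each { |line| result << line }
      result
    end

Every step of the linked-list append runs as interpreted Ruby, while << is one call into the built-in Array implementation, which is why the theoretically worse version won in practice.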

0
