Convert Real to Radicals

Suppose I have a real number. I want to approximate it with something of the form a + sqrt(b) for integers a and b, but I do not know the values of a and b. Of course, I would rather get a good approximation with small values of a and b. Let "good" and "small" remain undefined for the moment; any reasonable definition of these terms will do.

Is there any reasonable way to find them? Something like the continued fraction algorithm for finding rational approximations to decimals. See here for more on this issue.

EDIT: To clarify, this is an arbitrary real number; all I have is a bunch of its digits. So, depending on how good the approximation needs to be, a and b may or may not exist. Naturally, brute force is not a particularly good algorithm. The best I can think of is to start adding integers to my real number, squaring the result, and seeing whether I am approaching an integer. That is pretty much brute force, but if nothing better exists, it would be interesting to know.
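The brute-force idea sketched above can be written down as follows (a minimal sketch; the bound on a and the function name are my own choices, not part of the question):

```python
import math

def brute_force(r, max_a=50):
    """Try each integer a; b is then the nearest integer to (r - a)^2.

    Returns (a, b, error) minimizing |r - (a + sqrt(b))|.
    """
    best = None
    for a in range(-max_a, max_a + 1):
        if r - a < 0:
            continue  # sqrt(b) is non-negative, so we need r - a >= 0
        b = round((r - a) ** 2)
        err = abs(r - (a + math.sqrt(b)))
        if best is None or err < best[2]:
            best = (a, b, err)
    return best

print(brute_force(math.pi))  # for pi this finds a = -4, b = 51
```

For pi, this yields pi ≈ -4 + sqrt(51), with an error of about 0.00016, which matches the answers below.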

EDIT: Obviously, b must be zero or positive, but a can be any integer.

+7
5 answers

No need for continued fractions; just compute the square roots of all the "small" b values (up to whatever value still feels "small"), discard everything before the decimal point, and collect/store the fractional parts (along with the b that generated each one).

Then, when you need to approximate a real number, find the root whose fractional part is closest to the fractional part of the real number. That gives you b; choosing the right a is just a matter of subtraction.
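A minimal sketch of this table approach (the function names are mine, and the lookup is a linear scan for clarity; a sorted table plus bisection would be faster):

```python
import math

def build_table(nmax):
    # fractional part of sqrt(b) for each non-square b up to nmax
    table = []
    for b in range(2, nmax + 1):
        s = math.sqrt(b)
        if round(s) ** 2 == b:
            continue  # perfect squares have fractional part 0
        table.append((s % 1.0, b))
    return table

def approximate(r, table):
    # pick the b whose root's fractional part is closest to r's
    frac = r % 1.0
    _, b = min(table, key=lambda entry: abs(entry[0] - frac))
    a = round(r - math.sqrt(b))  # choosing a is just subtraction
    return a, b

table = build_table(99)
print(approximate(math.pi, table))  # (-4, 51)
```

The table is built once and reused; for repeated queries, sorting it by fractional part and bisecting would reduce each lookup to O(log n).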

+6

This is really more a math problem than a programming problem, but to answer the question: I think you are right that you can use continued fractions. First, represent the target number as a continued fraction. For example, to approximate pi (3.14159265...), the CF is:

3; 7, 15, 1, 292, 1, 1, 1, 2, 1, 3, 1, 14, ...

The next step is to build a table of CFs for square roots, and then compare the values in the table against the fractional part of the target (here: 7, 15, 1, 292, ...). For example, suppose your table holds square roots only for b from 1 to 99. Then the best match is sqrt(51), whose CF is 7; 7, 14, repeating. Its partial quotients 7, 14 are the closest to pi's 7, 15. So your answer would be:

sqrt(51) - 4

as the closest approximation given b < 100; it is off by about 0.00016. If you allow b to grow, you can get a better approximation.

The advantage of using CFs is that they are faster to work with than, say, doubles or other floating point. For example, in the case above you only need to compare a couple of small integers (the partial quotients 7 and 15), and you can also use indexing to find the nearest entry in the table very quickly.
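For illustration, here is a rough sketch of this CF-matching idea (helper names are mine; a real implementation would compute the square-root CFs exactly via the periodic-CF recurrence rather than through floating point, which loses accuracy in deep terms):

```python
import math

def cf_terms(x, n):
    """First n continued-fraction terms of x (floating point, so deep terms degrade)."""
    terms = []
    for _ in range(n):
        a = math.floor(x)
        terms.append(a)
        if x - a < 1e-12:
            break
        x = 1.0 / (x - a)
    return terms

def common_prefix(t1, t2):
    """Length of the shared CF prefix; a longer match means closer numbers."""
    n = 0
    for u, v in zip(t1, t2):
        if u != v:
            break
        n += 1
    return n

target = cf_terms(math.pi % 1.0, 6)  # [0, 7, 15, 1, 292, ...]
candidates = [b for b in range(2, 100) if round(math.sqrt(b)) ** 2 != b]
best = max(candidates,
           key=lambda b: common_prefix(cf_terms(math.sqrt(b) % 1.0, 6), target))
print(best)  # 51: sqrt(51)'s CF tail 7, 14, ... best matches pi's 7, 15, ...
```

With b restricted to 1-99, only sqrt(51) matches pi's fractional part beyond the first partial quotient, reproducing the sqrt(51) - 4 answer above.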

+5

This can be done very efficiently with mixed-integer quadratic programming (although there is no runtime guarantee, since MIQP is NP-hard in general).

Definition:

    d    := the real number you wish to approximate
    a, b := two integers such that a + sqrt(b) is as "close" to d as possible
    r    := (d - a)^2 - b, the residual of the approximation

The goal is to minimize r. Set up your quadratic program as follows:

    x := [ s  b  t ]^T

    D := | 1 0 0 |
         | 0 0 0 |
         | 0 0 0 |

    c := [ 0  -1  0 ]^T

    with the constraint that s - t = f (where f is the fractional part of d)
    and that b, t are integers (s is not)

This is a convex (therefore optimally solvable) mixed integer quadratic program, since D is positive semidefinite.

After computing s, b, t, just read off the answer: b is b, a = d - s (since s = d - a), and t can be ignored.

Your problem itself may be NP-complete; if so, it would be interesting to see a proof.

+1

Some of the previous answers use methods with time or space complexity O(n), where n is the largest "small number" that will be accepted. In contrast, the following method is O(sqrt(n)) in time and O(1) in space.

Suppose the positive real number is r = x + y, where x = floor(r) and 0 ≤ y < 1. We want to approximate r by a number of the form a + √b. If x + y ≈ a + √b, then x + y - a ≈ √b; hence √b ≈ h + y for some integer offset h, and b ≈ (h + y)^2. To make b an integer, we want to minimize the fractional part of (h + y)^2 over all admissible h. There are at most √n admissible values of h. See the following Python code and sample output.

    import math, random

    def findb(y, rhi):
        bestb = loerror = 1
        for r in range(2, rhi):
            v = (r + y)**2
            u = round(v)
            err = abs(v - u)
            if round(math.sqrt(u))**2 == u: continue
            if err < loerror:
                bestb, loerror = u, err
        return bestb

    #random.seed(123456)        # set a seed if testing repetitively
    f = [math.pi - 3] + sorted([random.random() for i in range(24)])
    print ('  frac    sqrt(b)      error     b')
    for frac in f:
        b = findb(frac, 12)
        r = math.sqrt(b)
        t = math.modf(r)[0]     # get fractional part of sqrt(b)
        print ('{:9.5f} {:9.5f} {:11.7f} {:5.0f}'.format(frac, r, t - frac, b))

(Note 1: this code is in demo form; findb()'s parameters are y, the fractional part of r, and rhi, the square root of the largest "small number" allowed. Change the parameters to suit. Note 2: the line

    if round(math.sqrt(u))**2 == u: continue

prevents findb() from returning a perfect-square value of b, except for b = 1, since no perfect square can improve on the accuracy offered by b = 1.)

The following is output from a sample run; about a dozen lines were omitted in the middle. The first output line shows that this procedure yields b = 51 to represent the fractional part of pi, which is the same value reported in some of the other answers.

      frac     sqrt(b)      error       b
    0.14159   7.14143  -0.0001642     51
    0.11975   4.12311   0.0033593     17
    0.12230   4.12311   0.0008085     17
    0.22150   9.21954  -0.0019586     85
    0.22681  11.22497  -0.0018377    126
    0.25946   2.23607  -0.0233893      5
    0.30024   5.29150  -0.0087362     28
    0.36772   8.36660  -0.0011170     70
    0.42452   8.42615   0.0016309     71
    ...
    0.93086   6.92820  -0.0026609     48
    0.94677   8.94427  -0.0024960     80
    0.96549  11.95826  -0.0072333    143
    0.97693  11.95826  -0.0186723    143

When you add the following code at the end of the program, the output shown below it also appears. It shows progressively closer approximations for the fractional part of pi.

    frac, rhi = math.pi - 3, 16
    print ('   frac        sqrt(b)       error        b     bMax')
    while rhi < 1000:
        b = findb(frac, rhi)
        r = math.sqrt(b)
        t = math.modf(r)[0]     # get fractional part of sqrt(b)
        print ('{:11.7f} {:11.7f} {:13.9f} {:7.0f} {:7.0f}'.format(frac, r, t - frac, b, rhi**2))
        rhi = 3 * rhi // 2

       frac        sqrt(b)       error        b     bMax
    0.1415927   7.1414284  -0.000164225      51     256
    0.1415927   7.1414284  -0.000164225      51     576
    0.1415927   7.1414284  -0.000164225      51    1296
    0.1415927   7.1414284  -0.000164225      51    2916
    0.1415927   7.1414284  -0.000164225      51    6561
    0.1415927 120.1415831  -0.000009511   14434   14641
    0.1415927 120.1415831  -0.000009511   14434   32761
    0.1415927 233.1415879  -0.000004772   54355   73441
    0.1415927 346.1415895  -0.000003127  119814  164836
    0.1415927 572.1415909  -0.000001786  327346  370881
    0.1415927 911.1415916  -0.000001023  830179  833569
+1

I don't know if there is any standard algorithm for this kind of problem, but it intrigues me, so I will try to develop an algorithm that finds the required approximation.

Call the real number in question r. First, since a may be any integer, we can reduce the problem: we only need to find a b such that the fractional part of sqrt(b) is a good approximation of the fractional part of r. Now write r as r = x.y, with x the integer part and .y the fractional part.

Now:

    b = r^2 = (x + .y)^2
            = x^2 + 2 * x * .y + .y^2
            = 2 * x * .y + .y^2      (mod 1)

So we only need to find an x such that .y^2 + 2 * x * .y ≈ 0 (mod 1).

Plugging that x into the formula above gives b, and then a can be computed as a = round(r - sqrt(b)). (Of course, all these calculations need careful rounding.)

Now, I am not sure whether there is a way to find this x without brute force. But even then, you can use a simple loop to find an x that is good enough.

I am thinking of something like this (semi-pseudocode):

    max_diff_low = 0.01              // arbitrary accuracy
    max_diff_high = 1 - max_diff_low
    y = r % 1
    v = y^2
    addend = 2 * y
    x = 0
    while (v < max_diff_high && v > max_diff_low)
        x++
        v = (v + addend) % 1
    c = (x + y)^2
    b = round(c)
    a = round(r - sqrt(c))

Now, I think this algorithm is quite efficient, and it even allows you to specify the desired approximation accuracy. One thing that could be done to turn it into an O(1) algorithm is to precompute all the x values and put them in a lookup table. If we consider only the first three decimal digits of r (for example), the lookup table would hold only 1000 values, which is just 4 KB of memory (assuming 32-bit integers).

Hope this will be helpful in general. If someone finds something wrong with the algorithm, let me know in the comment and I will fix it.

EDIT: On reflection, I retract my claim of efficiency. In fact, as far as I can tell, there is no guarantee that the algorithm described above will ever terminate, and even if it does, it may take a very long time to find a very large x that adequately solves the equation.

One could keep track of the best x found so far and relax the accuracy bounds over time, to guarantee that the algorithm terminates quickly, at a possible cost in accuracy.

These problems do not exist, of course, if you simply precompute the lookup table.
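For what it is worth, the pseudocode above translates almost directly into Python. This is a sketch with two adjustments I have assumed: an iteration cap, since termination is not guaranteed, and a computed as round(r - sqrt(c)), since a + sqrt(b) ≈ r implies a ≈ r - sqrt(b):

```python
import math

def approximate(r, accuracy=0.01, max_iter=10**6):
    max_diff_low = accuracy
    max_diff_high = 1.0 - max_diff_low
    y = r % 1.0
    v = (y * y) % 1.0
    addend = 2.0 * y
    x = 0
    # search for an x with (x + y)^2 close to an integer (mod 1)
    while max_diff_low < v < max_diff_high:
        x += 1
        if x > max_iter:
            return None  # give up; termination is not guaranteed
        v = (v + addend) % 1.0
    c = (x + y) ** 2
    b = round(c)
    a = round(r - math.sqrt(c))
    return a, b

print(approximate(math.pi))  # (-4, 51)
```

With the default accuracy of 0.01, the loop stops at x = 7 for pi, giving the same -4 + sqrt(51) approximation found by the other answers.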

0
