It will fail for inputs such as a = 0.1, b = 0.100001 (in binary, i.e. a = 0.5, b = 0.515625 in decimal). In this case, the correct answer is 0.1, but your algorithm produces 0.11, which is not only non-minimal, but also greater than b. :-(
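In case these conversions are easier to check than to work out by hand, JavaScript's `Number.prototype.toString(2)` confirms the correspondence:

```javascript
// The binary/decimal correspondence from the failing example above:
console.log((0.5).toString(2));      // '0.1'
console.log((0.515625).toString(2)); // '0.100001'
// The incorrect output 0.11 (binary) is 0.75 (decimal), which exceeds b:
console.log((0.75).toString(2));     // '0.11'
```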
Your digit check looks good to me - the problem is that when you make the (right) decision to end the loop, you build the wrong output if the string b is longer. One easy way to fix this is to process the digits in turn: while the two digits match, you know you must include the current digit; the first mismatch tells you where, and how, to stop.
One more tip: I don't know JavaScript well, but I think both calls to parseInt() are unnecessary, since nothing you do with u or v actually requires arithmetic.
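To illustrate that last point: the one-character strings returned by `substr()` can be compared directly, so the digits never need to be converted to numbers (a minimal sketch with a hypothetical digit string):

```javascript
var a = '0100001';                    // hypothetical digit string
var u = a.substr(0, 1), v = a.substr(1, 1);
console.log(u != v);   // true -- '0' vs '1', compared as strings
console.log(v == '1'); // true -- no parseInt() required
```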
[EDIT]
Here is an example of digit-at-a-time code that includes several other considerations:
```javascript
var binaryInInterval = function(a, b) {
    if (a < 0 || b > 1 || a >= b) return undefined;
    if (a == 0) return '0'; // Special: the only number that may end with a '0'.

    var i, j, u, v, x = '';
    a = a.toString(2).replace('.', '');
    b = b.toString(2).replace('.', '');

    for (i = 0; i < Math.min(a.length, b.length); i++) {
        u = a.substr(i, 1);
        v = b.substr(i, 1);
        if (u != v) {
            // We know that u must be '0' and v must be '1'.
            // We therefore also know that a must have at least 1 more '1'
            // digit, since you cannot have a '0' as the last digit.
            if (i < b.length - 1) {
                // b has more digits, meaning it must have more '1' digits,
                // meaning it must be larger than x if we add a '1' here,
                // so it is safe to do that and stop.
                x += '1'; // This is >= a, because we know u = '0'.
            } else {
                // To ensure x >= a, we need to look for the first '0' in a
                // from this point on, change it to a '1', and stop. If a
                // only contains '1' from here on out, it suffices to copy
                // them, and not bother appending a final '1'.
                x += '0';
                for (j = i + 1; j < a.length; ++j) {
                    if (a.substr(j, 1) == '0') {
                        x += '1'; // Change the first '0' to a '1' and stop.
                        break;
                    }
                    x += '1'; // Copy a's '1' digit.
                }
            }
            break; // We're done. Fall through to fix the binary point.
        } else {
            x += u; // Business as usual.
        }
    }

    // If we make it to here, it must be because either (1) we hit a
    // differing digit, in which case we have prepared an x that is correct
    // except for the binary point, or (2) a and b agree on all of their
    // leftmost min(len(a), len(b)) digits. For (2), it must therefore be
    // that b has more digits (and thus more '1' digits), because if a had
    // more it would necessarily be larger than b, which is impossible.
    // In this case, x will simply be a.
    // So in both cases (1) and (2), all we need to do is fix the binary point.
    if (x.length > 1) x = '0.' + x.substr(1);
    return x;
};
```
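Not part of the algorithm above, but an independent brute-force check (my own sketch, with a hypothetical name) makes the minimality claims easy to verify for small cases. It assumes the same interval convention as the code above: a is included, b is excluded.

```javascript
// Brute force: return the binary fraction with the fewest fractional
// digits in [a, b), by enumerating k/2^n for increasing n.
// Odd k only, so each candidate has exactly n fractional digits;
// n is capped at 16, which is plenty for hand-sized examples.
var shortestBinaryInInterval = function(a, b) {
    if (a === 0) return '0';
    for (var n = 1; n <= 16; n++) {
        for (var k = 1; k < Math.pow(2, n); k += 2) {
            var x = k / Math.pow(2, n);
            if (x >= a && x < b) return x.toString(2);
        }
    }
    return undefined;
};

console.log(shortestBinaryInInterval(0.5, 0.515625)); // '0.1'
console.log(shortestBinaryInInterval(0.1, 0.3));      // '0.01'
console.log(shortestBinaryInInterval(0.40625, 0.5));  // '0.0111'
```

The last call exercises the same situation as the tricky else-branch above: b = 0.1 (binary) ends exactly at the differing digit, so the answer has to be built from a's remaining digits.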