It is pretty hard to benchmark a shader, because there is a ton of context involved and the results are very GPU specific.
You might find out whether one shader is faster than another by calling performance.now before and after drawing a bunch of things with that shader (from several thousand to millions of draw calls), then stalling the GPU by calling gl.readPixels. That will tell you which one is faster. It will not tell you how fast they really run, because by stalling the GPU you are also timing its start and stop overhead, which you would never do in real use.
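A rough sketch of that technique is below, assuming `gl` is a WebGLRenderingContext, `program` is the shader under test, and a quad's worth of geometry is already bound; these names are placeholders, not part of any particular library:

```js
// Time many draws with one shader, then stall the GPU with readPixels.
// The elapsed time includes the start/stop overhead described above.
function timeShader(gl, program, drawCount) {
  gl.useProgram(program);
  const start = performance.now();
  for (let i = 0; i < drawCount; ++i) {
    gl.drawArrays(gl.TRIANGLES, 0, 6);   // draw one quad (2 triangles)
  }
  // Reading a pixel blocks the CPU until the GPU has finished all queued work.
  const pixel = new Uint8Array(4);
  gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, pixel);
  return performance.now() - start;      // milliseconds, including the stall
}
```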
Think of race cars. A dragster is timed on its acceleration to the finish. A track car is timed over one lap at full speed: you let the car run a warm-up lap first, then time the next lap, so it crosses the start line already at full speed and the finish line still at full speed. That way you get the car's top speed, whereas with the dragster you got its acceleration (which is not what you want for a GPU, because to measure throughput you should never be starting and stopping it).
Another way to time without including the start/stop overhead is to draw a bunch between requestAnimationFrame frames. Keep increasing the amount drawn until the time between frames jumps by a whole frame. Then compare that amount between shaders.
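A minimal sketch of that approach, assuming a hypothetical drawQuads(gl, n) helper that issues n full-canvas draws; the 1.5x frame-budget threshold is just one reasonable choice for detecting the jump:

```js
let drawCount = 100;
let lastTime = null;
const frameBudgetMs = 1000 / 60;      // ~16.7ms per frame at 60Hz

function frame(now) {
  if (lastTime !== null) {
    const delta = now - lastTime;
    if (delta > frameBudgetMs * 1.5) {
      // Frame time jumped past a whole frame: report the count and stop.
      console.log('draws sustained per frame for this shader:', drawCount);
      return;
    }
    drawCount += 100;                 // still keeping up, draw more next frame
  }
  lastTime = now;
  drawQuads(gl, drawCount);           // assumed helper: issues drawCount draws
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
```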
There are other issues in real use, though. For example, a tiled GPU (such as the PowerVR parts in many mobile devices) tries to cull parts of primitives that would be overdrawn, so a heavy shader with lots of overdraw that is slow on a non-tiled GPU can be comparatively fast on a tiled GPU.
Also make sure you are timing the right thing. If you are timing a vertex shader, you probably want to make your canvas 1x1 pixel, make your fragment shader as trivial as possible, and pack as many vertices as possible into a single draw call (to remove the per-call overhead). If you are timing a fragment shader, you probably want a large canvas and a set of vertices containing several full-canvas quads.
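For the fragment-shader case, the geometry setup could look roughly like this, assuming `gl` is a WebGL context attached to a large canvas; the vertex data is just the same clip-space quad repeated so one draw call covers the canvas many times:

```js
// Two triangles covering the whole canvas in clip space (x, y pairs).
const quad = [
  -1, -1,   1, -1,  -1,  1,   // triangle 1
  -1,  1,   1, -1,   1,  1,   // triangle 2
];
const copies = 100;            // number of full-canvas quads per draw call
const verts = new Float32Array(copies * quad.length);
for (let i = 0; i < copies; ++i) {
  verts.set(quad, i * quad.length);
}
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, verts, gl.STATIC_DRAW);
// One draw call now covers the canvas `copies` times:
// gl.drawArrays(gl.TRIANGLES, 0, copies * 6);
```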
Also see WebGL / OpenGL: performance comparison