Question
How do you programmatically and accurately determine the best preview size for an application that displays a camera preview on the device screen? (Or inside any View of variable size, really.)
BEFORE YOU FLAG THIS AS A DUPLICATE OF ONE OF THE MILLIONS OF OTHER ASPECT-RATIO QUESTIONS, please understand that I am looking for a different solution than what is usually offered. I ask this question because I have read so many “answers”, but they all point to a solution that, in my opinion, is incomplete (and potentially erroneous, as I will describe here). If this is not a flaw, then please help me understand what I am doing wrong.
I have read many different implementations of how applications choose preview sizes, and most of them use an approach I call “close enough” (deciding which option is best by subtracting a preview size's aspect ratio from the screen's aspect ratio and picking the option with the smallest absolute difference). This approach does not seem to guarantee that it chooses the best option; it only guarantees that it will not choose the worst one.
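To make the “close enough” idea concrete, here is a minimal sketch of it. Note this is a simplified stand-alone illustration, not any library's actual code: the local `Size` record stands in for `android.hardware.Camera.Size`, which is not available off-device, and the size list and target ratio mirror the DeviceA numbers below.

```java
import java.util.*;

public class CloseEnoughDemo {
    // Stand-in for android.hardware.Camera.Size (not available off-device).
    public record Size(int width, int height) {}

    // The "close enough" approach: pick the size whose aspect ratio has the
    // smallest absolute difference from the target ratio.
    public static Size closestRatio(List<Size> sizes, double targetRatio) {
        Size best = null;
        double bestDiff = Double.MAX_VALUE;
        for (Size s : sizes) {
            double diff = Math.abs((double) s.width() / s.height() - targetRatio);
            if (diff < bestDiff) {
                bestDiff = diff;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Size> sizes = List.of(
            new Size(176, 144), new Size(720, 480),
            new Size(800, 480), new Size(1920, 1080));
        // 720x1184 portrait screen ratio, as computed in the table below.
        Size best = closestRatio(sizes, 720.0 / 1184.0);
        System.out.println(best.width() + "x" + best.height()); // prints 176x144
    }
}
```

Note that with the portrait screen ratio 720/1184 ≈ 0.608, the minimum-delta winner is 176x144 — exactly one of the “vertically stretched” entries in the tables below, which is the problem this question is about.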
For example, if I iterate through every available preview size on a device (DeviceA) with a screen resolution of 720x1184 and display the preview full screen (720x1184), here are the results, sorted by closeness to the screen ratio, i.e. abs(option ratio - screen ratio). All sizes come from getSupportedPreviewSizes(). (The “results” are visual observations of a test case that shows a static circle in the camera's viewfinder; they are not determined programmatically.)
720.0/1184.0 = 0.6081081081081081

Res         Ratio           Delta in ratios   Result
---------------------------------------------------------------
 176/144  = 1.22222222222   (0.614114114114)  (vertically stretched)
 352/288  = 1.22222222222   (0.614114114114)  (vertically stretched)
 320/240  = 1.33333333333   (0.725225225225)  (vertically stretched)
 640/480  = 1.33333333333   (0.725225225225)  (vertically stretched)
 720/480  = 1.5             (0.891891891892)  (vertically stretched)
 800/480  = 1.66666666667   (1.05855855856)   (looks good)
 640/360  = 1.77777777778   (1.16966966967)   (horizontally squashed)
1280/720  = 1.77777777778   (1.16966966967)   (slight horizontal squash)
1920/1080 = 1.77777777778   (1.16966966967)   (slight horizontal squash)
Of course, this would not be a fair Android test without running it on a second device. Below are the results for a device (DeviceB) with a screen resolution of 800x1216, displaying the preview at the same resolution (800x1216):
800/1216.0 = 0.657894736842

Res         Ratio           Delta in ratios   Results
---------------------------------------------------------------
 176/144  = 1.22222222222   (0.56432748538)   (looks vertically stretched)
 352/288  = 1.22222222222   (0.56432748538)   (looks vertically stretched)
 480/368  = 1.30434782609   (0.646453089245)  (looks vertically stretched)
 320/240  = 1.33333333333   (0.675438596491)  (looks vertically stretched)
 640/480  = 1.33333333333   (0.675438596491)  (looks vertically stretched)
 800/600  = 1.33333333333   (0.675438596491)  (looks vertically stretched)
 480/320  = 1.5             (0.842105263158)  (looks good)
 720/480  = 1.5             (0.842105263158)  (looks good)
 800/480  = 1.66666666667   (1.00877192982)   (looks horizontally squashed)
 960/540  = 1.77777777778   (1.11988304094)   (looks horizontally squashed)
1280/720  = 1.77777777778   (1.11988304094)   (looks horizontally squashed)
1920/1080 = 1.77777777778   (1.11988304094)   (looks horizontally squashed)
 864/480  = 1.8             (1.14210526316)   (looks horizontally squashed)
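The delta column in both tables is simply |preview ratio - screen ratio|. A quick sketch to reproduce a few DeviceB rows (a stand-alone illustration; the size list is just a sample of the table):

```java
public class DeltaTable {
    // |preview ratio - screen ratio|: the "Delta in ratios" column above.
    public static double delta(int w, int h, double screenRatio) {
        return Math.abs((double) w / h - screenRatio);
    }

    public static void main(String[] args) {
        double screenRatio = 800.0 / 1216.0; // DeviceB, as in the table above
        int[][] sizes = { {176, 144}, {480, 320}, {720, 480}, {1920, 1080} };
        for (int[] s : sizes) {
            System.out.printf("%4d/%-4d = %.11f (%.11f)%n",
                s[0], s[1], (double) s[0] / s[1],
                delta(s[0], s[1], screenRatio));
        }
    }
}
```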
The "close enough" approach (assuming that any ratio delta equal to or less than 1.4d is acceptable) will return 1920x1080 on both devices if iteration runs from the lowest values to the highest. If iteration runs from the highest values to the lowest, then 176x144 will be selected for both DeviceA and DeviceB. Both of these options, although "close enough", are not the best options.
Question
Looking at the results above, how can I programmatically arrive at the “looks good” values? I cannot get to these values with the “close enough” approach, so I clearly don't understand the relationship between the screen size, the view in which I am showing the preview, and the available preview sizes. What am I missing?
Screen dimensions               = 720x1184
View dimensions                 = 720x1184
Screen and View aspect ratio    = 0.6081081081081081
Best preview size ratio (720x480) = 1.5
Why are the best options not the values with the lowest delta? The results are surprising, since everyone else seems to think the best option is the one with the smallest difference in ratios, yet what I see is that the best option seems to sit in the middle of all the options, and that its width is closest to the width of the view that will display the preview.
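One way to express that observation as code: among the candidates whose ratio delta is within tolerance, prefer the one whose width is nearest to the view's width, rather than the one with the smallest ratio delta. This is only a hypothetical heuristic derived from the tables above, not an established algorithm, and `Size` is again a local stand-in for `Camera.Size`:

```java
import java.util.*;

public class WidthBiasDemo {
    // Stand-in for android.hardware.Camera.Size (not available off-device).
    public record Size(int width, int height) {}

    // Hypothetical heuristic: of the sizes whose ratio delta is within
    // closeEnough, return the one whose width is closest to viewWidth.
    public static Size nearestWidth(List<Size> sizes, double targetRatio,
                                    double closeEnough, int viewWidth) {
        Size best = null;
        int bestWidthDiff = Integer.MAX_VALUE;
        for (Size s : sizes) {
            double diff = Math.abs((double) s.width() / s.height() - targetRatio);
            if (diff >= closeEnough) continue;      // outside tolerance
            int widthDiff = Math.abs(s.width() - viewWidth);
            if (widthDiff < bestWidthDiff) {
                bestWidthDiff = widthDiff;
                best = s;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<Size> sizes = List.of(
            new Size(176, 144), new Size(640, 480), new Size(720, 480),
            new Size(800, 480), new Size(1920, 1080));
        // DeviceA: 720x1184 view, tolerance 1.4d as in the text above.
        Size best = nearestWidth(sizes, 720.0 / 1184.0, 1.4, 720);
        System.out.println(best.width() + "x" + best.height()); // prints 720x480
    }
}
```

On the DeviceA numbers this picks 720x480, matching the “best preview size” listed above, but I have not verified it against other devices.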
Based on the above observations (the best option is not the value with the lowest ratio delta), I developed this algorithm, which iterates over all possible preview sizes, checks whether each matches my “close enough” criteria, saves the sizes that match, and finally looks for a value whose width is at least greater than or equal to the given width.
public static Size getBestAspectPreviewSize(int displayOrientation,
                                            int width,
                                            int height,
                                            Camera.Parameters parameters,
                                            double closeEnough) {
    double targetRatio = (double) width / height;
    Size bestSize = null;

    if (displayOrientation == 90 || displayOrientation == 270) {
        targetRatio = (double) height / width;
    }

    List<Size> sizes = parameters.getSupportedPreviewSizes();

    // Group every size whose ratio delta is within closeEnough, keyed by delta.
    TreeMap<Double, List<Size>> diffs = new TreeMap<Double, List<Size>>();

    for (Size size : sizes) {
        double ratio = (double) size.width / size.height;
        double diff = Math.abs(ratio - targetRatio);

        if (diff < closeEnough) {
            if (diffs.containsKey(diff)) {
                diffs.get(diff).add(size);
            } else {
                List<Size> newList = new ArrayList<Size>();
                newList.add(size);
                diffs.put(diff, newList);
            }
        }
    }

    // Of the acceptable sizes, keep the last one whose width is at least
    // the requested width.
    for (Map.Entry<Double, List<Size>> entry : diffs.entrySet()) {
        for (Size size : entry.getValue()) {
            if (size.width >= width) {
                bestSize = size;
            }
        }
    }

    return bestSize;
}
Obviously, this algorithm does not know how to choose the best option; it only knows how to choose an option that is not the worst, like every other implementation I have seen. I need to understand the relationship between the sizes that actually look good and the dimensions of the view that will display the preview before I can improve my algorithm to genuinely pick the best option.
I looked at how CommonsWare's CWAC-Camera project handles this, and it appears to use a “close enough” algorithm as well. If I applied the same logic to my project, I would get values that are decent, but not “perfect”: it returns 1920x1080 for both devices. Although that is not the worst option, it still squishes the image slightly. I am going to run its code in my test application with test cases to confirm whether it slightly distorts the image, since I already know it will return a size that is not as optimal as it could be.