How does Google Maps make its panoramas?

How does Google Maps make its panoramas in Street View?

Yes, I know it's Flash, but how do they distort bitmaps with correct (perspective-correct) texture mapping?

Do they do it at the pixel level, like most Flash 3D engines, or do they just apply some kind of complicated bitmap transformation to MovieClips?

+4
6 answers

Flash Panorama Player can help achieve a similar result!

It uses six cube-face images stitched together seamlessly with some "magic" ActionScript.

Also see flashpanos.com for plugins and tutorials, along with documentation.

There is a quick guide to shooting panoramas so you can view them with FPP (Flash Panorama Player).

Cubic projection. The cube faces are actually 90x90-degree rectilinear images, much like those you get from an ordinary camera lens. ~ What is VR photography?
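
For the curious, here is a minimal sketch of the projection math behind those cube faces, in Python/NumPy rather than FPP's ActionScript: each 90x90-degree face is just a rectilinear re-sampling of an equirectangular panorama. The function name and axis conventions are mine, and it uses crude nearest-neighbour sampling to stay short.

```python
# Minimal sketch, assuming the panorama is an equirectangular NumPy array
# of shape (H, W, 3) covering 360x180 degrees. Nearest-neighbour sampling
# keeps it short; a real player would interpolate.
import numpy as np

def cube_face(equi, face_size, yaw_deg=0.0, pitch_deg=0.0):
    """Render one 90-degree-FOV rectilinear view (a cube face) looking
    along the given yaw/pitch direction."""
    h, w, _ = equi.shape
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)

    # Pixel grid of the face projected onto the plane z = 1; a 90-degree
    # field of view spans [-1, 1] in both x and y.
    x, y = np.meshgrid(np.linspace(-1, 1, face_size),
                       np.linspace(-1, 1, face_size))
    z = np.ones_like(x)

    # Rotate the view rays: pitch about the x axis, then yaw about the y axis.
    yr = y * np.cos(pitch) - z * np.sin(pitch)
    zr = y * np.sin(pitch) + z * np.cos(pitch)
    xf = x * np.cos(yaw) + zr * np.sin(yaw)
    zf = -x * np.sin(yaw) + zr * np.cos(yaw)
    yf = yr

    # Convert rays to longitude/latitude, then to source pixel coordinates.
    lon = np.arctan2(xf, zf)                    # -pi .. pi
    lat = np.arctan2(yf, np.hypot(xf, zf))      # -pi/2 .. pi/2
    src_x = ((lon / (2 * np.pi) + 0.5) * (w - 1)).astype(int)
    src_y = ((lat / np.pi + 0.5) * (h - 1)).astype(int)
    return equi[src_y, src_x]

# The six faces are six yaw/pitch combinations: (0, 0), (90, 0), (180, 0),
# (270, 0) around the horizon, plus pitch +/-90 for the zenith and nadir
# (which one is "up" depends on the row order of the source image).
```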

+4

Check out http://www.panoguide.com/. They have how-tos, software links, etc.

There are basically two components in this process: stitching software, which creates one panoramic photo from many separate source images, and a panoramic viewer, which distorts the image as you change your point of view to mimic what your eyes would see if you were there.
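
As a taste of the stitching half, here is a hedged sketch using OpenCV's ORB features and a RANSAC homography to warp one overlapping photo into another's frame. Dedicated tools (PTGui, Hugin, etc.) do far more than this (lens correction, exposure matching, blending), and full spherical panoramas need more than a single planar warp; this only shows the core idea, and the names are mine.

```python
# Rough sketch only: pairwise stitching with OpenCV. Assumes the two
# photos overlap and were taken from (roughly) the same point, so a
# homography relates them. Needs opencv-python and numpy installed.
import cv2
import numpy as np

def stitch_pair(img_a, img_b):
    """Warp img_b into img_a's frame using matched ORB features."""
    gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(2000)
    kp_a, des_a = orb.detectAndCompute(gray_a, None)
    kp_b, des_b = orb.detectAndCompute(gray_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_b, des_a), key=lambda m: m.distance)[:200]

    # Matched point pairs: queryIdx indexes img_b, trainIdx indexes img_a.
    src = np.float32([kp_b[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_a[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Paste img_a over img_b warped into its coordinate frame.
    h, w = img_a.shape[:2]
    canvas = cv2.warpPerspective(img_b, H, (w * 2, h))
    canvas[:h, :w] = img_a
    return canvas

# Usage (file names are placeholders):
# result = stitch_pair(cv2.imread("left.jpg"), cv2.imread("right.jpg"))
```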

+1

My company uses the Papervision3D Flash rendering engine and displays panoramic images (stills or video) on a 3D sphere. We found that using a sphere with approximately 25 divisions along both axes gives a much better visual result than displaying the same image on the six faces of a cube. Check it out at http://www.panocast.com.

Actually, you could of course distort your images in advance so that, when they are displayed on the faces of a cube, the perspective comes out right, but this requires completely reprocessing your imagery.

With some additional "magic", we also load parts of the still images on demand, depending on where the user is looking and at what zoom level (unlike Google Street View).
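
To illustrate the sphere approach mentioned above, here is a small sketch in plain Python rather than Papervision3D's ActionScript (none of these names are the library's): it builds a latitude/longitude sphere with a configurable number of divisions and assigns equirectangular texture coordinates. The point of ~25 divisions is that each triangle's (affine) texture interpolation then only spans a few degrees, which is presumably why it looks better than six large cube faces.

```python
# Sketch of a UV sphere for panorama display. The vertex layout and the
# equirectangular UV mapping below are generic, not Papervision3D's API.
import math

def uv_sphere(divisions=25, radius=1.0):
    """Return (vertices, uvs, faces) for a latitude/longitude sphere with
    the given number of divisions along both axes."""
    vertices, uvs, faces = [], [], []
    for i in range(divisions + 1):               # latitude rings, pole to pole
        theta = math.pi * i / divisions
        for j in range(divisions + 1):           # longitude segments
            phi = 2.0 * math.pi * j / divisions
            vertices.append((radius * math.sin(theta) * math.cos(phi),
                             radius * math.cos(theta),
                             radius * math.sin(theta) * math.sin(phi)))
            # Equirectangular texture: u follows longitude, v follows latitude.
            uvs.append((j / divisions, i / divisions))
    for i in range(divisions):
        for j in range(divisions):
            a = i * (divisions + 1) + j          # quad corners ...
            b = a + divisions + 1
            faces.append((a, b, a + 1))          # ... split into two triangles
            faces.append((a + 1, b, b + 1))
    return vertices, uvs, faces

v, uv, f = uv_sphere(25)
print(len(v), "vertices,", len(f), "triangles")  # 676 vertices, 1250 triangles
```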

+1

In terms of what Google actually does: Bork had that right. I'm not sure about the exact details (and I'm not sure I could post them even if I were), but Google stores the individual 360-degree Street View scenes in an equirectangular representation for serving. The Flash player then uses a series of affine transformations to display the image in perspective. Affine transformations are only approximate, but combined they produce a decent image overall.
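
To make the affine trick concrete: Flash's flash.geom.Matrix can only apply affine maps (no true perspective), so one common workaround, sketched below under my own naming and with made-up numbers, is to cut the view into small tiles and fit an affine transform to each tile; the residual error at a tile's fourth corner shrinks as the tiles get smaller.

```python
# Illustrative sketch: fit a 2x3 affine matrix to three corners of a tile
# and measure how far it misses the fourth corner of the true perspective
# warp. The coordinates are invented; only the idea matters.
import numpy as np

def fit_affine(src, dst):
    """Affine transform (2x3 matrix) mapping src points to dst points."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows are [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T                                      # shape (2, 3)

# Corners of a source tile, and where a perspective projection places them.
src = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
dst = np.array([[10, 5], [95, 12], [92, 104], [6, 98]], dtype=float)

M = fit_affine(src[:3], dst[:3])        # exact on the first three corners
fourth = M @ np.append(src[3], 1.0)
print("error at the fourth corner:", np.linalg.norm(fourth - dst[3]), "px")
```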

Computing the served images is very demanding, since there are many image-processing stages that have to run: blurring faces, accounting for blooming, and so on. As for the panorama stitching itself, there are well-known algorithms for it (see the Wikipedia article). One interesting thing I'd point out as food for thought, though: in the 360-degree street panoramas you can see the road at the bottom of the image, where there was no camera on the car. Now that's stitching.

+1

An expensive camera: 360-degree video.

Watching video that you can pan around in all directions is very impressive ... Street View is essentially that, without the bandwidth needed to support full video.

+1

For those wondering how photographers and Google VR editors add the ground (nadir) to their equirectangular panoramas, check out a feature called viewpoint correction, found in software such as PTGui:

ptgui.com/examples/vptutorial.html

(Please note that this is NOT the software used by Google.)

If you look closely at the ground in Street View, you will see that it looks stretched, and sometimes it even overlaps with imagery from the viewpoint next to the current one (that is, you can see a feature in one spot, and the same feature suddenly shows up as ground at the next location, which gives away the technique used to stitch in the ground).
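
Here is a hedged sketch of the geometry that viewpoint correction exploits for the ground: if the camera height above a (roughly flat) road is known, a downward ray from a nearby shot can be intersected with the ground plane and re-projected into the current panorama. The function and numbers below are illustrative, not PTGui's or Google's code.

```python
# Illustrative only: re-project a ground point seen from one camera
# position into the view of another, assuming a flat ground plane at y = 0
# and a known camera height. Uses y-up world coordinates.
import numpy as np

def reproject_ground(direction, cam_old, cam_new):
    """Intersect a downward ray from cam_old with the plane y = 0 and
    return the unit direction of that ground point as seen from cam_new."""
    direction = np.asarray(direction, dtype=float)
    if direction[1] >= 0:
        return None                          # the ray never hits the ground
    t = -cam_old[1] / direction[1]           # parametric distance to y = 0
    ground_point = cam_old + t * direction
    view = ground_point - cam_new
    return view / np.linalg.norm(view)

# Camera 2.5 m above the road; the next shot is 3 m further down the street.
old_pos = np.array([0.0, 2.5, 0.0])
new_pos = np.array([0.0, 2.5, 3.0])
print(reproject_ground([0.1, -1.0, 0.3], old_pos, new_pos))
```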

0
