Convert satellite photos of the Earth into texture maps on a sphere (OpenGL ES)

We have 5 geostationary satellites spaced around the equator (unevenly, but almost), photographing the Earth every day. The result of each is (surprise!) a photograph of a sphere taken from a long distance.

I need to collect these photos into one texture map, and I'm not sure how best to do this. Main problems:

  • The photos are, obviously, increasingly distorted the further you get from the center, since they are looking at a sphere.
  • There are many hundreds of "sets" of 5 photographs, taken at different times of day. Any solution must be software; I can't just do it manually :(
  • The output platform is iPad 3: OpenGL ES 2, textures up to 4096x4096, but not as powerful as a desktop GPU. I'm not very experienced with shaders (although I have done a lot of pre-shader OpenGL).
  • The photos themselves are high resolution, and I'm not sure I can load all 5 as textures at the same time. I also have a very high resolution texture loaded for the planet's surface (underneath the satellite photos).

I already have one rectangular texture applied to the sphere (my sphere is a standard grid mesh wrapped into a sphere, with vertices evenly distributed over the surface), so I tried to convert the 5 sphere photos into a single rectangular map. I haven't managed to succeed yet, although someone pointed me at a "polar ridge" technique that looks like it might work better.

I also thought about doing something funky like building a cube map from the 5 photos and cleverly deciding which photo to read from for a given pixel, but I'm not quite sure how.

Is there a better way? Something I've overlooked? Or does anyone have a concrete way to achieve the above?

1 answer

I would make a rectangular texture out of it.

You will need 2 x 2D textures / arrays: one, avg , for summing the r,g,b colors, and one, cnt , for the sample count. Also, I'm not sure I would use OpenGL / GLSL for this; it seems to me that C / C++ would be better suited.

I would do it like this (a rough C++ sketch follows the list):

  • clear the target textures ( avg[][]=0, cnt[][]=0 )
  • get satellite position / direction, time

    From the position and direction, create a transform matrix that projects the Earth the same way as in the photo. From the time, determine the rotation shift.

  • make a loop across the entire surface of the earth

    just two nested loops: a - rotation (longitude) and b - distance from the equator (latitude).

  • get x,y,z from a,b and the transform matrix + rotation shift ( a axis)

    You can also do it the other way around, a,b,z = f(x,y) ; it is more complicated, but faster and more accurate. You can also interpolate x,y,z between neighboring (pixels/areas) [a][b]

  • add pixel

    if x,y,z is on the front side ( z>0 or z<0 , depending on the direction of the camera Z axis), then

     avg[a][b]+=image[x][y]; cnt[a][b]++; 
  • end of the nested loops from step 3.

  • go to step 2 with the next photo
  • loop through the whole avg texture to recover the average color

     if (cnt[a][b]) avg[a][b]/=cnt[a][b]; 
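
To make the loop concrete, here is a minimal CPU-side C++ sketch of the whole scheme, assuming an equirectangular W x H target map, RGB8 photos, and a per-photo 3x3 row-major matrix M that rotates Earth coordinates into satellite camera coordinates ( +z pointing from the Earth's center toward the satellite); all names and sizes are illustrative:

    #include <cmath>
    #include <cstdint>
    #include <vector>

    struct Image { int w, h; std::vector<uint8_t> rgb; }; // 3 bytes per pixel

    const int W = 2048, H = 1024;            // target map resolution
    std::vector<float> avg(W * H * 3, 0.0f); // r,g,b sums (float avoids overflow)
    std::vector<int>   cnt(W * H, 0);        // sample count per texel

    // accumulate one photo; M = view matrix, shift = rotation shift from time
    void addPhoto(const Image& img, const float M[9], float shift)
    {
        for (int j = 0; j < H; j++)          // b: latitude  <-pi/2,+pi/2>
        for (int i = 0; i < W; i++)          // a: longitude <-pi,+pi>
        {
            float a = (2.0f * (float)M_PI * i) / W - (float)M_PI + shift;
            float b = ((float)M_PI * j) / H - 0.5f * (float)M_PI;
            // point on the unit sphere for (a,b)
            float vx = cosf(b) * cosf(a), vy = cosf(b) * sinf(a), vz = sinf(b);
            // transform into satellite coordinates
            float x = M[0]*vx + M[1]*vy + M[2]*vz;
            float y = M[3]*vx + M[4]*vy + M[5]*vz;
            float z = M[6]*vx + M[7]*vy + M[8]*vz;
            if (z <= 0.0f) continue;         // far side, not visible in photo
            // orthogonal projection: x,y in <-1,+1> -> photo pixel
            int px = (int)(0.5f * (+x + 1.0f) * (img.w - 1));
            int py = (int)(0.5f * (-y + 1.0f) * (img.h - 1));
            const uint8_t* p = &img.rgb[3 * (py * img.w + px)];
            float* q = &avg[3 * (j * W + i)];
            q[0] += p[0]; q[1] += p[1]; q[2] += p[2];
            cnt[j * W + i]++;
        }
    }

    // after all photos: restore the average color
    void finish()
    {
        for (int k = 0; k < W * H; k++)
            if (cnt[k])
                for (int c = 0; c < 3; c++)
                    avg[3*k + c] /= (float)cnt[k];
    }

Call addPhoto once per satellite image and then finish() to get the merged map; the validity and limb tests from the notes below would slot in right before the accumulation.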

[Note]

  • can check whether a copied pixel is valid (see the sketch after these notes):

    a pixel is captured during either day or night (use only the one you want, do not mix them together !!!); you can also detect clouds (I think gray / white-ish colors that are not snow) and ignore them.

  • don't let the colors overflow

    can use 3 separate textures r[][],g[][],b[][] instead of avg to avoid this

  • can ignore areas near the edges of the earth to avoid distortion

  • may apply lighting corrections

    use the time and a,b to normalize the lighting
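
As a minimal sketch of such per-pixel tests (all thresholds, and the sunDir vector that would come from the photo time, are just illustrative guesses):

    #include <algorithm>
    #include <cstdint>

    // daylight test: surface normal (vx,vy,vz) against the Sun direction
    bool isDaylit(float vx, float vy, float vz, const float sunDir[3])
    {
        float d = vx*sunDir[0] + vy*sunDir[1] + vz*sunDir[2];
        return d > 0.1f;                     // margin skips the terminator zone
    }

    // crude cloud test: bright and nearly gray (small channel spread)
    bool looksLikeCloud(uint8_t r, uint8_t g, uint8_t b)
    {
        int mx = std::max({ (int)r, (int)g, (int)b });
        int mn = std::min({ (int)r, (int)g, (int)b });
        return (mx > 180) && (mx - mn < 25); // snow needs extra care here
    }

    // limb test: reject samples too close to the disc edge (distortion)
    bool nearLimb(float x, float y)          // x,y in <-1,+1>
    {
        return (x*x + y*y) > 0.7f * 0.7f;    // keep only the inner part
    }

A sample that fails any of these tests is simply skipped, i.e. no avg / cnt update for it.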

Hope this helps ...

[Edit1] Orthogonal projection

so that it is clear what I mean by orthogonal projection:

Satellite photo texture (EUMETSAT)

this is the texture used (I could not find anything better and free on the Internet); I wanted to use a real satellite image, not some rendered visualization ...

orthogonal projection

this is my orthogonal projection app

  • the red, green, blue lines are the Earth coordinate system ( x,y,z axes)
  • the (red, green, blue)-and-white lines are the satellite projection coordinate system ( x,y,z axes)

the point is to convert an Earth vertex coordinate (vx,vy,vz) into satellite coordinates (x,y,z) . If z >= 0 , then it is a valid vertex for the processed texture, and the texture coordinates are computed directly from x,y without any perspective (orthogonal).

for example tx=0.5*(+x+1); ... if x is scaled to <-1,+1> and the used texture coordinate is tx <0,1> . The same goes for the y axis: ty=0.5*(-y+1); ... if y is scaled to <-1,+1> and the used texture coordinate is ty <0,1> (my camera has the y coordinate system inverted with respect to the texture matrix, hence the flipped sign on the y axis).

if z < 0 , then the vertex faces away from the satellite and is outside the photographed range, so ignore it ... As you can see in the image, the outer borders of the texture are distorted, so you should only use the inner part (for example 70% of the Earth image area). You can also apply some correction to the texture coordinates depending on the distance from the texture midpoint. When you have done this, simply merge all the satellite images into one image, and that's all.
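
Condensed into a small helper (assuming x,y,z are already in satellite coordinates scaled to <-1,+1> ):

    // direct transcription of the mapping above; returns false
    // when the vertex is on the far side and must be ignored
    bool vertexToTexCoord(float x, float y, float z, float& tx, float& ty)
    {
        if (z < 0.0f) return false;   // not visible in this photo
        tx = 0.5f * (+x + 1.0f);      // x: <-1,+1> -> <0,1>
        ty = 0.5f * (-y + 1.0f);      // y flipped to match the texture matrix
        return true;
    }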

[Edit2] Well, I played around with this a bit and found out the following:

  • the reverse projection correction does not work for my texture at all; I think it is possibly a post-processed image ...
  • correction based on the distance from the midpoint seems good, but the scale factor used is odd; I have no idea why it is multiply by 6 when it should be 4, I think ...

     tx=0.5*(+(asin(x)*6.0/M_PI)+1); ty=0.5*(-(asin(y)*6.0/M_PI)+1); 
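
The same helper with this empirical asin correction dropped in (the 6.0 factor is taken verbatim from the experiment above, not derived):

    #include <cmath>

    bool vertexToTexCoordAsin(float x, float y, float z, float& tx, float& ty)
    {
        if (z < 0.0f) return false;
        tx = 0.5f * (+(asinf(x) * 6.0f / (float)M_PI) + 1.0f);
        ty = 0.5f * (-(asinf(y) * 6.0f / (float)M_PI) + 1.0f);
        // with this factor tx,ty leave <0,1> once |x| or |y| exceeds 0.5
        // ( asin(0.5) = pi/6 ), so outer samples must still be rejected:
        return (tx >= 0.0f) && (tx <= 1.0f) && (ty >= 0.0f) && (ty <= 1.0f);
    }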

corrected nonlinear projection

  • corrected nonlinear projection (asin)

corrected nonlinear projection edge zoom

  • corrected nonlinear projection, edge zoom
  • the distortion is much smaller than without the asin texture-coordinate correction