I would make a rectangular texture out of it.
You will need two 2D textures/arrays: one `avg` for summing the `r,g,b` color, and one `cnt` for counting the samples. Also, I am not sure I would use OpenGL/GLSL for this; it seems to me plain C/C++ would be better suited.
I would do it like this (a minimal C++ sketch of the whole pass follows the list):
1. clear the target textures (`avg[][]=0`, `cnt[][]=0`)
2. get the satellite position/direction and the time of the photo

   From the position and direction, create a transform matrix that projects the Earth the same way as in the photo. From the time, determine the Earth's rotation shift.
3. loop through the whole surface of the Earth

   This is just two nested loops: `a` - rotation and `b` - distance from the equator. Compute `x,y,z` from `a,b` and transform by the matrix + the rotation shift (around the `a` axis). You could also do it backwards, `a,b,z = f(x,y)`, which is more complicated, but faster and more accurate. You can also interpolate `x,y,z` between neighboring `(pixels/areas)[a][b]`.
4. add the pixel

   If `x,y,z` is on the front side (`z>0` or `z<0`, depending on the direction of the camera `Z` axis), then:

   `avg[a][b]+=image[x][y]; cnt[a][b]++;`
5. end of the nested loops from step 3
6. go back to step 2 with the next photo
7. loop through the whole `avg` texture to restore the average color

   `if (cnt[a][b]) avg[a][b]/=cnt[a][b];`
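To make the steps concrete, here is a minimal C++ sketch of the whole pass. It is just an illustration under simplifying assumptions, not a finished implementation: the `Photo` struct, its `m[3][3]` world-to-camera rotation (which would also have to absorb the rotation shift from the photo time), and the orthogonal camera model inside `project()` are all stand-ins for whatever data and camera you really have.

```cpp
#include <math.h>
#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

const int NA = 2048;                        // a - rotation resolution
const int NB = 1024;                        // b - distance-from-equator resolution

float avg[NA][NB][3];                       // r,g,b sums (float avoids color overflow)
int   cnt[NA][NB];                          // sample count per texel

struct Photo                                // hypothetical per-photo data
{
    int w, h;                               // image resolution
    const unsigned char *rgb;               // 3 bytes per pixel, row-major
    double m[3][3];                         // world -> camera rotation built from the
};                                          // satellite position/direction + time shift

// orthogonal projection of a unit-sphere point into photo pixel coordinates;
// z is returned so the caller can reject the far side of the Earth
bool project(const Photo &p, double sx, double sy, double sz, int &x, int &y, double &z)
{
    double cx = p.m[0][0]*sx + p.m[0][1]*sy + p.m[0][2]*sz;   // camera x in <-1,+1>
    double cy = p.m[1][0]*sx + p.m[1][1]*sy + p.m[1][2]*sz;   // camera y in <-1,+1>
    z         = p.m[2][0]*sx + p.m[2][1]*sy + p.m[2][2]*sz;
    x = (int)(0.5*(+cx + 1.0)*(p.w - 1));                     // no perspective, just scale
    y = (int)(0.5*(-cy + 1.0)*(p.h - 1));                     // y flipped to match texture
    return (x >= 0) && (x < p.w) && (y >= 0) && (y < p.h);
}

void build_texture(const Photo *photo, int n)
{
    int a, b, i, x, y; double z;
    for (a = 0; a < NA; a++)                                  // #1 clear target textures
        for (b = 0; b < NB; b++)
        { avg[a][b][0] = avg[a][b][1] = avg[a][b][2] = 0.0f; cnt[a][b] = 0; }

    for (i = 0; i < n; i++)                                   // #6 next photo
        for (a = 0; a < NA; a++)                              // #3 whole surface: a,b loops
            for (b = 0; b < NB; b++)
            {
                double lon = (2.0*M_PI*a)/NA;                 // a -> rotation angle
                double lat = M_PI*((double)b/NB - 0.5);       // b -> angle from equator
                double sx = cos(lat)*cos(lon);
                double sy = cos(lat)*sin(lon);
                double sz = sin(lat);
                if (!project(photo[i], sx, sy, sz, x, y, z)) continue;
                if (z <= 0.0) continue;                       // #4 front side only
                const unsigned char *px = photo[i].rgb + 3*(y*photo[i].w + x);
                avg[a][b][0] += px[0];                        // #4 add pixel
                avg[a][b][1] += px[1];
                avg[a][b][2] += px[2];
                cnt[a][b]++;
            }                                                 // #5 end of nested loops

    for (a = 0; a < NA; a++)                                  // #7 restore average color
        for (b = 0; b < NB; b++)
            if (cnt[a][b])
            {
                avg[a][b][0] /= cnt[a][b];
                avg[a][b][1] /= cnt[a][b];
                avg[a][b][2] /= cnt[a][b];
            }
}
```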
[Note]
You can check each pixel before copying it:

- you can tell whether it was taken during the day or at night (use only the one you want, and do not mix the two together!!!); you can also detect clouds (gray/white-ish colors that are not snow, I think) and ignore them (see the sketch after these notes)
- do not overflow the colors; you can use 3 separate textures `r[][],g[][],b[][]` instead of `avg` to avoid this
- you can ignore areas near the edges of the Earth disc to avoid distortion
- you may apply lighting corrections, using the time and `a,b` to normalize the lighting
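As an illustration of the day/night and cloud checks, a filter like the following could run before the `avg`/`cnt` update in the sketch above. The thresholds here are made-up values that would need tuning against real imagery:

```cpp
// hypothetical pixel filter: returns true if the pixel should be ignored;
// all thresholds are guesses, tune them on real data
bool skip_pixel(unsigned char r, unsigned char g, unsigned char b)
{
    int lum = (r + g + b)/3;
    int mx = r > g ? (r > b ? r : b) : (g > b ? g : b);
    int mn = r < g ? (r < b ? r : b) : (g < b ? g : b);
    if (lum < 24) return true;                      // too dark: night side, do not mix with day
    if ((lum > 140) && (mx - mn < 16)) return true; // bright and nearly gray: likely a cloud
    return false;
}
```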
Hope this helps ...
[Edit1] orthogonal projection
So, to make it clear what I mean by orthogonal projection:

This is the texture I used (I could not find anything better and free on the Internet); I wanted to use a real satellite image, not some rendered one ...

This is my orthogonal projection app:

- the red, green, blue lines are the Earth's coordinate system (`x,y,z` axes)
- the (red, green, blue)-white lines are the satellite projection coordinate system (`x,y,z` axes)
The point is to convert the Earth vertex coordinates `(vx,vy,vz)` into the satellite coordinates `(x,y,z)`. If `z >= 0`, it is a valid vertex for the processed texture, so compute the texture coordinates directly from `x,y` without any perspective (orthogonal).

For example `tx=0.5*(+x+1);` ... if `x` is scaled to `<-1,+1>` and the texture uses `tx` in `<0,1>`. The same goes for the y axis: `ty=0.5*(-y+1);` ... if `y` is scaled to `<-1,+1>` and the texture uses `ty` in `<0,1>` (my camera has a y coordinate system inverted relative to the texture matrix, hence the flipped sign on the y axis).

If `z < 0`, you are processing a vertex outside the texture range, so ignore it ... As you can see in the image, the outer borders of the texture are distorted, so use only the inside (for example, 70% of the Earth image area). You can also apply some correction of the texture coordinates based on the distance from the texture midpoint (a small helper illustrating this test follows below). When that is done, simply blend all the satellite images into one image, and that's it.
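A minimal sketch of that test, assuming `x,y,z` are already in the satellite camera space scaled to `<-1,+1>`; the `0.7` rim cutoff is just one way to read the "70% of the image area" suggestion, so pick whatever radius works for your images:

```cpp
// convert a camera-space vertex to texture coordinates (orthogonal, no perspective);
// returns false for the back side and for the distorted outer rim of the disc
bool ortho_texcoord(double x, double y, double z, double &tx, double &ty)
{
    if (z < 0.0) return false;               // back side: outside the texture range
    if (x*x + y*y > 0.7*0.7) return false;   // keep only the inner part of the disc
    tx = 0.5*(+x + 1.0);                     // x in <-1,+1> -> tx in <0,1>
    ty = 0.5*(-y + 1.0);                     // y sign flipped to match the texture matrix
    return true;
}
```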
[Edit2] Well, I played around with this a bit more and found out the following:
- the inverse projection correction does not work for my texture at all; I think it is possible this is a post-processed image ...
- the correction based on the distance from the midpoint seems good, but the scale factor used is odd; I have no idea why it is a multiplication by 6 when it should be 4, I think ...
```cpp
tx = 0.5*(+(asin(x)*6.0/M_PI)+1.0);
ty = 0.5*(-(asin(y)*6.0/M_PI)+1.0);
```
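Folded into the `ortho_texcoord` helper shape from [Edit1], the corrected mapping would look like this; note that the `6.0/M_PI` scale is the empirically found factor discussed above, not a derived constant:

```cpp
#include <math.h>

// asin-corrected texture coordinates; 6.0/M_PI found by trial
// (4.0/M_PI would be the expected scale)
bool asin_texcoord(double x, double y, double z, double &tx, double &ty)
{
    if (z < 0.0) return false;                // back side: outside the texture range
    tx = 0.5*(+(asin(x)*6.0/M_PI) + 1.0);
    ty = 0.5*(-(asin(y)*6.0/M_PI) + 1.0);
    return true;                              // tx,ty can still leave <0,1> near the
}                                             // rim, so clip or ignore those texels
```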

- corrected nonlinear projection (asin)

- adjusted nonlinear projection scale
- the distortion is much smaller than without the asin correction of the texture coordinates