How to create texture mapping images?

I want to wrap images onto 3D objects. To keep everything simple and fast, instead of using (and learning) a 3D library, I want to use mapping (lookup) images. Image mapping is used this way:

(image: mapping usage)

So you create the map images once for each object and reuse the same mapping for every image you want to wrap.

My question is: how can I generate such map images, given the 3D model? Since I do not know the terminology, my searches have failed me. Sorry if I am using the wrong jargon.

Below you can see a description of the workflow.
(workflow diagram image)
I have a 3D model of the object and an input image; I want to generate the map images that let me produce the textured output image.

I don’t even know where to start, so any pointers are appreciated.

Additional Information

My initial idea was to start from identity mappings (see below) and warp them in an external program. I created horizontal and vertical gradient images in Photoshop to check whether the mapping works with Photoshop-made images. The result does not look very good. I did not have high hopes, but it was worth a try.

input
(image)

maps (x and y): they are identity maps, so they do nothing but resize the image.
(images: x map and y map)

result
(image)
As you can see, there are many artifacts. The custom map images that I generated by warping the gradients look even worse.

Here is more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps

I am using the OpenCV remap() function for the mapping.
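To make this concrete, here is a minimal sketch (my own, not code from the question) of what the identity maps look like and how remap() applies them. map_x and map_y hold, for every output pixel, the source coordinate to sample, so pure horizontal/vertical gradients simply reproduce the input:

    #include <opencv2/opencv.hpp>
    using namespace cv;

    int main()
    {
        Mat src = imread("input.png");   // hypothetical input image

        // identity maps: every output pixel samples its own position in src
        Mat map_x(src.size(), CV_32FC1);
        Mat map_y(src.size(), CV_32FC1);
        for (int y = 0; y < src.rows; ++y)
            for (int x = 0; x < src.cols; ++x)
            {
                map_x.at<float>(y, x) = (float)x;   // horizontal gradient
                map_y.at<float>(y, x) = (float)y;   // vertical gradient
            }

        // with identity maps the result is just a copy of the input;
        // warping the maps (e.g. in Photoshop) warps the output accordingly
        Mat dst;
        remap(src, dst, map_x, map_y, INTER_LINEAR);
        imwrite("result.png", dst);
        return 0;
    }

Note that if the maps come from 8-bit gradient images, the coordinates are quantized to 256 levels, which is one likely source of the artifacts above; float (or 16-bit) maps avoid that.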

2 answers

If I understand you right, you want to do it all in 2D?

Calling warpPerspective() for each of your cube surfaces will be much more successful than using the remap() function.

Pseudo-code outline:

    // for each surface:
    // get the corresponding src and dst polygons.
    // the src one is your texture image, so that's:
    vector<Point2f> p_src(4), p_dst(4);
    p_src[0] = Point2f(0, 0);                          // top-left
    p_src[1] = Point2f(src.cols - 1, 0);               // top-right
    p_src[2] = Point2f(src.cols - 1, src.rows - 1);    // bottom-right
    p_src[3] = Point2f(0, src.rows - 1);               // bottom-left

    // the dst poly is the one you want textured, a 3d->2d projection of the
    // cube surface. sorry, you've got to do that on your own ;(
    // the corners must be listed in the same order as in p_src.
    // let's say you've come up with this for the cube top:
    p_dst[0] = Point2f(15, 15);
    p_dst[1] = Point2f(44, 19);
    p_dst[2] = Point2f(56, 30);
    p_dst[3] = Point2f(33, 44);

    // now you need the projection matrix to transform from one to the other:
    Mat proj = getPerspectiveTransform(p_src, p_dst);

    // finally, warp your texture to the dst polygon
    // (dst is your output image, already allocated to the desired size):
    warpPerspective(src, dst, proj, dst.size());
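A possible follow-up, as a sketch of my own (continuing the snippet above, with the same src, proj and p_dst): to combine several warped faces into one output image, each destination polygon can be rasterized into a mask with fillConvexPoly and only those pixels copied onto a shared canvas:

    // "canvas" is an assumed name for the final output image
    Mat canvas(480, 640, CV_8UC3, Scalar::all(0));

    Mat face;
    warpPerspective(src, face, proj, canvas.size());

    // integer version of the destination polygon, for rasterization
    vector<Point> poly;
    for (const Point2f& p : p_dst)
        poly.push_back(Point(cvRound(p.x), cvRound(p.y)));

    Mat mask(canvas.size(), CV_8UC1, Scalar::all(0));
    fillConvexPoly(mask, poly, Scalar(255));
    face.copyTo(canvas, mask);   // paste only the pixels of this face

Repeating this per face builds up the full textured view.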

If you can get hold of the book "Learning OpenCV", this is described there.

A last word of warning, since you are complaining about artifacts: yes, it will all look pretty crappy. "Real" 3D engines do a lot of work here (subpixel UV mapping, filtering, mipmapping, etc.), so if you want it to look beautiful, think about using the "real" thing.

Btw, there is good OpenGL support built into OpenCV.


To achieve what you are trying to do, you would need to render the 3D models to UV textures. It is easier to learn how to render 3D than to do that. Moreover, there are many shortcomings in your approach: lighting is difficult, and depth-buffer problems will be abundant.

Assuming all your objects are viewed from only one angle, you would need to render each of them to 3 textures:

UV map
Normal map
Depth map (to correct the depth buffer)

You would still have to do the shading to draw them as your object, and I don't even know how the depth-buffer part would be done; I just know that it can be done.
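If you do go that route, here is a minimal sketch of my own of how a baked UV map could be used without a 3D engine, under the assumption that the UV map is a float image "uv.exr" with u in channel 0 and v in channel 1, both in [0, 1], and that OpenCV was built with EXR support. Scaling the UV values into texture pixel coordinates gives exactly the two maps remap() expects:

    #include <opencv2/opencv.hpp>
    using namespace cv;

    int main()
    {
        Mat uv  = imread("uv.exr", IMREAD_UNCHANGED);   // baked per-pixel (u, v), assumed layout
        Mat tex = imread("texture.png");                // the image to wrap onto the object

        std::vector<Mat> ch;
        split(uv, ch);   // ch[0] = u, ch[1] = v

        Mat map_x, map_y;
        ch[0].convertTo(map_x, CV_32FC1, tex.cols - 1);   // u -> x pixel coordinate in tex
        ch[1].convertTo(map_y, CV_32FC1, tex.rows - 1);   // v -> y pixel coordinate in tex

        Mat out;
        remap(tex, out, map_x, map_y, INTER_LINEAR);
        imwrite("textured.png", out);
        return 0;
    }

This only handles the texture lookup; the shading from the normal map and the depth-buffer correction would still be separate steps, as described above.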

So, to avoid learning 3D, you would need to learn all the complex parts of 3D rendering. That doesn't seem like the easier route...

