I am currently taking a C++ Game Libraries class, and for it our project has been building a renderer that supports a fair amount of functionality. For the current lab, our instructor gave us a tutorial on loading a BMP into OpenGL manually and applying it to our geometry.
Tutorial: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-5-a-textured-cube/
After following the tutorial step by step, my textures show some interesting behavior. I have gone to other classmates, upperclassmen, and several instructors, and none of them know what is going on. Considering that almost everyone's code for this lab is identical and I am the only one with this problem, I can't help but be confused.
I am using the OBJ and texture below. I convert the OBJ to binary with an OBJ converter I wrote myself; my renderer takes that binary and sends the data to OpenGL vertex buffers.
OBJ and Texture: http://www.models-resource.com/playstation_2/ratchetclank/model/6662/
A friend and I use the same binary format, so I gave him a copy of my binary to verify that the UVs are correct. His engine renders a perfectly textured chicken, while mine displays a chicken that looks as if the texture was compressed horizontally to about 1/16 of the model's length and then repeated a number of times. I would post images, but I'm new here and don't have enough reputation for that. Over the weekend I will do my best to earn the reputation, because I really think being able to see the problem would help.
I would post my source code, but this project is approaching roughly 16,000 lines, and I doubt anyone wants to dig through that to find someone else's bug.
Any suggestions would be helpful; I am mostly curious what mistakes are commonly made when working with OpenGL textures, or with .bmp files in general.
Thank you!
// ----- Edit One ----- //
My friend's result:

My result:

I'm afraid I am not allowed to use other libraries; I probably should have mentioned that in my original post.
Here is the code I use to load the BMP. I heard from one of the upperclassmen at my school that I am ignoring something called bit depth. I know the tutorial's loader is pretty rough, and I would rather learn to do this properly than just hack around it. If anyone has a good resource on the subject, I would really appreciate being pointed in that direction.
unsigned char header[54];       // BMP files begin with a 54-byte header
unsigned int dataPos;           // offset where the pixel data begins
unsigned int width, height;
unsigned int imageSize;         // width * height * 3 (24 bpp assumed)
unsigned char* data;

FILE* file = fopen(filePath, "rb");
if (!file) { printf("\nImage could not be opened"); exit(1); }

if (fread(header, 1, 54, file) != 54) { printf("\nNot a correct BMP file"); exit(1); }
if (header[0] != 'B' || header[1] != 'M') { printf("\nNot a correct BMP file"); exit(1); }

// Pull the fields we need out of the header.
dataPos   = *(int*)&(header[0x0A]);
imageSize = *(int*)&(header[0x22]);
width     = *(int*)&(header[0x12]);
height    = *(int*)&(header[0x16]);

// Some files leave these fields zeroed; fall back to guesses.
if (imageSize == 0) imageSize = width * height * 3;   // assumes 24 bpp, no row padding
if (dataPos == 0)   dataPos = 54;

// Note: this reads pixel data from wherever the header read left off,
// which is only correct when dataPos really is 54.
data = new unsigned char[imageSize];
fread(data, 1, imageSize, file);
fclose(file);

glGenTextures(1, &m_textureID);
glBindTexture(GL_TEXTURE_2D, m_textureID);

// BMP stores pixels as BGR, so the upload uses GL_BGR.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_BGR, GL_UNSIGNED_BYTE, data);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
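Following up on the bit-depth comment: from what I've read about the format, the BMP info header stores bits-per-pixel as a 16-bit value at offset 0x1C, and each pixel row is padded up to a multiple of 4 bytes, which width * height * 3 ignores. This is the kind of check I'm experimenting with (my reading of the format spec, not yet verified in my renderer):

// Bits-per-pixel lives at offset 0x1C of the file (16-bit field in the
// BITMAPINFOHEADER); the loader above silently assumes 24.
unsigned short bitsPerPixel = *(unsigned short*)&(header[0x1C]);
if (bitsPerPixel != 24) { printf("\nExpected a 24-bit BMP, got %d bpp", (int)bitsPerPixel); exit(1); }

// Each BMP row is padded to a multiple of 4 bytes, so for widths that
// are not multiples of 4, width * 3 per row under-counts the data.
unsigned int rowSize = ((width * 3) + 3) & ~3u;
imageSize = rowSize * height;

// OpenGL's default unpack alignment is also 4, which happens to match
// BMP row padding; setting it explicitly documents the assumption.
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);

If I've misread the spec here, corrections are welcome.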
I am currently using shaders: I have both a fragment and a vertex shader, identical to the ones described in the tutorial, and I have checked that each of them compiles.
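The compile check itself is just the standard status and info-log query, roughly like this (variable names are mine):

// After glCompileShader(shader): ask GL whether compilation succeeded
// and print the info log if it did not.
GLint status = GL_FALSE;
glGetShaderiv(shader, GL_COMPILE_STATUS, &status);
if (status != GL_TRUE)
{
    char log[1024] = { 0 };
    glGetShaderInfoLog(shader, sizeof(log), NULL, log);
    printf("\nShader failed to compile: %s", log);
    exit(1);
}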
// ----- Edit Two ----- //
So, I took durhass's suggestion and set my color to vec3(0.0, uv.x, uv.y), where uv is the vec2 holding my texture coordinates.
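Concretely, the debug fragment shader is just the tutorial's with the texture lookup swapped for the UV write (paraphrased from memory; UV is the tutorial's name for the interpolated coordinate):

// Debug fragment shader, stored the same way my loader feeds GLSL to GL.
const char* debugFragSrc = R"(
    #version 330 core
    in vec2 UV;      // interpolated texture coordinate from the vertex shader
    out vec3 color;
    void main()
    {
        // Visualize the UVs directly: green = U, blue = V.
        color = vec3(0.0, UV.x, UV.y);
    }
)";

And this is what I get: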

So, I think I see the root of the problem: I don't believe I am storing my UVs correctly in my GL buffer. I don't think it's a problem with the UVs in the binary itself, given that the same binary works fine in my friend's engine. I'll look into it; thanks for the suggestion, this could lead to a fix!
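In case it helps anyone later, here is what I'm going over: if the size, type, stride, or offset handed to glVertexAttribPointer doesn't match the binary's layout, the shader reads mangled UVs even when the buffer contents are fine. For tightly packed float pairs the setup should look roughly like this (buffer and variable names are mine, simplified from my engine):

// Upload the UVs exactly as they sit in the binary: tightly packed float pairs.
glBindBuffer(GL_ARRAY_BUFFER, m_uvBufferID);
glBufferData(GL_ARRAY_BUFFER, vertexCount * 2 * sizeof(float), uvData, GL_STATIC_DRAW);

// Attribute 1 carries the UVs: 2 floats per vertex, not normalized,
// stride 0 (tightly packed), no offset. A wrong type or stride here
// would stretch and repeat the texture much like what I'm seeing.
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 0, (void*)0);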