Is there any uncharted territory in computer graphics?

It seems to me that everything is possible with computer graphics. It seems that we can depict cloth, water, skin, anything, completely convincingly.

Are there areas that are still hard, or is the focus now just on finding faster algorithms and reducing rendering time?

+7
graphics
12 answers

Raster graphics is basically a huge collection of hacks. Raytracing and similar methods are more "correct": you get effects like reflection and refraction essentially for free, and diffuse global illumination through radiosity. Real-time raytracing would be HUGE for games.
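As a minimal sketch (not any particular renderer's code) of why reflection comes "for free" in a ray tracer: the mirror direction is a one-line vector identity, and the reflected ray is simply traced again like any other ray.

```python
def reflect(d, n):
    """Mirror-reflect incoming direction d about unit surface normal n:
    r = d - 2*(d . n)*n. In a ray tracer the result is just fed back
    into the same trace routine -- no screen-space trickery needed."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling down-right hits a floor with normal (0, 1, 0)
# and bounces up-right.
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```

Refraction works the same way with Snell's law in place of the reflection identity, whereas a rasterizer typically needs render-to-texture hacks for both.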

+3
  • Water
  • Fire
  • People
  • Doing all this in real time
  • Physics (somewhat related to the field of computer graphics)

I have not seen digital people who are completely convincing. The same goes for water and fire at any significant scale.

Take a look at some of the latest developments in computer game physics as examples: destructible buildings in Red Faction: Guerrilla, material-based destruction in The Force Unleashed, etc. Most computer graphics work revolves around video games and movies, where good enough is good enough. There are lots of clever tricks. There is plenty of room to improve efficiency, scalability, fidelity and realism.

+8

Quite a lot is still not possible in graphics if you want to do it right. Cloth, water and skin are hacked to hell and back to achieve real-time frame rates. We still cannot do what is probably the most fundamental effect of all: proper lighting.

+6

I have been working in video games for 8 years, and I have seen how graphics in games and films get better every year.

In my opinion, we do not need to worry about making graphics look right; with the methods we have now and ever-increasing processing power, that will not be a problem... for still pictures.

The problem I see in games and movies right now is (imho) character animation, which just doesn't look right. Characters in video games and films are rendered as beautifully as they can be, even with motion capture. The characters are missing something that makes them look natural: breathing patterns, random micro-movements of the body, the way someone randomly blinks faster or slower... I can't put my finger on exactly what it is; it just doesn't look natural.

So, I think the next area of research should be (human) movement and animation, to make things look right.

+5

I can't think of anything harder to do than a convincing human, and this was done (IMHO) in "The Curious Case of Benjamin Button". Check out this site about the making of BB. The facial graphics in the film were computer-generated, but animating a face is still a task that cannot yet be done by a computer alone.

+3

I agree with Stephen and Eric. We are deep in the Uncanny Valley when it comes to people.

And jalf is right when he points out that many things are still smoke and mirrors.

+3

3D
Not flat pictures that look 3D, but actual 3D. As soon as we get 2D down, we will do it all over again in the third dimension. We are just starting to see quite interesting things in theaters, as well as some very interesting new products that no longer require special glasses.

+3

About raytracing: raytracing is cool, but standard raytracing does not give you realistic lighting, as the rays are cast from the camera (the position of your eyes as you sit in front of the monitor) through the viewing plane (your computer screen) to see where they end up.

In the real world, it does not work that way. You do not emit radar/sonar rays from your eyes and check what they hit; instead, other objects emit energy, and sometimes that energy ends up on your retina.

Thus, the correct way to calculate lighting would be something like photon mapping, where each light source emits energy that travels through the medium (air, water) and reflects off / refracts through materials.
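A toy sketch of this light-first view (assumptions: a made-up `trace_photon` helper, and a precomputed list of surface reflectances standing in for real geometry): the photon deposits energy at every hit, and each bounce scales its remaining power by the surface's albedo.

```python
def trace_photon(power, albedos):
    """Forward light transport in miniature: a photon leaves the light
    carrying some power; at each surface hit it deposits its current
    power (what a photon map would later look up) and continues with
    power scaled by the surface reflectance (albedo)."""
    deposits = []
    for albedo in albedos:
        deposits.append(power)      # record the hit
        power *= albedo             # the bounced photon carries less energy
    return deposits

# A photon hits an 80%-reflective wall, then a 50%-reflective floor.
print(trace_photon(1.0, [0.8, 0.5]))  # [1.0, 0.8]
```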

Think about it: shooting a ray from the camera through a pixel on the screen gives you a single direction along which to check the light intensity, while in reality light can arrive from many different angles and still end up at that same pixel. Thus, "standard" raytracing does not give you light-scattering effects unless you use a special hack to account for them. And aren't hacks the exact reason people want a method other than polygon rasterization?
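To make the "one direction per pixel" point concrete, here is a sketch of how a hypothetical pinhole camera generates that single primary ray (the field of view and screen mapping are illustrative assumptions, not any particular engine's convention):

```python
import math

def camera_ray(px, py, width, height, fov_deg=90.0):
    """The one direction 'standard' raytracing samples for pixel (px, py):
    a unit ray from the eye at the origin through the pixel centre on an
    image plane at z = -1. Light reaching that pixel from any other angle
    is simply never considered."""
    aspect = width / height
    half = math.tan(math.radians(fov_deg) / 2.0)
    x = (2.0 * (px + 0.5) / width - 1.0) * half * aspect   # pixel -> plane
    y = (1.0 - 2.0 * (py + 0.5) / height) * half
    length = math.sqrt(x * x + y * y + 1.0)
    return (x / length, y / length, -1.0 / length)

# The centre pixel of a 9x9 image looks straight down the -z axis.
print(camera_ray(4, 4, 9, 9))  # (0.0, 0.0, -1.0)
```

Capturing scattering means integrating over many incoming directions per pixel instead of this one sample, which is exactly where the hacks (or Monte Carlo sampling) come in.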

Raytracing is not the ultimate solution.

The only real solution is the infinite process in which lights emit energy that bounces around the scene and, if you're lucky, some of it lands on the camera. Since an infinite process is rather hard to simulate, we will need to approximate it at some point. Games use hacks to make things look good, but in the end every rasterizer / renderer / tracer / whatever strategy has to cut the process off somewhere, and that cut-off is itself a hack.
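A toy illustration of that unavoidable cut-off (assumption: an imaginary scene where every bounce returns exactly half the incoming light): the untruncated bounce series would sum to 2.0, and the recursion depth cap is precisely the kind of hack every renderer ends up with.

```python
def shade(depth, max_depth, reflectance=0.5, emitted=1.0):
    """The 'infinite process' in miniature: each bounce adds another
    reflectance-scaled contribution. Real renderers truncate the series
    at max_depth; deeper bounces are silently dropped."""
    if depth >= max_depth:
        return 0.0                  # the cut-off: energy beyond here is lost
    return emitted + reflectance * shade(depth + 1, max_depth,
                                         reflectance, emitted)

print(shade(0, 4))    # 1 + 0.5 + 0.25 + 0.125 = 1.875
print(shade(0, 16))   # already within ~0.00003 of the true limit 2.0
```

The saving grace is that the dropped tail shrinks geometrically, so a modest depth cap is usually visually indistinguishable from the exact answer.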

The important question is: does it really matter? Are we aiming for a 100% simulation of real life, or is it good enough to produce a picture that looks 100% real, regardless of the technique used?

If you cannot tell whether an image is real or CGI, does it matter which method or hacks were used?

+3

Some believe that computer vision is the frontier of computer graphics. It is basically CG in reverse: instead of going from model to images, you go from images to model. Computer vision is a young field with many open problems.

+2

Tons of things are still difficult or very slow.

Try combining transparent objects with fogging, for example.

+1

An API that insulates you from the math without getting in the way of your programming.

+1

The challenge, it seems to me, is efficient modeling of surfaces WITHOUT POLYGONS. Polygons are rasterized surfaces displayed on rasterized screens.

0
