When you compose your 3D scene, what you see on the screen is already relatively close to the final result obtained after rendering calculations.
During the composition
Final result
Compared to pre-computed rendering, the calculation time for a single image/frame is measured in minutes. The calculations require relatively few server resources, so it is much cheaper and accessible to many more people.
The final result after computation, whether for a single image or a video, is very close to what your scene looked like when you composed it. There are very few surprises and very few changes to make. And if there are, the computing times are so short that they have very little impact on your production speed.
When composing, what you see on the screen is only a rough preview of what the final result will look like.
During the composition
Final result
Depending on the complexity of the scene, the calculation time for a single image/frame can take up to a day. The render servers required for pre-computed rendering come at a cost that can have a fairly large impact on your production budget.
Once the long calculation times are complete and you have a definitive preview of your image/video, if you are not satisfied with the result, you must repeat the process until you are. This means spending a lot of time upstream, during pre-production, to avoid as many changes as possible.
The use of physics in animation makes character movements much more realistic. The same applies to the behavior of objects, during a fall for example. Physics can also be used for many other things, such as hair, fur, and vehicles: anything that can be animated and is subject to the laws of physics.
In this example, the lantern is set up to move realistically, according to the laws of physics, when a force is applied. The swinging of the object takes into account its weight and its center of gravity.
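To make that swinging concrete, here is a minimal sketch of the behavior, written as a simple damped pendulum: an object of a given mass hangs below a pivot, gravity pulls its center of gravity back under the pivot, and a single push sets it rocking. The mass, arm length, damping value, and the `step` function are illustrative assumptions, not taken from any particular engine or scene.

```python
import math

# Minimal sketch of the swinging-lantern idea: a hanging object treated as a
# rigid pendulum. All values below are illustrative assumptions.

MASS = 2.0          # kg, mass of the lantern
ARM = 0.5           # m, distance from the pivot to the centre of gravity
DAMPING = 0.3       # simple air/joint friction coefficient
GRAVITY = 9.81      # m/s^2
DT = 1.0 / 60.0     # one simulation step per frame at 60 fps

# Moment of inertia of a point mass at the centre of gravity (simplified).
INERTIA = MASS * ARM ** 2


def step(angle, angular_velocity, external_torque=0.0):
    """Advance the pendulum by one frame using semi-implicit Euler."""
    # Gravity pulls the centre of gravity back under the pivot.
    gravity_torque = -MASS * GRAVITY * ARM * math.sin(angle)
    damping_torque = -DAMPING * angular_velocity
    angular_acceleration = (gravity_torque + damping_torque + external_torque) / INERTIA
    angular_velocity += angular_acceleration * DT
    angle += angular_velocity * DT
    return angle, angular_velocity


if __name__ == "__main__":
    angle, velocity = 0.0, 0.0
    for frame in range(180):                 # simulate 3 seconds
        push = 1.5 if frame == 0 else 0.0    # a single push on the first frame
        angle, velocity = step(angle, velocity, push)
        if frame % 30 == 0:
            print(f"t={frame * DT:.2f}s  angle={math.degrees(angle):6.2f} deg")
```

In a production tool, a rigid-body solver performs this same kind of per-frame integration for every simulated object, so the animator only has to apply a force and let the simulation do the rest.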
The process of scanning an object to recover its volume and texture is called "photogrammetry".
The artificial intelligence that we developed allows us, for example, to "de-light" the textures (remove the lighting baked into the scan) and automatically reapply them in the right place on the 3D volume of the object.
In the case of a more generic texture, like that of a stone, the texture of one stone can also be applied automatically to another while respecting its volume and relief.
The object will then react correctly to the light it is exposed to.
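The de-lighting itself is done by a trained model, which is not reproduced here. As a rough illustration of the underlying idea, the sketch below separates a scanned texture into a slowly varying shading component and the surface detail, then divides the shading out; the `delight` function, the blur radius, and the synthetic test texture are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Very simplified "de-lighting" sketch: estimate the low-frequency lighting
# baked into a scanned texture and divide it out, leaving an approximation of
# the flat albedo. The real pipeline uses a trained model instead.

def delight(texture, sigma=32):
    """Remove smooth lighting variation from a float RGB texture in [0, 1]."""
    # Estimate the slowly varying shading per channel with a large blur.
    shading = gaussian_filter(texture, sigma=(sigma, sigma, 0))
    # Dividing out the shading keeps the high-frequency surface detail.
    albedo = texture / np.clip(shading, 1e-4, None)
    # Re-normalise to the average brightness of the original texture.
    albedo *= texture.mean() / max(albedo.mean(), 1e-4)
    return np.clip(albedo, 0.0, 1.0)


if __name__ == "__main__":
    # Synthetic 256x256 texture: fine noise (detail) multiplied by a smooth
    # vertical gradient standing in for baked-in lighting.
    rng = np.random.default_rng(0)
    detail = 0.5 + 0.2 * rng.standard_normal((256, 256, 3))
    light = np.linspace(0.3, 1.0, 256)[:, None, None]
    scanned = np.clip(detail * light, 0.0, 1.0)

    flat = delight(scanned)
    print(f"before: top={scanned[:32].mean():.2f}  bottom={scanned[-32:].mean():.2f}")
    print(f"after:  top={flat[:32].mean():.2f}  bottom={flat[-32:].mean():.2f}")
```

A real de-lighting model also has to handle cast shadows, color bleeding, and specular highlights, which this simple division cannot.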
Artificial intelligence is also used in animation. If we take the example of a character who walks and then accelerates into a run, the artificial intelligence predicts how the animation should unfold so that the transition looks natural.
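The sketch below is not that AI model; it shows the classical hand-tuned alternative it replaces, a linear blend between a walk cycle and a run cycle driven by the character's current speed. The joint names, poses, and speed thresholds are illustrative assumptions.

```python
# Classical walk-to-run blending: a weight between 0 (walk) and 1 (run) is
# derived from the character's speed and used to interpolate the two poses.
# All values below are illustrative assumptions.

WALK_SPEED = 1.4   # m/s at which the walk cycle looks natural
RUN_SPEED = 4.0    # m/s at which the run cycle looks natural

# Toy "poses": one rotation value per joint, in degrees.
WALK_POSE = {"hip": 20.0, "knee": 35.0, "ankle": 10.0}
RUN_POSE = {"hip": 45.0, "knee": 70.0, "ankle": 25.0}


def blended_pose(speed):
    """Interpolate between the walk and run poses according to current speed."""
    t = (speed - WALK_SPEED) / (RUN_SPEED - WALK_SPEED)
    t = min(max(t, 0.0), 1.0)  # clamp: pure walk below WALK_SPEED, pure run above RUN_SPEED
    return {joint: (1 - t) * WALK_POSE[joint] + t * RUN_POSE[joint]
            for joint in WALK_POSE}


if __name__ == "__main__":
    for speed in (1.0, 2.0, 3.0, 4.5):   # character accelerating into a run
        print(f"speed={speed:.1f} m/s  ->  {blended_pose(speed)}")
```

An AI-driven approach aims to produce this kind of transition automatically, without an animator tuning the blend points and poses by hand.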