
Article No. 1. Approaches to the creation of frames in procedural graphics.

I have already said many times that I created my own engine for drawing and animating my scenes. There is no magic there; it is the usual HTML5 canvas technology. At the same time, I have no art education and never studied animation anywhere, so all further articles will be based purely on my personal experience, practical implementation, and the context of the software I created.
Usually such a JS application has one <canvas> element on the page, on which we draw everything we need. There can be more canvases, layered on top of each other, for rendering elements that are independent of one another. Drawing on a canvas happens through its context.
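As a point of reference, a minimal setup sketch (the variable names here are illustrative, not from my engine):

// Grab the on-page canvas and the 2D context we draw through.
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');

// Additional canvases can be stacked above this one in the DOM
// (e.g. with position: absolute) to render independent layers.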
Now I would like to consider the ways frames can be formed.
At runtime.
I used this approach at the very beginning of my journey. It consists of updating the image in real time as the data model behind it changes. If we are talking about executing this code in a simple single-threaded JS application, a very significant bottleneck appears: an extremely small amount of time in which to redraw the image. With animation at 60 frames per second, we have only about 16 milliseconds (1000 ms / 60 ≈ 16.7 ms) to form one frame. Do you think much can be done in that time?
Even that allotted time does not belong to us entirely. The process has three parts:
1. Erasing the current frame (or a selected area).
2. Creating the new frame.
3. Drawing the new frame.
We can only control point 2. The first point can merely be optimized, and the last one depends entirely on the JS engine. Although points 1 and 3 do not take significant time, they must be taken into account. I am also not talking here about adding multithreading through Web Workers; maybe I'll cover them separately sometime.
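A minimal sketch of such a loop (updateModel and drawModel are placeholder names for the model logic):

function tick() {
  // 1. Erase the current frame (here: the whole canvas).
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  // 2. Create the new frame by advancing the data model.
  updateModel();
  // 3. Draw the new frame.
  drawModel(ctx);
  requestAnimationFrame(tick); // ~60 fps while the tab is active
}
requestAnimationFrame(tick);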
Points 2 and 3 can be combined if we use the context directly and draw on the target canvas. I usually try to first draw everything on a canvas created on the fly in memory, and only after all operations are completed render it into the target picture. One way or another, performance always comes down to the number of calls to ctx.fillRect(x, y, w, h): the more there are, the more likely we are to run out of time.
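A sketch of that buffering idea, assuming the canvas and ctx from the setup above (the particles argument is hypothetical):

const buffer = document.createElement('canvas'); // in-memory canvas
buffer.width = canvas.width;
buffer.height = canvas.height;
const bufferCtx = buffer.getContext('2d');

function drawFrame(particles) {
  // Draw everything on the in-memory canvas first...
  bufferCtx.clearRect(0, 0, buffer.width, buffer.height);
  for (const p of particles) {
    bufferCtx.fillRect(p.x, p.y, 2, 2); // each call costs time
  }
  // ...then render the finished frame into the target canvas in one call.
  ctx.drawImage(buffer, 0, 0);
}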
In what situations can rendering at runtime come in handy?
An unpredictable change in the state of an object. These are situations in which the image must be updated by an algorithm that cannot be fully foreseen in advance, with corrections or variables introduced on the fly. The simplest case is changing the texture of an object depending on its internal state or external influence, though switching from one image to another is not a difficult operation. Creating effects that react to the user is more interesting: scattering sparks across the surfaces of nearby objects, taking collision detection into account, or a beautiful, never-repeating flash at the place where the user clicked. Although you can always resort to optimization tricks and solve all of this in a different way.
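As an illustration of the click-flash case, a rough sketch (all the numbers are arbitrary; the randomness is what keeps the flash from ever repeating exactly):

const sparks = [];

canvas.addEventListener('click', (e) => {
  // Spawn a short-lived burst of sparks at the click position.
  for (let i = 0; i < 30; i++) {
    const angle = Math.random() * Math.PI * 2;
    const speed = 1 + Math.random() * 3;
    sparks.push({
      x: e.offsetX,
      y: e.offsetY,
      vx: Math.cos(angle) * speed,
      vy: Math.sin(angle) * speed,
      life: 20 + Math.random() * 20, // ticks to live
    });
  }
});

function updateSparks() { // called once per frame from the main loop
  for (let i = sparks.length - 1; i >= 0; i--) {
    const s = sparks[i];
    s.x += s.vx;
    s.y += s.vy;
    if (--s.life <= 0) sparks.splice(i, 1); // remove dead sparks
    else ctx.fillRect(s.x, s.y, 2, 2);
  }
}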
What happens if you do not fit within the time allotted to point 2? The animation starts to lag and stutter. Calculating and rendering the trajectories of tens of thousands of particles without proper optimization is not a good idea.
Creating frames in advance.
This is the approach I use all the time when creating my scenes now. The essence is simple: the rendering of a sequence of frames happens in memory when the application or scene starts. There is no longer any restriction on the time spent creating one frame. In a loop, the model's behavior is calculated, and the generated images are put into an array.
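A minimal sketch of that start-up loop (drawState is a placeholder for whatever paints the model at step i):

const frames = [];
const FRAME_COUNT = 60;

// Done once when the scene starts: render every frame into memory.
for (let i = 0; i < FRAME_COUNT; i++) {
  const frame = document.createElement('canvas');
  frame.width = canvas.width;
  frame.height = canvas.height;
  drawState(frame.getContext('2d'), i); // placeholder: paints step i
  frames.push(frame);
}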
The obvious inconvenience of this approach is that, with great complexity and volume, forming the sequence of frames takes a decent amount of time and hangs the application. In difficult cases it can take me up to several seconds. This approach cannot be used like that in applications where a live user is waiting, since they may simply leave. But if rendering is done for subsequent export to a file, there is no problem.
Real situations can involve both methods of frame creation, even combining them, when the complexity of the algorithm allows everything intended to fit in time. For example, on an event you can generate a small sequence of frames in code, taking the context into account. Although most often everything ends in terrible lag, and optimization is used to squeeze out fractions of a second, cache something somewhere, and so on.
Enough boring letters; I created a small example.
Two squares, as soon as they appear on the canvas, immediately begin to emit particles toward each other along curved trajectories.
Each of the squares blinks smoothly. This is a sequence of 20 frames that changes every 50 milliseconds; the frames were generated at the start of the application and are used by the object from the moment it appears.
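Playback of such a pre-generated sequence could look like this (a sketch; blinkFrames would hold the 20 canvases, and the square's position is made up here):

let frameIndex = 0;
setInterval(() => {
  ctx.drawImage(blinkFrames[frameIndex], 40, 40); // 40,40: arbitrary position
  frameIndex = (frameIndex + 1) % blinkFrames.length; // loop the sequence
}, 50); // one frame every 50 ms, as in the demo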
The particles flying at each other are objects that form their frames on the fly. Not the best example in terms of optimal use, of course. At the moment of creation a particle knows only its start and end points; the route is built from the equation of a curve. On each cycle the next point of the route is calculated (taken from the array), and at that place, on a canvas the size of the entire scene, the point and its tail are drawn. This canvas/frame is then rendered into the main space of the scene.
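I don't specify above which curve equation the demo uses, so here is a sketch with a quadratic Bezier as one possible choice; the route is computed once, at particle creation time:

function buildRoute(start, end, steps) {
  // A control point pushed off the straight line gives the path its bend.
  const cx = (start.x + end.x) / 2 + 60;
  const cy = (start.y + end.y) / 2 - 60;
  const route = [];
  for (let i = 0; i <= steps; i++) {
    const t = i / steps;
    const u = 1 - t;
    // Quadratic Bezier: u^2 * P0 + 2*u*t * C + t^2 * P1
    route.push({
      x: u * u * start.x + 2 * u * t * cx + t * t * end.x,
      y: u * u * start.y + 2 * u * t * cy + t * t * end.y,
    });
  }
  return route; // the particle then walks this array point by point
}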
That's all. In this article I tried to lay out my vision of approaches to creating frames in procedural graphics. Thank you for your attention.
Translated with the help of Google Translate, with my own edits.
