I suppose my key point is that when I am filming in something like 30fps, I don't know until I get into post-production which clips I might want to slow down or speed up, and when I find one that I need to "squeeze" a little, it would be good to know what the optimal multiplication factors might be. My drone manufacturer doesn't make ND filters for this model, and it is difficult to mod other makers' filters to fit.

Short version: if the source video's frame rate and the export frame rate are the same, then 1.5x is the mathematically worst option, and 1.0x-1.3x or 1.8x-2.0x are the best options.

"Keep every Nth frame" is determined by a combination of the source video's frame rate, the export frame rate, and the speed factor. If I have 30fps source video on a 30fps timeline at 1.0x speed, then N is one (keep every frame). But if I have 60fps footage on a 25fps timeline at 1.0x speed, then N is 60/25 × 1.0 = 2.4, which is fractional. A fractional N means the cadence of frame selection is uneven, and playback stutters every time a multiple of N lands on a whole number. But if I have 60fps footage on a 25fps timeline at 5.0x speed, then N becomes 60/25 × 5.0 = 12, which is a whole number (keep every 12th frame), and playback will have a smooth cadence in the time-progression sense.

(Note that of all the values between 1x and 2x, 1.5x hits a whole number the most frequently.) So when the source and export frame rates are the same, a speed of 1.5x creates the most uneven cadence, because the 0.5 fractional part is the greatest misalignment possible in frame selection. (1.8x is more like 0.2 less than 2.0x, as opposed to 0.8 more than 1.0x.)

However, the situation changes if the source is a different frame rate than the timeline. If I have a 60fps clip on a 30fps timeline with a speed factor of 1.5x, then the export engine will look for "1.5 frames from the start" in 30fps time and find an actual frame at that offset in 60fps time (the third frame, in this case), rather than selecting a 30fps frame that is plus or minus half a frame's time from the requested offset. The smaller and more consistent the difference between "requested" and "actual" timestamps, the smoother the footage. The conclusion here is that 1.5x looks very good if the source is twice the frame rate of the timeline, and everything else between 1x and 2x will probably look "good enough" as well. Caveat: if the source footage is variable frame rate from a cell phone, then all bets are off. (If you want to play with these numbers, there are two small Python sketches at the end of this post.)

On a separate note, the shutter speed used in the source video has a big impact on the final look. I made a time lapse last year while my wife and I set up a tent at a camp site. The camera was snapping JPEG pictures every two seconds, which I compiled into a video. But I had the shutter speed set to 1/8th instead of the usual 1/50th for video (or it may have been 1/4th… it's been too long to remember). The point is that each individual JPEG had a lot of motion-blur streaks in it as we walked around the camp site. Therefore, when the footage was played back (looking sped up due to the two-second interval timer), there were nice connective trails of blur that made it look like we gracefully "flowed" around the camp site rather than looking like we were being assaulted by a flickering strobe light. Even in video mode, a lot of modern mirrorless cameras can record with a 1/8th shutter at 24fps or 30fps by merging light readings from previous frames, and get the same effect.

To tie this into the original question… the sweet spot for a speed-up will also depend on the shutter speed used in the source video. The longer the shutter, the more options you have, because there is enough connective blur to let many speed settings look good. The shorter it is, the sooner sped-up video will look jerky and strobe-like, and the more limited your options will be.
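To make the cadence idea concrete, here is a minimal Python sketch (my own illustration of the arithmetic above, not code from any particular editor) that computes N = (source fps ÷ timeline fps) × speed for the examples in this post and prints the gaps between the source frames a simple "keep every Nth frame" retime would select; the rounding choice is an assumption for illustration.

```python
from fractions import Fraction

def frame_cadence(source_fps, timeline_fps, speed, frames_out=12):
    """Return (N, gaps between kept source frames) for a 'keep every Nth frame' retime.

    N = (source_fps / timeline_fps) * speed. A whole-number N gives an even cadence;
    a fractional N forces rounding, so the gaps between kept frames vary.
    """
    n = Fraction(source_fps, timeline_fps) * Fraction(speed).limit_denominator()
    kept = [round(i * n) for i in range(frames_out)]
    gaps = [b - a for a, b in zip(kept, kept[1:])]
    return float(n), gaps

# The examples from the post:
cases = [
    (30, 30, 1.0),  # N = 1   -> keep every frame, perfectly even
    (60, 25, 1.0),  # N = 2.4 -> fractional, uneven gaps (stutter)
    (60, 25, 5.0),  # N = 12  -> whole number, even gaps (smooth)
    (30, 30, 1.5),  # N = 1.5 -> worst case when source fps equals timeline fps
    (30, 30, 1.8),  # N = 1.8 -> close to 2.0, only an occasional short gap
]

for src, tl, spd in cases:
    n, gaps = frame_cadence(src, tl, spd)
    print(f"{src}fps -> {tl}fps timeline @ {spd}x: N = {n:g}, frame gaps = {gaps}")
```

Running it, the smooth cases print a constant gap (all 1s or all 12s), while the fractional cases print alternating gaps (2, 3, 2, 3… for N = 2.4 and 2, 1, 1, 2… for N = 1.5), which is the uneven cadence that reads as stutter.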
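The different-frame-rate case can be sketched the same way. The snippet below models a retime as "for each output frame, request a source timestamp and grab the nearest real source frame", which is a simplification of whatever a given export engine actually does, so treat the numbers as illustrative only. It shows why a 60fps clip on a 30fps timeline at 1.5x lands exactly on real frames, while a 30fps clip at 1.5x is repeatedly half a frame off.

```python
def retime_error_ms(source_fps, timeline_fps, speed, frames_out=8):
    """For each output frame, return |requested - nearest real source frame| in milliseconds.

    Models the retime as nearest-frame sampling; real editors may blend or
    interpolate instead, so this only illustrates the timing error.
    """
    errors = []
    for i in range(frames_out):
        requested = i * speed / timeline_fps                  # where we want to be in the source
        nearest = round(requested * source_fps) / source_fps  # closest real source frame
        errors.append(abs(requested - nearest) * 1000.0)
    return errors

# 60fps source on a 30fps timeline at 1.5x: every request hits a real frame exactly.
print("60fps -> 30fps @ 1.5x:", [f"{e:.1f} ms" for e in retime_error_ms(60, 30, 1.5)])

# 30fps source on a 30fps timeline at 1.5x: every other request is ~16.7 ms (half a frame) off.
print("30fps -> 30fps @ 1.5x:", [f"{e:.1f} ms" for e in retime_error_ms(30, 30, 1.5)])
```

The smaller and more consistent those error values are, the smoother the retimed clip looks, which is the "requested" versus "actual" timestamp point above.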