Setsu described a setup a while ago that'd involve something like DescaleTarget, but with smart handling of pre- and post-filtering, which could possibly be further extended. Implementing something like this for descaling would be a good idea, as a good descale wrapper is probably the biggest one this package lacks right now.
As Setsu is unlikely to implement this idea now, we need to figure out what exactly the plan was, what its drawbacks are, and what the best method to implement it is.
The MVP for this wrapper should be:

- Support simple descale operations (descale -> upscale/rescale)
  - Option to pick the kernel
  - Option to pick the upscaler
    - Ideally, we should let the user pass a downscaler to the supersampler if possible rather than as a separate param, and simply supersample, do post-filtering or masking (if applicable), and downscale to the input resolution.
- Simple credit masking (with the ability to override this logic(?) and/or change a threshold)
- Cross-conversion handling
- Border handling (+ masking if necessary)
- Edgemasking
- Pre/post-upsampling filtering
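The credit-masking item above could be sketched roughly like this. This is a hypothetical illustration operating on flat pixel sequences, not the actual implementation; the function names and the default threshold are assumptions:

```python
# Hypothetical sketch of simple credit masking: wherever the descale
# round-trip error exceeds a threshold, the pixel is treated as a native
# high-resolution element (e.g. credits) and the source is kept instead.
# All names and the threshold value here are assumptions for illustration.

def credit_mask(source, roundtrip, threshold=0.04):
    """Binary mask: 1 where the rescale error suggests native-res content."""
    return [1 if abs(s - r) > threshold else 0 for s, r in zip(source, roundtrip)]

def merge_credits(rescaled, source, mask):
    """Keep the source wherever the mask flags a credit pixel."""
    return [s if m else r for r, s, m in zip(rescaled, source, mask)]
```

Exposing the threshold as a parameter (and letting the user replace `credit_mask` entirely) would cover the "override this logic and/or change a threshold" requirement.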
For pre-upsampling filtering, there are two ways to go about it. Both provide potential upsides, but unless we add a bunch of params, only one can be supported easily:

- Prefiltering for stuff like credit and edgemasking
- Prefiltering before descaling
Post-filtering would be done on the supersampled output, except in cases where it's unreasonable to supersample (e.g. when a conventional kernel is passed as the upscaler).
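The overall control flow described so far (prefilter -> descale -> supersample -> postfilter -> downscale) could be sketched as follows. This is a shape-of-the-API sketch under assumed names, with plain callables standing in for clips and kernels:

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Hypothetical sketch of the wrapper's control flow, not the real API.
# "Clip" is just a stand-in type; every name here is an assumption.
Clip = object

@dataclass
class DescaleWrapper:
    height: int
    kernel: Callable            # descale kernel (also used to re-upscale for masks)
    upscaler: Callable          # supersampler, e.g. a doubler
    downscaler: Optional[Callable] = None   # ideally passed through to the supersampler
    prefilter: Optional[Callable] = None    # pre-upsampling filtering
    postfilter: Optional[Callable] = None   # applied on the supersampled output
    steps: list = field(default_factory=list)  # records stage order, for illustration

    def process(self, clip):
        if self.prefilter:                       # pre-upsampling filtering
            clip = self.prefilter(clip); self.steps.append('prefilter')
        clip = self.kernel(clip, self.height); self.steps.append('descale')
        clip = self.upscaler(clip); self.steps.append('supersample')
        if self.postfilter:                      # post-filter the supersampled clip
            clip = self.postfilter(clip); self.steps.append('postfilter')
        if self.downscaler:                      # back to input resolution
            clip = self.downscaler(clip); self.steps.append('downscale')
        return clip
```

The point of the sketch is the ordering: post-filtering happens while the clip is still supersampled, and the downscale back to input resolution is the last step.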
Additional niceties:
Dynamically pick kernel/resolution combinations (like vodesfunc.MixedRescale used to).
For this to be feasible, we need to find a good way to determine which descaler should be picked. The most obvious approach is to check the errors of all re-upscaled (with the same kernel) clips against the source, normalise them, and then compare the values. However, it may be worth looking at better implementations and ways to determine this, as well as ways to optimise this process. Checking the error of many clips at once can be very slow, after all.
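The selection logic above could look something like this. A pure-Python sketch under assumed names, with flat pixel sequences standing in for frames and a mean-absolute-error metric as a placeholder for whatever error measure ends up being used:

```python
# Hypothetical sketch of candidate selection: descale with each kernel/resolution
# candidate, re-upscale with the same kernel, measure the error against the
# source, normalise, and pick the lowest. All names and the metric are assumptions.

def mean_abs_error(a, b):
    """Mean absolute difference between two flat pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def pick_candidate(source, candidates):
    """candidates: {name: roundtrip_fn}, where roundtrip_fn descales and
    re-upscales the frame with one kernel/resolution combination."""
    errors = {name: mean_abs_error(source, fn(source))
              for name, fn in candidates.items()}
    # normalise so values are comparable regardless of absolute scale
    peak = max(errors.values()) or 1.0
    normalised = {name: err / peak for name, err in errors.items()}
    return min(normalised, key=normalised.get), normalised
```

Since this evaluates every candidate per analysed frame, the cost grows linearly with the number of candidates, which is exactly why restricting the analysis to a few frames per scene (below) is attractive.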
Alongside the above, a per-scene kernel/resolution picker.
As it's highly unlikely for a kernel to switch in the middle of a scene, this could greatly reduce the work the above logic needs to do by checking just a handful of frames per scene, as opposed to every single frame.
The scenes could be collected prior to the analysis being done and the result stored in a temporary directory (or even .vsjet), or the user could pass their own Keyframes file.
If possible, it may be worthwhile to allow the user to do a "scan" in advance using this logic to pre-analyse which kernel/res should be used for which scene. That can be compared to the keyframe file from the previous step, and if the keyframes match, we can use that information. If not, fall back to checking per-scene.
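The per-scene sampling idea could be sketched like this. Again a hypothetical pure-Python illustration: the scene bounds come from a keyframes list, a few frames per scene are analysed, and a majority vote decides the scene's kernel/resolution; the names and the voting strategy are assumptions:

```python
# Hypothetical sketch of per-scene analysis: sample a handful of frames per
# scene instead of analysing every frame. analyse_frame(n) stands in for the
# per-frame kernel/resolution picker; all names here are assumptions.

def scene_ranges(keyframes, total_frames):
    """Turn a sorted keyframe list into (start, end) scene ranges."""
    bounds = list(keyframes) + [total_frames]
    return [(bounds[i], bounds[i + 1]) for i in range(len(bounds) - 1)]

def pick_per_scene(keyframes, total_frames, analyse_frame, samples=3):
    """Return one kernel/resolution choice per scene, decided by a
    majority vote over a few evenly spaced sampled frames."""
    results = []
    for start, end in scene_ranges(keyframes, total_frames):
        step = max(1, (end - start) // samples)
        picks = [analyse_frame(n) for n in range(start, end, step)][:samples]
        results.append(max(set(picks), key=picks.count))
    return results
```

The pre-analysis "scan" would amount to running `pick_per_scene` once, storing the results alongside the keyframes they were derived from, and reusing them as long as the stored keyframes still match the current ones.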