This page exists for people who are too lazy to register on Doom9/GitHub/wherever but want to leave a comment or ask a simple question anyway.

20 thoughts on “Discussion”

  1. I wanted to register on Doom9 but was not able to answer the random question(s) at the end of the registration page and stopped after the third strike. How is FFMS2 installed on Windows 10 64-bit? A broader question is how to install plugins in general. I downloaded FFMS2 and extracted it with 7-Zip, then copied ffms2.dll, ffms2.lib and ffmsindex.exe to C:\Program Files (x86)\VapourSynth\plugins64.

    I read the plugin autoload page but I’m not sure whether the USER search path is one I create myself:
    \VapourSynth\plugins32 or \VapourSynth\plugins64.

    I found 133meadwad’s VapourSynth installation guide and VapourSynth 101 helpful, but apparently I missed a connecting dot. URL is http://www.animemusicvideos.org/forum/viewtopic.php?f=118&t=125039#p1546405
    Thank You

    • C:\Program Files (x86)\VapourSynth\plugins64 <- that's the global path for 64-bit, so you may as well use that one. FFMS2 is installed by simply dropping the 64-bit ffms2.dll into the global autoload directory; then it's done.
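      To verify the autoload worked, a minimal script like this should run without any explicit LoadPlugin call (a sketch — 'input.mkv' is a placeholder filename):

```python
# test.vpy — a minimal check that ffms2 autoloaded; 'input.mkv' is a placeholder.
import vapoursynth as vs

core = vs.core
# This line raises an error mentioning the ffms2 namespace if the DLL
# was not picked up from the autoload directory.
clip = core.ffms2.Source(source='input.mkv')
clip.set_output()
```

      Note that only ffms2.dll needs to go in the autoload directory: ffms2.lib is an import library for linking against, and ffmsindex.exe is a standalone indexing tool, so neither belongs there.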

  2. Any tips for compiling VS on GNU/Linux boxes, Fredrik?

    I’m too old/lazy/stupid to work through a bunch of compiler errors. :/

    • I speculate that you don’t have a sufficiently recent compiler. You need a C++11-capable compiler: GCC 4.7.0 or above, or, in practice, if you want to avoid a bunch of C++ ABI bugs present only in that release, GCC 4.8.x or above.

      You also need the stated version of zimg — but if that goes wrong you get a configure-time failure, so that’s probably not your problem.

  3. I want to better understand how to use misc.AverageFrames but I’m not sure how to construct the weights. Are they supposed to be like a kernel/matrix (like used in std.Convolution)?

    The example use case’s input array is only 5 items long (https://forum.doom9.org/showthread.php?t=173871):
    misc.AverageFrames(singleclip, [1, 1, 1, 1, 1])
    which is supposed to give a radius of two, so does that look like:
    1, x, 2
    x, 3, x
    4, x, 5
    in which case, what is the merit of increasing those weights?

    If I’ve completely missed the point and there is some required reading I’ve missed I’d appreciate it if you could point me in the right direction

    • It mostly works like the Avisynth Average plugin: http://avisynth.nl/index.php/Average
      It’s just a list of weights, and if only one clip is supplied it uses frames …n-2, n-1, n, n+1, n+2… as input. The final result is then divided by scale, which defaults to the sum of the weights (or 1 if the sum is 0). I’ll try to get it properly documented for the next release.
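      To make the arithmetic concrete, here is a pure-Python sketch of what happens at one pixel position (not the plugin itself; the sample values are made up):

```python
def average_frames(pixels, weights, scale=None):
    """Temporal weighted average of one pixel position across
    len(weights) consecutive frames, mimicking misc.AverageFrames:
    sum(weight * pixel) divided by scale, where scale defaults to
    the sum of the weights (or 1 if that sum is 0)."""
    if scale is None:
        s = sum(weights)
        scale = s if s != 0 else 1
    return sum(w * p for w, p in zip(weights, pixels)) / scale

# The same pixel position sampled from frames n-2, n-1, n, n+1, n+2,
# with a bright flash on frame n:
samples = [100, 100, 200, 100, 100]

print(average_frames(samples, [1, 1, 1, 1, 1]))  # -> 120.0, plain mean
print(average_frames(samples, [1, 2, 4, 2, 1]))  # -> 140.0, center frame dominates
```

      So the merit of increasing the center weight is that the output stays closer to the current frame, i.e. less temporal smoothing; a flat list of ones smooths hardest.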

  4. I sooo can’t wait to leave Wine + Avisynth behind and have all my tools run natively on Linux. VapourSynth has come a long way since I last peeked in; it’s very exciting to see how much progress has been made. Any chance of Deshaker, anything comparable, or something even better being available yet?

  5. https://github.com/vapoursynth/vapoursynth/pull/265#issuecomment-272283953 — May I ask which troubles? I didn’t want to bring this up on the GitHub issue tracker since it’s not exactly suited for chatting. A modularized refactor would be nice; the code base of the Cython module has gotten a bit large, and it can sometimes be tedious to add/remove/fix something or even just read it. So what do you think about splitting it into smaller modules (and maybe into a package), and what issues do you think this could bring?

  6. On another tack, since you appear to be just about to release R37, per this post:
    There are two new filters, DGDenoise and DGSharpen, which use the GPU for seriously sped-up filtering. Using either or both, I now get these messages all the time:
    “Avisynth Compat: requested frame xxx not prefetched, using slow method”
    and then this at the end
    “Core freed but 645120 bytes still allocated in framebuffers”
    A quick peek at this code https://github.com/vapoursynth/vapoursynth/blob/master/src/avisynth/avisynth_compat.cpp seems to suggest, to an uninitiated person, that it could be updated to take DGDenoise and DGSharpen into account? If so, could that please be done? Otherwise, advice on what else should be done would be welcome.

    I and presumably some others would like to benefit from GPU-accelerated filters yielding eye-wateringly fast speeds 🙂

  7. I have tested many times, setting nominal_luminance to 800 and to 100. There is no difference; it seems not to be working:
    clip = core.resize.Spline36(clip=clip, width=3840, height=2160, format=vs.YUV420P10, matrix_s="2020ncl", range_s="limited", transfer_s="st2084", primaries_s="2020", matrix_in_s="709", range_in_s="full", transfer_in_s="709", primaries_in_s="709", nominal_luminance=800)

    Can anybody give some comment?

  8. I have installed Python 3.6 x64 and VapourSynth R37.

    But I get:
    Failed to initialize VapourSynth environment

    How do I fix it?

  9. I just signed up for doom9 but I have to wait a few days to post.

    I just discovered this script to add motion interpolation to mpv, and it is amazing: https://gist.github.com/phiresky/4bfcfbbd05b3c2ed8645

    However, it is not perfect. The videos I want to improve are the lame 25 & 30 fps matches from tennistv. The problem is that when the ball moves too fast (i.e. between two frames the ball moves much further than the width of a tennis ball) the interpolation fails.

    How feasible is it to come up with a way to detect fast-moving small objects and draw them at a halfway point between the location on the two “real” frames surrounding the interpolated frame?


    • You can probably increase the maximum search radius and see if that helps. As usual the problem is that you’ll get more false positives when looking for motion vectors. Specialized solutions for detection are of course possible but nobody ever wants to pay for them…

      • Thanks for the quick reply. The larger search radius would probably only need to look for a yellowish tennis ball color, so maybe that would reduce the false positives enough?

        I would pay for a couple hours of work if that’s all it takes for an expert; otherwise I’ll hack on it in my spare time starting with the motioninterpolation.vpy from my first post.
