This page exists for people who don't want to register on Doom9/GitHub/wherever but still want to leave a comment or ask a simple question.

124 thoughts on "Discussion"

  1. Failed to evaluate the script:
    Python exception: No attribute with the name lsmas exists. Did you mistype a plugin namespace?

    Traceback (most recent call last):
    File “src/cython/vapoursynth.pyx”, line 2244, in vapoursynth.vpy_evaluateScript
    File “src/cython/vapoursynth.pyx”, line 2245, in vapoursynth.vpy_evaluateScript
    File “/Library/Frameworks/VapourSynth.framework/video_edit/script.vpy”, line 17, in
    video = core.lsmas.LWLibavSource(source=abs_file_path)
    File “src/cython/vapoursynth.pyx”, line 1756, in vapoursynth.Core.__getattr__
    AttributeError: No attribute with the name lsmas exists. Did you mistype a plugin namespace?

    I keep getting this error when doing anything in VapourSynth. I've downloaded the L-SMASH plugin and its dependencies (I might have done it wrong; I placed it in the Desktop folder), so I don't know why this is showing up. I've tried putting the plugin where it's supposed to autoload, and in many other places, but it doesn't work… please help, I've been stuck for weeks!
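For errors like this, a first step is checking that the plugin binary actually sits in a directory VapourSynth autoloads; otherwise it must be loaded explicitly with core.std.LoadPlugin(path=...). A small diagnostic sketch (find_plugin is a hypothetical helper; the macOS paths below are assumptions for a framework install, so adjust them to your system):

```python
import os

# Hypothetical helper: check whether a plugin binary sits in any of the
# directories VapourSynth is likely to scan. The candidate paths are
# assumptions for a macOS framework install -- verify against your setup.
def find_plugin(filename, search_dirs):
    """Return the first existing path to `filename`, or None."""
    for d in search_dirs:
        candidate = os.path.join(d, filename)
        if os.path.isfile(candidate):
            return candidate
    return None

mac_dirs = [
    os.path.expanduser("~/Library/Application Support/VapourSynth/plugins"),
    "/Library/Frameworks/VapourSynth.framework/lib/vapoursynth",
]
# find_plugin("libvslsmashsource.dylib", mac_dirs)
```

If the file turns up somewhere else (like the Desktop), either move it into an autoload directory or load it explicitly by full path.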

  2. Hi! I ran into an issue while trying to make a gif using VPS, but when I try to check the script it gives me this error:

    Failed to evaluate the script:
    Python exception: No attribute with the name dfttest exists. Did you mistype a plugin namespace?

    Traceback (most recent call last):
    File “src/cython/vapoursynth.pyx”, line 2244, in vapoursynth.vpy_evaluateScript
    File “src/cython/vapoursynth.pyx”, line 2245, in vapoursynth.vpy_evaluateScript
    File “/Library/Frameworks/VapourSynth.framework/video_edit/script.vpy”, line 27, in
    File “/Library/Frameworks/VapourSynth.framework/lib/python3.8/site-packages/muvsfunc.py”, line 2672, in dfttestMC
    filtered = core.dfttest.DFTTest(interleaved, sigma=sigma, sbsize=sbsize, sosize=sosize, tbsize=tbsize, **dfttest_params)
    File “src/cython/vapoursynth.pyx”, line 1893, in vapoursynth._CoreProxy.__getattr__
    File “src/cython/vapoursynth.pyx”, line 1756, in vapoursynth.Core.__getattr__
    AttributeError: No attribute with the name dfttest exists. Did you mistype a plugin namespace?

    this is the script i’m trying to run:

    import os
    import vapoursynth as vs
    import havsfunc as haf
    import G41Fun as fun
    import mvsfunc as mvs
    import descale as descale
    import muvsfunc as muf
    import kagefunc as kage
    core = vs.get_core()

    core.max_cache_size = 1000 #Use this command to limit the RAM usage. 1000 or 2000 is fine.

    script_dir = os.path.dirname(__file__)
    rel_path = "video_cut/cut.mkv"
    abs_file_path=os.path.join(script_dir, rel_path)

    video = core.lsmas.LWLibavSource(source=abs_file_path) # Good for .ts/.tp/.m2ts/.mkv
    #video = core.ffms2.Source(source=abs_file_path) # Good for .mp4

    #Trim returns a clip with only the frames between the arguments first and last, or a clip of length frames, starting at first. a = first frame, b = last frame
    #video = core.std.Trim(video, a, b)

    #Resizer content goes here

    video = muf.dfttestMC(video, sigma=5, mdg=True)
    video = fun.FineSharp(video, sstr=1.8)

    video = core.fmtc.bitdepth(video, bits=16)

    I have all the scripts and plugins in the right place, but it seems like they're not being recognised… do you perhaps know what the problem could be?
    thank you so much in advance

  3. I'm trying to pass a clip filename to a VapourSynth script called by vspipe. I've read the documentation but can't seem to read the args in my VapourSynth script. Any help gratefully accepted.

    Command line:
    vspipe --arg "cclipName=D:\003-04-02.mov" --y4m -p C:\Test\VS_TestArgs.vpy - | C:\ffmpeg.exe -y -i pipe: -c:v copy C:\Test\_TestPassArgs.mov

    import vapoursynth as vs
    core = vs.get_core()
    core.max_cache_size = 2000
    clipName=str(cclipName) #cclipName the passed in arg
    clip = core.lsmas.LWLibavSource(source=clipName)

    When run I get error:

    File “C:\Test\VS_TestArgs.vpy”, line 50, in
    NameError: name ‘cclipName’ is not defined

    • I tested your script and it works perfectly here. You're definitely on the right track. Maybe something else is causing it, like a misquoted string on the command line?
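Values passed with --arg simply appear as global string variables inside the script, which is why an unquoted or mangled argument produces a NameError. A defensive sketch (script_arg is a hypothetical helper name) that also lets the script load when no argument was passed:

```python
# Sketch: read a variable that vspipe may have injected via --arg,
# falling back to a default so the script also loads in an editor.
# `namespace` defaults to the module globals; pass a dict for testing.
def script_arg(name, default, namespace=None):
    ns = globals() if namespace is None else namespace
    return str(ns.get(name, default))

# clip_name = script_arg("cclipName", r"D:\fallback.mov")
```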

  4. Could you port the full version of the TDecimate plugin? Modes 2 and 4. Even at standard fps, it works better than ordinary decimators thanks to the larger buffer and statistics. For example, it isn't fooled after a long dark stretch at the beginning of credits.

    • By mode 4 you mean statistics output? That’s already implemented with dryrun=True where it simply attaches the calculated metrics for each frame to the output. Use FrameProps() to print them.
      I don’t plan to implement mode 2 since the decimation decisions usually need to be manually overridden anyway. You could however with python cleverness construct something similar by saving all the metrics and deciding which frames to drop yourself.
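The "python cleverness" hinted at above might look like this: collect one difference metric per frame (e.g. from the dryrun frame props), then pick the most static frame in each cycle yourself. A minimal pure-Python sketch (frames_to_drop is a hypothetical helper; metric collection itself is omitted):

```python
# Given one difference metric per frame, drop the most static frame
# (smallest diff) in each full cycle. Purely illustrative.
def frames_to_drop(diffs, cycle=5):
    """Return the index of one frame per full cycle with the smallest diff."""
    drops = []
    for start in range(0, len(diffs) - cycle + 1, cycle):
        window = diffs[start:start + cycle]
        drops.append(start + window.index(min(window)))
    return drops

# The resulting index list could then be fed to core.std.DeleteFrames.
```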

      • Yes, statistics output. But it is needed for mode 2. Without mode 2, I don’t know what to do with statistics. It’s a shame you don’t plan to implement it. Mode 2 is better and it would also allow using non-standard fps and a cycle greater than 1 (sometimes it is necessary to remove several duplicates). The plugin works well and manual adjustments are usually not needed.

        Perhaps it is possible to build a TIVTC plugin for the Linux version of AviSynth+. But most likely it also needs to be ported first. I know that AviSynth plugins can be loaded into VapourSynth, however only on Windows and Wine, but I use Linux. TIVTC is currently being actively refactored and developed github.com/pinterf/TIVTC

      • About TDecimate.
        I noticed mode 2 uses non-linear access. It's a little buggy, but works fine in AviSynth. Two-pass mode 4 + mode 2 uses linear access and works fine in VapourSynth (Win) – this is the ideal option.

        pass 1:
        video = core.ffms2.Source("input.mkv")
        decimated = core.avs.TDecimate(video, mode=4, output="c:\metrics.txt")

        pass 2:
        video = core.ffms2.Source("input.mkv")
        decimated = core.avs.TDecimate(video, mode=2, rate=24, input="c:\metrics.txt")

        It would be ideal if it was native and worked in Linux not only in Wine.

        I noticed that all decimators work badly, very badly. They make mistakes and leave doubles even if the frame structure is completely constant.
        Only TDecimate mode 2 works well.

        For example, this 30p video youtube.com/watch?v=8ZeamMIhyj8 that can be downloaded like this:
        youtube-dl youtube.com/watch?v=8ZeamMIhyj8 -f 22

        At the beginning (5th frame) and before the black titles (00:05:36), duplicate frames remain, even though the video structure is completely stable (in AviSynth I would sooner just do ChangeFPS(25)).

        video = core.ffms2.Source("input.mp4")
        decimated = core.vivtc.VDecimate(video, cycle=25, blockx=8, blocky=8)

        Only TDecimate mode 2 leaves no duplicates, even in one pass. It has a sufficient buffer.
        That’s why, even at standard fps, it is preferable.

  5. Hello,

    I want to build VapourSynth for use in Hybrid, but cannot successfully build the mvtools plugin on Ubuntu 18.04.4. After building it, the vspipe --version command gives me:
    Illegal instruction (core dumped)
    and dmesg command shows:
    [ 388.320630] traps: vspipe[2321] trap invalid opcode ip:7f17b5bcde5a sp:7ffcd8bee4b0 error:0 in libmvtools.so[7f17b5bc5000+386000]

    I checked the mvtools build log, but did not find anything suspicious as far as I can tell according to my limited knowledge.

    The interesting thing is that on both Ubuntu 16.04.6 and 20.04 it builds well and vspipe --version works. Of these three, only Ubuntu 18.04 is affected.

    Thank you for any help.

  6. Hey,

    So I’m using a jupyter notebook developed by github user AlphaAtlas

    So when I run the thing in Colab, we get this

    vapoursynth.Error: Degrain3: failed to retrieve first frame from super clip. Error message: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 14.73 GiB total capacity; 8.00 GiB already allocated; 5.19 GiB free; 8.00 GiB reserved in total by PyTorch)

    And an error message that says:
    “pipe:: Invalid data found when processing input”

    We are testing the script out on a 4 second video clip that has a resolution of 1920×1080

    Here is the full traceback:

    Traceback (most recent call last):
    File “src\cython\vapoursynth.pyx”, line 1946, in vapoursynth.vpy_evaluateScript
    File “src\cython\vapoursynth.pyx”, line 1947, in vapoursynth.vpy_evaluateScript
    File “/content/autogenerated.vpy”, line 85, in
    clip = G41.SMDegrain(clip, tr=3, RefineMotion=True, pel = 1, prefilter = prefilter)
    File “/VapourSynthImports/G41Fun.py”, line 2123, in SMDegrain
    output = D3(mfilter, super_render, bv1, fv1, bv2, fv2, bv3, fv3, **degrain_args)
    File “src\cython\vapoursynth.pyx”, line 1852, in vapoursynth.Function.__call__
    vapoursynth.Error: Degrain3: failed to retrieve first frame from super clip. Error message: CUDA out of memory. Tried to allocate 7.91 GiB (GPU 0; 14.73 GiB total capacity; 8.00 GiB already allocated; 5.19 GiB free; 8.00 GiB reserved in total by PyTorch)

    pipe:: Invalid data found when processing input

    We are using the Python 3 Google Compute Engine backend (GPU) runtime and apparently have access to 12.72 GB of RAM (although it looks like it would be 14.73 based on the error message)

    We are wondering if you guys might have any ideas what could be causing it. I am thinking the solution would be to pay for more RAM, but obviously we are trying to avoid that, which is why I am posting here

  7. Hello, thanks much for this. Is there a recommended way/path to take if one wanted to remove a logo/watermark in the corner of a video using Vapoursynth? I had a look through the functions and plugins and nothing stood out. Again, thanks!

  8. Originally I reached out to Python support, as they have an email address, but they responded saying this was a VapourSynth error and I would need to check with you on how to correct it. While making a gif I get this error:

    “Failed to evaluate the script:
    Python exception: lsmas: failed to construct index.

    Traceback (most recent call last):
    File “src\cython\vapoursynth.pyx”, line 1927, in vapoursynth.vpy_evaluateScript
    File “src\cython\vapoursynth.pyx”, line 1928, in vapoursynth.vpy_evaluateScript
    File “C:/Users/caitl/Desktop/my gifs/un_baek.vpy”, line 11, in
    video = core.lsmas.LWLibavSource(source=r’video_cut\cut.mkv’)
    File “src\cython\vapoursynth.pyx”, line 1833, in vapoursynth.Function.__call__
    vapoursynth.Error: lsmas: failed to construct index.”

    The code was the one in the scripts gifs file for VapourSynth, not one I made myself. I just input the code from the resizer as it asked. I tried messing with the lsmas line the error points at, changing the file name, but even if I put in a wrong file name I always got the same error. How do I fix this? No matter what video I use to make a gif I get the same error, and I would love to be able to use the product as I've heard many great things about it. I'm just not sure what's going wrong here.

    • import vapoursynth as vs
      import havsfunc as haf
      import muvsfunc as muvs
      import mvsfunc as mvs
      import descale as descale
      import hnwvsfunc as hnw
      core = vs.get_core()

      #core.max_cache_size = 1000 #Use this command to limit the RAM usage (1000 is equivalent to 1GB of RAM)

      video = core.lsmas.LWLibavSource(source=r'video_cut\cut.mkv')

      #Trim returns a clip with only the frames between the arguments first and last, or a clip of length frames, starting at first. a = first frame, b = last frame
      #video = core.std.Trim(video, a, b)

      #Whatever you copied from the resizer goes here
      #video = haf.QTGMC(video, Preset="Slower", TFF=True)

      video = core.fmtc.resample(video, css="444")
      video = descale.Debilinear(video, 629,354)

      video = core.std.CropRel(video, left=64, top=26, right=115, bottom=78)

      video = core.fmtc.bitdepth(video, bits=8)

  9. Hello,
    I would like to know how to install VapourSynth on Ubuntu 18.04.
    Is the djcj PPA no longer updated?
    Thank you, and happy New Year celebrations

  10. Hi there,

    I recently updated my mac to macOS catalina and it seems like vapoursynth doesn’t work anymore. Is that normal?

  11. Hi, I'm on Linux Mint 19.2 and I installed Hybrid to convert video files.
    I'm trying to install VapourSynth to be able to use its functionality in Hybrid.
    I tried different ways but didn't manage to do it.
    Is there someone who can help me with that?

  12. How do I install plugins for VapourSynth? I tried the plugin autoloading page and it didn't work, and I do have the plugin for my version of VapourSynth. I also don't know where to put parameters. Isn't there some tutorial which explains how to install a VapourSynth plugin? I didn't find anything so far…

  13. Sorry, I know quite literally nothing about python/vapoursynth, so could anyone explain to me what this means?

    Failed to evaluate the script:
    Python exception: lsmas: failed to construct index.

    Traceback (most recent call last):
    File “src\cython\vapoursynth.pyx”, line 1927, in vapoursynth.vpy_evaluateScript
    File “src\cython\vapoursynth.pyx”, line 1928, in vapoursynth.vpy_evaluateScript
    File “C:/Users/admin/Downloads/VapourSynth64Portable/Scripts/Untitled.vpy”, line 19, in
    video = core.lsmas.LWLibavSource(source=path)
    File “src\cython\vapoursynth.pyx”, line 1833, in vapoursynth.Function.__call__
    vapoursynth.Error: lsmas: failed to construct index.

  14. Please click the donate button on this site right now and contribute to Fredrik as I just did. He is a super helpful person and has created something special. I know he would appreciate your support too. Thanks Fredrik!

  15. Hi, I'd like to see if I can convert a quite simple AviSynth script to VapourSynth on Linux, but I think I need some help to get started. The script uses five video files (WMV format, by the way, but I could of course convert with ffmpeg). For each file, the script applies one trim, and the resulting slightly shorter clip is given a fade from black at the start and to black at the end. The five resulting videos are concatenated. So an abstracted and shortened form of the AviSynth would be: V1 = DirectShowSource(file1); V1 = Trim(V1, a, b); V1 = FadeIn(V1, x); V1 = FadeOut(V1, y); V2 = … AlignedSplice(V1, V2, V3, V4, V5). If this could be replicated in VapourSynth I'd be very grateful for advice.
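Trim and splice map directly (core.std.Trim and the + operator); fades have no dedicated core filter, but can be built by merging with a black clip using a per-frame weight via std.FrameEval. A sketch of the fade math, with the assumed VapourSynth wiring shown only in comments (fade_weight is a hypothetical helper; a linear ramp is assumed):

```python
# Sketch of the fade math. In VapourSynth the per-clip pipeline would
# look roughly like (assumed wiring, not verified against your sources):
#   v = core.lsmas.LWLibavSource(source=file1)
#   v = core.std.Trim(v, first=a, last=b)
#   black = core.std.BlankClip(v)
#   v = core.std.FrameEval(
#       v, lambda n: core.std.Merge(black, v, fade_weight(n, x, v.num_frames)))
#   out = v1 + v2 + v3 + v4 + v5   # '+' splices clips in order
def fade_weight(n, fade_len, total):
    """Merge weight toward the real clip: 0 at either end, 1 in the middle."""
    distance = min(n, total - 1 - n)   # frames to the nearest clip end
    if distance >= fade_len:
        return 1.0
    return distance / fade_len
```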

  16. Hello,
    I managed to install Hybrid and Vapoursynth in a Mac (Mojave). I really don’t know much about all this, all I want is a reasonable app with GUI to deinterlace video with QTMGC. But Hybrid gives me this error about l-mash. I tried to install lsmash, too, and the plugins are somewhere in the Mac lib directories, but Hybrid still fails. Any hint, please? Thank you!

    Failed to evaluate the script:
    Python exception: No attribute with the name lsmas exists. Did you mistype a plugin namespace?

    Traceback (most recent call last):
    File “src/cython/vapoursynth.pyx”, line 1927, in vapoursynth.vpy_evaluateScript
    File “src/cython/vapoursynth.pyx”, line 1928, in vapoursynth.vpy_evaluateScript
    File “/Users/xxxx/xxxPathHiddenxxx/tempPreviewVapoursynthFile16_35_23_268.vpy”, line 12, in
    clip = core.lsmas.LWLibavSource(source=”/Users/xxxx/xxxPathHiddenxxx/1-Test.dv”, format=”YUV420P8″, cache=0)
    File “src/cython/vapoursynth.pyx”, line 1522, in vapoursynth.Core.__getattr__
    AttributeError: No attribute with the name lsmas exists. Did you mistype a plugin namespace?

  17. Hey,

    I'm a beginner and have limited experience in Python and computer science in general, so I'm sorry if this is a stupid question, but:
    What is the difference between portable and not portable?

    Thank you

  18. I wonder if it's possible to feed the output of Blender's frameserver into the input stream of VapourSynth. The idea is to use Blender as a video editor and, on the final render, pass the output through the QTGMC deinterlacer filter. Thank you, Jose

  19. I like the idea of this quite a bit.
    I'm very attracted to the idea of using Python for video editing, but I've run into a huge snag: I can't figure out how this works…
    I understand some of the more touched-on subjects in the documentation, but the more complex functions I can't even begin to understand.
    So far all I've been able to do is splice, crop, and append videos, which is very limiting, and I know for a fact there is a way to go more in depth.
    But I can't seem to find any documentation or tutorials besides what is listed here.
    I have little to no experience with AviSynth, so when people cop out on Doom9 saying something is "just like avisynth" it really isn't helping.

    Would there by chance be a more in-depth tutorial in the near future?

  20. Hello.
    I've noticed the color (or other info) of each pixel is very useful and can be used to create some scripts/plugins.
    How can I get the value of a specific pixel?
    For example, for pixel (1234, 234) it could return an object with some attributes:
    (16, 128, 128)
    which means it’s a pure black pixel.

    If users can get the info about each pixel, functions like contain_border() or is_border() can be created easily.

    ^_^ Many thanks
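There is no built-in per-pixel filter, but the Python API exposes raw frame data: clip.get_frame(n) returns a frame whose planes can be read as flat buffers. A sketch of the indexing (pixel_at is a hypothetical helper; the exact plane accessor names depend on the API version, so treat the VapourSynth wiring as an assumption):

```python
# Sketch: reading a sample from a raw 8-bit plane given its stride.
# In VapourSynth, f = clip.get_frame(n) yields per-plane buffers that
# can be indexed like this flat byte view (stride may exceed width).
def pixel_at(plane, stride, x, y):
    """Value of the sample at (x, y) in a flat 8-bit plane buffer."""
    return plane[y * stride + x]

# For YUV, reading (x, y) from planes 0/1/2 yields (Y, U, V), e.g.
# (16, 128, 128) for pure black in limited-range 8-bit video.
```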

  21. Does AvsPmod work with VapourSynth? You should make a similar program that is more up to date – that would be amazing and easier for us new beginners. 😀

  22. Was just testing mvtools.py before starting on my own script, and playback is producing:
    [vapoursynth] Frame requested during init! This is unsupported.
    [vapoursynth] Returning black dummy frame with 0 duration.
    [vapoursynth] Frame requested during init! This is unsupported.
    [vapoursynth] Returning black dummy frame with 0 duration.

    Interpolation seems fully functional, so can the above be ignored, or is there a .conf element that will fix this?

  23. Hi!
    Python 2.6 is no longer available.

    I am only able to download 2.7, so I am not able to install VapourSynth.


  24. Using v44 on Ubuntu (from the djcj PPA); using current VSEdit. Problem with text subtitles:
    video = core.sub.Subtitle(video, "This is a fairly long subtitle with lots of text", margins=[0, 0, 100, 100])
    If I try to set either the top or bottom margin, all that happens is that the subtitle is rendered in a smaller (very small!) font; the subtitle's vertical position doesn't change.
    Any other info to assist in a diagnosis?

    BR, Jon

  25. Hi.
    I have a question about vspipe.
    What is the meaning of vspipe return value 1?
    P.S. I’m using QProcess.
    My Log:
    Piper Process: vspipe --y4m "example.vpy" -
    Encoder Process: "x264_x64" --demuxer y4m --stdin y4m --crf 22.0 -o "example_vpy.mp4" -
    Piper Process has completed. Exit code: 1
    y4m [error]: bad sequence header magic
    x264 [error]: could not open input file `-'
    mp4 [error]: failed to finish movie.
    Encoder Process has completed. Exit code: -1

  26. The Python script is OK.
    >>> from vapoursynth import core

    Core R43
    API R3.5

    I wanted to use portable VS.
    But I can't use vsedit-32bit.exe, which shows the following log:
    2018-05-04 16:23:34.285
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    How can I fix it?
    Thank you!

  27. Hi Fredrik!

    I'm interested in knowing more about real-time NLE, and I wanted to know whether VapourSynth is a good piece of software to form the foundation of such a program!

    I read the docs a bit and I can see that the program is capable of offline NLE, so basically it might act as a "renderer": you create your script and then you can render it out and see the result.

    What if I wanted to add some real-time capabilities like video scrubbing, frame-by-frame editing/trimming, adding filters on the go, editing filter parameters, etc.?

    Would VS used as a library and integrated in some wrapper project maybe with a GUI be suitable for the task?

    If yes, where do you think I should start looking to better understand how to achieve this goal? The docs or the actual source code?

    Thank you for your time!

  28. Hi.

    Where can I find the vapoursynth folder under OS X Sierra?

    I would like to install some plugins… it isn't under the Application Support folder.

    Thank You!

  29. Hi. I am just approaching VapourSynth after having used AviSynth for a long time. Actually, "using" is a big word – I just had scripts generated automatically by other software I wrote years ago, which produced a series of PNG files and WAV files to be combined into a movie (through FFMPEG).

    Now I urgently need to get the software working again, and I wanted to move to a more modern frameserver, but I can't get a grasp of how VapourSynth works. So far I have been unable to get anything to work.

    I wonder if anyone could give me even just a few hints on how to get started converting this AVS script to VapourSynth. The specific case would also be useful for understanding the general principles.


    function CreateTrack(float duration, float afps)
    // Allocates an empty “track” (as later multiple tracks will be mixed
    // in a single final audio track
    duration = Round(duration)
    return ResampleAudio(ConvertToMono(BlankClip(duration, fps=afps)),44100)

    function GetAudio(string file, float duration, float afps)
    // Converts an audio file to mono / 44k
    duration = Round(duration)
    result = BlankClip(duration, fps=afps)
    result = AudioDub(result, DirectShowSource(file))
    return ResampleAudio(ConvertToMono(result),44100)

    function Attach(clip channel, int point, float whole, string file, float duration, float afps)
    // Places an audio file into a track, at a given point in time
    channel = channel.Trim(0, point) + GetAudio(file, duration, afps)
    temp = CreateTrack(whole, afps)
    return AudioDub(temp, channel)

    video = ImageSource("S:\temp\director\render\%05d.png", 0, 3721, 24, true)
    audio = CreateTrack(3720, 24)

    music = CreateTrack(3720, 24)
    fx = CreateTrack(3720, 24)
    speech = CreateTrack(3720, 24)

    speech = Attach(speech,Round(120),3720,"000.WAV",48,24)
    speech = Attach(speech,Round(216),3720,"001.WAV",72,24)
    speech = Attach(speech,Round(336),3720,"002.WAV",72,24)

    audio = MixAudio(audio, speech, 0.00, 1.00)
    audio = MixAudio(audio, fx, 0.50, 0.50)
    audio = MixAudio(audio, music, 0.50, 0.50)

    return AudioDub(video, audio)

  30. Hello.
    import vapoursynth as vs
    core = vs.get_core()

    clip = video_in

    src_num = int(float(container_fps) * 1e3)
    src_den = int(1e3)
    play_num = int(float(display_fps) * 1e3)
    play_den = int(1e3)

    if not (clip.width > 1920 or clip.height > 1080 or container_fps >= 60):
    clip = core.std.AssumeFPS(clip, fpsnum=src_num, fpsden=src_den)
    sup = core.mv.Super(clip, pel=2, hpad=16, vpad=16)
    bvec = core.mv.Analyse(sup, truemotion=True, blksize=16, isb=True, chroma=True, search=3)
    fvec = core.mv.Analyse(sup, truemotion=True, blksize=16, isb=False, chroma=True, search=3)
    clip = core.mv.BlockFPS(clip, sup, bvec, fvec, num=play_num, den=play_den, mode=3, thscd2=48)


    print("Source fps", (src_num/src_den))
    print("Playback fps", (play_num/play_den))
    This script worked fine. However, since last Tuesday or Wednesday, VapourSynth says:
    Failed to evaluate the script:
    Python exception: name ‘video_in’ is not defined

    Traceback (most recent call last):
    File “src/cython/vapoursynth.pyx”, line 1841, in vapoursynth.vpy_evaluateScript
    File “”, line 6, in
    NameError: name ‘video_in’ is not defined

    I guess that VapourSynth R41, which was released this week, is the problem. What should I fix in this script?

  31. Would it be possible to use the hybrid mode of TDecimate?
    The AviSynth command is below.


    My environment is OSX.


    • No, there’s no port of TDecimate. You could theoretically dump the decimation metrics and write your own two pass solution though if you’re a bit creative. Or use Wobbly and manually mark the sections. Basically nobody here is a big fan of the automatic methods since they’re inferior.

    • So there are two main things separating it from a single clip return filter.

      1. Pass multiple videoinfo structs to setVideoInfo()
      2. In order to figure out which frame to return call getOutputIndex(frameCtx) in the getFrame() function

      I hope this is clear enough, filters like subtext do this. Ask more if it’s not.

      • Thank you.

        What would be the best strategy when the different output frames are intrinsically the result of the same computing operation? One wouldn't want to repeatedly perform the same computation for each clip, only to keep one result and discard the others each time. I assume the getFrame() calls won't come in some kind of reliable sequence, so could I simply store the computed frames in the filter's private data and return them?

  32. Hello,
    I have installed R38 on Windows and I’m trying to write a simple script to load an image:
    import vapoursynth as vs
    core = vs.get_core()
    clip = core.imwri.Read(['C:\dev\Test.jpg'])
    On running the script I get an error “AttributeError: No attribute with the name imwri exists.”
    I thought the ImageMagick Writer-Reader (IMWRI) plugin was part of the release but I cannot find its DLL in the plug-in folder. Any chance you could point me in the right direction to fix this please?

  33. Hi, just one comment:

    Can you add to the “getting started” section a recommendation on how to test (preview) the script output?

    On AviSynth (Windows) I open the .avs script in VirtualDub and preview all edits there. I can also jump to specific frames and compare input and output.

    Is anything like this possible in VapourSynth, or is it required to always fully encode the video result to be able to check it? After reading the docs, I think AVFS (AV FileSystem) may be what I'm asking about. But without a simple example I have no idea how to get started.

    It would be very helpful if the beginners section had an example and a tool recommendation.


    • What kind of format do you want? I mean it’d be trivial to simply compress the current html docs if you want it. Or make a single html file. If you’re comfortable using python you can simply grab the source and compile your own version.

  34. Why don’t you display this discussion forum in reverse order? Would make it easier to check it regularly.

  35. In the doc of the resize function, the argument 'dither_type' lists the available values but doesn't clarify which dithering method is the default. Also, the argument 'prefer_props' doesn't clarify whether its default is true or false.

    • There's no port of it, but vdecimate(dryrun=1) can be used to calculate the same metrics it uses. I think you could use that and FrameEval to accomplish it, but it's very complicated scripting. In general I'd say you shouldn't use Dup at all for encoding nowadays.

      • hey, that was fast.

        I'm not looking to decimate but to blend duplicate frames for noise reduction, something like Dup(copy=true, maxcopies=1, blend=true)

        A cartoon I'm encoding has a lot of film grain and Dup works great for that.

        It's the first time I've used VS, and I wasn't expecting every obscure filter to be ported; I understand it's a really niche use and very few people would use it.

        anyway, thanks 🙂

  36. Hi Fredrik,

    is VS supposed to output sound as well as video? Neither AVISource nor FFMS2 gives me any sound… Thank you.

      • What would be a reasonable workflow to reintroduce sound into the final output? I have ffmpeg at my disposal. Thank you.

      • Is audio support at least planned? I used to enjoy compiling videos in AviSynth, where the audio would get processed alongside the video, yielding some interesting output like jittering people and so on.

    • For the moment you can try this hack: https://github.com/dubhater/vapoursynth-damb
      For now I'll stick with AviSynth+, because I have an excuse: I use it like an NLE and split home videos into small clips, which I color grade or denoise depending on the amount of light in the scene, and finally recombine using transition filters. That necessarily includes audio cross-fades. ffmpeg and ffplay, as well as MPC Home Cinema, can read AviSynth scripts directly, which is great if you want to preview with audio.
      There's also a plugin to read environment variables, which I use to override some script variables. That way you can write wrappers in Windows *.cmd files that apply settings for different uses, like realtime preview, dropout spotting with frame numbers, or rendering at full quality. It's a layman's way of passing arguments to the frame server.

  37. I just signed up for doom9 but I have to wait a few days to post.

    I just discovered this script to add motion interpolation to mpv, and it is amazing: https://gist.github.com/phiresky/4bfcfbbd05b3c2ed8645

    However, it is not perfect. The videos I want to improve are the lame 25 & 30 fps matches from tennistv. The problem is that when the ball moves too fast (i.e. between two frames the ball moves much further than the width of a tennis ball) the interpolation fails.

    How feasible is it to come up with a way to detect fast-moving small objects and draw them at a halfway point between the location on the two “real” frames surrounding the interpolated frame?


    • You can probably increase the maximum search radius and see if that helps. As usual the problem is that you’ll get more false positives when looking for motion vectors. Specialized solutions for detection are of course possible but nobody ever wants to pay for them…

      • Thanks for the quick reply. The larger search radius would probably only need to look for a yellowish tennis ball color, so maybe that would reduce the false positives enough?

        I would pay for a couple hours of work if that’s all it takes for an expert; otherwise I’ll hack on it in my spare time starting with the motioninterpolation.vpy from my first post.

  38. I have installed Python 3.6 x64 and VapourSynth R37.

    But have
    Failed to initialize VapourSynth environment

    How do I fix it?

  39. I have tested many times using the cmd, setting nominal_luminance to 800 and to 100. There is no difference, as if it's not working:
    clip=core.resize.Spline36(clip=clip,width=3840,height=2160,format=vs.YUV420P10,matrix_s="2020ncl",range_s="limited", transfer_s="st2084", primaries_s="2020",matrix_in_s="709",range_in_s="full", transfer_in_s="709", primaries_in_s="709",nominal_luminance=800)

    Can anybody offer some comment?

  40. On another tack, since you appear to be almost about to release an R37, per this post
    There are two new filters, DGDenoise and DGSharpen, which use the GPU for seriously sped-up filtering. Using either or both I now get these messages all the time:
    "Avisynth Compat: requested frame xxx not prefetched, using slow method"
    and then this at the end:
    "Core freed but 645120 bytes still allocated in framebuffers"
    A quick peek at this code https://github.com/vapoursynth/vapoursynth/blob/master/src/avisynth/avisynth_compat.cpp seems to suggest, to an uninitiated person, that it could be updated to take DGDenoise and DGSharpen into account? If so, could that please be done? Or advice on what else should be done.

    I, and presumably some others, would like to benefit from GPU-accelerated filters yielding eye-wateringly fast speeds 🙂

  41. https://github.com/vapoursynth/vapoursynth/pull/265#issuecomment-272283953 — May I ask which troubles? I didn’t want to bring this up on the GitHub issue tracker since it’s not exactly suited for chatting. A modularized refactor would come in nice; the code base of the Cython module has gotten a bit large and it can sometimes be tedious for someone to add/remove/fix something or even look at it. So what do you think about splitting it into smaller modules (and maybe into a package), and what issues do you think this could possibly bring in?

  42. I sooo can't wait to leave Wine + AviSynth behind and have all my tools run natively on Linux. VapourSynth has come a long way since I last peeked in. It's very exciting to see how much progress has been made. Any chance of Deshaker, anything comparable, or even better, available yet?

  43. I want to better understand how to use misc.AverageFrames but I’m not sure how to construct the weights. Are they supposed to be like a kernel/matrix (like used in std.Convolution)?

    The exemplar usecase input array is only 5 items long (https://forum.doom9.org/showthread.php?t=173871):
    misc.AverageFrames(singleclip, [1, 1, 1, 1, 1])
    which is supposed to give a radius of two, so does that look like:
    1, x, 2
    x, 3, x
    4, x, 5
    in which case, what is the merit of increasing those weights?

    If I've completely missed the point and there is some required reading I've missed, I'd appreciate it if you could point me in the right direction.

    • It mostly works like the Avisynth Average plugin: http://avisynth.nl/index.php/Average
      It's just a list of weights, and if only one clip is supplied it uses frames …n-2, n-1, n, n+1, n+2… as input. The final result is then divided by scale, which defaults to the sum of the weights (or 1 if the sum is 0). I'll try to get it properly documented for the next release.
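So the weights are purely temporal, not a spatial kernel. The semantics can be sketched on scalar "frames" (average_frames here is a hypothetical illustration with edge frames clamped; the real filter's edge handling may differ):

```python
# Sketch of what the weight list means: a temporal weighted mean over
# frames n-r .. n+r (r = len(weights) // 2), demonstrated on scalar
# "frames" instead of real clips. Edge frames are clamped (assumption).
def average_frames(frames, n, weights, scale=None):
    r = len(weights) // 2
    if scale is None:
        scale = sum(weights) or 1   # default scale: sum of weights, or 1
    total = 0
    for i, w in enumerate(weights):
        idx = min(max(n + i - r, 0), len(frames) - 1)  # clamp at the edges
        total += w * frames[idx]
    return total / scale

# average_frames(values, n, [1, 1, 1, 1, 1]) averages frames n-2 .. n+2,
# so increasing a weight biases the result toward that temporal offset.
```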

  44. Any tips for compiling VS on GNU/Linux boxes Fredrik?

    I’m too old/lazy/stupid to work through a bunch of compiler errors. :/

    • I speculate that you don’t have a sufficiently recent compiler. You need a C++11-capable compiler: GCC 4.7.0 or above, or, in practice, if you want to avoid a bunch of C++ ABI bugs present only in that release, GCC 4.8.x or above.

      You also need the stated version of zimg — but if that goes wrong you get a configure-time failure, so that’s probably not your problem.

  45. I wanted to register on Doom9 but was not able to answer the random question(s) at the end of the registration page and stopped after the third strike. How is FFMS2 installed on Windows 10 64-bit? A broader question is how to install plugins in general. I downloaded FFMS2 and extracted it with 7-Zip. I then copied ffms2.dll, ffms2.lib and ffmsindex.exe to C:\Program Files (x86)\VapourSynth\plugins64.

    I read the plugin autoload page but am not sure if the USER search path is one I create:
    \VapourSynth\plugins32 or \VapourSynth\plugins64.

    I found 133meadwad's VapourSynth installation guide and VapourSynth 101 helpful, but apparently I missed a connecting dot. URL is http://www.animemusicvideos.org/forum/viewtopic.php?f=118&t=125039#p1546405
    Thank You

    • C:\Program Files (x86)\VapourSynth\plugins64 <- that's the global path for 64-bit; you may as well use that one. FFMS2 is installed by simply dropping the 64-bit ffms2.dll into the global autoload dir, and then it's done.
