Discussion

This page exists for people who are too lazy to register on Doom9/GitHub/wherever but want to leave a comment or ask a simple question anyway.

70 thoughts on “Discussion”

  1. Hello.
    I’ve noticed the color (or other info) of each pixel is very useful and could be used to create some scripts/plugins.
    How can I get the value of a specific pixel?
    For example, for pixel (1234, 234) it could return an object with some attributes:
    print(thisobject.colorvalue)
    (16, 128, 128)
    which means it’s a pure black pixel.

    If users can get the info about each pixel, functions like contain_border() or is_border() can be created easily.

    ^_^ Many thanks
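    There is no single built-in pixel accessor, but frame planes are exposed to Python, so a helper along these lines is possible. This is only a sketch against the R3-era API: pixel_value is a hypothetical helper, and get_read_array(plane) is assumed to return a 2-D indexable view (exact indexing can differ between releases):

```python
def pixel_value(frame, x, y):
    """Return the per-plane values of pixel (x, y) as a tuple."""
    values = []
    for p in range(frame.format.num_planes):
        # Chroma planes may be subsampled: scale coordinates down for p > 0.
        sx = x >> (frame.format.subsampling_w if p else 0)
        sy = y >> (frame.format.subsampling_h if p else 0)
        plane = frame.get_read_array(p)  # assumed 2-D indexable view
        values.append(plane[sy][sx])
    return tuple(values)
```

    Called as pixel_value(clip.get_frame(0), 1234, 234) on YUV video it would return something like (16, 128, 128), i.e. pure black, and helpers such as is_border() could then be built on top of it.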

  2. Does AvsPmod work with VapourSynth? You should make a similar program that is more up to date; that would be amazing and easier for us new beginners. 😀

  3. Was just testing mvtools.py before starting on my own script and playback is producing: [vapoursynth] Frame requested during init! This is unsupported.
    [vapoursynth] Returning black dummy frame with 0 duration.
    [vapoursynth] Frame requested during init! This is unsupported.
    [vapoursynth] Returning black dummy frame with 0 duration.

    Interpolation seems fully functional, so can the above be ignored, or is there a .conf element that will fix this?

  4. Hi!
    Python 2.6 is no longer available.

    I am only able to download 2.7 so I am not able to install vapoursynth.

    Regards
    Carsten

  5. Using v44 on Ubuntu (from djcj ppa); using current VSEdit. Problem with text subtitles:
    video = core.sub.Subtitle(video, "This is a fairly long subtitle with lots of text", margins=[0, 0, 100, 100])
    If I try to set either top or bottom margins, all that happens is that the subtitle is rendered in a smaller (very small!) font (subtitle vertical position doesn’t alter).
    Any other info to assist in a diagnosis?

    BR, Jon

  6. Hi.
    I have a question about vspipe.
    What is the meaning of vspipe return value 1?
    P.S. I’m using QProcess.
    --------------------
    My Log:
    Piper Process: vspipe --y4m "example.vpy" -
    Encoder Process: "x264_x64" --demuxer y4m --stdin y4m --crf 22.0 -o "example_vpy.mp4" -
    Piper Process has completed. Exit code: 1
    y4m [error]: bad sequence header magic
    x264 [error]: could not open input file `-`
    mp4 [error]: failed to finish movie.
    Encoder Process has completed. Exit code: -1
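    Building the two command lines as argv lists sidesteps the shell-quoting problems that the mangled quotes in the pasted log hint at. A sketch using only the command names and flags from the log (pipeline_cmds is a hypothetical helper):

```python
def pipeline_cmds(script, out):
    """Build the vspipe and x264 command lines as argv lists."""
    piper = ['vspipe', '--y4m', script, '-']
    encoder = ['x264_x64', '--demuxer', 'y4m', '--stdin', 'y4m',
               '--crf', '22.0', '-o', out, '-']
    return piper, encoder

# e.g. feed these to QProcess.start(program, arguments) or subprocess.Popen;
# when vspipe exits with code 1 it produced no valid y4m stream, which is
# why x264 then reports "bad sequence header magic".
```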

  7. Python Script is ok.
    >>>from vapoursynth import core
    >>>print(core.version())

    Core R43
    API R3.5

    I wanted to use portable VS.
    But I can’t use vsedit-32bit.exe, which showed the following log:
    2018-05-04 16:23:34.285
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    Failed to initialize VapourSynth environment!
    How can I fix it?
    Thank you!

  8. Hi Frederik!

    I’m interested in knowing more about real-time NLE, and I wanted to know whether VapourSynth is a good piece of software to build the foundation of such a program!

    I read the docs a bit and I can see that the program is capable of doing offline NLE, so basically it might act as a “renderer”: you create your script and then you render it out and see the result.

    What if I wanted to add some real-time capabilities like video scrubbing, frame-by-frame editing/trimming, adding filters on the go, editing filter parameters, etc.?

    Would VS used as a library and integrated in some wrapper project maybe with a GUI be suitable for the task?

    If yes, where do you think I need to start looking to understand better how to achieve this goal? Docs or actual source code?

    Thank you for your time!

  9. Hi.

    Where can I find the vapoursynth folder under OS X Sierra?

    I would like to install some plugins… it isn’t under the application support folder.

    Thank You!

  10. Hi. I am just approaching VapourSynth after having used Avisynth for a long time. Actually, “using” is a big word – I just had scripts generated automatically by another program I wrote years ago, which produced a series of PNG files and WAV files to be combined into a movie (through FFMPEG).

    Now I urgently need to get the software working again, and I wanted to move to a more modern frameserver, but I can’t get a grasp of how VapourSynth works. So far I have been unable to get anything to work.

    I wonder if anyone could give me even just a few hints on how to get started converting this AVS script to VapourSynth. The specific case would also be useful for understanding the general principles.

    ----

    function CreateTrack(float duration, float afps)
    // Allocates an empty "track" (as later multiple tracks will be mixed
    // in a single final audio track
    {
    duration = Round(duration)
    return ResampleAudio(ConvertToMono(BlankClip(duration, fps=afps)),44100)
    }

    function GetAudio(string file, float duration, float afps)
    // Converts an audio file to mono / 44k
    {
    duration = Round(duration)
    result = BlankClip(duration, fps=afps)
    result = AudioDub(result, DirectShowSource(file))
    return ResampleAudio(ConvertToMono(result),44100)
    }

    function Attach(clip channel, int point, float whole, string file, float duration, float afps)
    // Places an audio file into a track, at a given point in time
    {
    channel = channel.Trim(0, point) + GetAudio(file, duration, afps)
    temp = CreateTrack(whole, afps)
    return AudioDub(temp, channel)
    }

    video = ImageSource("S:\temp\director\render\%05d.png", 0, 3721, 24, true)
    audio = CreateTrack(3720, 24)

    music = CreateTrack(3720, 24)
    fx = CreateTrack(3720, 24)
    speech = CreateTrack(3720, 24)

    speech = Attach(speech,Round(120),3720,"000.WAV",48,24)
    speech = Attach(speech,Round(216),3720,"001.WAV",72,24)
    speech = Attach(speech,Round(336),3720,"002.WAV",72,24)

    audio = MixAudio(audio, speech, 0.00, 1.00)
    audio = MixAudio(audio, fx, 0.50, 0.50)
    audio = MixAudio(audio, music, 0.50, 0.50)

    return AudioDub(video, audio)
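    For the video half of the script above, a hedged VapourSynth sketch: imwri.Read accepts an explicit list of files, png_sequence is a hypothetical helper that expands the %05d pattern, and since VapourSynth has no audio support here, the WAV mixing would have to happen externally (e.g. in ffmpeg) before muxing:

```python
def png_sequence(pattern, first, count):
    """Expand a printf-style pattern into the explicit file list."""
    return [pattern % n for n in range(first, first + count)]

frames = png_sequence(r'S:\temp\director\render\%05d.png', 0, 3721)

# In the actual script (assumes the imwri plugin is installed):
# import vapoursynth as vs
# core = vs.get_core()
# clip = core.imwri.Read(frames)
# clip = core.std.AssumeFPS(clip, fpsnum=24, fpsden=1)
# clip.set_output()
```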

  11. Hello.
    ---------------------------
    import vapoursynth as vs
    core = vs.get_core()
    core.std.LoadPlugin(path="/usr/local/lib/libmvtools.dylib")
    core.std.LoadPlugin(path='/usr/local/lib/libffms2.dylib')

    clip = video_in

    src_num = int(float(container_fps) * 1e3)
    src_den = int(1e3)
    play_num = int(float(display_fps) * 1e3)
    play_den = int(1e3)

    if not (clip.width > 1920 or clip.height > 1080 or container_fps >= 60):
        clip = core.std.AssumeFPS(clip, fpsnum=src_num, fpsden=src_den)
        sup = core.mv.Super(clip, pel=2, hpad=16, vpad=16)
        bvec = core.mv.Analyse(sup, truemotion=True, blksize=16, isb=True, chroma=True, search=3)
        fvec = core.mv.Analyse(sup, truemotion=True, blksize=16, isb=False, chroma=True, search=3)
        clip = core.mv.BlockFPS(clip, sup, bvec, fvec, num=play_num, den=play_den, mode=3, thscd2=48)

    clip.set_output()

    print("Source fps", (src_num / src_den))
    print("Playback fps", (play_num / play_den))
    --------------------------------------------------
    This script worked fine. However, since last Tuesday or Wednesday, VapourSynth says:

    Failed to evaluate the script:
    Python exception: name 'video_in' is not defined

    Traceback (most recent call last):
    File "src/cython/vapoursynth.pyx", line 1841, in vapoursynth.vpy_evaluateScript
    File "", line 6, in
    NameError: name 'video_in' is not defined

    I guess that VapourSynth R41, which was released this week, is the problem. What should I fix in this script?
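    As an aside on the * 1e3 rate conversion in that script: multiplying by 1000 truncates NTSC-style rates (23.976… becomes 23976/1000 instead of 24000/1001). A sketch of a more exact alternative (fps_to_ratio is a hypothetical helper; it assumes the reported float is close to the true rational rate):

```python
from fractions import Fraction

def fps_to_ratio(fps, max_den=1001):
    """Turn a float fps into a rational (num, den) pair."""
    f = Fraction(fps).limit_denominator(max_den)
    return f.numerator, f.denominator

# fps_to_ratio(24000 / 1001) recovers (24000, 1001) exactly, while
# fps_to_ratio(25.0) gives (25, 1).
```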

  12. Would it be possible to use Hybrid mode of Tdecimate?
    AviSynth command is below.

    TDecimate(mode=1,hybrid=1,Clip2="")

    My environment is OSX.

    Thanks.

    • No, there’s no port of TDecimate. You could theoretically dump the decimation metrics and write your own two pass solution though if you’re a bit creative. Or use Wobbly and manually mark the sections. Basically nobody here is a big fan of the automatic methods since they’re inferior.

    • So there are two main things separating it from a single clip return filter.

      1. Pass multiple videoinfo structs to setVideoInfo()
      2. To figure out which frame to return, call getOutputIndex(frameCtx) in the getFrame() function

      I hope this is clear enough, filters like subtext do this. Ask more if it’s not.

      • Thank you.

        What would be the best strategy to employ when the different output frames are intrinsically the result of the same computing operation? One wouldn’t want to repeat the same computation for each clip, only to keep and return one result and discard the others each time. I don’t assume the getFrame() calls will come in any reliable sequence, so can I simply store the computed frames in the filter’s private data and return them?

  13. Hello,
    I have installed R38 on Windows and I’m trying to write a simple script to load an image:
    import vapoursynth as vs
    core = vs.get_core()
    clip = core.imwri.Read(['C:\dev\Test.jpg'])
    On running the script I get an error “AttributeError: No attribute with the name imwri exists.”
    I thought the ImageMagick Writer-Reader (IMWRI) plugin was part of the release but I cannot find its DLL in the plug-in folder. Any chance you could point me in the right direction to fix this please?

  14. Hi, just one comment:

    Can you add to the “getting started” section a recommendation on how to test (preview) the script output?

    On avisynth (windows) I open the avs script in VirtualDub and preview all edits there. I can also jump to specific frames and compare input and output.

    Is anything like this possible in VapourSynth, or is it required to always fully encode the video result to be able to check it? After reading the docs, I think that AVFS (AV FileSystem) may be what I’m asking about. But without a simple example I have no idea how to get started.

    It would be very helpful if the beginners section had an example and a tool recommendation.

    Thanks.

    • What kind of format do you want? I mean it’d be trivial to simply compress the current html docs if you want it. Or make a single html file. If you’re comfortable using python you can simply grab the source and compile your own version.

  15. Why don’t you display this discussion forum in reverse order? Would make it easier to check it regularly.

  16. In the doc of resize function, the argument ‘dither_type’ lists the available values but doesn’t clarify which dithering method is the default one. Also the argument ‘prefer_props’ doesn’t clarify whether its default is true or false.

    • There’s no port of it but VDecimate(dryrun=1) can be used to calculate the same metrics it uses. I think you could use that and FrameEval to accomplish it, but it’s very complicated scripting. In general I’d say you shouldn’t use Dup at all for encoding things nowadays.

      • hey, that was fast.

        I’m not looking to decimate but to blend duplicate frames for noise reduction, something like this: Dup(copy=true, maxcopies=1, blend=true)

        A cartoon I’m encoding has a lot of film grain and Dup works great for that.

        It’s the first time I’ve used VS and I wasn’t expecting every obscure filter to be ported; I understand it’s a really niche use and very few people would use it.

        anyway, thanks 🙂

  17. Hi Fredrik,

    is VS supposed to output sound as well as video? Neither AVISource nor FFMS2 gives me any sound… Thank you.

      • What would be a reasonable workflow to reintroduce sound into the final output? I have ffmpeg at my disposal. Thank you.

      • Is audio support at least planned ? I used to enjoy compiling videos in aviSynth where the audio would get processed alongside the video yielding some interesting output like jittering people and so on.

    • For the moment you can try this hack: https://github.com/dubhater/vapoursynth-damb
      For now I’ll stick with AviSynth+, because I have an excuse: I use it like a NLE and split home videos into small clips, that I color grade or denoise depending on amount of light in the scene and finally recombine using transition filters. That necessarily includes audio cross-fades. ffmpeg and ffplay as well as MPC Home Cinema can read AviSynth scripts directly, which is great if you want to preview with audio.
      There’s also a plugin to read environment variables, which I use to override some script variables. That way you can write wrappers in Windows *.cmd files that apply settings for different uses, like realtime preview, dropout spotting with frame numbers, or rendering in full quality. It’s a layman’s way of passing arguments to the frame server.
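      Worth noting that in VapourSynth the environment-variable trick needs no plugin, since the script is ordinary Python; a minimal sketch (the variable names are made up):

```python
import os

def script_arg(name, default):
    """Read a script override from the environment, with a fallback."""
    return os.environ.get(name, default)

# In a wrapper .cmd:  set VS_QUALITY=preview
quality = script_arg('VS_QUALITY', 'full')
start_frame = int(script_arg('VS_START_FRAME', '0'))
```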

  18. I just signed up for doom9 but I have to wait a few days to post.

    I just discovered this script to add motion interpolation to mpv, and it is amazing: https://gist.github.com/phiresky/4bfcfbbd05b3c2ed8645

    However, it is not perfect. The videos I want to improve are the lame 25 & 30 fps matches from tennistv. The problem is that when the ball moves too fast (i.e. between two frames the ball moves much further than the width of a tennis ball) the interpolation fails.

    How feasible is it to come up with a way to detect fast-moving small objects and draw them at a halfway point between the location on the two “real” frames surrounding the interpolated frame?

    Thanks!
    Mark

    • You can probably increase the maximum search radius and see if that helps. As usual the problem is that you’ll get more false positives when looking for motion vectors. Specialized solutions for detection are of course possible but nobody ever wants to pay for them…

      • Thanks for the quick reply. The larger search radius would probably only need to look for a yellowish tennis ball color, so maybe that would reduce the false positives enough?

        I would pay for a couple hours of work if that’s all it takes for an expert; otherwise I’ll hack on it in my spare time starting with the motioninterpolation.vpy from my first post.

  19. I have installed Python 3.6 x64 and VapourSynth R37.

    But I get:
    Failed to initialize VapourSynth environment

    How to fix it?

  20. I have tested many times using the cmd, setting nominal_luminance to 800 and to 100. There is no difference, as if it’s not working:
    clip=core.resize.Spline36(clip=clip,width=3840,height=2160,format=vs.YUV420P10,matrix_s="2020ncl",range_s="limited", transfer_s="st2084", primaries_s="2020",matrix_in_s="709",range_in_s="full", transfer_in_s="709", primaries_in_s="709",nominal_luminance=800)

    Can anybody give some comment?

  21. On another tack, since you appear to be almost about to release an R37, per this post
    https://forum.doom9.org/showthread.php?p=1800433#post1800433
    There are 2 new filters, DGDenoise and DGSharpen, which use the GPU for seriously sped-up filtering. Using either or both I now get these messages all the time:
    “Avisynth Compat: requested frame xxx not prefetched, using slow method”
    and then this at the end
    “Core freed but 645120 bytes still allocated in framebuffers”
    A quick peek at this code https://github.com/vapoursynth/vapoursynth/blob/master/src/avisynth/avisynth_compat.cpp seems to suggest to an uninitiated person that it could be updated to take DGDenoise and DGSharpen into account? If so, could that please be done? Or, advice on what else should be done.

    I and presumably some others would like to benefit from gpu accelerated filters yielding eye-wateringly fast speeds 🙂

  22. https://github.com/vapoursynth/vapoursynth/pull/265#issuecomment-272283953 — May I ask which troubles? I didn’t want to bring this up on the GitHub issue tracker since it’s not exactly suited for chatting. A modularized refactor would come in nice; the code base of the Cython module has gotten a bit large and it can sometimes be tedious for someone to add/remove/fix something or even look at it. So what do you think about splitting it into smaller modules (and maybe into a package), and what issues do you think this could possibly bring in?

  23. I sooo can’t wait to leave Wine + Avisynth behind and have all my tools run natively on Linux. VapourSynth has come a long way since I last peeked in. It’s very exciting to see how much progress has been made. Any chance of Deshaker, anything comparable, or even something better being available yet?

  24. I want to better understand how to use misc.AverageFrames but I’m not sure how to construct the weights. Are they supposed to be like a kernel/matrix (like used in std.Convolution)?

    The example use case input array is only 5 items long (https://forum.doom9.org/showthread.php?t=173871):
    misc.AverageFrames(singleclip, [1, 1, 1, 1, 1])
    which is supposed to give a radius of two, so does that look like:
    1, x, 2
    x, 3, x
    4, x, 5
    in which case, what is the merit of increasing those weights?

    If I’ve completely missed the point and there is some required reading I’ve missed, I’d appreciate it if you could point me in the right direction.

    • It mostly works like the Avisynth Average plugin: http://avisynth.nl/index.php/Average
      It’s just a list of weights, and if only one clip is supplied it uses frames …n-2, n-1, n, n+1, n+2… as input. The final result is then divided by scale, which defaults to the sum of the weights (or 1 if the sum is 0). I’ll try to get it properly documented for the next release.
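      To make the temporal (not spatial) behaviour concrete, here is a pure-Python model of the weighting, not VapourSynth code; average_frames is a hypothetical stand-in, and clamping at the clip edges is an assumed behaviour:

```python
def average_frames(values, weights, scale=None):
    """Model of a temporal weighted average with radius len(weights) // 2."""
    radius = len(weights) // 2
    total = scale if scale is not None else (sum(weights) or 1)
    out = []
    for n in range(len(values)):
        acc = 0
        for i, w in enumerate(weights):
            # Clamp at the clip edges (assumed to repeat the first/last frame).
            m = min(max(n + i - radius, 0), len(values) - 1)
            acc += values[m] * w
        out.append(acc / total)
    return out

# [1, 1, 1, 1, 1] averages frame n with two neighbours on each side, so a
# single bright frame gets spread across five output frames:
# average_frames([0, 0, 100, 0, 0], [1, 1, 1, 1, 1]) -> [20.0, 20.0, 20.0, 20.0, 20.0]
```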

  25. Any tips for compiling VS on GNU/Linux boxes Fredrik?

    I’m too old/lazy/stupid to work through a bunch of compiler errors. :/

    • I speculate that you don’t have a sufficiently recent compiler. You need a C++11-capable compiler: GCC 4.7.0 or above, or, in practice, if you want to avoid a bunch of C++ ABI bugs present only in that release, GCC 4.8.x or above.

      You also need the stated version of zimg — but if that goes wrong you get a configure-time failure, so that’s probably not your problem.

  26. I wanted to register on Doom9 but was not able to answer the random question(s) at the end of the registration page and stopped after the third strike. How is FFMS2 installed on Windows 10 64-bit? A broader question: how are plugins installed in general? I downloaded FFMS2 and extracted it with 7z. I then copied ffms2.dll, ffms2.lib and ffmsindex.exe to C:\Program Files (x86)\VapourSynth\plugins64.

    I read the plugin autoload page but I’m not sure if the USER search path is one I create:
    \VapourSynth\plugins32 or \VapourSynth\plugins64.

    I found 133meadwad’s VapourSynth installation guide and VapourSynth 101 helpful, but apparently I missed a connecting dot. URL is http://www.animemusicvideos.org/forum/viewtopic.php?f=118&t=125039#p1546405
    Thank You

    • C:\Program Files (x86)\VapourSynth\plugins64 <- that’s the global path for 64-bit; you may as well use that one. FFMS2 is installed by simply dropping the 64-bit ffms2.dll into the global autoload dir, and then it’s done.
