vcam | DShow Video Capture Filter | Video Utils library
kandi X-RAY | vcam Summary
This is a DirectShow Video Capture Source Filter, taken from The March Hare website and adapted to compile under MinGW (i.e., it includes code borrowed from the DirectX 9 SDK).
Community Discussions on vcam
QUESTION
I tried one of the sample DirectShow-based virtual cameras available at https://github.com/roman380/tmhare.mvps.org-vcam
I am able to compile and build it, and it works fine in browsers (Chrome and Edge).
But in desktop apps like Zoom and Teams the virtual camera is recognized but does not show any frames: on selecting this virtual camera, only a black screen is visible, not the expected output.
I tried to debug after reading Debugging DirectShow Filters and How to debug a C++ DirectShow filter, and I added DbgLog() in the constructor of the output pin class.
ANSWER
Answered 2021-Jan-25 at 09:49
Debugging a PushSource/VCam-based filter in Zoom happens along the same lines as debugging any DLL project running in the context of an external application. Namely, the procedure is this:
1. Stop the target application (Zoom)
2. Build your project, register the DLL as/if needed with COM (regsvr32)
3. Have the target application started
4. Attach your Visual Studio to the running application (Ctrl+Alt+P, Native code debugger, Zoom process)
5. Put breakpoints in your project, enable break on exceptions
6. Have the target application running and interactively start an activity related to video capture
Steps 3-4 can be replaced by setting the project to start Zoom as the debug target (Project settings, Debugging, Command).
Also, you might want to put a breakpoint on this line and see where exactly the debug output is routed. You might see it in the integrated Debug Output window (in the case of OutputDebugString use), or you can check which file the log is being written to.
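When DbgLog output never shows up anywhere, a crude but reliable fallback is logging straight to a file from the pin constructor, which at least confirms that Zoom actually loads the DLL and instantiates the filter. DbgLog itself comes from the DirectShow base classes and needs the Windows SDK, so the sketch below substitutes a portable file-logging stand-in; the class and file names are invented for illustration:

```cpp
#include <fstream>
#include <string>

// Stand-in for the output pin class; in the real filter this would be the
// CSourceStream-derived pin whose constructor we want to trace with DbgLog.
class VCamPinProbe {
public:
    explicit VCamPinProbe(const std::string& logPath) {
        // Append so traces from multiple instantiations accumulate.
        std::ofstream log(logPath, std::ios::app);
        log << "VCam output pin constructed\n";
    }
};
```

If the line never appears in the file after starting the target application, the filter is not being instantiated at all (a registration or bitness problem) rather than failing to deliver frames.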
QUESTION
I am trying to build Vivek's Virtual Camera on Windows 10.
For that I need to have Win7Samples/multimedia/directshow/baseclasses
I have downloaded baseclasses
and built it using Visual Studio 2019.
Now I am ready to build the virtual camera filter. I followed a few prerequisite steps:
git clone https://github.com/roman380/tmhare.mvps.org-vcam
cd tmhare.mvps.org-vcam\Filters
- Then I tried to open Filters.dsp in Visual Studio 2019, which asked me to do a one-way upgrade
- Added C:\Users\alokm\tmp\Windows-classic-samples\Samples\Win7Samples\multimedia\directshow\baseclasses in Additional Include Directories
- Added C:\Users\alokm\tmp\Windows-classic-samples\Samples\Win7Samples\multimedia\directshow\baseclasses\Debug to Additional Library Directories
- After all these steps I tried to build via Build > Build Solution
But I am getting a lot of linker errors.
Errors in text format:
...ANSWER
Answered 2021-Jan-17 at 09:02
The problem building the project comes from two things:
- The project source code depends on the DirectShow BaseClasses, which are no longer part of the Windows SDK
- The project has too many settings diverged from the defaults; with current Visual Studio this becomes a problem
I updated the GitHub repository and changed the project settings to make the project build (Visual Studio 2019 Community) and run: most of the C++ project settings are reverted, and the DirectShow BaseClasses are used in the build configuration from GitHub.
Check out the README.md there, follow the build steps (which require pulling the Windows SDK samples first and building the DirectShow stuff there), have the DLLs registered (regsvr32), and finally you will have the filter generating video with random data:
QUESTION
https://github.com/roman380/tmhare.mvps.org-vcam
I had a look at Vivek's Virtual Camera. I could not understand how to compile and run this project.
In the project I see two directories
ANSWER
Answered 2021-Jan-13 at 08:55
The Filters folder contains the source code for the project. .DSP is the project file format for old Visual Studio (or was it still Visual C++ 6.0?). If current Visual Studio cannot convert the project file, you should still be able to create a new DLL project file and add the source code files.
You need the DirectShow BaseClasses to build the code. The BaseClasses are no longer part of the Windows 10 SDK, so you have to have the Windows 10 SDK and you additionally need this:
Note that the BaseClasses there are fresher than the VCam sample itself, and the Visual Studio solution file is already a .SLN, known to be buildable and acceptable (via conversion) by current Visual Studio.
Also you can find other filter projects in neighboring folders.
The Bin folder contains pre-built Win32 binaries of the project. Don't be confused by the .AX extension: the files are regular .DLL files and you can use them directly with regsvr32. If you build the code into .DLL files you will have the same effect as with .AX.
To see the project in action you need a 32-bit application that works with cameras via DirectShow, for example:
- Windows 10 SDK GraphEdit
- AMCap sample (among mentioned samples and also documented on MSDN)
- GraphStudioNext
You should see a new camera option once you regsvr32 the built project (from a privilege-elevated command prompt!).
To have the project working with 64-bit applications, you need to build the 64-bit version of the project first, then regsvr32 it. The question "Virtual Driver Cam not recognized by browser" clarifies why 32-bit and 64-bit builds work separately and target different applications.
QUESTION
I'm struggling to build/link Vivek's Vcam / Capture Source Filter on Windows 10 with Visual Studio 2019 (Version 16.5.5) for x64 platforms.
(I have already built the BaseClasses project with no issues.)
These are the current building errors:
...ANSWER
Answered 2020-May-21 at 02:47
Try disabling "Conformance mode" in the project properties (from Yes (/permissive-) to No).
I can reproduce this issue, and it compiles after disabling the option.
This compiler option is set by default in Visual Studio 2017 version 15.5 and later, but it is not set by default in earlier versions.
According to the /permissive- documentation:
By default, the /permissive- option is set in new projects created by Visual Studio 2017 version 15.5 and later versions. It's not set by default in earlier versions. When the option is set, the compiler generates diagnostic errors or warnings when non-standard language constructs are detected in your code, including some common bugs in pre-C++11 code.
...Older versions of the SDK may fail to compile under /permissive- for various source code conformance reasons.
QUESTION
- 3D game
- 2 game objects: A and B (both spawned at runtime)
I want to get a smooth transition from object A to B, but only if object B is outside the vcam's dead zone. The vcam should look at the object with a static rotation (only the camera position should change).
I assume that I have to use:
...ANSWER
Answered 2019-Nov-24 at 18:08
A description of how to achieve what I asked was provided by a Unity staff member, Gregoryl:
CM doesn't give you a notification when things go in and out of dead zones.
You can accomplish what you're looking for by polling the position of B relative to the camera and manually calculating whether the angle between CameraPosToB and CameraForward is sufficiently large, and activating the second vcam when it is.
Here is part of the code doing what he described:
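The original snippet is not reproduced here. As a rough illustration of the polling described above (the names are mine, and the vector math is written in plain C++ rather than Unity's API), it boils down to computing the angle between the camera's forward vector and the direction from the camera to B:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static double Length(const Vec3& v) { return std::sqrt(Dot(v, v)); }

// Angle in degrees between the camera's forward direction and the
// direction from the camera position to target B.
double AngleToTarget(const Vec3& camPos, const Vec3& camForward, const Vec3& targetB) {
    Vec3 toB{targetB.x - camPos.x, targetB.y - camPos.y, targetB.z - camPos.z};
    double cosA = Dot(camForward, toB) / (Length(camForward) * Length(toB));
    if (cosA > 1.0) cosA = 1.0;       // clamp against floating-point drift
    if (cosA < -1.0) cosA = -1.0;
    return std::acos(cosA) * 180.0 / 3.14159265358979323846;
}

// Polled every frame: activate the second vcam once B drifts far enough
// off the camera's view axis (the dead-zone threshold is a made-up knob).
bool ShouldActivateSecondVcam(const Vec3& camPos, const Vec3& camForward,
                              const Vec3& targetB, double deadZoneDeg) {
    return AngleToTarget(camPos, camForward, targetB) > deadZoneDeg;
}
```

In Unity this check would run in Update(), with the result toggling the second vcam's priority or enabled state.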
QUESTION
I can't find how to constantly and dynamically blend between 3 cameras (I call them middle, upper & lower) based on the rate and height of the hero.
When following the hero, the middle vCam is the main/base one, and I'd like to proportionally blend through the upper and lower vCams based on the height of the hero.
The player/hero can move rapidly between heights, so the blend should be weighted with eases, which is a natural part of Cinemachine blends, and that works. But at my current understanding of Cinemachine blends it behaves like a switch, rather than a constant dynamic blending based on height.
...ANSWER
Answered 2019-May-17 at 10:36
From what I remember, you can define the blend style in the Cinemachine blend options. From the description, it seems that it is set to "Cut" when you probably want something similar to EaseIn/EaseOut. It even allows you to define your own custom blend between cameras if the default options do not work for you.
Take a look at the documentation for more details.
QUESTION
So I've tried to tackle this problem for the last couple of weeks but have come to a bit of a standstill. I'm trying to register an RTSP stream from an IP address as a virtual webcam for use in another application (could be Skype or similar). What I need is for my computer to add a virtual webcam to its device list. This should preferably be done through a C# script, as devices could be added dynamically through a .NET program. I have found similar questions on Stack Overflow, but many of these are outdated, use Linux, or receive another stream format/protocol.
My approach so far has been using DirectShow filters and so far that has worked to a degree. Using Graphedit I can see my incoming stream by using an RTSP source filter. However, there are some problems:
- The source filter was a trial, the full version is paid and pretty expensive
- I have no experience with DirectShow filter programming
- I only showed the stream through GraphEdit; there was no virtual driver registered, so e.g. Skype couldn't use the stream
So I guess my question boils down to:
- Is my approach with DirectShow the only way to achieve what I'd like?
- Is a filter the correct approach to use if Windows should list the stream as a webcam device?
- Is vcam still the best example to look at to implement something like this?
- Does anyone know of similar, open source programs that achieve what I describe?
Anyway, I appreciate any help I can get! Thanks.
...ANSWER
Answered 2019-Mar-11 at 08:02
The diagram below explains the applicability of virtual cameras:
You are trying to somehow mount a lower green or blue box so that it reads data from RTSP.
Note that more and more applications like new Skype are Media Foundation based (top right box on the diagram) and your filter based source is less and less applicable.
Creating a virtual camera which is recognized by various software assumes you are supplying a driver (red box). Even though such packages exist, I am not aware of any open source or even free which let you quickly start on this route.
DirectShow filter-based sources (and you are yet to implement an RTSP client there) will only be seen by DirectShow-based applications of the same bitness.
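The bitness constraint exists because a process can only load DLLs of its own bitness, so a 32-bit filter is invisible to 64-bit hosts and vice versa. A trivial check of which bitness a given build actually is (handy when you are unsure which flavor of the DLL you just registered; this is a generic C++ sketch, not part of the vcam source):

```cpp
#include <cstddef>

// Pointer size decides bitness at compile time: 4 bytes -> 32-bit build,
// 8 bytes -> 64-bit build. The filter DLL's bitness must match the
// consuming application's.
int ModuleBitness() {
    return static_cast<int>(sizeof(void*) * 8);
}
```

Logging this value from the filter (or asserting on it in a build step) is a quick way to rule out a 32/64 mismatch before digging into anything else.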
QUESTION
So I found a V-CAM source and I am now using it, quite happily. However, is it possible to un-toggle bitmap rendering when the objects that are bitmapped are viewed by the cam? For instance, let's say I have a vector movieclip with a bunch of vector art, and I toggle "export as bitmap" on the movieclip from my IDE. Would it be possible to add on to my VCAM so that everything in its view (it resizes the stage) un-toggles, or redraws back to vector, while the rest of the map/movieclip stays in bitmap? And as the VCAM moves away, what was shifted from bitmap to vector gets shifted back to bitmap?
...ANSWER
Answered 2018-Oct-22 at 04:43
I think you'd better use another camera with higher bitmap dimensions (2x-4x) to render those scenes from vector that you feel are too pixelized. In terms of export, just export the character's bitmaps 2x-4x larger, or you can just keep it as a vector somewhere in your app, maybe hidden, and do a realtime render when needed, or plainly have it in your display list as a vector and not a bitmap.
In case you need to get some complex vector form into a bitmap-based engine, you can use realtime bitmap drawing of a single source in various postures/rotations, then use those rendered bitmaps to get performance. Check the game "Enigmata: Stellar War" for this technique and how it looks in the process (hint: when it says "Loading boss" it does all the rendering behind the scenes).
Getting a vectorized source from bitmaps is a lot more processor-consuming than having a ready-made vectorized source stored somewhere. Also, you won't get your original vector restored in exact form, as converting a vector to a bitmap is a lossy transformation.
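The pre-rendering idea above can be sketched as a cache keyed by quantized rotation, so each angle of the vector source is rasterized at most once and reused afterwards (the types, names, and step size here are invented for illustration; a real engine would store actual pixel buffers):

```cpp
#include <cmath>
#include <map>

// Fake "bitmap": in a real engine this would hold the rasterized frame.
struct Bitmap { int angleDeg; };

class RotationCache {
public:
    explicit RotationCache(int stepDeg) : stepDeg_(stepDeg) {}

    // Quantize the requested angle to the nearest step; the expensive
    // vector rasterization only happens on a cache miss.
    const Bitmap& Get(double angleDeg) {
        int q = static_cast<int>(std::lround(angleDeg / stepDeg_)) * stepDeg_;
        q = ((q % 360) + 360) % 360;          // normalize into [0, 360)
        auto it = cache_.find(q);
        if (it == cache_.end()) {
            ++renders_;                        // stands in for the real render
            it = cache_.emplace(q, Bitmap{q}).first;
        }
        return it->second;
    }

    int Renders() const { return renders_; }   // how many real renders occurred

private:
    int stepDeg_;
    int renders_ = 0;
    std::map<int, Bitmap> cache_;
};
```

The step size trades memory for smoothness: a 10-degree step needs at most 36 cached frames per posture, which is the kind of budget the "Loading boss" pre-render pass pays up front.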
QUESTION
I have built the famous 32-bit Vivek's VCam available here http://tmhare.mvps.org/downloads.htm and it works successfully and shows up in 32-bit video conferencing software like Skype and Zoom, but when I build it in 64-bit it shows in GraphEdit yet does not show in 64-bit video chat software like Skype for Windows 10, BlueJeans and Skype for Business 64-bit.
Does anyone have experience using the 64-bit VCam project? Do I have to make some changes in the code for 64-bit? Please guide.
...ANSWER
Answered 2018-Sep-12 at 07:48
When you build a 64-bit filter, it will work only with 64-bit software which consumes video capture via DirectShow, and not otherwise. There is not so much of this software around, though.
See Applicability of Virtual DirectShow Sources to get an idea where such 64-bit builds are applicable and what are the other options.
You will also find related information in these older questions:
QUESTION
I've been struggling with an issue for quite some time now and have all but run out of ideas.
I'm using a Cinemachine Virtual Camera in a 2D project to follow around a target. The ground / background is a Unity Tilemap GameObject. As you can see in this gif, when following around the player (a 24x24 sprite), the background tiles seem to warp a bit. I've tried to script all types of solutions to adjust the Virtual Camera transform position and hopefully snap/move it "correctly" to no avail. I don't even know for sure that the source of the issue is with the camera setup. I'm running out of solutions to something that seems like a pretty straightforward and very common scenario. I've created a sample project illustrating the issue which can be downloaded here.
Using Unity 2017.3.1f1 Personal. The background sprites are 32x32, with a PPU of 32. No compression, Point (no filter), rendered using a material with Shader: Sprites/Default, and Pixel snap.
Cinemachine Virtual Cam is set to Ortho lens size 16, Body: Framing Transposer with default settings.
Thank you so much for any suggestions or tips!!!
It feels similar to what's being described here with sub-pixel movement but I don't know for sure, and the solution in that blog post seems like it should be unnecessary (but again - maybe not).
I've created camera scripts and attached them to the virtual camera as follows:
...ANSWER
Answered 2018-Apr-01 at 14:32
I cross-posted this question to https://gamedev.stackexchange.com and someone responded with a great answer:
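The linked answer is not quoted above; the usual cure for this kind of tile shimmer, and what the sub-pixel-movement post mentioned in the question describes, is snapping the final camera position to the pixel grid after the vcam has placed it. The snapping math itself is tiny (sketched here in plain C++; in Unity it would run in a late-update hook on the camera, and PPU is the sprite pixels-per-unit):

```cpp
#include <cmath>

// Snap a world-space coordinate to the nearest pixel boundary for a given
// pixels-per-unit, so the camera never renders from between pixels.
double SnapToPixelGrid(double worldPos, double pixelsPerUnit) {
    return std::round(worldPos * pixelsPerUnit) / pixelsPerUnit;
}
```

With a PPU of 32, every snapped coordinate is a multiple of 1/32 of a world unit, so tile edges always land on whole screen pixels and stop warping as the camera follows the player.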
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported