FaceTracker | Real-time deformable face tracking in C++ with OpenCV | Computer Vision library
Trending Discussions on FaceTracker
QUESTION
So I'm making a ball bounce in Spark AR with cannon.js. Everything was working fine until I wanted to get the position of the forehead from the FaceTracker, via the patch editor, into a script.
Error:
Error:Exception in HostFunction: valueOf() called on a Signal. This probably means that you are
trying to perform an arithmetic operation on a signal like +, -, *, etc. Use functions .add, .sub(),
etc on the signal instead or .subscribeWithSnapshot() on an EventSource to get the signal's current value on a callback.
at ScalarSignal::valueOf (native)
{
"line": 4841,
"column": 19,
"sourceURL": "cannon.js"
}
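For context, here is a minimal, hypothetical illustration of what this error message is pointing at: JavaScript arithmetic operators call valueOf() on a Signal, which throws, while the signal API methods return new signals instead (the patch name 'Speed' below is made up for the example):
// 'speed' is a hypothetical ScalarSignal coming from a patch
const speed = Patches.getScalarValue('Speed');

// Wrong: the + operator calls valueOf() on the signal and throws at runtime
// var doubled = speed + speed;

// Right: use the signal API, which returns a new ScalarSignal
var doubled = speed.add(speed);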
My patches send the Vector3 of the forehead to the script. This piece of code is giving the error:
var pos = Patches.getVectorValue('HeadPos');
groundBody.position.set(pos);
Sadly, I can't find anything online about sending a Vector3 from the patches to a working value in the script. Does somebody know how to send a Vector3 value to a script and then use it as a value?
ANSWER
Answered 2020-Oct-23 at 10:21
I've found a solution when working with cannon.js. cannon.js has a kind of update loop, so you can use .pinLastValue() inside it: the loop runs every frame in order to update the physics, and pinning the signal there gives you its current value.
My code:
// Spark AR script modules (assumed to be required at the top of the script)
const Time = require('Time');
const Patches = require('Patches');

// Create a time-interval loop for the cannon.js physics updates
Time.setInterval(function (time) {
    if (lastTime !== undefined) {
        let dt = (time - lastTime) / 1000;
        // Advance the physics simulation by one step
        world.step(fixedTimeStep, dt, maxSubSteps);
        // Get the head position values from the patches;
        // .pinLastValue() samples each signal's current value this frame
        var headVal1 = Patches.getScalarValue("HeadPosX").pinLastValue();
        var headVal2 = Patches.getScalarValue("HeadPosY").pinLastValue();
        var headVal3 = Patches.getScalarValue("HeadPosZ").pinLastValue();
        // Set the position of the head hitbox to the head position in the physics world
        HeadBody.position.x = headVal1;
        HeadBody.position.y = headVal2;
        HeadBody.position.z = headVal3;
    }
    lastTime = time;
}, timeInterval);
This code gets the x, y and z values individually from the patches, where I send them individually as well. I could have done it as a Vector3 on both sides, but I thought this looked nicer and made it easier to tweak the values individually through the patches instead of packing them into a Vector3 again; a sketch of that variant follows.
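For completeness, here is a minimal, untested sketch of that Vector3 variant, reusing the 'HeadPos' patch output from the original question; a VectorSignal exposes its x/y/z components as ScalarSignals, so each can be pinned per frame:
// Hypothetical Vector3 variant: one patch output 'HeadPos' instead of three scalars
const headPos = Patches.getVectorValue('HeadPos');

Time.setInterval(function (time) {
    // Copy the current component values into the cannon.js body each tick
    HeadBody.position.set(
        headPos.x.pinLastValue(),
        headPos.y.pinLastValue(),
        headPos.z.pinLastValue()
    );
}, timeInterval);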
QUESTION
I'm having trouble getting the FaceTracker class to work on the HoloLens 2. As soon as I try to detect faces with the ProcessNextFrameAsync method, I get an exception of the following kind:
System.Runtime.InteropServices.COMException (0x80004005): Unspecified error
This is only the first part of the error message; if more information is needed, I can add it. See the following minimal example.
public async void Start()
{
    var selectedGroup = await FindCameraAsync();
    await StartMediaCaptureAsync(selectedGroup);
}

private async Task StartMediaCaptureAsync(MediaFrameSourceGroup sourceGroup)
{
    faceTracker = await FaceTracker.CreateAsync();
    this.mediaCapture = new MediaCapture();
    await this.mediaCapture.InitializeAsync(settings);
    this.frameProcessingTimer = ThreadPoolTimer.CreatePeriodicTimer(ProcessCurrentVideoFrameAsync, timerInterval);
}
private async Task ProcessCurrentVideoFrameAsync()
{
    const BitmapPixelFormat InputPixelFormat = BitmapPixelFormat.Nv12;
    var deviceController = this.mediaCapture.VideoDeviceController;
    this.videoProperties = deviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;
    VideoFrame videoFrame = new VideoFrame(InputPixelFormat, (int)this.videoProperties.Width, (int)this.videoProperties.Height);
    IList<DetectedFace> detectedFaces;
    try
    {
        detectedFaces = await faceTracker.ProcessNextFrameAsync(videoFrame);
    }
    catch (Exception e)
    {
        System.Diagnostics.Debug.WriteLine($"Failed with Exception: {e.ToString()}");
        return;
    }
    videoFrame.Dispose();
}
- I get a suitable camera with MediaFrameSourceKind.Color and MediaStreamType.VideoPreview using FindCameraAsync(), which works fine as far as I can tell.
- I start MediaCapture and the FaceTracker within StartMediaCaptureAsync().
- I try to detect faces in ProcessCurrentVideoFrameAsync().
Here are the things I have tested and the information I have received:
- I have a picture in the format Nv12, with PixelWidth 1504 and PixelHeight 846.
- The permissions in Unity are granted for Webcam, PicturesLibrary and Microphone.
- The app is compiled with IL2CPP.
- The message "No capture devices are available." appears after starting the app. Other articles mention that this means a permission (Webcam or Microphone) is missing, which is not the case here, but it may be connected nonetheless.
- I used "Track faces in a sequence of frames" and the Basic Face Tracking sample as references.
I am grateful for any ideas and suggestions.
UPDATE 14 July 2020
I have just tried the FaceDetector on several individual images that were stored locally on the HoloLens 2. This works fine. Even though FaceDetector and FaceTracker are not identical, they are very similar, so I guess the problem is somehow related to MediaCapture. Next I will try to capture an image with MediaCapture and process it with FaceDetector. If anyone has any more ideas in the meantime, I would be grateful to hear them.
ANSWER
Answered 2020-Jul-14 at 03:29
There is an official sample showing how to use the FaceTracker class to find human faces within a video stream: Basic face tracking sample. Line 256 of that sample is the key point, where a preview frame is obtained from the capture device.
However, based on your code, you have created a VideoFrame object and specified its format and dimensions, but you never invoke GetPreviewFrameAsync to copy the native webcam frame into that VideoFrame object. You can try the following code to fix it:
private async Task ProcessCurrentVideoFrameAsync()
{
    const BitmapPixelFormat InputPixelFormat = BitmapPixelFormat.Nv12;
    var deviceController = this.mediaCapture.VideoDeviceController;
    this.videoProperties = deviceController.GetMediaStreamProperties(MediaStreamType.VideoPreview) as VideoEncodingProperties;
    VideoFrame videoFrame = new VideoFrame(InputPixelFormat, (int)this.videoProperties.Width, (int)this.videoProperties.Height);
    // Add this line: copy the current preview frame from the capture device into videoFrame
    await this.mediaCapture.GetPreviewFrameAsync(videoFrame);
    // ...then hand videoFrame to faceTracker.ProcessNextFrameAsync as in your original code
}
QUESTION
So my app saves the names of Bluetooth devices the user has previously connected to in SharedPreferences; these are then compared to the names of all currently paired devices, so that on opening, the app can instantly connect to that device. This is done by this piece of code:
sharedPreferences = getApplicationContext().getSharedPreferences("BtNames", MODE_PRIVATE);
keys = sharedPreferences.getAll();
for (BluetoothDevice device : pairedDevices) {
    try {
        for (Map.Entry entry : keys.entrySet()) {...}
This loops through the paired devices and the entries of SharedPreferences, whose value is then accessed by this code:
String device_name = device.getName();
String name = entry.getValue().toString();
Now both of these work well, and entry.getValue()... returns the exact names of the previously connected devices. The problem occurs when trying to compare the two Strings with:
device_name.equals(name)
This returns false even though both Strings appear to be exactly the same when logged:
E/FaceTracker: EV3LO
E/FaceTracker: EV3LO
I have already tried replacing all spaces with nothing, but that didn't work either. Maybe I overlooked something, but at the moment I don't really have a clue what's going wrong. Thanks in advance for any answers.
ANSWER
Answered 2020-Jun-05 at 22:04
The problem is a non-printable, non-ASCII character at the beginning or end of the string. Try stripping it with the following:
// replaceAll returns a new String, so assign the result back
name = name.replaceAll("\\P{Print}", "");
I hope it helps, and good luck if it doesn't.
QUESTION
I am a newbie to Spark AR. I have just learned the basics, and I want to know whether we can track facial emotions like sad, happy, angry, etc., like the JavaScript face tracker made by justadudewhohacks.
ANSWER
Answered 2020-Apr-13 at 16:45
There are some face gestures supported by Spark AR, such as happy, smiling, surprised, kissing, etc. See the FaceGesturesModule reference: https://sparkar.facebook.com/ar-studio/learn/documentation/reference/classes/facegesturesmodule/ A sketch of how these gestures can be used from a script follows.
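As a minimal sketch of how that module can be used from a script (module and method names as given in the FaceGesturesModule docs linked above; treat this as a starting point, not a complete effect):
// Spark AR script modules
const FaceTracking = require('FaceTracking');
const FaceGestures = require('FaceGestures');
const Diagnostics = require('Diagnostics');

// Track the first detected face
const face = FaceTracking.face(0);

// isSmiling(face) returns a BoolSignal; monitor() fires whenever it changes
FaceGestures.isSmiling(face).monitor().subscribe(function (event) {
    Diagnostics.log(event.newValue ? 'happy' : 'not happy');
});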
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install FaceTracker
src (contains all source code)
model (contains a pre-trained tracking model)
bin (will contain the executable after building)