Augmented-Reality | a basic augmented reality library
kandi X-RAY | Augmented-Reality Summary
Augmented Reality’s definition varies from one person to another, but the concept remains the same. My definition of AR is putting some extra information on top of the user’s real world. That extra information can range from text to 3D objects, and the real world remains unaffected by the superimposed content. Google’s definition: "a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view".
Community Discussions
Trending Discussions on Augmented-Reality
QUESTION
I am new to Flutter. I am using ARCore provided by Google, but I get this error at compile time:
flutter.pub-cache\hosted\pub.dartlang.org\arcore_flutter_plugin-0.0.10\android\src\main\kotlin\com\difrancescogianmarco\arcore_flutter_plugin\ArCoreView.kt: (241, 38): Object is not abstract and does not implement abstract member public abstract fun onActivityCreated(@NonNull p0: Activity, @Nullable p1: Bundle?): Unit defined in android.app.Application.ActivityLifecycleCallbacks
You can see my code here
Is this error specific to my version, or is it an error in the plugin's properties? Also, I have enabled AndroidX.
...ANSWER
Answered 2021-May-19 at 10:37
There is an error in the Flutter plugin that needs to be corrected.
Go to the ArCoreView.kt file in the Flutter plugin at
\flutter.pub-cache\hosted\pub.flutter-io.cn\arcore_flutter_plugin-0.0.10\android\src\main\kotlin\com\difrancescogianmarco\arcore_flutter_plugin
and remove the "?" from the onActivityCreated signature, i.e. replace the nullable Activity parameter with a non-null one, as sketched below.
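The answer's original snippet was cut off in this digest; here is a minimal sketch of the fix, reconstructed from the compiler error quoted above (it goes inside the ActivityLifecycleCallbacks object in ArCoreView.kt, and the parameter names are illustrative):

    // Replace the nullable parameter:
    //     override fun onActivityCreated(activity: Activity?, savedInstanceState: Bundle?)
    // with a non-null one, so that it matches the abstract member reported in the error:
    //     onActivityCreated(@NonNull p0: Activity, @Nullable p1: Bundle?): Unit
    override fun onActivityCreated(activity: Activity, savedInstanceState: Bundle?) {
    }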
QUESTION
In ARKit 4 with RealityKit, one can find, for example, the left-hand transform relative to the skeleton base (hip). Given the left-hand transform (relative to the hip) and the hip transform (relative to the world), how do I calculate the left-hand transform relative to the world? Where is the API? When I figured out the math formula, I could use the SIMD API directly, but I guess there should be a simple API for this kind of math? Thanks.
EDIT: Adding some code to make it clear...
...ANSWER
Answered 2020-Sep-18 at 03:59
OK. After reviewing all the math on matrices, 3D transformations, and quaternions, I finally found a solution! It is actually buried in the RealityKit API:
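The answer's code was trimmed in this digest; below is a minimal sketch of the underlying composition, assuming an ARBodyAnchor is available (the function and variable names are illustrative):

    import ARKit

    func leftHandWorldTransform(for bodyAnchor: ARBodyAnchor) -> simd_float4x4? {
        // Hip (skeleton root) relative to the world.
        let hipToWorld = bodyAnchor.transform
        // Left hand relative to the hip: modelTransform(for:) returns a joint's
        // transform relative to the skeleton root.
        guard let handToHip = bodyAnchor.skeleton.modelTransform(for: .leftHand) else { return nil }
        // Composing the two yields the left hand relative to the world.
        return simd_mul(hipToWorld, handToHip)
    }

The multiplication is all any "relative-to-world" convenience API must do internally: world = parentToWorld × childToParent.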
QUESTION
With the help of the ScrollMagic library, I managed to change the background image for my three-section-container section
depending on scroll position. Additionally, I managed to add an overlay that appears only when I am on a certain section of my page.
My issue now is that I would like to animate how the background image changes (I want it to come from right to left and stay positioned in the middle; as you can see in the code, the background changes twice). I tried the `transform: translateY(40px);` CSS property, but the result was inconsistent because the image would not fill 100% of my screen. Also, I want my overlay to come from left to right, and I am quite confused about how to do that.
...ANSWER
Answered 2020-Oct-10 at 20:57
I'm not familiar with the ScrollMagic API, but I think this code snippet can make things a little clearer from the JS and CSS perspective involved in the animation.
In fact, most of it can be done without any external API, just by toggling a CSS class back and forth!
Hope this helps you a little bit:
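The snippet itself was not captured in this digest; a minimal sketch of the class-toggling idea (the selectors and scroll threshold are illustrative):

    // Add or remove a marker class based on scroll position; CSS transitions
    // (e.g. transform: translateX(...)) then animate the background and overlay.
    const section = document.querySelector('.three-section-container');
    const overlay = document.querySelector('.overlay');

    window.addEventListener('scroll', () => {
      const inView = section.getBoundingClientRect().top < window.innerHeight / 2;
      section.classList.toggle('active', inView); // background slides in from the right
      overlay.classList.toggle('active', inView); // overlay slides in from the left
    });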
QUESTION
Is there a markerless AR library that can run on any mobile web browser? Something that does not require a standalone app like Google Article, and can be run on both Android and iOS.
...ANSWER
Answered 2018-Jan-29 at 20:37
For that to happen, mobile browsers would have to support a web-based AR framework and properly interface with hardware components such as the camera, gyros, etc.
There are a few web-based AR projects out there; the most popular is currently AR.js (loosely branded as WebAR), which is what Google Article is currently based on. However, they are limited to experimental browsers or special "WebAR" applications with the correct hooks.
An alternative is to consider cross-platform AR frameworks such as Viro React, which enables mobile cross-platform AR/VR development in React Native (JavaScript-based). You will, however, need to build your experience into an app in order to deploy it.
With time, WebVR and WebAR should eventually be supported by the various browsers and their vendors.
QUESTION
I have an ARSCNView with virtual objects drawn on the user's face. The session has the following configuration:
...ANSWER
Answered 2020-Apr-20 at 19:17
The reason the virtual objects are not appearing is that ARKit provides only the raw image: frame.capturedImage is the image captured by the camera, without any of the SceneKit rendering. To pass the rendered video, you will need to implement an offscreen SCNRenderer and pass the resulting pixel buffer to Agora's SDK.
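As a rough sketch of the offscreen-rendering step (sceneView is assumed to be the ARSCNView from the question, and the output size is illustrative):

    import ARKit
    import SceneKit

    func renderedFrame(from sceneView: ARSCNView) -> UIImage {
        let renderer = SCNRenderer(device: MTLCreateSystemDefaultDevice(), options: nil)
        renderer.scene = sceneView.scene              // reuse the ARSCNView's scene graph
        renderer.pointOfView = sceneView.pointOfView  // follow the AR camera
        // Render one composited frame (camera background + virtual content) off screen.
        // A production pipeline would render into a CVPixelBuffer-backed Metal texture
        // and hand that buffer to the video SDK instead of producing a UIImage.
        return renderer.snapshot(atTime: CACurrentMediaTime(),
                                 with: CGSize(width: 720, height: 1280),
                                 antialiasingMode: .multisampling4X)
    }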
I would recommend you check out the open-source framework AgoraARKit. I wrote the framework, and it implements the Agora.io Video SDK and ARVideoKit as dependencies. ARVideoKit is a popular library that implements an off-screen renderer and provides the rendered pixel buffer.
The library implements world tracking by default. If you want to extend the ARBroadcaster class to implement face tracking, you could use this code:
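The code block was dropped from this digest. As a loose sketch only: the subclass shape below is an assumption about AgoraARKit's API (the ARBroadcaster hook and sceneView property may differ), while the ARKit calls themselves are standard:

    import ARKit

    class FaceBroadcaster: ARBroadcaster {
        override func viewDidLoad() {
            super.viewDidLoad()
            // Face tracking requires a TrueDepth camera; bail out gracefully otherwise.
            guard ARFaceTrackingConfiguration.isSupported else { return }
            sceneView.session.run(ARFaceTrackingConfiguration())
        }
    }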
QUESTION
On Apple's ARKit 3 page (https://developer.apple.com/augmented-reality/arkit/) there is fine print reading, "People Occlusion and the use of motion capture, simultaneous front and back camera, and multiple face tracking are supported on devices with A12/A12X Bionic chips, ANE, and TrueDepth Camera."
I read this sentence as a computer scientist, which suggests that the device must have an A12, ANE, and a TrueDepth camera to support any of "people occlusion, motion capture, simultaneous front/back camera, and multiple face tracking." If this is the case, then the only iPad that can use any of these features is the latest iPad Pro, and not an Air, which has an A12 but not a TrueDepth camera. (Sidenote: what is ANE? I can't find documentation on it, but I think it has something to do with the machine-learning system.) Is this correct, that only the iPad Pro supports any of these features?
I ask because people occlusion is incredibly important for a multi-user experience around a table.
...ANSWER
Answered 2019-Aug-15 at 20:26
The ANE is the Apple Neural Engine.
Apple's footnote is a little misleading. People occlusion is performed on the scene captured by the rear camera, and so does not make use of the TrueDepth camera.
Multiple face tracking requires a TrueDepth camera, in addition to an A12 chip with the ANE.
People occlusion is available on all A12 devices (see this WWDC session), so the new iPad Air will support it.
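Rather than reasoning from device lists, these capabilities can be checked at runtime; a short sketch using ARKit's standard support queries (iOS 13+):

    import ARKit

    // People occlusion (rear camera; needs A12-class hardware with the ANE).
    let peopleOcclusion = ARWorldTrackingConfiguration
        .supportsFrameSemantics(.personSegmentationWithDepth)

    // Motion capture (body tracking).
    let motionCapture = ARBodyTrackingConfiguration.isSupported

    // Multiple face tracking (TrueDepth camera plus A12).
    let multiFace = ARFaceTrackingConfiguration.isSupported
        && ARFaceTrackingConfiguration.supportedNumberOfTrackedFaces > 1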
QUESTION
I am interested in opening a USDZ file in the Safari browser and making sure that it ends up in the Quick Look previewer, and I was wondering if there is a browser API for Safari that can do this. I want to make sure that the link doesn't just open in a new tab.
I have tried just opening a USDZ file, but that just opens a new tab where you have the option to open it into Files and the like.
There isn't really any code yet, but if code is the best way to achieve this, that would make sense.
From what I have read here, you need to specify rel="ar", but I am still not sure it is working. I have tried it within CodePen and nothing really happens.
...ANSWER
Answered 2019-Aug-08 at 11:58
It turns out that you need an image tag within the link, and the code I was actually using didn't have that image tag. A very odd choice by the Apple devs.
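The working markup was not captured here; a minimal sketch of the pattern the answer describes, following Apple's AR Quick Look documentation (the file names are placeholders):

    <!-- The rel="ar" anchor must directly wrap an <img>; Safari then opens the
         USDZ in AR Quick Look instead of navigating to a new tab. -->
    <a rel="ar" href="model.usdz">
      <img src="model-thumbnail.jpg" alt="3D model preview">
    </a>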
QUESTION
I am investigating the new ARKit 3 features, in particular motion capture. I have an iPhone with an A12 chip (so all the new features should work), and the iPhone is running iOS 13 (beta). I also installed the Xcode 11 beta on my development laptop, as recommended.
When I download the tutorial/sample Xcode project here, I get errors on compile. I was able to get rid of those by commenting out the references to the AnyCancellable instance, and the program compiles.
When I run it on my device, I get error messages about the 3D mesh (in USDZ format) saying it is missing certain joint information.
I've tried swapping the USDZ model included with the sample project for other USDZ models provided on the Apple site here, to no avail.
The expected behaviour is that the sample app opens in a camera view, tracks a person appearing in front of the camera, and renders a skeleton with a 3D mesh model on top that mimics the person's actions.
I am getting the error in the Xcode console:
...ANSWER
Answered 2019-Jul-23 at 02:54
Can you confirm that your device definitely has an A12 chip (meaning it's an iPhone XR, XS, XS Max, iPad Mini (2019), iPad Air (2019), or iPad Pro (2018))? Additionally, ensure your Xcode is running the latest Beta 4 release (build 11M374r, as of the time of this writing) and your iOS device is running the iOS 13 beta.
These may seem rudimentary, but I cannot replicate the issues you indicate when downloading the sample project on my end. I can launch the project, set my signing team, load on my iPhone XS Max, and the project functions as it should; when a body is detected in the frame, the 3D "skeleton" appears alongside and motion follows.
It may be worth mentioning that the 3D body tracking technology appears to require a USDZ model that has existing joints already configured. The USDZ models on Apple's site mostly do not (the robot, for example, lacks any such joints that could be tracked). See this screenshot for an example of what the structure of Apple's "skeleton" looks like.
The error messages you provided from Xcode make it seem like the model you are trying to load lacks the skeletal structure to conform to this technology. Could you try re-downloading the sample project and confirm you get the same error?
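Since the error points at the model rather than the code, it may also help to load the asset explicitly as a body-tracked entity; a sketch following the pattern in Apple's body-tracking sample ("character/robot" is that sample's asset name):

    import RealityKit
    import Combine

    var cancellable: AnyCancellable?

    // Only a rigged model whose skeleton matches ARKit's expected joint names loads
    // as a BodyTrackedEntity; other USDZ assets fail with missing-joint errors.
    cancellable = Entity.loadBodyTrackedAsync(named: "character/robot")
        .sink(receiveCompletion: { completion in
            if case let .failure(error) = completion {
                print("Model is missing joint data: \(error)")
            }
        }, receiveValue: { (character: BodyTrackedEntity) in
            character.scale = [1.0, 1.0, 1.0]  // scale to match the tracked person
        })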
QUESTION
We are trying to make a custom 3D configurator with AR capabilities, but we found that model-viewer from Google is too limited for our needs, so we are building it in three.js.
To use AR, we analyzed the source code and found that clicking the model-viewer button redirects to this link:
intent://googlewebcomponents.github.io/model-viewer/examples/assets/Astronaut.glb?link=https%3A%2F%2Fgooglewebcomponents.github.io%2Fmodel-viewer%2Fexamples%2Faugmented-reality.html&title=A%203D%20model%20of%20an%20astronaut
(taken from Google's example page)
Our first tests produced a console warning like "Inaccessible Navigation" and then silently failed. Do you have an idea of what we are doing wrong?
...ANSWER
Answered 2019-Jul-16 at 15:30
The link above was wrong. I inspected the source code and found out that the correct one is built like this:
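The reconstructed URL itself was trimmed from this digest. For reference, Google's Scene Viewer documentation describes an intent URL of roughly this shape, shown here wrapped for readability but sent as a single line (example.com is a placeholder):

    intent://arvr.google.com/scene-viewer/1.0?file=https://example.com/model.glb&mode=ar_only
      #Intent;
        scheme=https;
        package=com.google.android.googlequicksearchbox;
        action=android.intent.action.VIEW;
        S.browser_fallback_url=https://example.com/fallback/;
      end;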
QUESTION
I tried to render ARCore stereoscopically through Cardboard. Due to the misalignment between the field of view of the ARCore camera and the VR view, the object appears not to be tracked.
To sort this out, I referred to this blog and implemented it by using a barrel distortion shader. However, it doesn't render stereoscopically.
Is there any other fix for this problem?
...ANSWER
Answered 2019-Apr-25 at 13:11
For stereo vision you need two view controllers, i.e. two ArFragments, each running at 60 fps. Ideally you would need a frame rate of 120 fps, but that is impossible in ARCore at this time.
Barrel distortion is just a special type of warped distortion of a view.
Also, for a robust stereo experience you should shift the views only along the X-axis and never along the Y-axis. In real life, the effective distance between two camera lenses is 64-200 mm.
For further details, look at the Technicolor paper "15 Stereo Issues".
There are also other visual implementations for stereo; it's up to you which one is more comfortable for viewing.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported