SDK-Samples | repository includes several sample projects | SDK library
kandi X-RAY | SDK-Samples Summary
The repository includes several sample projects that show how to create apps for YotaPhone.
Top functions reviewed by kandi - BETA
- Initialize the epd activity
- Called when the activity is created
SDK-Samples Key Features
SDK-Samples Examples and Code Snippets
Community Discussions
Trending Discussions on SDK-Samples
QUESTION
I have downloaded this code from the official Microsoft Cognitive Services GitHub repository:
...ANSWER
Answered 2021-May-06 at 10:27
Please make sure that you have created the right type of Cognitive Services resource; I recommend creating an All Cognitive Services (multi-service) resource, as shown in the screenshot in the original answer.
You can follow this doc to create it (multi-service resource).
I ran a test on my side with this service and everything worked as expected; the original answer includes the local test image and the resulting output.
QUESTION
Side note: I'm not asking anyone to help me with a DirectX tutorial. I think this error is a normal Visual Studio Win32 API error.
Hey guys and girls, I'm trying to learn some DirectX 11 for fun, and I found these DirectX 11 examples on GitHub. Now I'm trying to understand each function and see how everything works together. I created a new VS project just to play around with things. I copy-pasted all of the code in Tutorial02.cpp into my own project, created a new filter called "Shaders", and changed the properties of my VS solution. At that point I didn't have any problems compiling my code. But once I add Tutorial02.fx, Tutorial02_VS.hlsl, and Tutorial02_PS.hlsl to the solution, VS can't compile and gives me the following error code:
...ANSWER
Answered 2020-Nov-17 at 09:46
With the help of the GitHub demo, I can reproduce this problem. The screenshots in the original answer show the property pages to modify for Tutorial02.fx, Tutorial02_PS.hlsl, and Tutorial02_VS.hlsl. In short, you would typically exclude Tutorial02.fx from the build (it is compiled at runtime), and for each .hlsl file set the Shader Type (Vertex or Pixel Shader) and the Entrypoint Name (VS or PS) so the HLSL compiler builds it correctly.
You can also refer to the HLSL Property Pages documentation to learn how individual HLSL shader files are built.
QUESTION
I am using jitsi-android-sdk. When I press the home button, the app enters picture-in-picture mode, but when I tap the app icon it takes me back to the first form page instead. How can I make the picture-in-picture window go full screen when I tap the app?
What should I put in onResume() so that tapping the app makes the picture-in-picture window full screen?
For example, when we are using Netflix and watching a movie on mobile, pressing the home button enters picture-in-picture mode, and tapping the app icon makes the picture-in-picture window full screen again (returning to the movie we were watching in full screen).
I want to implement the same behavior in my Android app.
...ANSWER
Answered 2020-Oct-28 at 06:46
First, check which activity goes into the paused state when the app enters picture-in-picture mode; in that activity's onResume() method, add this code.
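The snippet from the original answer isn't shown above. As a purely illustrative sketch (EntryActivity and JitsiCallActivity are hypothetical names; the call activity would maintain the isInPipMode flag in onPictureInPictureModeChanged()), the idea looks like this:

```java
import android.app.Activity;
import android.content.Intent;

// Illustrative only: EntryActivity is the app's first page; JitsiCallActivity
// (hypothetical name) is the activity that enters picture-in-picture mode.
public class EntryActivity extends Activity {
    @Override
    protected void onResume() {
        super.onResume();
        if (JitsiCallActivity.isInPipMode) {
            // Relaunch the activity that owns the PiP window; the system then
            // expands it back to full screen instead of showing this page.
            Intent intent = new Intent(this, JitsiCallActivity.class);
            intent.addFlags(Intent.FLAG_ACTIVITY_REORDER_TO_FRONT);
            startActivity(intent);
        }
    }
}
```

Relaunching the activity that owns the PiP window is what signals the system to expand it; simply resuming the first page leaves the PiP window floating.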
QUESTION
I recently started to learn computer graphics and I need some help.
I'm trying to reproduce the rotation of a "stick" around a circular path in such a way that the stick is always tangent to the path. I provided an image to help illustrate the desired behaviour:
I tried some approaches, but never successfully. Below is a gif of my most recent test, where the small cube represents the circular path that the stick should rotate around while staying tangent to the path.
The transformations that I applied to the object world matrices:
...ANSWER
Answered 2020-Oct-18 at 10:22
I could reproduce the behaviour I wanted by translating the Z coordinate of the stick instead of the X:
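In matrix terms, here is a sketch of why this works (assuming a row-vector convention as in DirectX math, a circular path of radius r in the XZ plane, and the stick's long axis along X):

```latex
% World matrix that keeps the stick tangent to the circle:
% translate r units along Z first, then rotate about Y.
W(\theta) = T(0,0,r)\,R_y(\theta)
```

Because the offset is along Z, it stays perpendicular to the stick's X axis; the rotation about Y then carries both the position and the orientation around the circle, so the stick remains tangent. Translating along X instead offsets the stick along its own axis, which makes it point radially outward.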
QUESTION
I need a consultation regarding developing an authentication feature in an MS Teams bot.
The bot is meant to be used primarily in MS Teams channels, and in order to secure the api/messages endpoint, I am using OAuth.
Now, if the user is not authenticated, an OAuthPrompt is created for the user to log in and continue participating in the channel's thread conversation; however, the login prompt is sent in the channel thread, which does not provide a good user experience.
Instead, I would like to send the OAuthPrompt to the user as a private message.
How could I go about implementing this? I am referring to this documentation and this example, core-proactiveMessages. Could someone please help me figure this out or point me to the correct resources and examples? Thanks
...ANSWER
Answered 2020-Jun-29 at 18:13
It sounds like you want to restrict the bot to specific tenants just like in this question: What is the botframework security model?
If you're sure you want to restrict the bot to specific users instead of specific tenants, you can still use the middleware from that answer and just adapt it slightly to check the user ID instead of the tenant ID.
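As an illustration only (the linked answer is C#; this is a hedged sketch assuming the Java Bot Framework SDK's Middleware interface, with a hypothetical class name and allow-list), the adapted user-ID check might look like:

```java
import java.util.Set;
import java.util.concurrent.CompletableFuture;

import com.microsoft.bot.builder.Middleware;
import com.microsoft.bot.builder.NextDelegate;
import com.microsoft.bot.builder.TurnContext;

// Hypothetical allow-list middleware: only lets turns from approved user IDs
// continue down the pipeline; all other activities are silently dropped.
public class AllowedUsersMiddleware implements Middleware {
    private final Set<String> allowedUserIds;

    public AllowedUsersMiddleware(Set<String> allowedUserIds) {
        this.allowedUserIds = allowedUserIds;
    }

    @Override
    public CompletableFuture<Void> onTurn(TurnContext turnContext, NextDelegate next) {
        String userId = turnContext.getActivity().getFrom() != null
                ? turnContext.getActivity().getFrom().getId()
                : null;
        if (userId != null && allowedUserIds.contains(userId)) {
            return next.next(); // allowed user: continue the pipeline
        }
        return CompletableFuture.completedFuture(null); // drop everyone else
    }
}

// Registration (illustrative): adapter.use(new AllowedUsersMiddleware(Set.of("user-id-1")));
```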
QUESTION
My goal is to encode the main framebuffer of my Windows machine using nvenc and stream its content to my iPad using the VideoToolbox API.
The code I use to encode the h264 stream is basically a copy/paste of https://github.com/NVIDIA/video-sdk-samples/tree/master/nvEncDXGIOutputDuplicationSample; the only change is that instead of writing to a file, I send the data.
For the decoding I use https://github.com/zerdzhong/SwfitH264Demo/blob/master/SwiftH264/ViewController.swift#L71
The encoding works perfectly when I write all the contents to a file, and I am able to use an h264-to-mp4 online converter without issue. The problem is that the decoder gives me the error kVTVideoDecoderBadDataErr in the function decompressionSessionDecodeFrameCallback.
Here is what I tried:
- Firstly, using an h264 analyzer, I found that the frame order is: 7/8/5/5/5/5/1...
- I found that nvenc encodes the frames 7/8/5/5/5/5 in only one packet
- I tried to separate this packet into multiple ones using the sequence (0x00 0x00 0x00 0x01); that gave me the frames 7/8/5 separately
- As you can see, I only got one type-5 frame, which is around 100KB; the H264 analyzer said that there are four type-5 frames (something like 40KB, 20KB, 30KB, 10KB)
- Using a hex file viewer, I saw that the sequence separating those type-5 frames was (0x00 0x00 0x01) instead; I tried to also separate on that, but I got the exact same VideoToolbox error while decompressing
Here is the code I use to separate and send the frames. The protocol is simply PACKET_SIZE->PACKET_DATA. The Swift code is able to read the NALU types, so I am confident that this is not the issue.
...ANSWER
Answered 2020-Jun-24 at 15:46
Alright, so as weird as it sounds, my code works on the simulator but not on my iPad Pro. In the end it does work, so I'll still mark this as the correct answer.
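For readers hitting the same start-code issue described above, here is a minimal illustrative sketch (in Java, for illustration; the asker's sender is C++) of splitting an Annex B buffer on both 3-byte and 4-byte start codes, after which each payload can be sent as PACKET_SIZE->PACKET_DATA:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Minimal sketch: split an Annex B H.264 buffer into NAL unit payloads,
// accepting both 3-byte (00 00 01) and 4-byte (00 00 00 01) start codes.
public final class AnnexBSplitter {
    public static List<byte[]> splitNalUnits(byte[] stream) {
        List<byte[]> nalUnits = new ArrayList<>();
        int payloadStart = -1;
        int i = 0;
        while (i + 2 < stream.length) {
            if (stream[i] == 0 && stream[i + 1] == 0 && stream[i + 2] == 1) {
                // Found 00 00 01; if the preceding byte is 0, it was a 4-byte code.
                int codeStart = (i > 0 && stream[i - 1] == 0) ? i - 1 : i;
                if (payloadStart >= 0) {
                    nalUnits.add(Arrays.copyOfRange(stream, payloadStart, codeStart));
                }
                payloadStart = i + 3; // payload begins right after 00 00 01
                i += 3;
            } else {
                i++;
            }
        }
        if (payloadStart >= 0) {
            nalUnits.add(Arrays.copyOfRange(stream, payloadStart, stream.length));
        }
        return nalUnits;
    }
}
```

Each returned payload starts with the NAL header byte (type 7 = SPS, 8 = PPS, 5 = IDR slice), which matches the frame sequence the analyzer reported.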
QUESTION
I'm looking into developing a video conferencing application for Android with the Skype for Business SDK (SfbSDK).
To see whether I can meet my requirements, I cloned the git repository of the sample application built with the SfbSDK, provided by the Office Developer Team and available here.
While the sample application allows me to broadcast the front-facing camera and/or the back camera, I can't find any parameter that lets me modify the camera instance other than changing the target camera (front, back, ...).
What I want (at least) is to correct the rotation when you turn your phone to landscape mode (modifying other parameters, like Camera.Parameters, would be great too).
If you try it with the sample application, the preview (on the phone) and the outgoing video are both rotated, as shown in the question's screenshot.
So I've tried to create an instance of android.hardware.Camera and set it as the active camera by casting it like this:
videoService.setActiveCamera(com.microsoft.office.sfb.appsdk.Camera)
But it doesn't work... or I'm doing it the wrong way!
Is this even possible!?
Any suggestion is welcome.
...ANSWER
Answered 2018-Aug-25 at 05:10
I have the same issue and didn't find a setDisplayRotation function either in the Camera object used by Skype. If you go to the declaration of the camera's interface, you can see that not many options are available. But if you look inside SkypeForBusinessNative.aar, in dl-video, you can see the class RealCameraImpl in the package com.microsoft.dl.video.capture.impl, and it has a setDisplayRotation function. Unfortunately, they use the other Camera object, which doesn't have this function. Maybe this will help you find something new.
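For anyone who wants to verify this on their own SDK version, here is a purely illustrative Java reflection probe. The class is internal to SkypeForBusinessNative.aar and not public API, so its name may change between releases, and the int parameter type is an assumption:

```java
import java.lang.reflect.Method;

// Illustrative only: probes for the internal setDisplayRotation method
// mentioned in the answer above. It does not invoke the method, since the
// live camera instance is held internally by the SDK and not exposed.
public class CameraRotationProbe {
    public static void main(String[] args) {
        try {
            Class<?> impl = Class.forName("com.microsoft.dl.video.capture.impl.RealCameraImpl");
            Method setRotation = impl.getMethod("setDisplayRotation", int.class);
            System.out.println("Found: " + setRotation);
        } catch (ReflectiveOperationException e) {
            System.out.println("Not present in this SDK version: " + e);
        }
    }
}
```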
QUESTION
I integrated Amazon Pay into my website, and I followed the instructions from Amazon Pay SDK Simple Checkout. It's all working so far, but in the last step the code example shows that I need an authorization reference id.
...ANSWER
Answered 2020-Feb-12 at 20:14
authorization_reference_id is a value supplied by you, and it must be unique for every request. You can use the uniqid built-in function in PHP to generate it.
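The answer refers to PHP's uniqid(); for illustration only, an equivalent unique reference could be generated in Java like this (hypothetical helper, not part of the Amazon Pay SDK):

```java
import java.util.UUID;

public class AuthRef {
    // Generates a unique authorization reference id for each request,
    // analogous to PHP's uniqid() suggested in the answer.
    public static String newAuthorizationReferenceId() {
        return "auth-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        System.out.println(newAuthorizationReferenceId());
    }
}
```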
QUESTION
Webjobs version 3 has been out since around September, so I want to upgrade from 2.3.0 to the latest version, currently 3.0.4.
The Microsoft.Azure.Webjobs.servicebus package is, however, blocking me from doing so. I tried looking at webjobs sdk samples but they had the exact same issue with the servicebus package blocking the upgrade.
Questions
- What is the correct way to upgrade the WebJobs NuGet package?
- Am I mistaken in thinking that version 3 is ready for production?
ANSWER
Answered 2019-Mar-06 at 18:36
At the time I'm writing this, the newest release version of Microsoft.Azure.WebJobs.ServiceBus is 2.3.0. Looking at the package on nuget.org and expanding the dependencies, I see this: (the dependency list appears in the original answer)
QUESTION
I'm working on code to capture the desktop using Desktop Duplication and encode it to h264 using the Intel hardware MFT. The encoder only accepts the NV12 format as input. I have a DXGI_FORMAT_B8G8R8A8_UNORM to NV12 converter (https://github.com/NVIDIA/video-sdk-samples/blob/master/nvEncDXGIOutputDuplicationSample/Preproc.cpp) that works fine and is based on the DirectX VideoProcessor.
The problem is that the VideoProcessor on certain Intel graphics hardware supports conversion only from DXGI_FORMAT_B8G8R8A8_UNORM to YUY2, not to NV12; I have confirmed this by enumerating the supported formats through GetVideoProcessorOutputFormats. Although the VideoProcessor Blt succeeded without any errors, the frames in the output video are slightly pixelated; I can notice it if I look closely.
I guess the VideoProcessor has simply failed over to the next supported output format (YUY2), and I'm unknowingly feeding it to the encoder, which thinks that the input is in NV12 as configured. There is no failure or major corruption of frames because the differences between NV12 and YUY2, such as byte order and subsampling, are small. Also, I don't have pixelation problems on hardware that supports NV12 conversion.
So I decided to do the color conversion using pixel shaders, based on this code (https://github.com/bavulapati/DXGICaptureDXColorSpaceConversionIntelEncode/blob/master/DXGICaptureDXColorSpaceConversionIntelEncode/DuplicationManager.cpp). I'm able to make the pixel shaders work, and I have also uploaded my code here (https://codeshare.io/5PJjxP) for reference (simplified as much as possible).
Now I'm left with two channels, chroma and luma, as separate ID3D11Texture2D textures, and I'm really confused about how to efficiently pack the two separate channels into one ID3D11Texture2D texture so that I can feed it to the encoder. Is there a way to efficiently pack the Y and UV channels into a single ID3D11Texture2D on the GPU? I'm really tired of CPU-based approaches: they are costly and don't offer the best possible frame rates. In fact, I'm reluctant to even copy the textures to the CPU; I'm looking for a way to do it on the GPU without any back-and-forth copies between CPU and GPU.
I have been researching this for quite some time without any progress; any help would be appreciated.
...ANSWER
Answered 2019-Dec-05 at 08:17
I am experimenting with this RGBA-to-NV12 conversion on the GPU only, using DirectX11.
This is a good challenge. I'm not familiar with DirectX11, so this is my first experimentation.
Check this project for updates: D3D11ShaderNV12
In my current implementation (which may not be the final one), here is what I do:
- Step 1: use a DXGI_FORMAT_B8G8R8A8_UNORM as input texture
- Step 2: make a first-pass shader to get 3 textures (Y: Luma, U: ChromaCb, V: ChromaCr): see YCbCrPS2.hlsl
- Step 3: Y is DXGI_FORMAT_R8_UNORM, and is ready for the final NV12 texture
- Step 4: UV needs to be downsampled in a second-pass shader: see ScreenPS2.hlsl (using linear filtering)
- Step 5: a third-pass shader to sample the Y texture
- Step 6: a fourth-pass shader to sample the UV texture using a shift texture (I think other techniques could be used)
My final texture is not DXGI_FORMAT_NV12, but a similar DXGI_FORMAT_R8_UNORM texture. My computer runs Windows 7, so DXGI_FORMAT_NV12 is not handled. I will try later on another computer.
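For reference, the first-pass Y/Cb/Cr split (Step 2) typically computes the standard full-range BT.601 mapping for 8-bit samples. This is a sketch; the sample's exact coefficients may differ, e.g. if it uses the video-range variant:

```latex
% Full-range BT.601 RGB -> YCbCr for 8-bit samples:
\begin{aligned}
Y   &= 0.299\,R + 0.587\,G + 0.114\,B \\
C_b &= -0.169\,R - 0.331\,G + 0.500\,B + 128 \\
C_r &= 0.500\,R - 0.419\,G - 0.081\,B + 128
\end{aligned}
```

NV12 then stores the full-resolution Y plane followed by the 2x2-subsampled, interleaved CbCr plane, which is why an R8 target texture 1.5x the frame height can stand in for a true NV12 surface.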
(The original answer illustrates each pass of the process with pictures.)
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install SDK-Samples
You can use SDK-Samples like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the SDK-Samples component, as you would with any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.