Khronos | The open source intelligent personal assistant | Speech library
kandi X-RAY | Khronos Summary
Khronos is a program that uses speech recognition to perform a command. Khronos also synthesizes speech in response to the given commands.
Community Discussions
Trending Discussions on Khronos
QUESTION
I'm trying to implement skeletal animation using gltf 2.0 assets.
I'm currently able to transform the skeleton and correctly render the model. The model moves as expected when a joint's transform (for example a rotation) is edited.
The problem is that as soon as I try to use the transforms from the animation sampler outputs, the skeleton is completely wrong. My testing shows that the transformation matrices of the first keyframe of the animation should match the transforms of the joints in the initial pose, but they are in fact quite different! It's not clear to me where exactly these transforms are supposed to fit into the rendering algorithm.
My rendering algorithm looks roughly like this:
...ANSWER
Answered 2022-Mar-26 at 10:10
Ok, I solved the problem. The issue was that I wasn't loading quaternions correctly. Quaternions should be interpreted as raw XYZW values.
QUESTION
My objective is to be able to parallelize code so that it can run on a GPU, and the holy grail would be software that can run in parallel on any GPU or even CPU (Intel, NVIDIA, AMD, and so on).
From what I understood, the best solution would be to use OpenCL. But shortly after that, I also read about SYCL, which is supposed to simplify code that runs on a GPU.
But is it just that? Isn't it better to use a lower-level language in order to be sure that it can run on as much hardware as possible?
I know that all the compatibilities are listed on The Khronos Group website, but I am reading everything and its opposite on the Internet (e.g. if an NVIDIA card supports CUDA then it supports OpenCL, or NVIDIA cards will never work with OpenCL, even though OpenCL is supposed to work with everything)...
This is a new topic to me and there is a lot of information on the Internet... It would be great if someone could give me a simple answer to this question.
...ANSWER
Answered 2022-Feb-12 at 09:08
Probably yes.
OpenCL is supported on all AMD/Nvidia/Intel GPUs and on all Intel CPUs since around 2009. For best compatibility with almost any device, use OpenCL 1.2. The nice thing is that the OpenCL Runtime is included in the graphics drivers, so you don't have to install anything extra to work with it or to get it working on another machine.
SYCL, on the other hand, is newer and not yet as well established. For example, it is not officially supported (yet) on Nvidia GPUs: https://forums.developer.nvidia.com/t/is-sycl-available-on-cuda/37717/7 But there are already SYCL implementations that are compatible with Nvidia/AMD GPUs, essentially built on top of CUDA or OpenCL 1.2; see here: https://stackoverflow.com/a/63468372/9178992
QUESTION
I am reading the vulkan subgroup tutorial and it mentions that if the local workgroup size is less than the subgroup size, then we will always have inactive invocations.
This post clarifies that there is no direct relation between a SubgroupLocalInvocationId and a LocalInvocationId. If there is no relation between the subgroup and local workgroup ids, how does the small size of the local workgroup guarantee inactive invocations?
My guess is as follows
I am thinking that the invocations (threads) in a workgroup are divided into subgroups before executing on the GPU. Each subgroup would be an exact match for the basic unit of execution on the GPU (warp for an NVIDIA GPU). This means that if the workgroup size is smaller than the subgroup size then the system somehow tries to construct a minimal subgroup which can be executed on the GPU. This would require using some "inactive/dead" invocations just to meet the minimum subgroup size criteria leading to the aforementioned guaranteed inactive invocations. Is this understanding correct? (I deliberately tried to use basic words for simplicity, please let me know if any of the terminology is incorrect)
Thanks
...ANSWER
Answered 2022-Mar-11 at 02:27
A compute dispatch defines, via its parameters, the global workgroup. The global workgroup has x×y×z invocations.
Those invocations are divided into local workgroups (whose size is defined by the shader). A local workgroup likewise has its own x×y×z invocations.
A local workgroup is partitioned into subgroups; its invocations are rearranged into subgroups. A subgroup has a (1-dimensional) SubgroupSize number of invocations, not all of which need be assigned a local workgroup invocation. And a subgroup must not span multiple local workgroups; it can use invocations from only a single local workgroup.
Otherwise, how this partitioning is done seems largely unspecified, except that under very specific conditions you are guaranteed full subgroups, meaning that none of the invocations in a subgroup of SubgroupSize will stay vacant. If those conditions are not fulfilled, then the driver may keep some invocations in the subgroup inactive as it sees fit.
If the local workgroup has fewer invocations in total than SubgroupSize, then some of the invocations of the subgroup do indeed need to stay inactive, as there are not enough local workgroup invocations available to fill even one subgroup.
QUESTION
Question: Are there any granularity/alignment requirements for transferring sub-regions of a VkBuffer from one queue family to another?
I would like to:
- use a single graphics/present queue for sourcing draw calls from a single VkBuffer;
- and another separate compute queue to fill sub-allocated regions/sections within said VkBuffer
So, commands submitted to the graphics/present queue (1.) will read/source data written to the VkBuffer by commands submitted to the compute queue (2.). And I do not want to transfer the whole VkBuffer from one queue to the other, but only certain sub-regions (once their computation is finished and the results can be consumed/sourced by the graphics commands).
I've read that whenever you use VK_SHARING_MODE_EXCLUSIVE as the VkSharingMode, you must explicitly transfer ownership from one queue family (in my case the compute queue) to the other (in my case the graphics/present queue) so that commands submitted to the latter queue see the changes made by commands submitted to the former.
I know how to do this and how to correctly synchronize both release and acquire actions with semaphores.
However, since I wanted to use a single VkBuffer with manual sub-allocations (actually with VMA virtual allocations) and saw that VkBufferMemoryBarrier provides offset and size properties, I was wondering whether there are any granularity requirements as to which sections/pages of a VkBuffer can be transferred.
Can I actually transfer single bytes of a VkBuffer from one queue family to another or do I have to obey certain granularity/alignment requirements (other than the alignment requirements for my own data structures within that VkBuffer and the usage of that buffer of course)?
...ANSWER
Answered 2022-Mar-08 at 14:36
There are no granularity requirements on queue family transfer byte ranges. Indeed, there do not appear to be granularity requirements on memory barrier byte ranges either.
QUESTION
I used clang's -ftime-trace to profile the compilation time of my program. It turns out that about 90% of the time is spent parsing the massive vulkan.hpp header provided by the Khronos Group.
This in turn means that if I minimize the inclusion of this header in header files and put it only in cpp files, my compile times should be drastically better.
I face the following problem however.
There are a few objects in the header that I need pretty much everywhere: a few error code enumerators, a few enums of other kinds, and a couple of object types such as vk::Buffer, vk::Image, etc.
These make up less than a fraction of a percent of the total header, but I cannot include them without including the entire header. What can I do to cherry-pick only the types that I actually use and avoid including the entire header every time I need my code to interface with an image?
...ANSWER
Answered 2022-Feb-28 at 21:50
There are some ways to mitigate the issue on your side.
vulkan_handles.hpp exists
First, there are several headers now (there did not use to be; this was a huge complaint in virtually every Vulkan survey). This does not completely mitigate your issue (the headers are still massive), but you don't have to include vulkan.hpp, which itself includes every single available header, just to get access to vk::Image and vk::Buffer. Handles are now found in vulkan_handles.hpp (though it is still 13000 lines long).
Forward declaration
You talk about not having classes because of the way Vulkan works. Hypothetically, you can avoid having vulkan.hpp in your header files in a lot of cases.
vk::Buffer and vk::Image can both be forward declared, eliminating the need to include that header, as long as you follow forward-declaration rules.
Stack PIMPL wrapping
You say that you can't use classes, etc... That doesn't really make sense: vk::Buffer and vk::Image are both classes. You could hypothetically create wrapper classes for only the types you need. However, in order to eliminate the overhead you'd have to allocate enough space for those types beforehand.
In a big enterprise library with enterprise-defined types you normally don't do this, because the size of a type could change at any moment. For vulkan.hpp, however, the size and declaration of the types it uses (and the size of their wrappers) are really well defined and not going to change, as changing them would cause other incompatibilities on their side.
So you can assume the size of these types and create something like :
QUESTION
I am investigating how to do cross-process interop between OpenGL and Direct3D 11 using the EXT_external_objects, EXT_external_objects_win32 and EXT_win32_keyed_mutex OpenGL extensions. My goal is to share a B8G8R8A8_UNORM texture (an external library expects BGRA and I cannot change it; what's relevant here is the byte depth of 4) with 1 mip level, allocated and written to offscreen with D3D11 by one application, and rendered with OpenGL in another. Because the texture is being drawn to by another process, I cannot use WGL_NV_DX_interop2.
My actual code can be seen here and is written in C# with Silk.NET. For illustration purposes, though, I will describe my problem in pseudo-C(++).
First I create my texture in Process A with D3D11, and obtain a shared handle to it, and send it over to process B.
...ANSWER
Answered 2022-Feb-16 at 18:02
After some more debugging, I managed to get [DebugSeverityHigh] DebugSourceApi: DebugTypeError, id: 1281: GL_INVALID_VALUE error generated. Memory object too small from the Debug context. By dividing my width in half I was able to get some garbled output on the screen.
It turns out the size needed to import the texture was not WIDTH * HEIGHT * BPP (where BPP = 4 for BGRA in this case), but WIDTH * HEIGHT * BPP * 2. Importing the handle with size WIDTH * HEIGHT * BPP * 2 allows the texture to properly bind and render correctly.
QUESTION
I've run into an issue while attempting to use SSBOs as follows:
...ANSWER
Answered 2022-Feb-10 at 13:25
GLSL structs and C++ structs have different rules on alignment. For structs, the spec states:
If the member is a structure, the base alignment of the structure is N, where N is the largest base alignment value of any of its members, and rounded up to the base alignment of a vec4. The individual members of this substructure are then assigned offsets by applying this set of rules recursively, where the base offset of the first member of the sub-structure is equal to the aligned offset of the structure. The structure may have padding at the end; the base offset of the member following the sub-structure is rounded up to the next multiple of the base alignment of the structure.
Let's analyze the struct:
QUESTION
The GLSLangSpec.3.30 says:
Values from the previous pipeline stage are copied into input variables at the beginning of shader execution. Variables declared as in or centroid in may not be written to during shader execution.
If they are copied in and the original values are left untouched, why are input variables not writable?
...ANSWER
Answered 2022-Feb-09 at 17:29
The standard describes behavior, not implementation. Thus, the statement about being "copied in" merely describes the apparent effect, not what the actual hardware does.
Indeed, the whole point of these two specific requirements is to allow implementations not to copy inputs into specific variables. Or rather, to allow VS implementations to avoid having to allocate storage for a variable if they don't need to. If an implementation wants a use of a shader-stage input variable to read from the buffer directly (or from a cache), it can do so.
Now yes, you could still have such an implementation with the ability to modify in variables. But the compiler would have to check whether the shader does modify them. So it's a lot easier to implement such optimizations (where relevant) if the compiler doesn't have to check whether you're doing something you shouldn't be doing.
QUESTION
I'm currently working with OpenGL in C++, and I'm trying to debug by identifying what the currently bound vertex buffer and index buffer are. I have three functions.
...ANSWER
Answered 2022-Jan-27 at 21:12
See the "Parameters" section here. The symbolic constants used for binding the buffers match the ones used for glGet* (but with a _BINDING suffix).
For the vertex buffer object, use:
QUESTION
I'm learning Vulkan following vulkan-tutorial.com.
I'm stuck because I can't figure out why I'm getting this error when creating the graphics pipeline.
...ANSWER
Answered 2022-Jan-22 at 00:25
I finally found the problem: I was destroying the shader modules too early. It looks like you have to keep the shader modules alive until after you have created the pipeline.
This is the fixed code
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install Khronos
Make sure your PATH variable contains the location to the MinGW-w64 bin/ folder.
Locate the main source directory in your terminal. Change into the build/ folder (it should be empty, create it if it does not exist).
Run cmake -G "MinGW Makefiles" .. and configuration should begin. This will create a Makefile tailored for your specific environment. Any dependencies that you need will be flagged for downloading.
Run cmake --build . and all flagged dependencies will download to be configured and built for Khronos to link with. Once everything has finished downloading and linking, the build should be complete. Now you can run Khronos.exe.
Locate the main source directory in your terminal. Change into the build/ folder (it should be empty, create it if it does not exist).
Run cmake .. and configuration should begin. This will create a Makefile tailored for your specific environment. Any dependencies that you need will be flagged for downloading.
Run make. All flagged dependencies will download to be configured and built for Khronos to link with. Once everything has finished downloading and linked together, the build should be complete. Now you can run ./Khronos.