stencil | building scalable, enterprise-ready component systems | Frontend Framework library
kandi X-RAY | stencil Summary
A compiler for generating Web Components.
Community Discussions
Trending Discussions on stencil
QUESTION
I want to do 3D picking in OpenGL via frame picking. For this I need glReadPixels to get information about the pixels currently on the screen, so I tested it in the following way but get the wrong result.
First, I use the callback glfwSetCursorPosCallback(window, mouse_callback) with mouse_callback(GLFWwindow* window, double xpos, double ypos) to get the current mouse position (screen position) and the pixel's color as I move the mouse, then print them with std::cout.
ANSWER
Answered 2022-Apr-07 at 14:50OpenGL uses coordinates where the origin (0, 0) is the bottom-left of the window, and +Y is up. You have to convert to OpenGL's coordinate system when reading, since the cursor events use (0, 0) as the top-left of the window, and +Y is down.
QUESTION
I am trying to implement a simple way to render particles in my toy app. I'm rendering billboarded quads with alpha blending, but this causes a problem with depth stenciling: parts of the quad that are fully transparent still obscure the particles behind them, because those particles fail the depth test. The result looks like this:
This is how I've set up my pipeline:
ANSWER
Answered 2022-Apr-02 at 13:41 I figured it out eventually. In short, there is no automatic way to handle this. A simple implementation is to draw opaque objects first while writing to the depth buffer, then draw transparent objects with the depth test only, without writing to the depth/stencil buffer. Transparent objects should be sorted by depth and rendered from farthest to closest to the viewpoint.
QUESTION
We want to build an application with Svelte but are stuck at using the components we built with StencilJS. At first it looked like it worked like a charm, but since Stencil components use the term "slot" and Svelte does too, I am receiving an error when using such a component in Svelte:
Element with a slot='...' attribute must be a child of a component or a descendant of a custom element
I understand this issue; the question is how to ignore it, or how to tell Svelte to ignore certain elements. Is there a function for that?
The component I am trying to use looks like this:
...ANSWER
Answered 2021-Oct-28 at 05:58 Check this post on named slots: "How to get the slots value in a Svelte component?" I hope it gives you direction.
QUESTION
We are building web components using Stencil. We compile the Stencil components, create the respective "React components", and import them into our projects.
While doing so, we are able to view the components as expected when we launch the React app. However, when we mount a component and execute test cases using Cypress, we observe that the CSS for these pre-built components is not getting loaded.
cypress.json
ANSWER
Answered 2022-Feb-16 at 02:33 You can try importing the CSS in the index.ts or index.js file located at cypress/support/index.ts.
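The suggested import might look like the following; the package name and the stylesheet path are placeholders for whatever your Stencil build actually emits:

```javascript
// cypress/support/index.js — sketch; substitute the stylesheet path your
// Stencil build output actually contains (the names below are assumptions).
import 'my-stencil-library/dist/my-stencil-library/my-stencil-library.css';
```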
QUESTION
I am trying to do component testing using the Cypress component test runner. The web components are built using Stencil. We compile the Stencil components, create the respective "Angular components", and import them into our projects.
The component renders as expected when launched in the Angular app. However, when it is mounted and the tests are executed using Cypress, the CSS for these pre-built components is not getting loaded.
cypress.json
ANSWER
Answered 2022-Mar-16 at 03:01 The styles are .scss, which need preprocessing; that happens in cypress/plugins/index.js. You already have a webpack.config in your plugins folder. Does it have a rule for scss?
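If it doesn't, such a rule could look roughly like this (a sketch assuming sass, sass-loader, css-loader, and style-loader are installed):

```javascript
// cypress/plugins/webpack.config.js — sketch of an .scss rule; the loader
// chain below is a common setup, not necessarily what this project uses.
module.exports = {
  module: {
    rules: [
      {
        test: /\.scss$/,
        use: ['style-loader', 'css-loader', 'sass-loader'],
      },
    ],
  },
};
```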
QUESTION
In the Stencil docs section on framework integration with Vue it states the following:
In order to use the custom element library within the Vue app, the application must be modified to define the custom elements and to inform the Vue compiler which elements to ignore during compilation.
According to the same page this can be achieved by modifying the config of your Vue instance like this:
ANSWER
Answered 2021-Sep-12 at 09:23 I'm not an expert in Vue, but here's how I did it: add the following to your ./vue.config.js (or create it if it doesn't exist):
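The original config was not captured on this page; a sketch of what such a vue.config.js typically looks like with Vue CLI and Vue 3, assuming all Stencil tags share a prefix such as "my-" (adjust to your library's prefix):

```javascript
// vue.config.js — sketch; tells the Vue template compiler which tags are
// custom elements so it leaves them alone during compilation.
module.exports = {
  chainWebpack: (config) => {
    config.module
      .rule('vue')
      .use('vue-loader')
      .tap((options) => ({
        ...options,
        compilerOptions: {
          // assumption: every Stencil tag starts with "my-"
          isCustomElement: (tag) => tag.startsWith('my-'),
        },
      }));
  },
};
```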
QUESTION
I am learning DirectX 12 programming. One thing that has me confused is that in all the tutorials I read, they only create a single depth/stencil buffer, even when double/triple buffering is used with the swap chain. Wouldn't you need a depth/stencil buffer for each frame?
Code in question:
ANSWER
Answered 2022-Mar-15 at 12:27 Generally speaking, "intermediate resources" (such as depth/stencil buffers and intermediate render targets) are not presented on the screen.
So when one swap-chain buffer is ready to present (and you start recording the next frame's commands), those resources can be safely overwritten, since their content has already been consumed within the swap chain.
The only exception is generally buffers that are uploaded from the CPU; for those you need one per frame (since you will start copying data while another command list is still executing).
QUESTION
I have this simple self-contained example of a very rudimentary two-dimensional stencil application using OpenMP tasks on dynamic arrays, to reproduce an issue I am having in a problem that is less of a toy problem.
There are 2 update steps, in which for each point in the array 3 values from another array are added: from the corresponding location as well as the upper and lower neighbour locations. The program is executed on a NUMA CPU with 8 cores and 2 hardware threads on each NUMA node. The array initializations are parallelized, and using the environment variables OMP_PLACES=threads and OMP_PROC_BIND=spread the data is evenly distributed among the nodes' memories. To avoid data races I have set up dependencies so that a task for a section in the second update can only be scheduled once the relevant tasks for the sections from the first update step have executed. The computation is correct but not NUMA-aware. The affinity clause seems not to be enough to change the scheduling, as it is just a hint. I am also not sure whether using the single construct for task creation is efficient, but as far as I know it is the only way to make all tasks sibling tasks and thus make the dependencies applicable.
Is there a way in OpenMP to parallelize the task creation under these constraints, or to guide the runtime system to a more NUMA-aware task scheduling? If not, that is also okay; I am just trying to see whether there are options that use OpenMP as intended rather than trying to break it. I already have a version that only uses worksharing loops. This is for research.
NUMA NODE 0 pus {0-7,16-23} NUMA NODE 1 pus {8-15,24-31}
Environment Variables
ANSWER
Answered 2022-Feb-27 at 13:36 First of all, the state of OpenMP task scheduling on NUMA systems is far from great in practice. It has been the subject of many research projects in the past, and there are still ongoing projects working on it. Some research runtimes consider the affinity hint properly and schedule tasks with regard to the NUMA node of the in/out/inout dependencies. However, AFAIK mainstream runtimes do not do much to schedule tasks well on NUMA systems, especially if you create all the tasks from a single NUMA node. Indeed, AFAIK GOMP (GCC) just ignores this and actually exhibits behavior that makes it inefficient on NUMA systems (e.g. task creation is temporarily stopped when there are too many tasks, and tasks are executed on all NUMA nodes regardless of the source/target NUMA node). IOMP (Clang/ICC) takes locality into account, but AFAIK in your case the scheduling should not be great. The affinity hint for tasks is not available upstream yet. Thus, GOMP and IOMP will clearly not behave well in your case, as tasks of different steps will often be distributed in a way that produces many remote NUMA-node accesses, which are known to be inefficient. In fact, this is critical in your case, as stencils are generally memory-bound.
If you work with IOMP, be aware that its task scheduler tends to execute tasks on the same NUMA node where they are created. Thus, a good solution is to create the tasks in parallel: the tasks can be created by many threads bound to the NUMA nodes. The scheduler will first try to execute a task on the thread that created it; workers on the same NUMA node will try to steal tasks from threads on that node, and only if there are not enough tasks, from any thread. While this work-stealing strategy works relatively well in practice, there is a huge catch: tasks of different parent tasks cannot share dependencies. This limitation of the current OpenMP specification is a big issue for stencil codes (at least the ones that create tasks working on different time steps). An alternative is to create the tasks with dependencies from one thread and have those tasks create smaller tasks, but due to the often bad scheduling of the big tasks, this approach is generally inefficient in practice on NUMA systems. In practice, on mainstream runtimes, basic statically-scheduled loops behave relatively well on NUMA systems for stencils, although they are clearly sub-optimal for large stencils. This is sad, and I hope the situation will improve over the current decade.
Be aware that data initialization matters a lot on NUMA systems, as many platforms allocate pages on the NUMA node performing the first touch. Thus the initialization has to be parallel (otherwise all the pages could end up on the same NUMA node, saturating it during the stencil steps). The default policy is not the same on all platforms, and some can move pages between NUMA nodes depending on their use. You can tweak this behavior with numactl. You can also fetch very useful information from the hwloc tool. I strongly advise you to manually set the location of all OpenMP threads using OMP_PROC_BIND=true and OMP_PLACES="{0},{1},...,{n}", where the OMP_PLACES string can be generated with hwloc for the actual platform.
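As a hypothetical example for the topology quoted in the question (NODE 0: PUs 0-7 and 16-23; NODE 1: PUs 8-15 and 24-31), pinning one thread per core might look like this; the binary name is a placeholder and the place list should be derived from what hwloc actually reports on your machine:

```shell
# Pin 16 OpenMP threads, one per physical core, in PU order.
export OMP_PROC_BIND=true
export OMP_PLACES="{0},{1},{2},{3},{4},{5},{6},{7},{8},{9},{10},{11},{12},{13},{14},{15}"
# Allocate pages on the node of the first-touching thread.
numactl --localalloc ./stencil_app
```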
For more information you can read this research paper (disclaimer: I am one of the authors). You can certainly find other similar research papers at the IWOMP conference and the Supercomputing conference too. You could try a research runtime, though most of them are not designed for production use (e.g. KOMP, which is no longer actively developed; StarPU, which mainly focuses on GPUs and optimizing the critical path; OmpSs, which is not fully compatible with OpenMP but tries to extend it; PaRSEC, which is mainly designed for linear-algebra applications).
QUESTION
I have this simple self-contained example of a very rudimentary stencil application to work with OpenMP tasks and the depend clause. In 2 steps, to each location of an array 3 values from another array are added: the corresponding location and its left and right neighbours. To avoid data races I have set up dependencies so that for every section in the second update, its task can only be scheduled once the relevant tasks for the sections from the first update step have executed. I get the expected results, but I am not sure whether my assumptions are correct, because these tasks might be executed immediately by the encountering threads rather than spawned. So my question is whether the tasks created in worksharing loops are all sibling tasks, and thus whether the dependencies are retained just as when the tasks are generated inside a single construct.
ANSWER
Answered 2022-Feb-26 at 22:46 In the OpenMP specification you can find the corresponding definitions:
sibling tasks - Tasks that are child tasks of the same task region.
child task - A task is a child task of its generating task region. A child task region is not part of its generating task region.
task region - A region consisting of all code encountered during the execution of a task. COMMENT: A parallel region consists of one or more implicit task regions
In the description of parallel construct you can read that:
A set of implicit tasks, equal in number to the number of threads in the team, is generated by the encountering thread. The structured block of the parallel construct determines the code that will be executed in each implicit task.
This practically means that in the parallel region many task regions are generated, and with #pragma omp for, different task regions will generate the explicit tasks (i.e. #pragma omp task ...). However, only tasks generated by the same task region are sibling tasks (not all of them!). If you want to be sure that all generated tasks are sibling tasks, you have to use a single task region (e.g. using the single construct) to generate all the explicit tasks.
Note that your code gives the correct result because there is an implicit barrier at the end of the worksharing-loop construct (#pragma omp for). If you remove this barrier with the nowait clause, you will see that the result becomes incorrect.
Another comment: in your case the workload is smaller than the parallel overhead, so my guess is that your parallel code will be slower than the serial one.
QUESTION
I'm trying to test a click event originating from an icon in a Stencil component. I am able to debug the test and I can see that it triggers the correct method on the component, but Jest is not able to detect/register the event, and the toHaveBeenCalled() method fails.
input.spec.jsx
ANSWER
Answered 2022-Feb-09 at 09:17 By default the name of a custom event is the property name, in your case tipInputEnterEvent. So the listener should instead be:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network