metalkit | tiny set of libraries and a tiny bootloader | Emulator library
kandi X-RAY | metalkit Summary
metalkit is useful for:
- Learning about PC architecture or systems programming.
- Hardware (or virtual machine) developers writing test cases or example programs that demonstrate hardware features.
- Simple applications (even games) that target common virtual machines or emulators (QEMU, VMware, Bochs) as their platform.
Community Discussions
Trending Discussions on metalkit
QUESTION
MetalKit calls drawInMTKView when it wants your delegate to draw a new frame, but I wonder if it waits for the last drawable to have been presented before it asks your delegate to draw on a new one?
From what I understand reading this article, Core Animation can provide up to three "in flight" drawables, but I can't find anywhere whether MetalKit tries to draw to them as soon as possible or if it waits for something else to happen.
What would this something else be? What confuses me a little is the idea of drawing up to two frames in advance, since it means the CPU must already know what it wants to render two frames into the future, and I feel like that isn't always the case. For instance, if your application depends on user input, you can't know upfront what actions the user will have taken between now and when the two frames you are drawing will be presented, so they may be presented with out-of-date content. Is this assumption right? In that case, it could make sense to only call the delegate method at a maximum rate determined by the intended frame rate.
The problem with synchronizing with the frame rate is that this means the CPU may sometimes be inactive when it could have done some useful work.
I also have the intuition this may not be happening this way, since in the aforementioned article drawInMTKView seems to be called as often as a drawable is available: they seem to rely on it being called to schedule work that uses resources in a way that avoids CPU stalling. But since many points are unclear to me, I am not sure what is happening exactly.
Thank you
...ANSWER
Answered 2022-Mar-29 at 02:47

The MTKView documentation mentions on the paused page that:

If the value is NO, the view periodically redraws the contents, at a frame rate set by the value of preferredFramesPerSecond.
Based on the samples there are for MTKView, it probably uses a combination of an internal timer and CVDisplayLink callbacks. This means it will basically choose the "right" interval to call your drawing function at the right times, usually after another drawable is shown "on-glass", i.e. at V-sync interval points, so that your frame has the most CPU time to get drawn.
You can make your own view and use CVDisplayLink or CADisplayLink to manage the rate at which your draws are called. There are also other ways, such as relying on back pressure of the drawable queue (basically just calling nextDrawable in a loop, because it will block the thread until a drawable is available) or using presentAfterMinimumDuration. Some of these are discussed in this WWDC video.
I think Core Animation triple-buffers everything that gets composited by the window server, so basically it waits for you to finish drawing your frame, then it composes it with the other frames and then presents it to the "glass".
As to your question about the delay: you are right, the CPU is two or even three "frames" ahead of the GPU. I am not too familiar with this and I haven't tried it, but I think it's possible to actually "skip" the frames you drew ahead of time if you delay the presentation of your drawables until the last moment, possibly until the scheduled handler on one of your command buffers.
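To make the timing explicit, here is a minimal sketch of driving the draws yourself with CADisplayLink, one of the approaches the answer mentions (the class and method names are assumptions, not from the answer):

```swift
import MetalKit

// Disable MTKView's internal timer and redraw on CADisplayLink callbacks,
// which fire once per display refresh, i.e. right after a V-sync point.
final class DisplayLinkDrivenView: MTKView {
    private var displayLink: CADisplayLink?

    func startRendering() {
        isPaused = true                // stop the view's own periodic redraws
        enableSetNeedsDisplay = false  // we trigger draws explicitly
        displayLink = CADisplayLink(target: self, selector: #selector(step))
        displayLink?.add(to: .main, forMode: .common)
    }

    @objc private func step() {
        draw()  // invokes the delegate's draw(in:) once per display refresh
    }
}
```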
QUESTION
I am trying to read the frames of a video using the method in the accepted answer to this question: Accessing Individual Frames using AV Player.
I am then trying to display these frames sequentially on a MetalKit MTKView (I tried a regular UIImageView, and the video doesn't render at all). The problem is that this only works for a few seconds (about 400 frames of the video), after which the app crashes and the video stops playing.
Here is the code in the view controller:
...ANSWER
Answered 2021-Jul-01 at 18:12

I was able to isolate this issue to the while loop in the VideoFileReader class by confirming that the issue still occurs after removing the ViewController and Metal rendering classes. I was also able to confirm that the crash was indeed due to high memory utilization, by counting the number of frames read before the crash and comparing with the memory usage growth in the debugger.
What finally fixed the issue was wrapping the body of the while loop in an autoreleasepool block. It seems that image buffers were being created but not released, and this was the cause of the memory spike and subsequent crash.
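A minimal sketch of that fix, assuming an AVAssetReader named reader and an AVAssetReaderTrackOutput named trackOutput (the asker's actual code is not shown on this page):

```swift
import AVFoundation

// Read one frame per iteration; the autoreleasepool drains at the end of
// each pass, releasing that frame's buffers before the next copy.
while reader.status == .reading {
    autoreleasepool {
        if let sampleBuffer = trackOutput.copyNextSampleBuffer(),
           let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
            // ...hand pixelBuffer to the Metal renderer here...
        }
    }
}
```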
QUESTION
I can't make my MTKView clear its background. I've set the view's and its layer's isOpaque to false and the background color to clear, and I've tried multiple solutions found on Google/Stack Overflow (most are in the code below, like the loadAction and clearColor of the color attachment), but nothing works.
All the background color settings seem to be ignored. Setting the loadAction and clearColor of the MTLRenderPassColorAttachmentDescriptor does nothing.
I'd like to have my regular UIViews drawn under the MTKView. What am I missing?
...ANSWER
Answered 2021-May-17 at 09:09

Thanks to Frank, the answer was to just set the clearColor property of the view itself, which I had missed. I also removed most of the adjustments in the MTLRenderPipelineDescriptor, whose code is now:
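(The answer's code block was not captured on this page; below is a minimal sketch of the view-level settings it describes, with metalView as an assumed name.)

```swift
import MetalKit

// The key fix: clear to a fully transparent color on the view itself,
// and keep both the view and its layer non-opaque so UIViews show through.
metalView.isOpaque = false
metalView.layer.isOpaque = false
metalView.clearColor = MTLClearColor(red: 0, green: 0, blue: 0, alpha: 0)
```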
QUESTION
I'm stuck with SwiftUI and Metal, to the point of being about to give up.
I got this example from https://developer.apple.com/forums/thread/119112?answerId=654964022#654964022:
...ANSWER
Answered 2021-Jan-05 at 07:12

I have a small Core Image + SwiftUI sample project on GitHub that might be a good starting point for you. It doesn't cover a lot yet, but it already demonstrates how to display filtered camera frames.
Especially check out the draw function of the view. It's used to render a CIImage into the MTKView (you can do the same in your delegate's draw function).
QUESTION
I would like to capture a real-world texture and apply it to a 3D mesh produced with the help of a LiDAR scanner. I suppose that Projection-View-Model matrices should be used for that. A texture must be made from a fixed point of view, for example, from the center of a room. However, it would be an ideal solution if we could apply environmentTexturing data, collected as a cube-map texture, in a scene.
Look at 3D Scanner App. It's a reference app allowing us to export a model with its texture.
I need to capture a texture in one iteration; I do not need to update it in real time. I realize that changing the point of view leads to a distorted perception of the texture. I also realize that RealityKit has dynamic tessellation and automatic texture mipmapping (a texture's resolution depends on the distance from which it was captured).
...ANSWER
Answered 2020-Nov-03 at 22:00

Here is a preliminary solution (it's not a final one):
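(The answer's code was not captured on this page. As an illustration of the projection idea only, and not the answer's solution, here is a hedged sketch that derives a texture coordinate for a mesh vertex by projecting it through a captured ARFrame's camera:)

```swift
import ARKit

// Project a world-space vertex into the captured camera image and
// normalize the result to a [0, 1] UV coordinate.
func textureCoordinate(for worldVertex: SIMD3<Float>,
                       frame: ARFrame,
                       imageSize: CGSize) -> SIMD2<Float> {
    let point = frame.camera.projectPoint(worldVertex,
                                          orientation: .portrait,
                                          viewportSize: imageSize)
    return SIMD2<Float>(Float(point.x / imageSize.width),
                        Float(point.y / imageSize.height))
}
```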
QUESTION
I have a need for an app, and being a decent (I think) Swift developer, I thought I'd take a crack at it.
The gist of it is simple: I want to
- Capture a display (CGDisplayStream?)
- Flip the image vertically
- Display the flipped stream in a window.
I'm just not sure where to start: maybe CGDisplayStream, or maybe AVFoundation? Does anyone have example code for capturing a display stream and displaying it in a window? Or even a guide of 'use these classes/interfaces/APIs' to do this? I've searched GitHub for CGDisplayStream.init and not really come up with any examples :(
I’m not asking anyone to write it for me, but I could use a: start here, use this, watch out for...
Here's the code I'm currently experimenting with. I think? I have steps 1 & 2 working, but it's hard to tell without #3...
...ANSWER
Answered 2020-Oct-22 at 05:51

It looks like you can do most of the work with Core Image:
- The handler of the CGDisplayStream gives you an IOSurface. You can wrap that inside a CIImage with CIImage(ioSurface:).
- Apply any (or multiple) CGAffineTransform to the CIImage using image.transformed(by:).
- Use a CIContext to render the resulting image into an MTKView. Maybe you can get some inspiration from my example here (see also the sketch after the note below).
Note: I'm not 100% sure whether Core Image properly retains the IOSurface for you when you wrap it in a CIImage. You should double-check whether you still need to perform any of the actions described in the handler documentation.
QUESTION
I have an MTKView that is rendering a cube sized so that one of its faces fills the entire screen.
I am passing a struct variable (called theuniforms) that contains a random Float to the fragment shader, and I want to color each pixel on the screen a different color depending on the Float's value.
Right now, it colors the entire quad red or blue depending on the Float's value, which is randomized in the draw function.
I think my setup is wrong, because at any frame I will only have access to one random Float, and all of the pixels will be colored with it during that frame (I won't have an array of random Floats to determine each pixel's color). However, I think I could simply add more variables to the struct to overcome this problem.
The main issue is accessing the individual pixel via the fragment function... and whether it will even be performant enough to determine a color for every pixel on the screen.
...ANSWER
Answered 2020-Oct-18 at 21:11

First, you shouldn't use uniforms to change the color of each pixel; that would overload the CPU, and this is a GPU task, so you should use a function inside the shader. If you do want uniforms to influence the fragment color, you must pass them to the fragment function with setFragmentBytes(&theuniforms, length: MemoryLayout.stride, index: 1) and read them in the fragment function. If I understand you correctly, you want something like white noise on the cube, so here is the shader:
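(The answer's shader was not captured on this page. Below is a hedged sketch of a typical hash-based white-noise fragment shader; the function name and Uniforms layout are assumptions, not the answer's code.)

```metal
#include <metal_stdlib>
using namespace metal;

struct Uniforms {
    float seed;   // assumed: the random Float the CPU writes each frame
};

// Cheap hash: maps a 2D coordinate to a pseudo-random value in [0, 1).
static float hash(float2 p) {
    return fract(sin(dot(p, float2(12.9898, 78.233))) * 43758.5453);
}

fragment float4 noise_fragment(float4 pos [[position]],
                               constant Uniforms &uniforms [[buffer(1)]]) {
    // pos.xy is this fragment's window coordinate, so each pixel hashes to a
    // different value; adding the per-frame seed re-randomizes every frame.
    float n = hash(pos.xy + uniforms.seed);
    return float4(n, n, n, 1.0);
}
```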
QUESTION
Xcode 11-12, macOS Catalina, Swift 5 project...
This code produces a "Context leak detected, msgtracer returned -1" error:
...ANSWER
Answered 2020-Oct-09 at 17:36

Yeahh, of course I tried
QUESTION
Is it possible to programmatically export a 3D mesh in the .usdz file format using the ModelIO and MetalKit frameworks?
Here's the code:
...ANSWER
Answered 2020-Sep-28 at 08:48

September 28, 2020. At the moment, Apple developers can export the .usd, .usda and .usdc file formats from Pixar's USD family.
You can check a format's compatibility using the canExportFileExtension(_:) type method:
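(The answer's snippet was not captured on this page; here is a minimal sketch of the check it describes, assuming an MDLAsset named asset has already been built from the mesh.)

```swift
import ModelIO

let url = URL(fileURLWithPath: "model.usdc")

// MDLAsset.canExportFileExtension(_:) reports whether ModelIO can write
// files with the given extension on this OS version.
if MDLAsset.canExportFileExtension(url.pathExtension) {
    try? asset.export(to: url)   // writes the .usd / .usda / .usdc file
} else {
    print("Export to .\(url.pathExtension) is not supported here")
}
```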
QUESTION
This is a follow-up to my previous question here; however, this question should be able to stand alone. I get the following error when I try to import tensorflow while there exists a file containing from tensorflow import keras.
...ANSWER
Answered 2020-Jun-07 at 08:59

Alright, so this is a bug. I reproduced your issue using the python docker container, only installing the latest tensorflow. What fixed it was renaming code.py to test.py (or anything else, for that matter). This means this is for sure a tensorflow issue. During import tensorflow, python will for some reason also import your code.py. Will you file an issue or should I?
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported