framebuffer | Go library for accessing the framebuffer

 by gonutz | Go | Version: Current | License: MIT

kandi X-RAY | framebuffer Summary

framebuffer is a Go library typically used in Internet of Things (IoT) and Raspberry Pi applications. framebuffer has no reported bugs or vulnerabilities, it has a permissive license, and it has low support. You can download it from GitHub.

The framebuffer library was created to provide an easy way to access the pixels on the screen from the Raspberry Pi. It memory-maps the framebuffer device and exposes it as a draw.Image (which is itself an image.Image). This makes it easy to use with Go's image, color and draw packages. Right now the library only implements the RGB 565 color model, which is the default under Raspbian. The OS is also assumed to be little endian, again the default for Raspbian.
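
Based only on what this page documents (Open, Close, and the fact that the device is exposed as a draw.Image), a minimal usage sketch might look like the following. The import path is taken from the clone URL further down the page; the "/dev/fb0" device path and the exact signature of Open are assumptions, not something this page confirms.

package main

import (
    "image"
    "image/color"
    "image/draw"
    "log"

    "github.com/gonutz/framebuffer"
)

func main() {
    // Open memory-maps the framebuffer device; /dev/fb0 is the usual path on
    // Raspbian, but it is an assumption here, not something the page states.
    fb, err := framebuffer.Open("/dev/fb0")
    if err != nil {
        log.Fatal(err)
    }
    defer fb.Close()

    // The device is a draw.Image, so the standard library's draw package can
    // paint directly into it. Colors are converted to RGB 565 on the way in.
    blue := image.NewUniform(color.RGBA{B: 255, A: 255})
    draw.Draw(fb, fb.Bounds(), blue, image.Point{}, draw.Src)

    // Individual pixels can be set through the draw.Image interface as well.
    fb.Set(10, 10, color.RGBA{R: 255, A: 255})
}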

            kandi Support

              framebuffer has a low active ecosystem.
              It has 22 stars and 5 forks. There are 3 watchers for this library.
              It had no major release in the last 6 months.
              framebuffer has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of framebuffer is current.

            kandi Quality

              framebuffer has no bugs reported.

            kandi Security

              framebuffer has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi License

              framebuffer is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi Reuse

              framebuffer releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.

            Top functions reviewed by kandi - BETA

            kandi has reviewed framebuffer and discovered the below as its top functions. This is intended to give you an instant insight into the functionality framebuffer implements, and to help you decide whether it suits your requirements.
            • Open opens the framebuffer at the specified device path.
            • toRGB565 is a helper function that converts RGB color values to the RGB 565 format (a sketch of this packing follows below).
            • Close closes the device.
            Get all kandi verified functions for this library.
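
            The page does not include the body of toRGB565, but the RGB 565 model it refers to is standard: 5 bits of red, 6 of green and 5 of blue packed into a 16-bit value, which also ties into the little-endian note above. The following is an illustrative sketch of that packing in plain Go, not the library's actual code:

package main

import "fmt"

// rgbTo565 packs 8-bit RGB channels into a 16-bit RGB 565 value by keeping
// the top 5, 6 and 5 bits of each channel. Illustrative only; the name and
// the exact rounding behavior of the library's toRGB565 may differ.
func rgbTo565(r, g, b uint8) uint16 {
    return uint16(r>>3)<<11 | uint16(g>>2)<<5 | uint16(b>>3)
}

func main() {
    fmt.Printf("%#04x\n", rgbTo565(255, 0, 0)) // pure red -> 0xf800
}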

            framebuffer Key Features

            No Key Features are available at this moment for framebuffer.

            framebuffer Examples and Code Snippets

            No Code Snippets are available at this moment for framebuffer.

            Community Discussions

            QUESTION

            Copy a VkImage after TraceRaysKHR to CPU
            Asked 2021-Jun-06 at 09:08

            Copying a VkImage that is being used to render to an offscreen framebuffer gives a black image.

            When using a rasterizer the rendered image is non-empty but as soon as I switch to ray tracing the output image is empty:

            ...

            ANSWER

            Answered 2021-Jun-06 at 09:08

            Resolved by now: when submitting the command buffer to the queue, an additional vkQueueWaitIdle(m_queue) is required, since ray tracing finishes with a certain latency.

            Source https://stackoverflow.com/questions/67765292

            QUESTION

            How to write integers alongside pixels in the framebuffer, and then use the written integer to ignore the depth buffer
            Asked 2021-May-30 at 01:18
            What I want to do

            I want to have a set of triangles bleed through, or rather ignore the depth buffer, for another set of triangles, but only if they have the same number.

            Problem (optional reading)

            I do not know how to do this without introducing a ton of bubbles into the pipeline. Right now I have very high throughput because I can throw my geometry onto the GPU, tell it to render, and forget about it. However, if I have to keep toggling state while drawing, I'm worried I'm going to tank my performance. Other people who have done what I've just described (doing a ton of draw calls and state changes) have much worse performance than I do. This performance hit is also significantly worse on older hardware, where we are talking about a 50-100x or greater performance loss by doing it the state-change way.

            Unfortunately this triangle bleeding scenario happens many thousands of times, so the state machine will be getting flooded with "draw triangles, depth off, draw triangles that bleed through, depth on, ...", repeated N times, where N can get large (N >= 1000).

            A good way of imagining this is having a set of triangles T_i, and a set of triangles that bleed through B_i where B_i only bleeds through T_i, and i ranges from 0...1000+. Note that if we are drawing B_100, then it should only bleed through T_100, not T_99 or T_101.

            My next thought is to draw all the triangles with their integer into one framebuffer (along with the integer), then draw the bleed through triangles into another framebuffer (also with the integer), and then merge these framebuffers together. I figure they will have the color, depth, and the integer, so I can hopefully merge them in the fragment shader.

            Problem is, I have no idea how to write an integer alongside the out vec4 fragColor in the fragment shader.

            Questions (and in short)

            This leaves me with two questions:

            1. How do I write an integer into a framebuffer? Do I need to write to 4 separate texture framebuffers? (like one color/depth framebuffer texture, another integer framebuffer texture, and then double this so I can merge the pairs of framebuffers together at some point?)

            To make this more clear, the algorithm would look like

            1. Render all the 'could be bled from triangles', described above as set T_i, write colors and depth info into FB1, and write integers into FB2

            2. Render all the 'bleeding' triangles, described above as set B_i, write colors and depth into FB3, and write integers to FB4

            3. Bind the textures for FB1, FB2, FB3, FB4

            4. Render each pixel by sampling the RGBA, depth, and integers from the appropriate texture and write those out into the final framebuffer

            I would need to access the color and depth from the textures in the shader. I would also need to access the integer from the other texture. Then I can do the comparison and choose which pixel to write to the default framebuffer.

            2. Is this idea possible? I assume if (1) is, then the answer is yes. Maybe another question could be whether there's a better way. I tried thinking of doing this with the stencil buffer but had no luck.
            ...

            ANSWER

            Answered 2021-May-30 at 00:05

            What you want is theoretically possible, but I can't speak as to its performance. You'll be reading and writing a whole lot of texels in a lot of textures for every program iteration.

            Anyway to answer your questions:

            1. A framebuffer can have multiple color attachments by using glFramebufferTexture2D with GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, etc. Each texture can then have its own internal format; in your example you probably want a regular RGB texture for your color output, and a second, integer-only texture (a Go sketch of this setup follows this answer).

            2. Your depth buffer is complicated, because you don't want to let OpenGL handle it as normal. If you want to take over the depth buffer, you probably want to attach it as yet another float texture that you can (or can choose not to) check your screen-space fragments against.

            3. If you have doubts about your shader, remember that you can bind any number of textures as input samplers in your program, and each bound color attachment gets its own output value (your shader runs per-fragment, so you output one value at a time). Make sure the format of your output is correct, i.e. vec3/vec4 for the color buffer, int for your integer buffer and float for the float buffer.

            And stencil buffers won't help you turn depth checking on or off in a single (possibly indirect) draw call. I can't visualize what your bleeding thing means, but it can probably help with that? Maybe? But definitely not conditional depth checking.
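
            The snippet the answer refers to is not reproduced on this page. As a rough illustration of point 1 in Go, using the go-gl bindings (which the original question does not use, so treat the exact calls as an assumption), a framebuffer with a regular color attachment plus an integer attachment could be set up like this; a current OpenGL 3.3+ context is assumed:

// Sketch, not the answer's code: create a framebuffer with a regular RGBA
// color attachment on GL_COLOR_ATTACHMENT0 and a single-channel integer
// attachment on GL_COLOR_ATTACHMENT1, then enable both outputs with
// glDrawBuffers.
package mrt

import "github.com/go-gl/gl/v3.3-core/gl"

func setupMRT(width, height int32) (fbo, colorTex, intTex uint32) {
    gl.GenFramebuffers(1, &fbo)
    gl.BindFramebuffer(gl.FRAMEBUFFER, fbo)

    // Attachment 0: ordinary 8-bit RGBA color texture.
    gl.GenTextures(1, &colorTex)
    gl.BindTexture(gl.TEXTURE_2D, colorTex)
    gl.TexImage2D(gl.TEXTURE_2D, 0, gl.RGBA8, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, nil)
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST)
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST)
    gl.FramebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, colorTex, 0)

    // Attachment 1: one 32-bit signed integer per texel (written from an
    // `out int` in the fragment shader; integer textures must use NEAREST).
    gl.GenTextures(1, &intTex)
    gl.BindTexture(gl.TEXTURE_2D, intTex)
    gl.TexImage2D(gl.TEXTURE_2D, 0, gl.R32I, width, height, 0, gl.RED_INTEGER, gl.INT, nil)
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST)
    gl.TexParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST)
    gl.FramebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT1, gl.TEXTURE_2D, intTex, 0)

    // Without this call only attachment 0 would receive fragment output.
    attachments := []uint32{gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1}
    gl.DrawBuffers(int32(len(attachments)), &attachments[0])

    gl.BindFramebuffer(gl.FRAMEBUFFER, 0)
    return fbo, colorTex, intTex
}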

            Source https://stackoverflow.com/questions/67756516

            QUESTION

            Can I read single pixel value from WebGL depth texture in JavaScript?
            Asked 2021-May-27 at 08:10
            In short

            I would like to read a single pixel value from a WebGL 2 depth texture in JavaScript. Is this at all possible?

            The scenario

            I am rendering a scene in WebGL 2. The renderer is given a depth texture to which it writes the depth buffer. This depth texture is used in post processing shaders and the like, so it is available to us.

            However, I need to read my single pixel value in JavaScript, not from within a shader. If this had been a normal RGB texture, I would do

            ...

            ANSWER

            Answered 2021-May-26 at 17:50

            There is no possibility; WebGL 2.0 is based on OpenGL ES 3.0.
            In the OpenGL ES 3.2 Specification, section 4.3.2 Reading Pixels, it is clearly specified:

            [...] The second is an implementation-chosen format from among those defined in table 3.2, excluding formats DEPTH_COMPONENT and DEPTH_STENCIL [...]

            Source https://stackoverflow.com/questions/67710054

            QUESTION

            I have ported the contents of OpenGL (webGL) to Metal and have a question
            Asked 2021-May-27 at 07:02

            I have ported the contents of OpenGL (WebGL) code to Metal and have a question. When doing the following in OpenGL (WebGL):

            • I want to bind and render the framebuffer in OpenGL (WebGL).

            ...

            ANSWER

            Answered 2021-May-27 at 07:02

            There's not really a frame buffer concept in Metal. You can render to any texture that has renderTarget usage. You can get a texture from a CAMetalDrawable (which you can get from a CAMetalLayer), or you can create one yourself, using the MTLDevice.makeTexture method and passing the appropriate MTLTextureDescriptor.

            Then, when you want to render into it, you need to create a render command encoder, which you can make from an MTLCommandBuffer. You will need to pass an MTLRenderPassDescriptor. In that descriptor, you can set the texture along with its load and store actions in the appropriate render target slot (there are 8 of them).

            There's actually a WWDC talk that goes in depth on how to port GL apps to Metal: Bringing OpenGL Apps to Metal

            Source https://stackoverflow.com/questions/67600070

            QUESTION

            What exactly are framebuffers and swapchains?
            Asked 2021-May-26 at 15:23

            So I'm a bit confused with the concept of framebuffers. I've done my research but I always find different definitions, often these two:

            • A framebuffer is an array of multiple different images. But this definition, at least to me, sounds more like what a swapchain is: a series of framebuffers.

            • A framebuffer is an array of pixels forming a single image, so kind of like a bitmap (but from what I've read, it can contain more information than just the color, like depth values and stuff), and when that bitmap is filled by the pipeline, it is queued for presentation. This would make much more sense to me, because then the swapchain also makes sense: a collection of framebuffers, so there can be one used as a rendering target and another for presentation, in the case of double buffering, and the swapchain handles swapping them with the correct timing to improve framerate stabilization.

            Which of these is correct? Because I'm tired of hearing different things every time I look for a bit of information.

            Please keep in mind that I'm learning Vulkan with no graphics experience at all (I know it's not recommended), so I'm much more interested in the concepts than the code right now.

            ...

            ANSWER

            Answered 2021-Apr-28 at 13:17

            It is a bit of a problem how the terminology historically evolved.

            Historically, the frame buffer is everything for a frame, and it was largely opaque. That includes the color buffer(s), the depth buffer (and, newly in Vulkan, input attachments).

            Also, historically there was not that much preprocessing and compute, and things were tied to a screen. Hence the association with the swapchain. But Vulkan can easily be headless, so that association does not always make sense.

            So sometimes "framebuffer" is used interchangeably to mean "the swapchain's color buffer images". But generally (and specifically for the Vulkan object) it means "the buffers for a frame that need special consideration": not only the color buffer, and irrespective of whether they end up on screen or not.

            Source https://stackoverflow.com/questions/67298942

            QUESTION

            How to bind a GL_TEXTURE_2D_ARRAY to a framebuffer on GL_COLOR_ATTACHMENT1?
            Asked 2021-May-24 at 07:35

            Edit: see @Rabbid76's answer; the question was not truly related to GL_TEXTURE_2D_ARRAY, only to framebuffer color attachment activation!

            I'm having trouble updating a shader that used to output into a single texture so that it outputs to multiple textures. Here's the simplified code; I'll include everything I find relevant, feel free to ask for other parts of the code if they are important.

            ...

            ANSWER

            Answered 2021-May-24 at 07:23

            You need to specify the buffers to be drawn into with glDrawBuffers:

            Source https://stackoverflow.com/questions/67668046

            QUESTION

            How can I use inotify to tell when a named pipe is opened?
            Asked 2021-May-21 at 16:51

            The overall goal is that I am trying to make a named pipe that, when read, is a PNG file of the current framebuffer. With various snippets cobbled together from online, like the PNG creator from Andrew Duncan, I have something that can create and write the PNG OK, but the trick is that I need to not write to the pipe until someone reads it, so that the image is current and not from when the pipe was opened.

            It seems like I should be able to use inotify_add_watch(fd, filename, IN_OPEN | IN_ACCESS) to tell when someone opens the file to start reading; then I open it for writing, send my PNG file data, and close the file.

            The pipe is getting created but I am not getting the watch event when I try to read from the file (cat fbpipe.png > pipefile.png). It is blocking at the first read() of watch events.

            Relevant code snippets:

            ...

            ANSWER

            Answered 2021-May-21 at 16:51

            You don't need inotify for this.

            If you open a named pipe for writing, the open system call will block until some process opens the named pipe for reading. That's basically what you want. When the open returns, you know there's a client waiting to read.

            Similarly, if you open the pipe for reading, the open will block until some process opens the pipe for writing. Furthermore, a read will block until the writer actually writes data. So the named pipe basically takes care of synchronisation.

            That makes for a very simple client-server architecture, but it only works if you never have two concurrent clients. For a more general approach, use Unix domain sockets.
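
            A small Go sketch of this server loop (the /tmp path and the placeholder data are illustrative; a real program would encode the framebuffer to PNG at the point marked below):

package main

import (
    "log"
    "os"
    "syscall"
)

func main() {
    const pipePath = "/tmp/fbpipe.png" // placeholder path

    // Create the FIFO once; ignore the error if it already exists.
    if err := syscall.Mkfifo(pipePath, 0644); err != nil && err != syscall.EEXIST {
        log.Fatal(err)
    }

    for {
        // Opening a FIFO for writing blocks until some process opens it for
        // reading, so no inotify watch is needed. When OpenFile returns, a
        // reader (e.g. `cat /tmp/fbpipe.png > out.png`) is already waiting.
        w, err := os.OpenFile(pipePath, os.O_WRONLY, 0)
        if err != nil {
            log.Fatal(err)
        }

        // This is the moment to grab the current framebuffer and encode it;
        // a fixed byte slice stands in for the real PNG data in this sketch.
        data := []byte("placeholder for freshly encoded PNG bytes")
        if _, err := w.Write(data); err != nil {
            log.Print(err)
        }
        w.Close() // the reader sees EOF and its read completes
    }
}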

            Source https://stackoverflow.com/questions/67639932

            QUESTION

            MTKView Transparency
            Asked 2021-May-17 at 09:09

            I can't make my MTKView clear its background. I've set the view's and its layer's isOpaque to false and the background color to clear, and tried multiple solutions found on Google/Stack Overflow (most are in the code below, like the loadAction and clearColor of the color attachment), but nothing works.

            All the background color settings seem to be ignored. Setting loadAction and clearColor of MTLRenderPassColorAttachmentDescriptor does nothing.

            I'd like to have my regular UIViews drawn under the MTKView. What am I missing?

            ...

            ANSWER

            Answered 2021-May-17 at 09:09

            Thanks to Frank, the answer was to just set the clearColor property of the view itself, which I missed. I also removed most adjustments in the MTLRenderPipelineDescriptor, whose code is now:

            Source https://stackoverflow.com/questions/67487986

            QUESTION

            Deferred Rendering not Displaying the GBuffer Textures
            Asked 2021-May-16 at 13:19

            I'm trying to implement deferred rendering within an engine I'm developing as a personal learning project, and I cannot understand what I'm doing wrong when it comes to rendering all the textures in the GBuffer to check whether the implementation is okay.

            The thing is that I currently have a framebuffer with 3 color attachments for the different textures of the GBuffer (color, normal and position), which I initialize as follows:

            ...

            ANSWER

            Answered 2021-May-16 at 13:19

            It is not clear from the question what exactly might be happening here, as lots of GL state - both at the time of rendering to the gbuffer, and at the time the gbuffer texture is rendered for visualization - is simply unknown. However, from the images given in the question, one cannot conclude that the actual color output for attachments 1 and 2 is not working.

            One issue which comes to mind is alpha blending. The color values processed by the per-fragment operations after the fragment shader always work with RGBA values - although the value of the A channel only matters if you enable blending and use a blend function which somehow depends on the source alpha.

            If you declare a custom fragment shader output as float, vec2, vec3, the remaining components stay undefined (undefined value, not undefined behavior). This does not pose a problem unless some other operations you do depend on those values.

            What we also have here is a GL_RGBA16F output format (which is the right choice, because none of the 3-component RGB formats are required as color-renderable by the spec).

            What might happen here is either:

            • Alpha blending is already turned on during rendering into the g-buffer. The fragment shader's alpha output happens to be zero, so that it appears as 100% transparent and the contents of the texture are not changed.
            • Alpha blending is not used during rendering into the g-buffer, so the correct contents end up in the texture; the alpha channel just happens to end up with all zeros. Now the texture might be visualized with alpha blending enabled, ending up in a 100% transparent view.

            If it is the first option, turn off blending when rendering into the g-buffer. It would not work with deferred shading anyway. You might still run into the second option then.

            If this is the second option, there is no issue at all - the lighting passes which follow will read the data they need (and ultimately, you will want to put useful information into the alpha channel so as not to waste it and to be able to reduce the number of attachments). It is just that your visualization (which I assume is for debug purposes only) is wrong. You can try to fix the visualization.

            As a side note: storing the world space position in the G-Buffer is a huge waste of bandwidth. All you need to be able to reconstruct the world space position is the depth value and the inverse of your view and projection matrices. Also, storing the world space position in GL_RGB16F will very easily run into precision issues if you move your camera away from the world space origin.
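
            As a hedged sketch of that side note (using the go-gl/mathgl package, which is an assumption, and the default [0,1] depth range): given a pixel's window coordinates and its depth buffer value, the world space position falls out of one multiply by the inverse view-projection matrix and a perspective divide.

package main

import (
    "fmt"

    "github.com/go-gl/mathgl/mgl32"
)

// reconstructWorldPos recovers a world space position from window coordinates
// (px, py), a depth buffer value in [0,1] and the inverse of projection*view.
func reconstructWorldPos(px, py, depth float32, width, height int, invViewProj mgl32.Mat4) mgl32.Vec3 {
    // Window coordinates -> normalized device coordinates in [-1, 1].
    ndc := mgl32.Vec4{
        2*px/float32(width) - 1,
        2*py/float32(height) - 1,
        2*depth - 1,
        1,
    }
    // Unproject, then divide by w to undo the perspective projection.
    w := invViewProj.Mul4x1(ndc)
    return mgl32.Vec3{w.X() / w.W(), w.Y() / w.W(), w.Z() / w.W()}
}

func main() {
    proj := mgl32.Perspective(mgl32.DegToRad(60), 16.0/9.0, 0.1, 100)
    view := mgl32.LookAtV(mgl32.Vec3{0, 0, 5}, mgl32.Vec3{0, 0, 0}, mgl32.Vec3{0, 1, 0})
    invViewProj := proj.Mul4(view).Inv()
    fmt.Println(reconstructWorldPos(640, 360, 0.5, 1280, 720, invViewProj))
}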

            Source https://stackoverflow.com/questions/67548623

            QUESTION

            How to divide Pixels in Subpixels?
            Asked 2021-May-11 at 10:54

            I am developing an OS and I have to subdivide pixels into subpixels. I am using GOP framebuffers: https://wiki.osdev.org/GOP

            Is it possible to subdivide pixels in GOP framebuffers?

            How can I do it?

            I found only these on the Internet:

            Subpixel rendering: https://en.wikipedia.org/wiki/Subpixel_rendering

            Subpixel resolution: https://en.wikipedia.org/wiki/Sub-pixel_resolution

            The most useful: https://www.grc.com/ct/ctwhat.htm

            How can I implement it in my OS?

            ...

            ANSWER

            Answered 2021-May-11 at 10:54

            How can I Implement It in My OS?

            The first step is to determine the pixel geometry (see https://en.wikipedia.org/wiki/Pixel_geometry ); because if you don't know that, any attempt at sub-pixel rendering is likely to just make the image worse than not doing sub-pixel rendering at all. I've never been able to find a sane way to obtain this information. A "least insane" way is to get the monitor's EDID/E-EDID (Extended Display Identification Data - see https://en.wikipedia.org/wiki/Extended_Display_Identification_Data ), extract the manufacturer and product code, and then use the manufacturer and product code to find the information elsewhere (from a file, from a database, ...). Sadly this means that you'll have to create all the information needed for all monitors you support (and fall back to "sub-pixel rendering disabled" for unknown monitors).

            Note: as an alternative, you can let the user set the pixel geometry; but most users won't know and won't want the hassle, and the rest will set it wrong, so...

            The second step is to make sure you're using the monitor's preferred resolution; because if you're not, the monitor will probably be scaling your image to make it fit, and that will destroy any benefit of sub-pixel rendering. To do this you want to obtain and parse the monitor's EDID or E-EDID data and try to determine the preferred resolution; then use the preferred resolution when setting the video mode. Unfortunately some monitors (mostly old stuff) either won't tell you the preferred resolution or won't have a preferred resolution; and even if you can determine the preferred resolution you might not be able to set that video mode with VBE (on BIOS) or GOP or UGA (on UEFI), and writing native video drivers is "not without further problems".

            The third step is the actual rendering; but that depends on how you're rendering what.

            For advanced rendering (capable of 3D - textured polygons, etc) it's easiest to think of it as rendering separate monochrome images (e.g. one for red, one for green, one for blue) with a slight shift in the camera's position to reflect the pixel geometry. For example, if the pixel geometry is "3 vertical bars, with red on the left of the pixel, green in the middle and blue on the right" then when rendering the red monochrome image you'd shift the camera slightly to the left (by about a third of a pixel). However, this almost triples the cost of rendering.

            If you're only doing sub-pixel rendering for fonts then it's the same basic principle in a much more limited setting (when rendering fonts to a texture/bitmap and not when rendering anything to the screen). In this case, if you cache the resulting pixel data and recycle it (which you'll want to do anyway), it can have minimal overhead. This requires that the text being rendered is aligned to a pixel grid (and not scaled in any way, rendered at arbitrary angles, stuck onto the side of a spinning 3D teapot, or anything like that).
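
            For the font case, a minimal plain-Go sketch of the idea (not from the answer, and assuming the common RGB-vertical-stripe geometry): render glyph coverage at three times the horizontal resolution, then map each horizontal triple of coverage samples onto the R, G and B channels of one output pixel.

package main

import (
    "fmt"
    "image"
    "image/color"
)

// collapseToSubpixels turns a 3*w wide grayscale coverage image into a w wide
// RGBA image, one coverage sample per color stripe of each pixel.
func collapseToSubpixels(cov *image.Gray, w, h int) *image.RGBA {
    out := image.NewRGBA(image.Rect(0, 0, w, h))
    for y := 0; y < h; y++ {
        for x := 0; x < w; x++ {
            r := cov.GrayAt(3*x+0, y).Y // left third drives the red stripe
            g := cov.GrayAt(3*x+1, y).Y // middle third drives the green stripe
            b := cov.GrayAt(3*x+2, y).Y // right third drives the blue stripe
            out.SetRGBA(x, y, color.RGBA{R: r, G: g, B: b, A: 255})
        }
    }
    return out
}

func main() {
    // Fake coverage: a 1-pixel output row whose left third is fully covered.
    cov := image.NewGray(image.Rect(0, 0, 3, 1))
    cov.SetGray(0, 0, color.Gray{Y: 255})
    fmt.Println(collapseToSubpixels(cov, 1, 1).RGBAAt(0, 0)) // {255 0 0 255}
}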

            Source https://stackoverflow.com/questions/67482142

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install framebuffer

            You can download it from GitHub.
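
            Since framebuffer is an ordinary Go package hosted on GitHub, installing it with the Go toolchain (for example, go get github.com/gonutz/framebuffer) should also work; this is an inference from the clone URL below rather than an instruction given on this page.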

            Support

            For any new features, suggestions and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/gonutz/framebuffer.git

          • CLI

            gh repo clone gonutz/framebuffer

          • SSH

            git@github.com:gonutz/framebuffer.git
