osdev | Fourth rewrite of LevOS, aiming for POSIX compliance | TCP library

by levex | C | Version: Current | License: No License

kandi X-RAY | osdev Summary

osdev is a C library typically used in Networking and TCP applications. It has no reported bugs or vulnerabilities, and it has low support. You can download it from GitHub.

Fourth rewrite of LevOS, aiming for POSIX compliance.

            Support

              osdev has a low active ecosystem.
              It has 181 stars, 33 forks, and 11 watchers.
              It had no major release in the last 6 months.
              There is 1 open issue and 0 have been closed. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of osdev is current.

            Quality

              osdev has no bugs reported.

            Security

              osdev has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              osdev does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            Reuse

              osdev releases are not available. You will need to build from source code and install.


            osdev Key Features

            No Key Features are available at this moment for osdev.

            osdev Examples and Code Snippets

            No Code Snippets are available at this moment for osdev.

            Community Discussions

            QUESTION

            Implement custom stack canary handling without the standard library
            Asked 2021-Jun-01 at 08:29

            I am trying to implement stack canaries manually and without the standard library. Therefore I have created a simple PoC with the help of this guide from the OSDev wiki. The article suggests that a simple implementation must provide the __stack_chk_guard variable and the __stack_chk_fail() handler.
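
            For reference, a minimal sketch of those two symbols in a freestanding C file might look like the following (the guard constant is an arbitrary placeholder; a real implementation should randomize it at boot):

                #include <stdint.h>

                /* Value checked by the compiler-inserted prologue/epilogue code.
                   0xDEADC0DE is a placeholder; randomize at boot in real use. */
                uintptr_t __stack_chk_guard = 0xDEADC0DE;

                /* Called by compiler-generated code when a canary is clobbered.
                   Freestanding environment: no abort(), so just hang. */
                __attribute__((noreturn))
                void __stack_chk_fail(void)
                {
                    for (;;)
                        ;
                }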

            However, when I compile using GCC and provide the -fstack-protector-all flag, the executable does not contain any stack canary check at all. What am I missing to get GCC to include the stack canary logic?

            ...

            ANSWER

            Answered 2021-May-27 at 08:48

            It looks like the Arch gcc package (which the Manjaro package is based on) is turning off -fstack-protector when building without the standard library (Done for Arch bug 64270).

            This behavior is apparently also present in Gentoo.

            I haven't tried this, but I believe you should be able to dump the GCC specs using gcc -dumpspecs into a file, keeping only the section *cc1_options, removing %{nostdlib|nodefaultlibs|ffreestanding:-fno-stack-protector} from it, and passing it to gcc with gcc -specs=your_spec_file.
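
            Untested, but concretely that workflow would look something like this (myspecs and poc.c are placeholder names):

                gcc -dumpspecs > myspecs
                # edit myspecs: keep only the *cc1_options section and delete
                #   %{nostdlib|nodefaultlibs|ffreestanding:-fno-stack-protector}
                gcc -specs=myspecs -fstack-protector-all -nostdlib -o poc poc.c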

            Alternately, you can rebuild the gcc package with this patch removed.

            Source https://stackoverflow.com/questions/67702131

            QUESTION

            Get EDID info in C (UEFI): read the ES:DI register?
            Asked 2021-May-12 at 17:56

            I am developing an OS and I want to get the EDID from the monitor. I found some asm code (https://wiki.osdev.org/EDID) that returns the EDID in the ES:DI registers.

            ...

            ANSWER

            Answered 2021-May-12 at 17:56

            That page on osdev.org contains code intended to be run when the CPU is in 16-bit real mode.
            You can tell not only from the registers involved but also from the fact that int 10h is used.
            This is a well-known BIOS interrupt service that is written in 16-bit real-mode code.

            If you target UEFI, then your bootloader is actually a UEFI application, which is a PE32(+) image.
            If the CPU is 64-bit capable, the firmware will switch into long mode (64-bit mode) and load your bootloader.
            Otherwise, it will switch into protected mode (32-bit mode).
            In any case, real mode is never used in UEFI.

            You can call 16-bit code from protected/long mode with the use of a 16-bit code segment in the GDT/LDT, but you cannot call real-mode code (i.e. code written to work with real-mode segmentation) because segmentation works completely differently between the modes.
            Plus, in real mode interrupts are dispatched through the IVT and not the IDT, so you would need to get the original entry point for interrupt 10h.

            UEFI protocol EFI_EDID_DISCOVERED_PROTOCOL

            Luckily, UEFI has a replacement for most basic services offered by the legacy BIOS interface.
            In this case, you can use the EFI_EDID_DISCOVERED_PROTOCOL and eventually apply any override from the platform firmware with the use of EFI_EDID_OVERRIDE_PROTOCOL.

            The EFI_EDID_DISCOVERED_PROTOCOL is straightforward to use: it's just a (Size, Data) pair.
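
            As a sketch of what consuming it looks like under EDK2 (error handling trimmed; DumpEdid is a hypothetical helper name):

                #include <Uefi.h>
                #include <Library/UefiBootServicesTableLib.h>
                #include <Protocol/EdidDiscovered.h>

                /* Walk every handle that carries the EDID protocol and
                   locate the raw EDID bytes reported by the GOP driver. */
                EFI_STATUS DumpEdid(VOID)
                {
                    EFI_HANDLE *Handles;
                    UINTN      HandleCount, i;
                    EFI_STATUS Status;

                    Status = gBS->LocateHandleBuffer(ByProtocol,
                                                     &gEfiEdidDiscoveredProtocolGuid,
                                                     NULL, &HandleCount, &Handles);
                    if (EFI_ERROR(Status))
                        return Status;

                    for (i = 0; i < HandleCount; i++) {
                        EFI_EDID_DISCOVERED_PROTOCOL *Edid;
                        Status = gBS->HandleProtocol(Handles[i],
                                                     &gEfiEdidDiscoveredProtocolGuid,
                                                     (VOID **)&Edid);
                        if (EFI_ERROR(Status) || Edid->SizeOfEdid == 0)
                            continue;
                        /* Edid->Edid points to Edid->SizeOfEdid raw bytes. */
                    }
                    gBS->FreePool(Handles);
                    return EFI_SUCCESS;
                }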

            Source https://stackoverflow.com/questions/67499724

            QUESTION

            How to divide Pixels into Subpixels?
            Asked 2021-May-11 at 10:54

            I am developing an OS and I have to subdivide pixels into subpixels. I am using GOP framebuffers (https://wiki.osdev.org/GOP).

            Is it possible to subdivide pixels in GOP framebuffers?

            How can I do it?

            I found only these on the Internet:

            Subpixel Rendering: https://en.wikipedia.org/wiki/Subpixel_rendering

            Subpixel Resolution: https://en.wikipedia.org/wiki/Sub-pixel_resolution

            The most useful: https://www.grc.com/ct/ctwhat.htm

            How can I implement it in my OS?

            ...

            ANSWER

            Answered 2021-May-11 at 10:54

            How can I implement it in my OS?

            The first step is to determine the pixel geometry (see https://en.wikipedia.org/wiki/Pixel_geometry ); because if you don't know that, any attempt at sub-pixel rendering is likely to just make the image worse than not doing sub-pixel rendering at all. I've never been able to find a sane way to obtain this information. A "least insane" way is to get the monitor's EDID/E-EDID (Extended Display Identification Data - see https://en.wikipedia.org/wiki/Extended_Display_Identification_Data ), extract the manufacturer and product code, and then use them to look the information up somewhere else (from a file, from a database, ..). Sadly this means that you'll have to create all the information needed for all monitors you support (and fall back to "sub-pixel rendering disabled" for unknown monitors).

            Note: As an alternative, you can let the user set the pixel geometry; but most users won't know and won't want the hassle, and the rest will set it wrong, so...

            The second step is to make sure you're using the monitor's preferred resolution; because if you're not, the monitor will probably be scaling your image to make it fit, and that will destroy any benefit of sub-pixel rendering. To do this you want to obtain and parse the monitor's EDID or E-EDID data and try to determine the preferred resolution; then use the preferred resolution when setting the video mode. Unfortunately some monitors (mostly old stuff) either won't tell you the preferred resolution or won't have one; and even if you can determine the preferred resolution you might not be able to set that video mode with VBE (on BIOS) or GOP or UGA (on UEFI), and writing native video drivers is "not without further problems".

            The third step is the actual rendering; but that depends on how you're rendering what.

            For advanced rendering (capable of 3D - textured polygons, etc) it's easiest to think of it as rendering separate monochrome images (e.g. one for red, one for green, one for blue) with a slight shift in the camera's position to reflect the pixel geometry. For example, if the pixel geometry is "3 vertical bars, with red on the left of the pixel, green in the middle and blue on the right" then when rendering the red monochrome image you'd shift the camera slightly to the left (by about a third of a pixel). However, this almost triples the cost of rendering.

            If you're only doing sub-pixel rendering for fonts then it's the same basic principle in a much more limited setting (when rendering fonts to a texture/bitmap and not when rendering anything to the screen). In this case, if you cache the resulting pixel data and recycle it (which you'll want to do anyway) it can have minimal overhead. This requires that the text being rendered is aligned to a pixel grid (and not scaled in any way, or at arbitrary angles, or stuck onto the side of a spinning 3D teapot, or anything like that).
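
            To make the font case concrete, here is a sketch of packing a glyph rendered at 3x horizontal resolution into GOP pixels, assuming the common PixelBlueGreenRedReserved8BitPerColor framebuffer format and plain RGB-striped geometry (all names are illustrative):

                #include <stdint.h>

                /* Blend one 8-bit channel of fg over bg by coverage a (0-255). */
                static uint8_t blend(uint8_t a, uint8_t fg, uint8_t bg)
                {
                    return (uint8_t)((fg * a + bg * (255 - a)) / 255);
                }

                /* cov holds one coverage value per subpixel column, i.e. three
                   entries per output pixel: red stripe, green stripe, blue stripe. */
                void blit_subpixel_row(uint32_t *row, const uint8_t *cov,
                                       int pixels, uint32_t fg, uint32_t bg)
                {
                    for (int x = 0; x < pixels; x++) {
                        uint8_t r = blend(cov[3*x+0], (fg >> 16) & 0xFF, (bg >> 16) & 0xFF);
                        uint8_t g = blend(cov[3*x+1], (fg >> 8) & 0xFF,  (bg >> 8) & 0xFF);
                        uint8_t b = blend(cov[3*x+2], fg & 0xFF,         bg & 0xFF);
                        row[x] = ((uint32_t)r << 16) | ((uint32_t)g << 8) | b;
                    }
                }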

            Source https://stackoverflow.com/questions/67482142

            QUESTION

            Why am I not receiving interrupts on Port Status Change Event with the Intel's xHC on QEMU?
            Asked 2021-Apr-26 at 06:13

            I'm coding a small OS kernel which is supposed to have a driver for the Intel's xHC (extensible host controller). I got to a point where I can actually generate Port Status Change Events by resetting the root hub ports. I'm using QEMU for virtualization.

            I ask QEMU to emulate a USB mouse and a USB keyboard which it seems to do because I actually get 2 Port Status Change Events when I reset all root hub ports. I get these events on the Event Ring of interrupter 0.

            The problem is I can't find out why I'm not getting interrupts generated on these events.

            I'm posting a complete reproducible example here. Bootloader.c is the UEFI app that I launch from the OVMF shell by typing fs0:bootloader.efi. Bootloader.c is compiled with the EDK2 toolset. I work on Linux Ubuntu 20. Sorry for the long code.

            The file main.cpp is a complete minimal reproducible example of my kernel. The whole OS is compiled and launched with the 3 following scripts:

            compile

            ...

            ANSWER

            Answered 2021-Apr-26 at 06:13

            I finally got it working by inverting the MSI-X table structure found on osdev.org. I decided to completely reinitialize the xHC after leaving the UEFI environment as it could leave it in an unknown state. Here's the xHCI code for anyone wondering the same thing as me:
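
            (The full code is only available at the source link below.) For anyone cross-checking their own structures: the MSI-X table entry layout is fixed by the PCI specification as four 32-bit fields, 16 bytes per entry, living in the BAR named by the MSI-X capability. A sketch in C:

                #include <stdint.h>

                /* One MSI-X table entry, per the PCI Local Bus specification. */
                struct msix_entry {
                    volatile uint32_t msg_addr_lo;   /* 0xFEExxxxx on x86 */
                    volatile uint32_t msg_addr_hi;
                    volatile uint32_t msg_data;      /* low byte = vector */
                    volatile uint32_t vector_ctrl;   /* bit 0 = mask */
                } __attribute__((packed));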

            Source https://stackoverflow.com/questions/67206312

            QUESTION

            x86-64 Kernel crashing on setting up the IDT
            Asked 2021-Apr-23 at 16:11

            I am currently trying to create an x86-64 kernel from scratch (using GRUB Multiboot2 as a bootloader). I set up my GDT just fine, but when setting up my IDT, there seems to be a problem. I isolated the issue to my call of lidt by placing hlt instructions before and after most instructions of my code. Here are my C and ASM files that define my IDT:

            ...

            ANSWER

            Answered 2021-Apr-23 at 15:39

            Your load_idt function is written as a 32-bit function where the first parameter is passed on the stack. In the 64-bit System V ABI the first parameter is passed in register RDI. Use this instead:
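
            The answer's exact replacement is at the source link; the gist, sketched here as C with inline assembly rather than the original standalone ASM file, is that the descriptor pointer can be used directly from the argument register:

                #include <stdint.h>

                struct idt_ptr {
                    uint16_t limit;
                    uint64_t base;
                } __attribute__((packed));

                /* In the 64-bit System V ABI the argument arrives in RDI; a
                   memory operand lets the compiler hand it straight to lidt. */
                static inline void load_idt(const struct idt_ptr *idtr)
                {
                    __asm__ volatile ("lidt %0" : : "m"(*idtr));
                }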

            Source https://stackoverflow.com/questions/67231295

            QUESTION

            How does MSI-X triggers interrupt handlers? Is there a need to poll the chosen memory address?
            Asked 2021-Apr-11 at 04:40

            I have a small kernel which is booted with UEFI. I'm using QEMU for virtualization. I want to write an xHCI driver to support USB keyboards in my kernel. I'm having trouble finding concise and clear information. I "found" the xHCI in my kernel. I have a pointer to its PCI configuration space. It is MSI-X capable. I want to use MSI-X but I'm having trouble understanding how that works with the xHCI and USB.

            My problem is that osdev.org is normally quite informative and covers the basics I need to implement some functionality. In the case of MSI-X, that doesn't seem to be the case. I'm having a hard time making the link between all the information on osdev.org and the MSI-X functionality.

            So basically, I find the MSI-X table and then I set some addresses there to tell the xHCI PCI device to write to those addresses to trigger an interrupt. But is an interrupt handler called at some point? Do I need to poll this address to determine if an interrupt occurred? I would have thought that the Vector Control field in the MSI-X table would let me set an interrupt vector, but all the bits are reserved.

            EDIT

            I found the following stackoverflow Q&A which partially answers my question: Question about Message Signaled Interrupts (MSI) on x86 LAPIC system.

            So basically, the low byte of the data register contains the vector to trigger and the message address register contains the LAPIC ID to target. I still have some questions.

            1. Why does the Message Address register contain a fixed top of 0xFEE?

            2. What are the RH, DM and XX bits in the Message Address register?

            3. How does this work with the LAPIC? Basically, how does it trigger interrupts in the LAPIC? Is it a special feature of PCI devices which allows them to trigger interrupts in the LAPIC, or is it simply that PCI devices write to memory-mapped registers of the LAPIC with some specific data which triggers an interrupt? Because normally the LAPIC is accessed from within the core at an address which is the same for every LAPIC. Is it some kind of inter-processor interrupt from outside the CPU?

            ...

            ANSWER

            Answered 2021-Apr-10 at 06:51
            1. Why does the Message Address register contain a fixed top of 0xFEE?

            CPUs are like networking: packets with headers describing their contents are routed around a set of links based on "addresses" (which are like the IP address in a TCP/IP packet).

            MSI is essentially a packet saying "write this data to that address", where the address corresponds to another device (the local APIC inside a CPU) and is necessary because that's what the protocol/s for the bus require for that packet/message type, and because that tells the local APIC that it has to accept the packet and intervene. If the address part is wrong then it'd just look like any other write to anything else (and wouldn't be accepted by the local APIC and wouldn't be delivered as an IRQ to the CPU's core).

            Note: In theory, for most (Intel) CPUs the physical address of the local APIC can be changed. In practice there's no sane reason to ever want to do that, and if the physical address of the local APIC is changed I think the standard/original "0xFEE....." address range is still hard-wired into the local APIC for MSI acceptance.

            2. What are the RH, DM and XX bits in the Message Address register?

            The local APIC (among other uses) is used by software (kernel) on one CPU to send IRQs to other CPUs; these are called "inter-processor interrupts" (IPIs). When MSI was invented they simply re-used the same flags that already existed for IPIs. In other words, the DM (Destination Mode) and most of the other bits are defined in the section of Intel's manual that describes the local APIC. To understand these bits properly you need to understand the local APIC and IPIs; especially the part about IPI delivery.

            Later (when introducing hardware virtualization) Intel added a "redirection hint" (to allow IRQs from devices to be redirected to specific virtual machines). That is described in a specification called "Intel® Virtualization Technology for Directed I/O Architecture Specification" (available here: https://software.intel.com/content/www/us/en/develop/download/intel-virtualization-technology-for-directed-io-architecture-specification.html ).

            Even later, Intel wanted to support systems with more than 255 CPUs but the "APIC ID" was an 8-bit ID (limiting the system to a max. of 255 CPUs and one IO APIC). To fix this they created x2APIC (which changed a bunch of things - 32-bit APIC IDs, local APIC accessed by MSRs instead of physical addresses, ...). However, all the old devices (including IO APICs and MSI) were designed for 8-bit APIC IDs, so to fix that problem they just recycled the same "IRQ remapping" they already had (from virtualization) so that IRQs with 8-bit APIC IDs could be remapped to IRQs with 32-bit APIC IDs. The result is relatively horrible and excessively confusing (e.g. a kernel that wants to support lots of CPUs needs to use IOMMU for things that have nothing to do with virtualization), but it works without backward compatibility problems.
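
            Putting parts 1 and 2 together, composing an MSI address/data pair looks roughly like this (a sketch following the layout in the Intel SDM; IRQ remapping and x2APIC are ignored, and the helper names are illustrative):

                #include <stdint.h>

                /* Bits 31-20 are fixed at 0xFEE, bits 19-12 carry the
                   destination APIC ID, bit 3 is RH and bit 2 is DM. */
                static inline uint32_t msi_address(uint8_t apic_id, int rh, int dm)
                {
                    return 0xFEE00000u | ((uint32_t)apic_id << 12)
                                       | ((uint32_t)(rh & 1) << 3)
                                       | ((uint32_t)(dm & 1) << 2);
                }

                /* The low byte selects the vector; leaving the rest zero
                   gives fixed delivery mode with edge trigger. */
                static inline uint32_t msi_data(uint8_t vector)
                {
                    return vector;
                }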

            3. How does this work with the LAPIC? Basically, how does it trigger interrupts in the LAPIC?

            I'd expect that (for P6 and later - 80486 and Pentium used a different "3-wire APIC bus" instead) it all uses the same "packet format" (messages) - e.g. that the IO APIC sends the same "write this data to this address" packet/message (to local APIC) that is used by IPIs and MSI.

            Is it some kind of inter-processor interrupt from outside the CPU?

            Yes! :-)

            Source https://stackoverflow.com/questions/67028147

            QUESTION

            Page fault after enabling paging? OSDEV
            Asked 2021-Apr-09 at 08:47

            I am trying to make my own operating system. I got interrupts working, some keyboard and mouse drivers, and basic video and printing functions. Now I want to get into memory management and tasks, and the first thing I realized is that I need paging, which I have yet to set up properly.

            I followed some guides and tutorials on it, the main one being the Setting Up Paging tutorial on the osdev wiki (https://wiki.osdev.org/Setting_Up_Paging). I "wrote" (copied and pasted, basically) the following code in order to initialize paging.

            ...

            ANSWER

            Answered 2021-Apr-09 at 08:47

            I finally got it. My kernel had the following code:
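
            (The asker's actual fix is behind the source link.) For context, the wiki tutorial's basic setup that the question started from amounts to the following 32-bit sketch, which identity-maps the first 4 MiB and assumes the kernel itself lives there:

                #include <stdint.h>

                static uint32_t page_directory[1024] __attribute__((aligned(4096)));
                static uint32_t first_page_table[1024] __attribute__((aligned(4096)));

                void paging_init(void)
                {
                    for (int i = 0; i < 1024; i++) {
                        page_directory[i] = 0x00000002;          /* r/w, not present */
                        first_page_table[i] = (i * 0x1000) | 3;  /* present, r/w */
                    }
                    page_directory[0] = ((uint32_t)first_page_table) | 3;

                    /* Load CR3 with the directory, then set CR0.PG. */
                    __asm__ volatile ("mov %0, %%cr3" : : "r"(page_directory));
                    uint32_t cr0;
                    __asm__ volatile ("mov %%cr0, %0" : "=r"(cr0));
                    __asm__ volatile ("mov %0, %%cr0" : : "r"(cr0 | 0x80000000));
                }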

            Source https://stackoverflow.com/questions/67001152

            QUESTION

            Why does fmt::Arguments as_str not work when arguments are passed?
            Asked 2021-Mar-27 at 19:36

            I am making a VGA print macro for my OS with no_std, and it is for some reason not working. I am using the vga crate so that I do not have to do all the VGA code myself. I have a function called _print:

            ...

            ANSWER

            Answered 2021-Mar-27 at 19:04

            Your code panics because as_str() only returns Some if there are no arguments to format. So the immediate unwrap() will panic in instances where you do have arguments to be formatted.

            From the std::fmt::Arguments::as_str() docs: "Get the formatted string, if it has no arguments to be formatted. This can be used to avoid allocations in the most trivial case."

            You can use args.to_string() instead of args.as_str().unwrap() to format the arguments into a String. That way it actually formats, regardless of whether there are any arguments or not.

            Source https://stackoverflow.com/questions/66834686

            QUESTION

            Triple fault when loading the GDT
            Asked 2021-Mar-16 at 10:08

            I am trying to set up the GDT in Rust and global asm, but it seems to triple fault when I try to load the GDT.

            ...

            ANSWER

            Answered 2021-Mar-16 at 10:08

            I did not know that it was such a simple answer! I just had to deref (*) the GDT, as it was a lazy_static, and that solved everything :D

            Source https://stackoverflow.com/questions/66650568

            QUESTION

            compilation of binutils-gdb can't find ncurses
            Asked 2021-Jan-04 at 14:12

            I'm trying to compile the binutils for the i686-elf target according to this tutorial:

            I just added the --enable-tui option, so that I have TUI support in gdb.

            I did the following:

            ...

            ANSWER

            Answered 2021-Jan-03 at 01:39

            You're cross-compiling to a different architecture (i686-elf) than whatever you're running on—the $TARGET mentioned in the question. gdb will have to be linked with libraries which are built for that architecture.

            Debian provides ncurses packages which run on the current architecture, but does not provide a suitable package for the cross-compiled application. So you get to do this for yourself.

            When cross-compiling ncurses, you'll have to keep in mind that part of it builds/runs on the current architecture (to generate source files for compiling by the cross-compiler). That's defined in the environment as $BUILD_CC (rather than $CC), as you might see when reading the scripts for mingw cross-compiling. There's a section in the INSTALL file (in the ncurses sources) which outlines the process.

            There's no tutorial (that would be off-topic here anyway), but others have read the instructions and cross-compiled ncurses as evidenced by a recent bug report.

            Source https://stackoverflow.com/questions/65539831

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install osdev

            You can download it from GitHub.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.
            CLONE

          • HTTPS: https://github.com/levex/osdev.git

          • CLI: gh repo clone levex/osdev

          • SSH: git@github.com:levex/osdev.git



            Consider Popular TCP Libraries

            masscan by robertdavidgraham
            wait-for-it by vishnubob
            gnet by panjf2000
            Quasar by quasar
            mumble by mumble-voip

            Try Top Libraries by levex

            cgroups-rs (Rust)
            levos7 (C)
            levos5 (C)
            levos6 (C)