endian | Ultra light-weight endian utility for C++

by mandreyel | C++ | Version: Current | License: No License

kandi X-RAY | endian Summary

endian is a C++ library typically used in Mobile, iOS, and React Native applications. endian has no reported bugs or vulnerabilities, and it has low support. You can download it from GitHub.

A small but much-needed utility library for any program that needs to handle numbers during network or file IO.

Support

endian has a low active ecosystem.
It has 13 stars, 4 forks, and 2 watchers.
It had no major release in the last 6 months.
There are 3 open issues and 3 closed issues. On average, issues are closed in 65 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of endian is current.

Quality

              endian has no bugs reported.

Security

              endian has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

endian does not have a standard license declared.
Check the repository for any license declaration and review the terms closely.
Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

              endian releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            endian Key Features

            No Key Features are available at this moment for endian.

            endian Examples and Code Snippets

            No Code Snippets are available at this moment for endian.

            Community Discussions

            QUESTION

            differences in bitmap or rasterized font bitmaps and text display on 3.5" TFT LCD
            Asked 2021-Jun-12 at 16:19

I am using a 3.5" TFT LCD display with an Arduino Uno and the library from the manufacturer, the KeDei TFT library. The library came with a bitmap font table that is huge for the small amount of memory of an Arduino Uno, so I've been looking for alternatives.

What I am running into is that there doesn't seem to be a standard representation: some of the bitmap font tables I've found work fine, while others display as strange doodles and marks, or display upside down, or display with the letters flipped. After writing a simple application to display some of the characters, I finally realized that different bitmaps use different character orientations.

            My question

            What are the rules or standards or expected representations for the bit data for bitmap fonts? Why do there seem to be several different text character orientations used with bitmap fonts?

            Thoughts about the question

            Are these due to different target devices such as a Windows display driver or a Linux display driver versus a bare metal Arduino TFT LCD display driver?

What criteria are used to determine a particular bitmap font representation as a series of unsigned char values? Do different types of raster devices, such as a TFT LCD display and its controller, expect a different sequence of bits when drawing on the display surface by setting pixel colors?

What other possible bitmap font representations are there that would require a transformation which my version of the library currently doesn't offer?

Is there some method, other than the approach I'm using, to determine what transformation is needed? I currently plug the bitmap font table into a test program, print out a set of characters to see how they look, and then fine-tune the transformation by testing with the Arduino and the TFT LCD screen.

            My experience thus far

The KeDei TFT library came with a bitmap font table that was defined as

            ...

            ANSWER

            Answered 2021-Jun-12 at 16:19

Raster or bitmap fonts are represented in a number of different ways, and bitmap font file standards have been developed for both Linux and Windows. However, the raw representation of bitmap font data in programming language source code seems to vary depending on:

            • the memory architecture of the target computer,
            • the architecture and communication pathways to the display controller,
            • character glyph height and width in pixels and
            • the amount of memory for bitmap storage and what measures are taken to make that as small as possible.

            A brief overview of bitmap fonts

            A generic bitmap is a block of data in which individual bits are used to indicate a state of either on or off. One use of a bitmap is to store image data. Character glyphs can be created and stored as a collection of images, one for each character in the character set, so using a bitmap to encode and store each character image is a natural fit.

            Bitmap fonts are bitmaps used to indicate how to display or print characters by turning on or off pixels or printing or not printing dots on a page. See Wikipedia Bitmap fonts

            A bitmap font is one that stores each glyph as an array of pixels (that is, a bitmap). It is less commonly known as a raster font or a pixel font. Bitmap fonts are simply collections of raster images of glyphs. For each variant of the font, there is a complete set of glyph images, with each set containing an image for each character. For example, if a font has three sizes, and any combination of bold and italic, then there must be 12 complete sets of images.
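To make the orientation issue concrete, here is a small illustrative sketch (the glyph data is made up for this example and is not from the KeDei library): the same 8x8 'L' glyph stored under two different bit-order conventions. Other layouts, such as the column-major bytes common in small OLED/LCD controllers, rearrange the same pixels yet again.

    /* The same 8x8 'L' glyph, one byte per row (values invented for
     * illustration). Which bit maps to which pixel is purely convention. */

    /* Convention A: most significant bit = leftmost pixel of the row. */
    static const unsigned char glyph_L_msb_left[8] = {
        0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x40, 0x7E
    };

    /* Convention B: least significant bit = leftmost pixel. Each byte is
     * the bit-reverse of the one above, so a renderer expecting
     * convention A would draw this glyph mirrored left-to-right. */
    static const unsigned char glyph_L_lsb_left[8] = {
        0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x02, 0x7E
    };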

            A brief history of using bitmap fonts

The earliest user interface terminals, such as teletype terminals, used dot matrix printer mechanisms to print on rolls of paper. With the development of cathode ray tube terminals, bitmap fonts were readily transferable to that technology, as dots of luminescence turned on and off by a scanning electron gun.

The earliest bitmap fonts were of fixed height and width, the bitmap acting as a kind of stamp or pattern to print characters on the output medium (paper or display tube) with a fixed line height and a fixed line width, such as the 80 columns and 24 lines of the DEC VT-100 terminal.

            With increasing processing power, a more sophisticated typographical approach became available with vector fonts used to improve displayed text quality and provide improved scaling while also reducing memory required to describe the character glyphs.

            In addition, while a matrix of dots or pixels worked fairly well for languages such as English, written languages with complex glyph forms were poorly served by bitmap fonts.

            Representation of bitmap fonts in source code

            There are a number of bitmap font file formats which provide a way to represent a bitmap font in a device independent description. For an example see Wikipedia topic - Glyph Bitmap Distribution Format

            The Glyph Bitmap Distribution Format (BDF) by Adobe is a file format for storing bitmap fonts. The content takes the form of a text file intended to be human- and computer-readable. BDF is typically used in Unix X Window environments. It has largely been replaced by the PCF font format which is somewhat more efficient, and by scalable fonts such as OpenType and TrueType fonts.

Other bitmap standards, such as XBM (Wikipedia topic - X BitMap) or XPM (Wikipedia topic - X PixMap), are source code components that describe bitmaps; however, many of these are not meant for bitmap fonts specifically but rather for other graphical images such as icons, cursors, etc.

As bitmap fonts are an older format, they are often wrapped within another font standard, such as TrueType, in order to be compatible with the standard font subsystems of modern operating systems such as Linux and Windows.

However, embedded systems running on bare metal or under an RTOS will normally need the raw bitmap character image data in a form similar to the XBM format. See the Encyclopedia of Graphics File Formats, which has this example:

            Following is an example of a 16x16 bitmap stored using both its X10 and X11 variations. Note that each array contains exactly the same data, but is stored using different data word types:
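The encyclopedia's exact arrays are not reproduced in this excerpt; the following is a hedged reconstruction of the idea, using an illustrative 16x16 hollow box rather than the original image data:

    #define box_width  16
    #define box_height 16

    /* X11 variant: 8-bit data words, two bytes per 16-pixel row
     * (XBM bits are LSB-first: bit 0 of the first byte is the
     * leftmost pixel of the row). */
    static unsigned char box_bits[] = {
        0xFF, 0xFF,  0x01, 0x80,  0x01, 0x80,  0x01, 0x80,
        0x01, 0x80,  0x01, 0x80,  0x01, 0x80,  0x01, 0x80,
        0x01, 0x80,  0x01, 0x80,  0x01, 0x80,  0x01, 0x80,
        0x01, 0x80,  0x01, 0x80,  0x01, 0x80,  0xFF, 0xFF
    };

    /* X10 variant: the same bits packed into 16-bit data words (shown
     * here assuming the first byte of each row pair becomes the
     * low-order byte of the short). */
    static unsigned short box_bits_x10[] = {
        0xFFFF, 0x8001, 0x8001, 0x8001, 0x8001, 0x8001, 0x8001, 0x8001,
        0x8001, 0x8001, 0x8001, 0x8001, 0x8001, 0x8001, 0x8001, 0xFFFF
    };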

            Source https://stackoverflow.com/questions/67465098

            QUESTION

            Rust custom bare metal compile target: linker expects "_start" symbol and discards unused ones: How can I specify a custom entry symbol?
            Asked 2021-Jun-10 at 12:17

I'm cross-compiling bare metal 32-bit x86 code with Rust, and I'm facing the problem that the final object file is empty if the entry function is not called exactly _start; the linker throws all the code away because it sees it as dead. I'm familiar with the fact that _start is a well-known entry point name, but the question is still:

What part of Rust, LLVM, or the linker forces this? Attributes like extern "C" fn ..., #[no_mangle], or #[export_name = "foobar"] do not help either (the code still gets thrown away by the linker). My guess is that it's not the Rust compiler but the linker. As you can see, I use rust-lld as the linker and ld.lld as the linker flavor in my case (see below).

1. Where does the required _start come from? Why does the linker throw my other code away?
2. What's the best option to specify my custom entry point to the linker?

            x86-unknown-bare_metal.json

            ...

            ANSWER

            Answered 2021-Jun-10 at 12:17

            New Answer [Solution]

The real solution is quite simple but was hard to find, because it's hard to dig for possible options and solutions in this relatively undocumented field. I found out that llvm-ld uses the same options as GNU ld, so I checked against the GNU ld link options and found the solution. It has to be
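The concrete option is elided in this excerpt. For reference, GNU ld and LLVM's ld.lld both accept --entry=<symbol> (short form -e) to name a custom entry symbol; in a Rust custom target JSON this could plausibly be passed through pre-link-args, along the lines of the following sketch (the entry symbol name here is hypothetical):

    "pre-link-args": {
        "ld.lld": ["--entry=my_start"]
    }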

            Source https://stackoverflow.com/questions/67918256

            QUESTION

            A good way to serialize a known array of small structs
            Asked 2021-Jun-09 at 03:38

            I have a large array of small structs that I would like to serialize to file.

            The struct:

            ...

            ANSWER

            Answered 2021-Jun-08 at 08:52

            Assuming you're using a recent framework: spans are your friend. As a trivial way of writing them:
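The answer's C# snippet is elided in this excerpt. As a rough sketch of the same span idea in C++20 terms (struct and field names are invented for illustration), one can view a contiguous array of trivially copyable structs as raw bytes and write them in a single call:

    #include <cstdint>
    #include <fstream>
    #include <span>
    #include <vector>

    struct Sample { uint32_t id; float value; };  // hypothetical small struct

    void write_all(const char* path, std::span<const Sample> items) {
        std::ofstream out(path, std::ios::binary);
        auto bytes = std::as_bytes(items);  // view the structs as raw bytes
        out.write(reinterpret_cast<const char*>(bytes.data()),
                  static_cast<std::streamsize>(bytes.size()));
    }

    int main() {
        std::vector<Sample> v{{1, 0.5f}, {2, 1.5f}};
        write_all("samples.bin", v);  // the vector converts to a span
    }

Note that this writes host byte order and any struct padding verbatim, so the file is only portable between machines of the same layout; that caveat is exactly the kind of problem an endian utility addresses.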

            Source https://stackoverflow.com/questions/67884211

            QUESTION

            bitshift outside allowed range
            Asked 2021-Jun-08 at 23:24

We have code in production that in some situations may left-shift a 32-bit unsigned integer by more than 31 bits. I know this is considered undefined behavior. Unfortunately we can't fix it right now, but we can work around it, if only we can assume how it works in practice.

On x86/amd64 I know the processor uses only the appropriate least-significant bits of the shift count operand, so a << b is in fact equivalent to a << (b & 31). From a hardware design standpoint this makes perfect sense.

My question is: how does this work in practice on modern popular platforms such as ARM, MIPS, RISC-V, etc.? I mean those that are actually used in modern PCs and mobile devices, not outdated or esoteric ones.

            Can we assume that those behave the same way?

            EDIT:

1. The code I'm talking about currently runs in a blockchain. It's less important how exactly it works; at the very least we want to be sure that it yields identical results on all machines. This is the most important requirement, because otherwise this can be exploited to induce a so-called chain split.

2. Fixing this means hassle, because the fix would have to be applied simultaneously to all the running machines; otherwise we are yet again at risk of a chain split. But we will do this at some point in an organized (controlled) manner.

3. The variety of compilers is a lesser problem: we only use GCC. I looked at the generated code with my own eyes; there's a shl instruction there. Frankly, I don't expect it to be anything different given the context (the shift operand comes from an arbitrary source and can't be predicted at compile time).

4. Please don't remind me that I "can't assume". I know this. My question is 100% practical. As I said, I know that on x86/amd64 the 32-bit shift instruction only takes the 5 least significant bits of the shift count operand.

            How does this behave on current modern architectures? We can also restrict the question to little-endian processors.

            ...

            ANSWER

            Answered 2021-Jun-02 at 20:15

With code that triggers undefined behavior, the compiler can do just about anything; well, that's why it's undefined. Asking for a safe definition of undefined code doesn't make any sense. Theoretical evaluations, observing the compiler translating similar code, or assumptions about what "common practice" might be won't really give you an answer.

Evaluating what the compiler has actually translated your UB code into is probably your only safe bet. If you want to be really sure what happens in the corner cases, have a look at the generated (assembly or machine) code. Modern debuggers give you the toolset to catch those corner cases and tell you what actually happens (the generated machine code is, after all, very well defined). This will be much simpler and much safer than speculating about what code the compiler might emit.
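Not part of the answer, but one defined-behavior workaround worth noting (a minimal sketch) is to make the masking that x86 performs in hardware explicit in the source. The expression is then defined for every count, yields identical results on every conforming platform, and typically still compiles to a single shl on x86 because the instruction masks the count anyway:

    #include <stdint.h>

    /* Shift with the count reduced mod 32, mirroring the x86 hardware
     * behavior the question describes; defined for every count value. */
    static inline uint32_t shl32(uint32_t value, unsigned count) {
        return value << (count & 31u);
    }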

            Source https://stackoverflow.com/questions/67810888

            QUESTION

            Memory reading time
            Asked 2021-Jun-06 at 14:18

            I heard that reading one byte from non-cached memory can take up to 50 CPU cycles.

            So, does reading int from memory take 4 times as long as reading char, meaning up to 200 cycles?

            If not, is it a good idea to get 4-8 chars at a time with *(size_t *)str, saving 150-350 CPU cycles? I imagine endianness might become an issue in that case.

Also, what about local variables used in a function? Do they all get kept in registers, or optimized away where possible, or at least cached in L1?

            ...

            ANSWER

            Answered 2021-Jun-06 at 14:18

            I heard that reading one byte from non-cached memory can take up to 50 CPU cycles.

            This would be a characteristic of a specific CPU, but in a general sense, yes, reading from non-cached memory is costly.

            So, does reading int from memory take 4 times as long as reading char, meaning up to 200 cycles?

            You seem to be assuming that int is 4 bytes wide, which is not necessarily the case. But no, reading four consecutive bytes from memory does not typically take four times as many cycles as reading one. Memory is ordinarily read in blocks larger than one byte -- at least the machine word size, which is probably 4 on a machine with 4-byte ints -- and that also typically causes a whole cache line worth of memory around the requested location to be loaded into cache, so that subsequent access to nearby locations is faster.

            If not, is it a good idea to get 4-8 chars at a time with *(size_t *)str, saving 150-350 CPU cycles? I imagine endianness might become an issue in that case.

            No, it is not a good idea. Modern compilers are very good at generating fast machine code, and modern CPUs and memory controllers are designed with typical program behaviors in mind. The kind of micro-optimization you describe is unlikely to help performance, and it might even hurt by inhibiting optimizations that your compiler would perform for code written more straightforwardly.

            Moreover, your particular proposed expression has undefined behavior in C if str points (in)to an array of char instead of to a bona fide size_t, as I take to be your intention. It might nevertheless produce the result you expect, but it might also do any number of things you wouldn't like, such as crash the program.
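As a side note (a sketch, not part of the answer): if one really does want a whole-word load, the aliasing-safe way to spell it in C is memcpy, which compilers typically optimize down to a single load instruction:

    #include <stdint.h>
    #include <string.h>

    /* Read sizeof(size_t) bytes from p without violating aliasing rules.
     * The caller must guarantee that many bytes are actually readable. */
    static size_t load_word(const char *p) {
        size_t w;
        memcpy(&w, p, sizeof w);
        return w;
    }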

            Also, what about local variables used in a function? Do they all get written into registry, or get inlined if possible, or at least get cached in L1?

            Again, your compiler is very good at generating machine code. Exactly what it does with local variables is compiler-specific, however, and probably varies a bit from function to function. It is in any case outside your control. One writes in C instead of assembly because one wants to leave such considerations to the compiler.

            In general:

            • Write clear code using good algorithms.
            • Rely on your compiler to optimize it.
            • If the result is not fast enough then profile it to find the slowest parts, and work on those.

            Source https://stackoverflow.com/questions/67859785

            QUESTION

            Can I use union to convert between integers of various size?
            Asked 2021-Jun-05 at 17:24

            Let's consider a union of integers of different sizes. Is it guaranteed that if a number fits the range of each of the integer types, it can be written to and read out from any of the union data members correctly?

            E.g. this code

            ...

            ANSWER

            Answered 2021-Apr-30 at 08:27

No, that only works on little endian. On big endian the bytes are stored from the most significant position down. 0xdeadbeef will be stored in memory as ef be ad de on little endian and as de ad be ef on big endian, so reading 2 bytes from the start address will result in 0xbeef and 0xdead respectively on those machines.

However, you're also getting undefined behavior, because you're writing the smaller 2-byte field first and then reading the larger one. That means the high 2 bytes of the int32_t will contain garbage. And that's disregarding the fact that reading a different union member than the one last written is already UB in C++; you can only do that in C.

If you write the larger field first and then read the smaller one, it works even for signed types:
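The answer's example is elided in this excerpt; here is a minimal sketch of the idea (written as C, since, as noted above, reading a union member other than the one last written is formally undefined in C++):

    #include <stdint.h>
    #include <stdio.h>

    union pun {
        uint32_t u32;
        uint16_t u16;
    };

    int main(void) {
        union pun p;
        p.u32 = 0xBEEF;          /* write the larger member first */
        /* On little endian this prints 0xbeef; on big endian the low
         * half of the value lives in the other two bytes, so it
         * prints 0 instead. */
        printf("%#x\n", (unsigned)p.u16);
        return 0;
    }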

            Source https://stackoverflow.com/questions/67326410

            QUESTION

            WAVE file unexpected behaviour
            Asked 2021-Jun-04 at 09:08

I am currently trying to make a .wav file that will play SOS in Morse code.

The way I went about this is: I have a byte array that contains one wave of a beep. I then repeated that until I had the desired length. After that I inserted those bytes into a new array, with bytes containing 00 (in hexadecimal) in between to separate the beeps.

If I add 1 beep to a WAVE file, it creates the file correctly (i.e. I get a beep of the desired length). The question included Audacity screenshots of the waves zoomed in and of the entire wave part, which are not reproduced here.

The problem now is that when I add a second beep, the second one becomes completely distorted (again shown in screenshots that are not reproduced here).

            If I add another beep, it will be the correct beep again, If I add yet another beep it's going to be distorted again, etc. So basically, every other wave is distorted.

            Does anyone know why this happens?

Here is a link to a .txt file I generated containing the audio data of the wave file I created: byteTest19.txt

And here is a link to a .txt file, generated using fileformat.info, that is a hexadecimal representation of the bytes in the .wav file containing 5 beeps (with two of them, the even-numbered beeps, being distorted): test3.txt

            You can tell when a new beep starts because it is preceded by a lot of 00's.

As far as I can see, the bytes of the second beep do not differ from those of the first one, which is why I am asking this question.

If anyone knows why this happens, please help me. If you need more information, don't hesitate to ask. I hope I explained well what I'm doing; if not, that's my bad.

            EDIT Here is my code:

            ...

            ANSWER

            Answered 2021-Jun-04 at 09:07

            The problem

Your .wav file is signed 16-bit little endian, 44100 Hz, mono, which means that each sample in the file is 2 bytes long and describes a signed amplitude. So you can copy and paste chunks of samples without any problems, as long as their lengths are divisible by 2 (your block size). Your silences are likely of odd length, so that the 1st sample after a silence is interpreted as
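One way to avoid the problem (a sketch under the answer's stated assumptions, with invented names, not code from the answer) is to generate every silence as a whole number of 16-bit samples so each beep starts on a sample boundary:

    #include <cstdint>
    #include <vector>

    // Round a requested silence length down to a whole number of 16-bit
    // mono samples, so the audio data that follows stays sample-aligned.
    std::vector<uint8_t> make_silence(std::size_t approx_bytes) {
        const std::size_t bytes_per_sample = 2;  // signed 16-bit, mono
        std::size_t n = (approx_bytes / bytes_per_sample) * bytes_per_sample;
        return std::vector<uint8_t>(n, 0);       // zero amplitude
    }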

            Source https://stackoverflow.com/questions/67827578

            QUESTION

            Undefined reference to symbol in hand-written ELF file
            Asked 2021-Jun-03 at 15:38

I have hand-written an ELF32 object file that I would like to link via gcc, but I get an undefined reference when I try to use my function/label. I have it declared as extern in a test C file, yet this does not change anything.

            This object file contains the following assembly:

            ...

            ANSWER

            Answered 2021-Jun-03 at 15:38

            The error is best understood by using lld to perform the link:
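The command itself is elided in this excerpt. For reference, one way to route the link through lld from a gcc driver invocation is the -fuse-ld=lld flag (the file names here are hypothetical):

    gcc -fuse-ld=lld test.c hand_written.o -o test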

            Source https://stackoverflow.com/questions/67801322

            QUESTION

            How to convert an integer to byte string in Ruby?
            Asked 2021-Jun-02 at 23:50

            In Python, you can convert an integer to 32 bytes big endian like this:

            ...

            ANSWER

            Answered 2021-Jun-02 at 12:34
class Integer
  # Pack self into a binary string num_chars bytes long. Note: the pack
  # codes 'C'/'S'/'L' and the fallback loop both emit native (typically
  # little-endian) byte order.
  def to_bytestring( num_chars=nil )
    unless num_chars
      bits_needed = Math.log2( self )
      num_chars = ( bits_needed / 8.0 ).ceil
    end
    if pack_code = { 1=>'C', 2=>'S', 4=>'L' }[ num_chars ]
      [ self ].pack( pack_code )
    else
      (0...num_chars).map{ |i|
        (( self >> i*8 ) & 0xFF ).chr
      }.join
    end
  end
end
            
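Note that the Ruby above produces native byte order, while the question asked for big endian. For comparison, a minimal big-endian sketch in C++ (not from the answer), emitting the most significant byte first:

    #include <array>
    #include <cstdint>

    // Pack a 32-bit integer as 4 big-endian (network order) bytes.
    std::array<uint8_t, 4> to_be_bytes(uint32_t v) {
        return { static_cast<uint8_t>(v >> 24),
                 static_cast<uint8_t>(v >> 16),
                 static_cast<uint8_t>(v >> 8),
                 static_cast<uint8_t>(v) };
    }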

            Source https://stackoverflow.com/questions/67803054

            QUESTION

            Creating a program header with read only flag causes segfault
            Asked 2021-Jun-01 at 15:22

I've been writing ELF binaries using NASM, and I created a segment with the read-only flag turned on. Running the program causes a segfault. I tested the program in Replit and it ran just fine, so what's the problem? I created a regular NASM hello world program with the hello world string inside the .rodata section, and that ran fine. I checked the binary with readelf to make sure the string was in a read-only segment.

The only solution I've come up with is to set the executable flag on the rodata segment so it has read/execute permissions, but that's hacky and I'd like the rodata segment to be read-only.

            This is the code for the ELF-64 hello world.

            ...

            ANSWER

            Answered 2021-Jun-01 at 15:22
textSegment:
    dd 1   ; p_type = PT_LOAD (loadable segment)
    dd 0x4 ; p_flags, intended as read / execute; note 0x4 is PF_R (read-only), read + execute is 0x5
            
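For reference, the program header flag bits defined by the ELF specification are:

    /* ELF p_flags bits (per the ELF specification) */
    #define PF_X 0x1  /* execute */
    #define PF_W 0x2  /* write   */
    #define PF_R 0x4  /* read    */
    /* A typical text segment uses PF_R | PF_X == 0x5; 0x4 alone is read-only. */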

            Source https://stackoverflow.com/questions/67789275

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install endian

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/mandreyel/endian.git

          • CLI

            gh repo clone mandreyel/endian

          • sshUrl

            git@github.com:mandreyel/endian.git



            Consider Popular iOS Libraries

            swift

            by apple

            ionic-framework

            by ionic-team

            awesome-ios

            by vsouza

            fastlane

            by fastlane

            glide

            by bumptech

            Try Top Libraries by mandreyel

            mio

by mandreyel (C++)

            cratetorrent

by mandreyel (Rust)

            w-tinylfu

by mandreyel (C++)

            dotfiles

by mandreyel (CSS)

            natpp

by mandreyel (C++)