tinycc | tinycc fork: hopefully, better OSX support, EFI targets

by andreiw | C | Version: Current | License: LGPL-2.1

kandi X-RAY | tinycc Summary

tinycc is a C library typically used in macOS applications. tinycc has no bugs, it has no vulnerabilities, it has a Weak Copyleft License and it has low support. You can download it from GitHub.

This tree adds:
- some bare-minimum OSX support (merged).
- support for generating ARM64 PE32+ images (not yet merged).
- support for generating X64, ARM64, IA32 (untested) and ARM (untested) UEFI images (not yet merged).
- a "Hello, World!" UEFI example in examples/uefi (not yet merged).
- a UEFI-targeting compiler (x86_64-uefi-tcc and arm64-uefi-tcc) that can be built (with Tiano EDK2) to be hosted on UEFI (not yet merged).
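The examples/uefi code itself is not reproduced on this page, but a minimal UEFI "Hello, World!" typically looks like the sketch below. efi_main and the ConOut protocol are standard UEFI; the headers and build glue assumed here are gnu-efi/EDK2-style, and the exact form in this tree may differ.

```c
#include <efi.h>
#include <efilib.h>

/* Entry point of a UEFI application: the firmware passes in the image
   handle and the system table, which exposes the console-output protocol. */
EFI_STATUS EFIAPI efi_main(EFI_HANDLE ImageHandle, EFI_SYSTEM_TABLE *SystemTable)
{
    SystemTable->ConOut->OutputString(SystemTable->ConOut, L"Hello, World!\r\n");
    return EFI_SUCCESS;
}
```

The resulting PE32+ image is what the ARM64/X64 image-generation support above is for; it is loaded by the firmware, not run as a hosted program.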

Support

              tinycc has a low active ecosystem.
              It has 19 star(s) with 2 fork(s). There are 6 watchers for this library.
              It had no major release in the last 6 months.
              There are 0 open issues and 2 have been closed. On average issues are closed in 18 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of tinycc is current.

Quality

              tinycc has no bugs reported.

Security

              tinycc has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              tinycc is licensed under the LGPL-2.1 License. This license is Weak Copyleft.
              Weak Copyleft licenses have some restrictions, but you can use them in commercial projects.

Reuse

              tinycc releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            tinycc Key Features

            No Key Features are available at this moment for tinycc.

            tinycc Examples and Code Snippets

            No Code Snippets are available at this moment for tinycc.

            Community Discussions

            QUESTION

Curious readonly symbols showing up in the initialized-data section (D) in nm
            Asked 2021-Jan-31 at 15:23

            I've noticed that with gcc (but not clang), const (readonly) initialized data objects no longer show up as R data objects in nm and instead they become D (initialized section) objects.

            That would suggest the data object will be placed in writable memory, however, when the same object file is linked with either gcc or clang (but not tcc), then it seems to be placed in readonly memory anyway.

clang doesn't seem to use these curious readonly-D symbols (instead the object remains R). Tinycc does make such objects into D symbols too, but those D symbols don't seem to have that curious property that causes linkers to put them in readonly memory.

            Can you explain what's going on here?

            The script below demonstrates the behavior with gcc, clang, and tinycc used in all combinations as compilers and linkers:

            ...

            ANSWER

            Answered 2021-Jan-31 at 15:23

            Most likely your gcc is configured with --enable-default-pie (gcc -v to check).

            In PIE, readonlyObject needs to be writable at program startup to allow dynamic relocation processing code to write the address of fn0 into its first field. To arrange for that, gcc places such objects into sections with .data.rel.ro prefix, and the linker collects such sections separately from other .data sections. The dynamic linker (or, in case of static PIEs, linked-in relocation processing code) can then mprotect that region after writing into it.

            Thus, with gcc (and implicit -fpie -pie) you have:

            • readonlyObject in .data.rel.ro
            • classified by nm as "global data"
            • writable at program startup for relocation
            • readonly when main is reached

            With clang or gcc -fno-pie you have:

            • readonlyObject in .rodata
            • classified by nm as "global constant"
            • readonly even on program startup

            Source https://stackoverflow.com/questions/65978465

            QUESTION

            Dynamically construct user-defined math function for efficient calling in C++?
            Asked 2020-Sep-16 at 07:16

            An example is something like Desmos (but as a desktop application). The function is given by the user as text, so it cannot be written at compile-time. Furthermore, the function may be reused thousands of times before it changes. However, a true example would be something where the function could change more frequently than desmos, and its values could be used more as well.

            I see four methods for writing this code:

            1. Parse the user-defined function with a grammar every single time the function is called. (Slow with many function calls)
            2. Construct the syntax tree of the math expression so that the nodes contain function pointers to the appropriate math operations, allowing the program to skip parsing the text every single time the function is called. This should be faster than #1 for many function calls, but it still involves function pointers and a tree, which adds indirection and isn't as fast as if the functions were pre-compiled (and optimized).
            3. Use something like The Tiny C Compiler as the backend for dynamic code generation with libtcc to quickly compile the user's function after translating it into C code, and then use it in the main program. Since this compiler can compile something like 10,000 very simple programs on my machine per second, there should be next to no delay with parsing new functions. Furthermore, this compiler generates machine code for the function, so there are no pointers or trees involved, and optimization is done by TinyCC. This method is more daunting for an intermediate programmer like me.
            4. Write my own tiny compiler (not of C, but tailored specifically to my problem) to generate machine code almost instantly. This is probably 20x more work than #3, and doesn't do much in the way of future improvements (adding a summation operation generator would require me to write more assembly code for that).

            Is there any easier, yet equally or more efficient method than #3, while staying in the realm of C++? I'm not experienced enough with lambdas and templates and the standard library to tell for sure if there isn't some abstract way to write this code easily and efficiently.

            Even a method that is faster than #2 but slower than #3, and requires no dynamic code generation would be an improvement.

This is more of an intellectual curiosity than a real-world problem, which is why I am concerned so much with performance, and why I wouldn't use someone else's math parsing library. It's also why I wouldn't consider using a JavaScript or Python interpreter, which can interpret this kind of thing on the fly.

            ...

            ANSWER

            Answered 2020-Sep-16 at 03:06

I think something along the lines of your option 2 would be good. To make it a little easier, you could have an Expr class with a float Expr::eval(std::unordered_map<std::string, float> vars) method. Then implement subclasses like Var with a name; Add with left and right; Sub, Mult, Div, etc. for all the functions you want. When evaluating, you just pass in the map with something like {{"x",3},{"y",4}}, and each Expr object passes that down to any subexpressions and then does its operation.

I would think this would be reasonably fast. How complicated are the expressions your users would be putting in? Most expressions probably won't require more than 10-20 function calls.

            It can also get a lot faster

            If you're trying to make something that graphs functions (or similar) you could speed this up considerably if you made your operations able to work with vectors of values rather than single scalar values.

Assuming you wanted to graph something like x^3 - 6x^2 + 11x - 6 with 10,000 points, if your Expr objects only worked on single values at a time, this would be ~10-15 function calls * 10,000 points = a lot of jumping around! However, if your Expr objects could take arrays of values, like calling eval with {{"x",[1,2,3...10000]}}, then this would only be ~10-15 function calls, total, into optimized C++. This could easily scale up to a larger number of points and still be very fast.

            Source https://stackoverflow.com/questions/63894460

            QUESTION

How do I set CMAKE_STATIC_LINKER_FLAGS immediately after the executable file? [tcc -ar]
            Asked 2020-Jun-12 at 08:07

How do I set CMAKE_STATIC_LINKER_FLAGS in CMakeLists.txt so the flag comes immediately after the executable file?

For example, I need:

tcc.exe -ar qc staticRun.lib CMakeFiles/staticRun.dir/utils/system.c.obj

but cmake, after these settings:

            set (CMAKE_AR C:/run/code/toolchains/c++/MinGW-tcc/bin/tcc.exe CACHE FILEPATH "" FORCE)

            set (CMAKE_STATIC_LINKER_FLAGS -ar CACHE STRING "" FORCE)

adds the -ar key like this:

tcc.exe qc staticRun.lib CMakeFiles/staticRun.dir/utils/system.c.obj -ar

so building the static library fails.

            P.S.

tcc.exe -ar means:

            Tools: create library : tcc -ar [rcsv] lib.a files

            ...

            ANSWER

            Answered 2020-Jun-12 at 08:07

The simplest fix is just to change the rule that CMake uses to create the static library so it matches your tool's semantics:
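The answer's code block did not survive extraction. A sketch of this approach, assuming CMake's CMAKE_C_ARCHIVE_CREATE rule variable is what needs overriding (the tool path is taken from the question):

```cmake
set(CMAKE_AR C:/run/code/toolchains/c++/MinGW-tcc/bin/tcc.exe CACHE FILEPATH "" FORCE)

# CMake builds archives via the CMAKE_<LANG>_ARCHIVE_CREATE rule, which by
# default expands to roughly "<CMAKE_AR> qc <TARGET> <LINK_FLAGS> <OBJECTS>".
# Override it so -ar sits immediately after the tool:
set(CMAKE_C_ARCHIVE_CREATE "<CMAKE_AR> -ar qc <TARGET> <OBJECTS>")
set(CMAKE_C_ARCHIVE_FINISH "")  # tcc has no separate ranlib step
```

This avoids abusing CMAKE_STATIC_LINKER_FLAGS, which is always appended after the objects.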

            Source https://stackoverflow.com/questions/62339192

            QUESTION

            win32 without mouse, how to assign keyboard shortcuts to buttons
            Asked 2020-Apr-29 at 12:58

I wrote a simple win32 GUI application with buttons and other controls, in pure C, no MFC. I want to make it more accessible for those who cannot use a mouse.

1. First, my example code does not respond to the Tab key to move focus from one button to another. I can press a UI button using the mouse, then it becomes focused and I can activate it using the space bar, but I can't move focus to other buttons using Tab or Shift+Tab. How can I fix this?

2. I want to assign keyboard cues (little underscores) to buttons, so the user can use keyboard shortcuts to activate them.

I have googled around, but the answers are hard to find, so I need someone to point me to some documentation. A little piece of code would be very helpful.

Here is the code I have. I compile and run it on Linux using WINE + TinyCC.

            ...

            ANSWER

            Answered 2020-Apr-29 at 12:58

It was simple. In the main message-processing loop, I call IsDialogMessage() with the proper HWND. Then, if this function returns zero, I call the normal TranslateMessage and DispatchMessage functions. Here is the code:
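The code itself was stripped from this copy of the answer; the loop it describes is the standard pattern (the function wrapper and hwndMain name below are illustrative):

```c
#include <windows.h>

/* hwndMain: the top-level window created earlier in WinMain. */
void message_loop(HWND hwndMain)
{
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        /* IsDialogMessage handles Tab/Shift+Tab focus movement and
           mnemonic keys; dispatch normally only when it declines. */
        if (!IsDialogMessage(hwndMain, &msg)) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}
```

For Tab navigation to work, the controls also need the WS_TABSTOP style; an ampersand in the button text (e.g. "&Save") produces the underlined keyboard cue asked about in question 2.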

            Source https://stackoverflow.com/questions/61478202

            QUESTION

Assembler warning with gcc when placing data in .text
            Asked 2019-Oct-18 at 17:44

            When I compile

            ...

            ANSWER

            Answered 2019-Oct-18 at 17:19

            Apparently gcc emits a .section .text,"a",@progbits directive instead of just .section .text. I don't see any way to avoid it. However the default linker script usually merges all sections named .text.* so you can do something like __attribute__((section(".text.consts"))) and in the final binary it will be in the .text section.

            @Clifford found a hackish workaround which involves putting a # after the section name so that the assembler considers the rest of the line a comment.

            Source https://stackoverflow.com/questions/58455300

            QUESTION

            Assigning a pointer to a larger array to a pointer to a smaller VLA
            Asked 2018-Dec-12 at 14:20

            I noticed C compilers (gcc, clang, tinycc) allow me to assign a pointer to a larger array to a pointer to a smaller VLA without a warning:

            ...

            ANSWER

            Answered 2018-Dec-12 at 14:20

            const char[3] is not compatible with char[37].

            Nor is "pointer to qualified type" compatible with "pointer to type" - don't mix this up with "qualified pointer to type". (Const correctness does unfortunately not work with array pointers.)

            The relevant part being the rule of simple assignment C17 6.5.16.1:

            • the left operand has atomic, qualified, or unqualified pointer type, and (considering the type the left operand would have after lvalue conversion) both operands are pointers to qualified or unqualified versions of compatible types, and the type pointed to by the left has all the qualifiers of the type pointed to by the right;

            Looking at various compilers:

• gcc in "gnu mode" is useless for checking C conformance. You must compile with -std=cxx -pedantic-errors (where "xx" is the standard revision, e.g. c17). After that, gcc behaves fine. gcc -std=c17 -pedantic-errors:

              error: pointers to arrays with different qualifiers are incompatible in ISO C [-Wpedantic]

• icc gives the same diagnostics as gcc; it works fine.

            • Whereas clang -std=c17 -pedantic-errors does not report errors, so it apparently does not conform to the C standard.

            Source https://stackoverflow.com/questions/53743670

            QUESTION

            What kind of lexer/parser was used in the very first C compiler?
            Asked 2017-Sep-13 at 04:56

            In the early 1970s, Dennis Ritchie wrote the very first C compiler.

            In the year 2017, I wanted to write a C compiler. Books like Deep C Secrets (Peter Van Der Linden) say that C was, above all else, designed to be easy to compile. But I've been having an inordinate amount of trouble with it.

            For starters, it's already relatively difficult to come up with Lex/Yacc specifications for the C language, and these tools didn't even exist yet when Ritchie made his compiler!

            Plus, there are a great many examples of surprisingly small C compilers that do not use any help from Lex & Yacc. (Check out this tiny obfuscated C compiler from Fabrice Bellard. Note that his "production" tinycc source is actually quite a bit longer, most likely in an effort to accommodate more architectures, and to be more readable)

            So what am I missing here? What kind of lexer/parser did Ritchie use in his compiler? Is there some easier way of writing compilers that I just haven't stumbled onto?

            ...

            ANSWER

            Answered 2017-Sep-13 at 04:56

            Yacc's name is an abbreviation for "yet another compiler compiler", which strongly suggests that it was neither the first nor the second such tool.

            Indeed, the Wikipedia article on History of Compiler Construction notes that

            In the early 1960s, Robert McClure at Texas Instruments invented a compiler-compiler called TMG, the name taken from "transmogrification". In the following years TMG was ported to several UNIVAC and IBM mainframe computers.

            Not long after Ken Thompson wrote the first version of Unix for the PDP-7 in 1969, Doug McIlroy created the new system's first higher-level language: an implementation of McClure's TMG. TMG was also the compiler definition tool used by Ken Thompson to write the compiler for the B language on his PDP-7 in 1970. B was the immediate ancestor of C.

            That's not quite an answer to your question, but it provides some possibilities.

            Original answer:

            I wouldn't be at all surprised if Ritchie just banged together a hand-built top-down or operator precedence parser. The techniques were well-known, and the original C language presented few challenges. But parser generating tools definitely existed.

            Postscript:

            A comment on the OP by Alexey Frunze points to this early version of the C compiler. It's basically a recursive-descent top-down parser, up to the point where expressions need to be parsed at which point it uses a shunting-yard-like operator precedence grammar. (See the function tree in the first source file for the expression parser.) This style of starting with a top-down algorithm and switching to a bottom-up algorithm (such as operator-precedence) when needed is sometimes called "left corner" (LC) parsing.

            So that's basically the architecture which I said wouldn't surprise me, and it didn't :).

            It's worth noting that the compiler unearthed by Alexey (and also by @Torek in a comment to this post) does not handle anything close to what we generally consider the C language these days. In particular, it handles only a small subset of the declaration syntax (no structs or unions, for example), which is probably the most complicated part of the K&R C grammar. So it does not answer your question about how to produce a "simple" parser for C.

            C is (mostly) parseable with an LALR(1) grammar, although you need to implement some version of the "lexer hack" in order to correctly parse cast expressions. The input to the parser (translation phase 7) will be a stream of tokens produced by the preprocessing code (translation phase 4, probably incorporating phases 5 and 6), which itself may draw upon a (f)lex tokenizer (phase 3) whose input will have been sanitized in some fashion according to phases 1 and 2. (See § 5.1.1.2 for a precise definition of the phases).

            Sadly, (f)lex was not designed to be part of a pipeline; they really want to just handle the task of reading the source. However, flex can be convinced to let you provide chunks of input by redefining the YY_INPUT macro. Handling trigraphs (if you chose to do that) and line continuations can be done using a simple state machine; it's convenient that these transformations only shrink the input, simplifying handling of the maximum input length parameter to YY_INPUT. (Don't provide input one character at a time as suggested by the example in the flex manual.)

            Since the preprocessor must produce a stream of tokens (at this point, whitespace is no longer important), it is convenient to use bison's push-parser interface. (Indeed, it is very often more convenient to use the push API.) If you take that suggestion, you will end up with phase 4 as the top-level driver of the parse.

            You could hand-build a preprocessor-directive parser, but getting #if expressions and pragmas right suggests the use of a separate bison parser for preprocessing.

            If you just want to learn how to build a compiler, you might want to start with a simpler language such as Tiger, the language used as a running example in Andrew Appel's excellent textbooks on compiler construction.

            Source https://stackoverflow.com/questions/46166191

            QUESTION

            C linking error (with tcc)
            Asked 2017-Jul-05 at 00:38

            I'm trying to run the example from tiny cc (tcc-0.9.26-win64-bin.zip) called libtcc_test.c.

            I've copied libtcc.h from libtcc into include and libtcc.def into lib.
            Then I ran tcc ./examples/libtcc_test.c and got a linking error :/

            ...

            ANSWER

            Answered 2017-Apr-01 at 10:59

            To link in a library, you need to add a -l${library_basename} flag after all c files or o files. If the library is named libtcc.a or libtcc.so (on Windows it's probably tcc.dll or libtcc.dll), you need to add -ltcc.
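With the layout from the question (libtcc.def copied into lib/, so the base name is "tcc"), the invocation would look something like:

```shell
# Link the libtcc runtime in after the C file:
tcc ./examples/libtcc_test.c -ltcc
```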

            Source https://stackoverflow.com/questions/43155517

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install tinycc

            You can download it from GitHub.

            Support

Today this includes some basic build support (CONFIG_OSX) in ./configure and ./Makefile. It also makes the -run mode work, allowing tcc to open libc.dylib.

            CLONE
          • HTTPS

            https://github.com/andreiw/tinycc.git

          • CLI

            gh repo clone andreiw/tinycc

          • sshUrl

            git@github.com:andreiw/tinycc.git
