parser-gen | Network packet parser generator | Parser library
kandi X-RAY | parser-gen Summary
This project generates packet parsers for use in network devices such as switches and routers. It generates both fixed and programmable parsers. Fixed parsers use a parse graph that is chosen at generation time, while programmable parsers use a parse graph that is chosen at run time. The generator was originally created to facilitate exploration of the parser design space. More information can be found in "Design Principles for Packet Parsers" by Glen Gibb et al. (see References). The generator is not the same version used to produce results for the paper; this version offers fewer configurable parameters in order to make the code easier to understand and modify.
Top functions reviewed by kandi - BETA
- Prints the trace of the chain.
- Calculate the chains of the chain.
- Iterate over all clusters of a given node.
- Reads header from file.
- Finds max length chains in base chain.
- Calculate extract locations for each header.
- Insert a BarrierNode into the DAG.
- Find the covered clusters reachable from the given clusters.
- Extend chain.
- Iterate over the given header.
parser-gen Key Features
parser-gen Examples and Code Snippets
Community Discussions
Trending Discussions on parser-gen
QUESTION
I'm writing a compiler, using top-down table-driven parsing. I've converted my grammar into LL(1), and it's the following:
...ANSWER
Answered 2020-Feb-19 at 21:15

This won't help you much because, as noted below, LL(1) parse tables generated for that grammar cannot be accurate.
However, for what it's worth, here's my reverse engineering of those tables. It's probable that you could get a deeper understanding of this procedure by reading the text book referenced by the tool. (Note: link is not an endorsement, neither of the book nor of the vendor. I just copied it from the tool.)
The terminal symbols appear in order in the top row of the parse table (the one which the instructions say should be removed for use). So terminal symbol 1 is ',', symbol 2 is '+', and so on up to symbol 46, which is the $ conventionally used as an end-of-input marker. (That's different from '$', which would be a literal dollar sign.)
Non-terminal symbols don't appear explicitly (so you can't recover their names from the tables) but they are also numbered in order. There are 54 of them, and each row of the parse table (after the first two) corresponds to a non-terminal symbol.
There are 110 productions, which are listed (with their corresponding index) in the Predict set section of the output from that tool. Each production corresponds to one entry in the "push map", which (for reasons unknown to me) uses the string conversion of the production number as a key.
The corresponding value in the push map is a list of indices: negative indices refer to terminals and positive indices refer to non-terminals. The index 0 is not used, which is why row 0 of the parse map is unused. From these indices, it is possible to reconstruct the right-hand side of the production, but they are actually used to indicate what to push onto the parse stack at each step in the parse.
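To make the encoding concrete, here is a small Python sketch that decodes a push-map entry back into readable symbols. The terminal numbers come from the tables described above; the non-terminal name is invented for illustration, since the tables do not record non-terminal names:

```python
# Fragments of the numbering scheme described above. Terminal numbers 1,
# 2, and 46 are from the tables; the non-terminal name is hypothetical.
terminals = {1: "','", 2: "'+'", 46: "$"}   # indexed by absolute value
nonterminals = {10: "expr"}                  # name invented for illustration

def decode_rhs(push_entry):
    """Decode a pushMap list (e.g. [10, -1]) into readable symbols:
    negative indices are terminals, positive indices are non-terminals."""
    return [terminals[-i] if i < 0 else nonterminals[i] for i in push_entry]

print(decode_rhs([10, -1]))  # ['expr', "','"]
```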
The stack contains the list of current predictions, with the top element of the stack being the immediate prediction at this point in the parse.
So the algorithm is as follows:
Initialise the parser stack to [1, -46], which indicates that the current prediction consists of the start symbol (non-terminal 1) followed by the end-of-input marker $.
Repeat the following until terminated by an error or by acceptance:
- If the top of the stack is negative:
  - If the lookahead token has the corresponding token number (that is, the absolute value of the stack top), then pop the stack and accept the lookahead token. If that token is the end-of-input indicator, then the parse is finished and the input was valid. Otherwise, the new lookahead token is the next input token.
  - If the lookahead token does not correspond with the top of the stack, then the input is incorrect. Report an error and terminate the parse.
- If the top of the stack is positive:
  - Retrieve the value rhs from parseTable[stack.top()][lookahead]. If rhs has a value greater than the number of productions (in this case, the values 111 or 112), then the input is incorrect. Report an error and terminate the parse. (The value will tell you whether it was a scan error or a pop error, but that might not make much difference to you. It could be used to improve error reporting.)
  - Pop the parse stack, and push the elements from pushMap[rhs] onto the stack, starting at the end. (For example, if rhs were 4, you would use the list from pushMap["4"], which is [10, -1]. So you would push first -1 and then 10 onto the parser stack.)
  - For the push map generated by the tool, it appears that there is no entry for ε right-hand sides. So if pushMap[rhs] doesn't exist, you just pop the parse stack; there is nothing to push.
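The steps above can be sketched as a small Python parser for a toy grammar encoded with the same conventions. Note that the grammar, symbol numbers, and tables below are invented for illustration; they are not the tables from the question:

```python
# A toy LL(1) grammar encoded with the conventions described above
# (this grammar and its tables are invented for illustration):
#   production 1:  S -> 'a' S
#   production 2:  S -> 'b'
# Terminals get negative indices, non-terminals positive ones; 0 is unused.
A, B, END = 1, 2, 3        # terminal numbers: 'a', 'b', end-of-input
S = 1                      # non-terminal number of the start symbol
NUM_PRODUCTIONS = 2

# parse_table[nonterminal][terminal] -> production number, or a value
# greater than NUM_PRODUCTIONS to signal an error entry.
parse_table = {S: {A: 1, B: 2, END: 3}}

# push_map: production number as a string (mirroring the tool's push map)
# -> encoded right-hand side. Epsilon productions would simply be absent.
push_map = {"1": [-A, S], "2": [-B]}

def parse(tokens):
    """Return True iff `tokens` (terminal numbers, without the end
    marker) is accepted; the top of the stack is the last element."""
    stack = [-END, S]
    stream = iter(tokens + [END])
    lookahead = next(stream)
    while stack:
        top = stack.pop()
        if top < 0:                         # predicted terminal
            if lookahead != -top:
                return False                # scan error
            if lookahead == END:
                return True                 # accepted the end marker
            lookahead = next(stream)
        else:                               # predicted non-terminal
            prod = parse_table[top].get(lookahead, NUM_PRODUCTIONS + 1)
            if prod > NUM_PRODUCTIONS:
                return False                # error entry in the table
            # Push the right-hand side starting at the end, so that its
            # first symbol ends up on top of the stack.
            for symbol in reversed(push_map.get(str(prod), [])):
                stack.append(symbol)
    return False

print(parse([A, A, B]))  # True  ('a' 'a' 'b')
print(parse([B, B]))     # False
```

The answer's distinction between scan errors and pop errors would correspond to using two distinct error values in parse_table instead of the single NUM_PRODUCTIONS + 1 here.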
That algorithm does not include any procedure for producing a syntax tree for successful parses. But if you want to do anything more than just decide whether the input is a valid program or not, then you will definitely want to produce some kind of syntax tree.
Note: The grammar is not LL(1), so the parse tables are wrong. I don't know how much credibility you should give the tool you are using.
Your grammar is not LL(1), but the tool does not provide any indication of that fact.
A simple example is
QUESTION
I would like to have a way to describe logic/spec level structs that include abstract lists. Example 2.2.7 on page 27 of the ACSL Reference Manual suggests that there is a way to do this and it is as follows:
...ANSWER
Answered 2020-Jan-07 at 15:07

Not all ACSL constructions are supported by the current Frama-C implementation. With each Frama-C release comes an ACSL implementation manual, which describes the constructions that are not yet implemented. For Frama-C 20.0 Calcium, this can be found here. In this document, unsupported constructions appear in red in the relevant BNF rule. Note however that other parts of the manual are left untouched. Notably, the fact that an example is included in the implementation manual does not imply that it is expected to be successfully parsed by the current Frama-C version. In your case, these are the rules of figure 2.17 on page 57, which show that indeed records are not implemented.
As you have already discovered by yourself, it is indeed possible to define a C struct (possibly ghost) and an ACSL type out of it. Of course, since the struct lives in the C world, its fields must have C types (ACSL types in ghost declarations are unsupported as well).

Similarly, you can simulate the absence of a direct record definition by an update (the \with construction) of all the fields of an arbitrary record, as in the following example:
QUESTION
I'm writing my own LALR(1) parser-generator, so I'm not sure whether I have an issue with my parser-generator or my grammar.
I'm trying to generate a parser for regexes. I have the following rules for character classes (slightly simplified):
...ANSWER
Answered 2019-Jul-23 at 19:21

Yes, LALR(1) is insufficient. An LALR(1) parser-generator should be complaining about a shift-reduce conflict in the production:
QUESTION
So, I'm trying to build a syntax for the Menhir parser-generator for OCaml. In that language, there are three sections to a file, separated by %% (no, it's not pretty; unfortunately, it's inherited from the ancient ocamlyacc). I'm trying to create a separate syntax-region for each of these three, plus one for anything after an extraneous, third %%:
ANSWER
Answered 2019-Jan-08 at 16:40

Your problem is that the @@ separators are included in both the start and end patterns of the region, so the end match of one region obscures the potential start match of the next region. In other words, your code would work if sections were delimited by @@@@ instead of @@.

As you do need to assert both sides of a section, you can stop the matching of the end region via :help :syn-pattern-offset. The me=s-1 offset (match end is one character before the start of the match) still asserts that a section ends with @@, but doesn't consume those two characters any longer. With that, nextgroup can do its magic and start the next group right after the previous one ended:
QUESTION
Somebody mentioned that there is already an answer to this question. Well, the other person was looking for a parse error regarding boost::spirit. Since boost::spirit is a parser-generator, one might think that he wants to know how to generate a good parse error. I'm looking to solve a compiler error.

When I attempt to compile the code below, I always get a compiler error saying that the std::pair cannot be constructed from a single int. WTH?
...ANSWER
Answered 2018-Jul-17 at 18:34

Try including:
#include
QUESTION
I am trying to create a parser-generator using flex/bison. This is my partial parser.y code:
...ANSWER
Answered 2018-Jan-25 at 07:07

I tried to run this code on an Ubuntu 64-bit instance (Ubuntu 17.10). I don't know why, but the same code runs fine on a 32-bit system (Ubuntu 14.10). Maybe it's because of the large integer sizes. Here is the code if you're interested.
QUESTION
I am trying to write JSON to a file using JSON Spirit.
I am using the code similar to the examples given on the website to do this as follows:
...ANSWER
Answered 2017-Nov-16 at 05:19

Turn on the linking option:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install parser-gen