rips | Rust IP Stack - A userspace IP stack | Networking library
kandi X-RAY | rips Summary
This is a work in progress migrating over from the older code base. So far this "new" one contains very little except the packet crate, so you probably want to check out the older one if you are looking for working protocols.
Community Discussions
Trending Discussions on rips
QUESTION
I have a file called survey.txt, produced with cut -d, -f1 survey.csv, which gives the following result:
ANSWER
Answered 2021-Apr-18 at 01:33
$ sort -f survey.txt | uniq -ic | sort -nr | head -n 3
7 Twix
5 Skittles
4 Sour Patch Kids
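The same top-3, case-insensitive count can also be sketched in Python. This is only an illustration, assuming the candy names sit one per line in survey.txt; it normalizes names to title case rather than keeping the first-seen capitalization the way uniq -ic does.

import collections

# Count candy names case-insensitively, one name per line.
with open("survey.txt") as f:
    counts = collections.Counter(line.strip().lower() for line in f if line.strip())

# Print the three most common names with their counts.
for name, count in counts.most_common(3):
    print(count, name.title())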
QUESTION
def bear_room():
    print("\nthere's a bear here")
    print("\nthe bear has a bunch of honey")
    print("\nthe fat bear is front of another door")
    print("\nhow are you going to move the bear?")
    choice = input("\n\nTaunt bear, take honey, open door?: ")
    if choice == "take honey":
        print("\nthe bear looks at you then slaps your face off")
    elif choice == "open door":
        print("\nget the hell out")
    elif choice == "Taunt bear":
        print("\n*Bear rips your heart out*")
    else:
        print("\nInvalid entry")
        bear_room()

bear_room()
...ANSWER
Answered 2021-Mar-23 at 18:32
You can introduce another function called timer. This function will use the time module in Python. The code for the timer is:
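The answer's actual snippet is not reproduced on this page. A minimal sketch of such a timer, assuming a simple countdown on top of the time module is all that is wanted (the seconds parameter is an assumption of this sketch):

import time

def timer(seconds):
    # Hypothetical helper: count down the given number of seconds,
    # printing the time remaining once per second.
    start = time.time()
    while True:
        remaining = int(seconds - (time.time() - start))
        if remaining <= 0:
            print("\nTime is up!")
            break
        print(f"\n{remaining} seconds left...")
        time.sleep(1)

It could be started in a separate thread, e.g. threading.Thread(target=timer, args=(10,)).start(), so the countdown runs while bear_room() waits for input.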
QUESTION
I'm trying to create a regex in my app and I'm having trouble understanding it. I've created one regex, but I want to modify it so that it does not match certain things. I need to create the regex for both iOS and Android.
This is my regex
...ANSWER
Answered 2021-Mar-08 at 10:09
For finding all the matches for your search word, use:
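The answer's actual regex is not included on this page. Purely as an illustration of finding every occurrence of a search word (shown with Python's re module rather than the iOS/Android APIs the question targets; the text and word variables are made up for the example):

import re

text = "rips here, RIPS there, and gripes elsewhere"
word = "rips"

# \b word boundaries restrict the match to the whole word only;
# re.IGNORECASE makes the search case-insensitive.
matches = re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE)
print(matches)  # ['rips', 'RIPS']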
QUESTION
C# Loops. I need help with this question:
Write a program that reads 10 numbers from the keyboard and adds only those that are negative.
I just started class in January and we're on C# loops. I haven't been in school for over 3 weeks now because of the coronavirus, so I'm kinda lost, and we just started online class, so big rips to me :(
...ANSWER
Answered 2020-Mar-27 at 06:05
You are adding to sum outside of your loop ...
Move that if into the loop after the assignment of n, add n to sum (not to n), and you are good.
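The original C# snippet is not shown on this page. The corrected structure the answer describes (read each value, test it, and accumulate inside the loop) is sketched here in Python for illustration only; the variable names are made up:

total = 0
for i in range(10):
    n = int(input(f"Enter value {i + 1}: "))
    if n < 0:
        # Only negative values contribute to the running sum.
        total += n
print("Sum of the negative values:", total)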
QUESTION
I am passing the request as in my feature file and I am trying to assert from the request to the response. I have tried 'must contain' queries but I am not sure if I am doing it correctly; could you please help?
...ANSWER
Answered 2019-Jun-12 at 00:23
Possible in 0.9.3: https://github.com/intuit/karate#scenario-outline-enhancements
First change the Examples: column header to data!
QUESTION
I'm implementing a narrow and limited scripting DSL using Python and I'd like to be able to functionally do the equivalent of the following:
...ANSWER
Answered 2019-Nov-01 at 10:59
The issue you are facing is not specific to numpy itself, or to advanced indexing in numpy, or to whether it creates copies or not. Instead it is driven entirely by ambiguities around whether:
indexing is guaranteed to return values that are somehow "inside" the container (it's not); and whether
in-place add is guaranteed to return a modified version of the original value (it's not).
Consider the expression:
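The expression the answer goes on to analyse is not reproduced here. A small illustration of the copy-versus-container ambiguity it describes, assuming a numpy fancy-indexing case (the array and indices are made up for the example):

import numpy as np

a = np.arange(5)

# Fancy indexing returns a copy, so modifying that copy never
# touches the original container.
b = a[[0, 1]]
b += 10
print(a)  # [0 1 2 3 4] -- unchanged

# The augmented-assignment form does update `a`, but only because Python
# expands it to a[[0, 1]] = a[[0, 1]].__iadd__(10): a read, an in-place add
# on the temporary copy, then an explicit write-back into the container.
a[[0, 1]] += 10
print(a)  # [10 11  2  3  4]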
QUESTION
This relates to this question
Thinking about it though, on a modern Intel CPU the SEC phase is implemented in microcode, meaning there would be a check whereby a burned-in key is used to verify the signature on the PEI ACM. If it doesn't match then it needs to do something; if it does match it needs to do something else. Given that this is implemented as an MSROM procedure, there must be a way of branching, even though MSROM instructions do not have RIPs.
Usually, when a branch mispredicts as being taken then when the instruction retires, the ROB will check the exception code and hence add the instruction length to the RIP of the ROB line or just use the next ROB entry's IP which will result in the front end being resteered to that address amongst branch prediction updates. With the BOB, this functionality has now been lent to the jump execution units. Obviously this can't happen with an MSROM routine as the front-end has nothing to do with it.
My thoughts would be that there is a specific jump instruction that only MSROM routines can issue, which jumps to a different location in the MSROM, and it could be configured such that MSROM branch instructions are always predicted not taken; when the branch execution unit encounters this instruction and the branch is taken, it produces an exception code, perhaps concatenating the special jump destination to it, and an exception occurs on retirement. Alternatively, the execution unit could take care of it and it could use the BOB, but I'm under the impression that the BOB is indexed by branch instruction RIP; then there's also the fact that exceptions that generate MSROM code are usually handled at retirement. A branch misprediction doesn't require the MSROM, I don't think; rather, all actions are performed internally.
...ANSWER
Answered 2019-May-23 at 15:46
Microcode branches are apparently special.
Intel's P6 and SnB families do not support dynamic prediction for microcode branches, according to Andy Glew's description of the original P6 (What setup does REP do?). Given the similar performance of SnB-family rep-string instructions, I assume this PPro fact applies to even the most recent Skylake / CoffeeLake CPUs (footnote 1).
But there is a penalty for microcode branch misprediction, so they are statically(?) predicted. (This is why rep movsb startup cost goes in increments of 5 cycles for low/medium/high counts in ECX, and aligned vs. misaligned.)
A microcoded instruction takes a full line to itself in the uop cache. When it reaches the front of the IDQ, it takes over the issue/rename stage until it's done issuing microcode uops. (See also How are microcodes executed during an instruction cycle? for more detail, and some evidence from perf event descriptions like idq.dsb_uops that show the IDQ can be accepting new uops from the uop cache while the issue/rename stage is reading from the microcode sequencer.)
For rep-string instructions, I think each iteration of the loop has to actually issue through the front-end, not just loop inside the back-end and reuse those uops. So this involves feedback from the OoO back-end to find out when the instruction is finished executing.
I don't know the details of what happens when issue/rename switches over to reading uops from the MS-ROM instead of the IDQ.
Even though each uop doesn't have its own RIP (being part of a single microcoded instruction), I'd guess that the branch mispredict detection mechanism works similarly to normal branches.
rep movs setup times on some CPUs seem to go in steps of 5 cycles depending on which case it is (small vs. large, alignment, etc.). If these are from a microcode branch mispredict, that would appear to mean that the mispredict penalty is a fixed number of cycles, unless that's just a special case of rep movs. Maybe that is because the OoO back-end can keep up with the front-end? And reading from the MS-ROM shortens the path even more than reading from the uop cache, making the miss penalty that low.
It would be interesting to run some experiments into how much OoO exec is possible around rep movsb, e.g. with two chains of dependent imul instructions, to see if it (partially) serializes them like lfence. We hope not, but to achieve ILP the later imul uops would have to issue without waiting for the back-end to drain.
I did some experiments here on Skylake (i7-6700k). Preliminary result: copy sizes of 95 bytes and less are cheap and hidden by the latency of the IMUL chains, but they do basically fully overlap. Copy sizes of 96 bytes or more drain the RS, serializing the two IMUL chains. It doesn't matter whether it's rep movsb with RCX=95 vs. 96 or rep movsd with RCX=23 vs. 24. See discussion in comments for some more summary of my findings; if I find time I'll post more details.
The "drains the RS" behaviour was measured with the rs_events.empty_end:u
even becoming 1 per rep movsb
instead of ~0.003. other_assists.any:u
was zero, so it's not an "assist", or at least not counted as one.
Perhaps whatever uop is involved only detects a mispredict when reaching retirement, if microcode branches don't support fast recovery via the BoB? The 96 byte threshold is probably the cutoff for some alternate strategy. RCX=0 also drains the RS, presumably because it's also a special case.
Would be interesting to test with rep scas (which doesn't have fast-strings support, and is just slow and dumb microcode).
Intel's 1994 Fast Strings patent describes the implementation in P6. It doesn't have an IDQ (so it makes sense that modern CPUs that do have buffers between stages and a uop cache will have some changes), but the mechanism they describe for avoiding branches is neat and maybe still used for modern ERMSB: the first n copy iterations are predicated uops for the back-end, so they can be issued unconditionally. There's also a uop that causes the back-end to send its ECX value to the microcode sequencer, which uses that to feed in exactly the right number of extra copy iterations after that. Just the copy uops (and maybe updates of ESI, EDI, and ECX, or maybe only doing that on an interrupt or exception), not microcode-branch uops.
This initial n uops vs. feeding in more after reading RCX could be the 96-byte threshold I was seeing; it came with an extra idq.ms_switches:u per rep movsb (up from 4 to 5).
https://eprint.iacr.org/2016/086.pdf suggests that microcode can trigger an assist in some cases, which might be the modern mechanism for larger copy sizes and would explain draining the RS (and apparently ROB), because it only triggers when the uop is committed (retired), so it's like a branch without fast-recovery.
"The execution units can issue an assist or signal a fault by associating an event code with the result of a micro-op. When the micro-op is committed (§ 2.10), the event code causes the out-of-order scheduler to squash all the micro-ops that are in-flight in the ROB. The event code is forwarded to the microcode sequencer, which reads the micro-ops in the corresponding event handler."
The difference between this and the P6 patent is that this assist-request can happen after some non-microcode uops from later instructions have already been issued, in anticipation of the microcoded instruction being complete with only the first batch of uops. Or if it's not the last uop in a batch from microcode, it could be used like a branch for picking a different strategy.
But that's why it has to flush the ROB.
My impression of the P6 patent is that the feedback to the MS happens before issuing uops from later instructions, in time for more MS uops to be issued if needed. If I'm wrong, then maybe it's already the same mechanism still described in the 2016 paper.
"Usually, when a branch mispredicts as being taken then when the instruction retires, ..."
Intel since Nehalem has had "fast recovery", starting recovery when a mispredicted branch executes, not waiting for it to reach retirement like an exception.
This is the point of having a Branch-Order-Buffer on top of the usual ROB retirement state that lets you roll back when any other type of unexpected event becomes non-speculative. (What exactly happens when a skylake CPU mispredicts a branch?)
Footnote 1: IceLake is supposed to have the "fast short rep" feature, which might be a different mechanism for handling rep strings, rather than a change to microcode, e.g. maybe a HW state machine like Andy mentions he wished he'd designed in the first place.
I don't have any info on performance characteristics, but once we know something we might be able to make some guesses about the new implementation.
QUESTION
I began to learn the Go language and I do not quite understand something; maybe I'm just confused and tired. Here is my code: there is an array result (of encoded strings, 2139614 elements). I need to decode them and use them further. But when I run an iteration, the result is twice as large and the first half is completely empty. Therefore, I make a slice and add the desired range to it.
Why does this happen?
It might be easier to decode the result immediately and re-record it, but I don't know how to do it.
Maybe there is a completely different way that, as a beginner, I don't know yet.
...ANSWER
Answered 2019-Apr-19 at 17:36
When you write:
QUESTION
In my application there is a screen for generating PDFs. In the web browser the PDF is downloaded and the content is displayed, but on mobile it is downloaded and saved in PDF format and the content is not written to the PDF.
Plugin used: cordova pdfMake
Here is the code for reference
...ANSWER
Answered 2019-Apr-09 at 06:54
Finally, I fixed the bug. My fileopener plugin version didn't support Ionic 3; I changed the plugin version.
QUESTION
I have done some reading about Spectre v2 and obviously you get the non-technical explanations. Peter Cordes has a more in-depth explanation but it doesn't fully address a few details. Note: I have never performed a Spectre v2 attack so I do not have hands-on experience. I have only read up about the theory.
My understanding of Spectre v2 is that you make an indirect branch mispredict, for instance if (input < data.size). If the Indirect Target Array (which I'm not too sure of the details of -- i.e. why it is separate from the BTB structure) -- which is rechecked at decode for RIPs of indirect branches -- does not contain a prediction, then it will insert the new jump RIP (branch execution will eventually insert the target RIP of the branch), but for now it does not know the target RIP of the jump, so any form of static prediction will not work. My understanding is that it is always going to predict not taken for new indirect branches, and when Port 6 eventually works out the jump target RIP and prediction it will roll back using the BOB and update the ITA with the correct jump address and then update the local and global branch history registers and the saturating counters accordingly.
The hacker needs to train the saturating counters to always predict taken which, I imagine, they do by running the if (input < data.size) multiple times in a loop where input is set to something that is indeed less than data.size (catching errors accordingly), and on the final iteration of the loop, make input more than data.size (1000 for instance); the indirect branch will be predicted taken and it will jump to the body of the if statement where the cache load takes place.
The if statement contains secret = data[1000] (a particular memory address (data[1000]) that contains secret data is targeted for loading from memory to cache), then this will be allocated to the load buffer speculatively. The preceding indirect branch is still in the branch execution unit and waiting to complete.
I believe the premise is that the load needs to be executed (assigned a line fill buffer) before the load buffers are flushed on the misprediction. If it has been assigned a line fill buffer already then nothing can be done. It makes sense that there isn't a mechanism to cancel a line fill buffer allocation, because the line fill buffer would have to pend before storing to the cache after returning it to the load buffer. This could cause line fill buffers to become saturated, because instead of deallocating when required (keeping it in there for speed of other loads to the same address, but deallocating when there are no other available line buffers), it would not be able to deallocate until it receives some signal that a flush is not going to occur, meaning it has to halt for the previous branch to execute instead of immediately making the line fill buffer available for the stores of the other logical core. This signalling mechanism could be difficult to implement, and perhaps it didn't cross their minds (pre-Spectre thinking), and it would also introduce delay in the event that branch execution takes enough time for hanging line fill buffers to cause a performance impact, i.e. if data.size is purposefully flushed from the cache (CLFLUSH) before the final iteration of the loop, meaning branch execution could take up to 100 cycles.
I hope my thinking is correct but I'm not 100% sure. If anyone has anything to add or correct then please do.
...ANSWER
Answered 2019-Feb-27 at 21:35
Thanks Brendan and Hadi Brais; after reading your answers and finally reading the Spectre paper, it is now clear where I was going wrong in my thinking and where I confused the two a little.
I was partially describing Spectre v1, which causes a bounds check bypass by mistraining the branch history of a jump, i.e. if (x < array1_size), into a spectre gadget. This is obviously not an indirect branch. The hacker does this by invoking a function containing the spectre gadget with legal parameters to prime the branch predictor (PHT+BHT) and then invoking it with illegal parameters to bring array1[x] into cache. They then re-prime the branch history by supplying legal parameters and then flush array1_size from cache (which I'm not sure how they do, because even if the attacker process knows the VA of array1_size, the line cannot be flushed because the TLB contains a different PCID for the process, so it must be caused to be evicted in some way, i.e. by filling the set at that virtual address). They then invoke with the same illegal parameters as before, and as array1[x] is in cache but array1_size is not, array1[x] will resolve quickly and begin the load of array2[array1[x]] while still waiting on array1_size, which loads a position in array2 based on the secret at any x that transcends the bounds of array1. The attacker then recalls the function with a valid value of x and times the function call (I assume the attacker must know the contents of array1, because if array2[array1[8]] results in a faster access they need to know what is at array1[8], as that is the secret; but surely that array would have to contain every 2^8 bit combination, right?).
Spectre v2, on the other hand, requires a second attack process that knows the virtual address of an indirect branch in the victim process, so that it can poison the target and replace it with another address. If the attack process contains a jump instruction that would reside in the same set, way and tag in the IBTB as the victim indirect branch, then it just trains that branch instruction to predict taken and jump to a virtual address which happens to be that of the gadget in the victim process. When the victim process encounters the indirect branch, the wrong target address from the attack program is in the IBTB. It is crucial that it is an indirect branch, because incorrect predictions resulting from a process switch are usually checked at decode, i.e. if the branch target differs from the target in the BTB for that RIP then it flushes the instructions fetched before it. This cannot be done with indirect branches, because the target is not known until the execution stage, and hence the idea is that the indirect branch selected depends on a value that needs to be fetched from cache. It then jumps to this target address, which is that of the gadget, and so on and so forth.
The attacker needs to know the source code of the victim process to identify a gadget, and they need to know the VA at which it will reside. I assume this could be done by knowing predictably where code will be loaded. I believe .exes are typically loaded at 0x00400000, for instance, and then there is a BaseOfCode field in the PE header.
Edit: I just read Appendix B of the spectre paper and it makes for a nice Windows implementation of Spectre v2.
As a proof-of-concept, we constructed a simple target application which provides the service of computing a SHA1 hash of a key and an input message. This implementation consisted of a program which continuously runs a loop which calls Sleep(0), loads the input from a file, invokes the Windows cryptography functions to compute the hash, and prints the hash whenever the input changes. We found that the Sleep() call is done with data from the input file in registers ebx, edi, and an attacker-known value for edx, i.e., the content of two registers is controlled by the attacker. This is the input criteria for the type of Spectre gadget described in the beginning of this section.
It uses ntdll.dll (a .dll full of native API system call stubs) and kernel32.dll (Windows API), which are always mapped in the user virtual address space under the direction of ASLR (specified in the .dll images), except that the physical address is likely to be the same due to copy-on-write view mapping into the page cache. The indirect branch to poison will be in the Windows API Sleep() function in kernel32.dll, which appears to indirectly call NtDelayExecution() in ntdll.dll. The attacker then ascertains the address of the indirect branch instruction, maps a page encompassing the victim address that contains the target address into its own address space, and changes the target address stored at that address to that of the gadget that they identified to be residing somewhere in the same or another function in ntdll.dll (I'm not entirely sure (due to ASLR) how the attacker knows for certain where the victim process maps kernel32.dll and ntdll.dll in its address space in order to locate the address of the indirect branch in Sleep() for the victim. Appendix B claims they used 'simple pointer operations' to locate the indirect branch and the address that contains the target -- how that works I'm not sure).
Threads are then launched with the same affinity as the victim (so that the victim and mistraining threads hyperthread on the same physical core) which call Sleep() themselves to train it indirectly; in the address space context of the hack process, Sleep() will now jump to the address of the gadget. The gadget is temporarily replaced with a ret so that it returns from Sleep() smoothly. These threads will also execute a sequence before the indirect jump to mimic what the global branch history of the victim would be before encountering the indirect jump, to fully ensure that the branch is taken in an alloyed history. A separate thread is then launched with the complement of the thread affinity of the victim that repeatedly evicts the victim's memory address containing the jump destination, to ensure that when the victim does encounter the indirect branch it will take a long RAM access to resolve, which allows the gadget to speculate ahead before the branch destination can be checked against the BTB entry and the pipeline flushed. In JavaScript, eviction is done by loading to the same cache set, i.e. in multiples of 4096. The mistraining threads, eviction threads and victim threads are all running and looping at this stage. When the victim process loop calls Sleep(), the indirect branch speculates to the gadget due to the IBTB entry that the hacker poisoned previously.
A probing thread is launched with the complement of the victim process thread affinity (so as not to interfere with the mistraining and victim branch history). The probing thread will modify the header of the file that the victim process uses, which results in those values residing in ebx and edi when Sleep() is called, meaning the probing thread can directly influence the values stored in ebx and edi. The spectre gadget branched to in the example adds the value stored at [ebx+edx+13BE13BDh] to edi and then loads a value at the address stored in edi and adds it with a carry to dl. This allows the probing thread to learn the value stored at [ebx+edx+13BE13BDh], as if it selects an original edi of 0 then the value accessed in the second operation will be loaded from the virtual address range 0x0 – 0x255, by which time the indirect branch will resolve, but the side effects are already present. The attack process needs to ensure that it has mapped the same physical address into the same location in its virtual address space in order to probe the probing array with a timing attack. I'm not sure how it does this, but in Windows it would, AFAIK, need to map a view of a page-file-backed section object that has been opened by the victim at that location. Either that, or it would manipulate the victim into calling the spectre gadget with a negative TC ebx value such that ebx+edx+13BE13BDh = 0, = 1, ..., = 255 and somehow time that call. This could potentially also be achieved by using APC injection.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install rips
Rust is installed and managed by the rustup tool. Rust has a 6-week rapid release process and supports a great number of platforms, so there are many builds of Rust available at any time. Please refer to rust-lang.org for more information.