FireCracker | Generate word list combining inputted words | Generator Utils library
kandi X-RAY | FireCracker Summary
Create a word list by combining words and permuting characters. Many options are available - the program is very useful if you know part of the password.
Top functions reviewed by kandi - BETA
- Initialize the frame
- Extract options from a word list
- 2D String array
- Method used to extract words from a list of strings
- Gets all characters between the given text
- Removes duplicate characters from input string
- Read text from file
- Launch a new window
- Edit the save path
- Checks if the specified string contains the given character set
- Start the application
FireCracker Key Features
FireCracker Examples and Code Snippets
Community Discussions
Trending Discussions on FireCracker
QUESTION
BACKGROUND:
I am trying to write a DPDK app which is supposed to handle packets coming from inside a Virtual Machine Monitor.
Basically the VMM gets the packets from its guest and then sends those packets to DPDK. DPDK then sends them out on the NIC.
Virtual Machine -> Virtual Machine Manager -> DPDK -> NIC
The architecture above is supposed to replace and outperform the original one. In the original architecture, the VMM puts the packets on a TAP interface.
Original:
Virtual Machine -> Virtual Machine Manager -> TAP interface -> NIC
Problem:
I have written the new architecture and the throughput is far worse than when using the TAP interface (TAP: 300 MB/s in either direction; DPDK: 50 MB/s with the VM sending, 5 MB/s with the VM receiving).
I am suspecting that I am not configuring my DPDK Application properly. Could you give an opinion on my configurations?
Environment:
I have done all the testing inside a QEMU virtual machine, so both architectures described above were run inside this virtual machine:
3 logical CPUs (out of 8 on host)
4096 MB memory
OS: Ubuntu 20.04
2 NICs, one for SSH and one for DPDK
What I did so far:
2GB Hugepages
Isolated the CPU that DPDK is using.
Here is the code: https://github.com/mihaidogaru2537/DpdkPlayground/blob/Strategy_1/primary_dpdk_firecracker/server.c
All functional logic is in "lcore_main", everything else is just configuration.
All the advice I could find about increasing performance involves hardware rather than configuration parameters, and I don't know if the values I am using for things such as:
...ANSWER
Answered 2021-May-29 at 09:39 [Answer is based on the live debugging and configuration changes done to improve performance]
Factors that were affecting performance for the kernel (TAP) and DPDK interfaces were:
- The host system was not using the CPUs that had been isolated for the VM
- The KVM-QEMU CPU threads were not pinned
- QEMU was not using hugepage-backed memory
- The QEMU emulator and I/O threads were not pinned
- Inside the VM, the hugepage kernel boot parameter was set to 1 GB, which was causing TLB misses on the host
Corrected configuration:
- Set up the host with 4 x 1 GB hugepages
- Edit the QEMU XML to pin the vCPU, iothread and emulator threads to the desired host CPUs
- Edit QEMU to use the host's 4 x 1 GB hugepages
- Edit the VM's GRUB configuration to isolate CPUs and use 2 MB pages
- Run the DPDK application on an isolated core inside the VM (a minimal launch sketch follows this answer)
- Use taskset to pin the Firecracker thread on Ubuntu
We were able to achieve around a 3.5x to 4x performance improvement with the current DPDK code.
Note: there is also a lot of room to improve the code performance of the DPDK primary and secondary applications.
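To make the "run on an isolated core" point concrete, here is a minimal sketch (not the author's server.c) of how a DPDK application is bound to a specific core: the isolated core is passed to the EAL on the command line, and the main loop then runs on that lcore. Core id 2 below is only an assumption; substitute whichever core you isolated via GRUB.

#include <stdio.h>
#include <stdlib.h>
#include <rte_eal.h>
#include <rte_lcore.h>
#include <rte_debug.h>

int main(int argc, char **argv)
{
    /* Example invocation: ./server -l 2 --proc-type=primary ...
     * "-l 2" tells the EAL to run only on (isolated) core 2. */
    if (rte_eal_init(argc, argv) < 0)
        rte_exit(EXIT_FAILURE, "EAL initialization failed\n");

    printf("running on lcore %u\n", rte_lcore_id());

    /* ... the lcore_main() packet loop would run here ... */

    rte_eal_cleanup();
    return 0;
}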
QUESTION
The architecture is as follows:
Secondary DPDK app ------> Primary DPDK App ----> (EDIT)Interface
Inside my secondary I have a vector of u8 bytes representing an L2 packet. I want to send this L2 packet to the primary app so the primary can send it to the internet. From what I understood, the L2 packet has to be wrapped in an mbuf in order to be put on a shared ring, but I have no clue how to do this wrapping.
What I don't know exactly: my packet is just a vector of bytes, how could I extract useful information out of it in order to fill the mbuf fields? And which fields of the mbuf should be filled minimally for this to work?
For better understanding, here is what should happen step by step:
- Vector of bytes gets in secondary (doesn't matter how)
- Secondary gets an mbuf from the shared mempool.
- Secondary puts the vector inside the mbuf (the vector is an L2 packet)
- mbuf has many fields representing many things, so I don't know which field to fill and with what.
- Secondary places the mbuf on a shared ring.
- Primary grabs the mbuf from the shared ring.
- Primary sends the mbuf to the internet.
This is what I have coded so far; the secondary app is in Rust and the primary app is in C.
Secondary is here (github):
Remember, the L2 packet is just a Vec, something like [1, 222, 23, 34...], a simple array.
...ANSWER
Answered 2021-Apr-20 at 09:56 @Mihai, if one needs to create a DPDK buffer in the secondary and send it via an RTE_RING, the following are the steps to do so:
- Start the secondary application
- Get the mbuf pool pointer via rte_mempool_lookup
- Allocate an mbuf from the mbuf pool via rte_pktmbuf_alloc
- Set the minimum fields in the mbuf, such as pkt_len, data_len, next and nb_segs, to appropriate values
- Fetch the start of the data region to memcpy your custom packet into, with rte_pktmbuf_mtod_offset or rte_pktmbuf_mtod
- Then memcpy the content from the user vector to the DPDK area
Note: depending on checksum offload, the actual frame length and chained-mbuf mode, other fields may need to be updated.
code snippet
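The answer's original snippet is not reproduced on this page. As an illustration only, here is a minimal sketch of the steps above from the secondary process side; the pool name "MBUF_POOL" and ring name "PKT_RING" are placeholders, so use whatever names your primary actually registered, and it assumes the frame fits in a single mbuf's data room.

#include <stdint.h>
#include <string.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>
#include <rte_ring.h>

static int send_l2_frame(const uint8_t *frame, uint16_t frame_len)
{
    /* Look up objects created by the primary (names are assumptions). */
    struct rte_mempool *pool = rte_mempool_lookup("MBUF_POOL");
    struct rte_ring *ring = rte_ring_lookup("PKT_RING");
    if (pool == NULL || ring == NULL)
        return -1;

    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    if (m == NULL)
        return -1;

    /* Single-segment mbuf carrying the whole L2 frame. */
    m->pkt_len  = frame_len;
    m->data_len = frame_len;
    m->nb_segs  = 1;
    m->next     = NULL;

    /* Copy the raw bytes into the mbuf data area. */
    memcpy(rte_pktmbuf_mtod(m, void *), frame, frame_len);

    /* Hand the mbuf over to the primary. */
    if (rte_ring_enqueue(ring, m) != 0) {
        rte_pktmbuf_free(m);
        return -1;
    }
    return 0;
}

As the answer notes, depending on offloads, frame size and chained mbufs you may also need to set additional fields such as ol_flags.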
QUESTION
Basically I am trying to cargo build a crate which has a build.rs file. This crate is located inside a bigger project and it's supposed to be a lib crate. Inside this build.rs file, I am trying to compile a C file which includes a couple of headers.
Fun fact: I got this build.rs file and crate structure from another little demo crate, and in that demo crate I had no problem compiling this exact C file with these headers.
FULL ERROR HERE:
Here is a link to github: https://github.com/mihaidogaru2537/FirecrackerPlayground/tree/dpdk_component/firecracker/src/dpdk_component
There is the crate I am talking about and you can also see the bigger project in which it resides. In the README file you can see the full error.
Either I do cargo build from the root of the big project or from the root of this problematic crate, the error is the same.
"cargo:warning=/usr/include/asm-generic/errno.h:5:10: fatal error: asm-generic/errno-base.h: No such file or directory cargo:warning= 5 | #include "
The missing file might change depending on the .flag("-I/path/..") calls I am doing inside the build.rs
As you can see, right now it's unable to find errno-base.h, but I am including the path to asm-generic.
Here is the code of the build.rs file from the crate where the compilation of this C file works, as you can see, I did not have to add any include flags before calling compile.
...ANSWER
Answered 2021-Apr-12 at 09:37 I somehow managed to fix it by adjusting the cargo build command to use x86_64-unknown-linux-gnu as the target instead of x86_64-unknown-linux-musl (by default, cargo build was somehow using the musl target).
So if you are trying to build a Rust app that uses DPDK libraries and you are getting missing headers, make sure to try the following:
cargo build --target=x86_64-unknown-linux-gnu
Well, if you have to use musl and there is no alternative, I don't have an answer, but to me this was enough.
If someone has an explanation for why musl is not working in this scenario, please let us know.
Reddit Rust community helped me as well, check this link out if you are interested: https://www.reddit.com/r/rust/comments/mo3i08/unable_to_compile_c_file_inside_buildrs_headers/
So why was build.rs unable to find the C headers?
ANSWER
Because I was using x86_64-unknown-linux-musl as the target.
QUESTION
I'm currently working on a web application that has to function as some sort of webshop later on. I'm now working on an addToCart function that has to pick certain data from the clicked element (the name and the price of the product), add 1 to pcs, save everything to a session, and paste these two values into a template I've made and place it in the shopCart. I'm now trying to print out the two values I've just mentioned, but I'm stuck.
This is the current JavaScript code I've made for loading in all the products, and my attempt at showing some of the values of the clicked items:
...ANSWER
Answered 2020-Sep-28 at 08:12 Because addToCart() is a callback, you can use this to access its context (the caller element):
QUESTION
I'm pretty new to programming, and right now I have a website that shows different fireworks from my JSON file in the frontend. I have made a categories menu that I would now like to actually work, and wondered what would be the most efficient way to do that. I have a function that returns all existing JSON objects from my JSON file, and wondered if it would be possible to use this existing function to filter on the different categories and only show the items with the designated category. The script I'm using to loop through all the JSON objects is:
...ANSWER
Answered 2020-Sep-22 at 16:19 You can use the click event here: when the user clicks on an a tag, get the id, which holds the category, then loop through all the productCard divs and check whether the a tag inside each one has a data-slug equal to the category the user selected; depending on this, show that div.
Demo code:
QUESTION
I'm trying to make my functionality work that eventually has to add items to my shopping cart. However, the tutorial I'm following makes use of data in the HTML that is passed down with a data attribute on the element itself, whereas I'm retrieving data from a JSON file. I would like to know how I can make this simple functionality work, so I can continue working out the rest of the functionality. At the moment I'm getting back "undefined" in my console.
HTML code:
...ANSWER
Answered 2020-Sep-18 at 10:03 Take a good look at what dataset does (see "dataset vs .data - Difference?"), or data for that matter, and then take a look at your inner HTML in .productItem: there is not a single data attribute inside, so no wonder dataset cannot fetch anything and is undefined. Please tell me what the wanted result of console.log(productInfo[0].name); is - what is expected to be inside, and we will extract it.
You can fetch a name by simply doing:
QUESTION
I am aware of the cold-start and warm-start in AWS Lambda.
However, I am not sure whether, during a warm start, the Lambda architecture reuses the Firecracker VM in the backend, or whether it does the invocation in a fresh new VM.
Is there a way to enforce VM level isolation for every invocation through some other AWS solution?
...ANSWER
Answered 2020-Sep-13 at 21:41 Based on what is stated in the documentation for the Lambda execution context, Lambda tries to reuse the execution context between subsequent executions; this is what leads to cold starts (when the context is spun up) and warm starts (when an existing context is reused).
You typically see this latency when a Lambda function is invoked for the first time or after it has been updated because AWS Lambda tries to reuse the execution context for subsequent invocations of the Lambda function.
This is corroborated by another statement in the documentation for the Lambda Runtime Environment where it's stated that:
When a Lambda function is invoked, the data plane allocates an execution environment to that function, or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment.
A later passage of the same page gives a bit more info on how environments/resources are shared among functions and executions in the same AWS Account:
Execution environments run on hardware virtualized virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. [...] Execution environments are never shared across functions, and microVMs are never shared across AWS accounts.
Additionally, there's also another doc page that gives some more details on isolation among environments but, again, no mention of the ability to enforce one execution per environment.
As far as I know there's no way to make it so that a new execution will use a new environment rather than an existing one. AWS doesn't provide much insight into this, but the wording around the topic seems to suggest that most people actually try to do the opposite of what you're looking for:
When you write your Lambda function code, do not assume that AWS Lambda automatically reuses the execution context for subsequent function invocations. Other factors may dictate a need for AWS Lambda to create a new execution context, which can lead to unexpected results, such as database connection failures.
I would say that if your concern is isolation from other customers/accounts, AWS guarantees isolation by means of virtualisation which, although not at the physical level, might be enough depending on their SLAs and your SLAs/requirements. If instead you're thinking of building some kind of multi-tenant infrastructure that requires Lambda executions to be isolated from one another, then this component might not be what you're looking for.
QUESTION
I have a function all_purch_spark() that sets a Spark Context as well as SQL Context for five different tables. The same function then successfully runs a sql query against an AWS Redshift DB. It works great. I am including the entire function below (stripped of sensitive data of course). Please forgive its length but I wanted to show it as is given the problem I am facing.
My problem is with the second function repurch_prep() and how it calls the first function all_purch_spark(). I can't figure out how to avoid errors such as this one: NameError: name 'sqlContext' is not defined
I will show the two functions and error below.
Here is the first function all_purch_spark(). Again I put the whole function here for reference. I know it is long but wasn't sure I could reduce it to a meaningful example.
...ANSWER
Answered 2020-Aug-19 at 10:55 The solution, per @Lamanus, was to place the variables outside of the function, making them global, rather than defining them inside one function (as I did) and calling that function from another.
QUESTION
I'm hitting a lifetime error when compiling a change I made for Firecracker (on aarch64, but I doubt the issue is architecture-dependent):
ANSWER
Answered 2020-Apr-04 at 13:01 A simple, reproducible example of the problem (playground):
QUESTION
I would like to know if there is a difference between gVisor and Weave Ignite in terms of their use-cases (if there is any). To me, both of them seem to try a similar thing: make the execution of code in virtualized environments more secure.
gVisor does this by introducing runsc, a runtime that enables sandboxed containers, and Ignite does it by using Firecracker, which in their context also seems to be used as a sandbox.
ANSWER
Answered 2020-Feb-26 at 10:03 Both Firecracker and gVisor are technologies which provide sandboxing / isolation, but in different ways.
- Firecracker (orange box) is a Virtual Machine Manager.
- gVisor (green box) has an architecture which controls/filters the system calls that reach the actual host.
Weave Ignite is a tool that helps you use Firecracker in order to run containers inside lightweight VMs and also do that with a nice UX, similar to using Docker.
This is also mentioned in the Scope section of github.com/weaveworks/ignite
Scope: Ignite is different from Kata Containers or gVisor. They don't let you run real VMs, but only wrap a container in a new layer providing some kind of security boundary (or sandbox).
Ignite on the other hand lets you run a full-blown VM, easily and super-fast, but with the familiar container UX. This means you can "move down one layer" and start managing your fleet of VMs powering e.g. a Kubernetes cluster, but still package your VMs like containers.
Regarding the use-case part of your question, it's my feeling that because of the stronger isolation VMs offer, Ignite can be more production-ready. Also, the approach of gVisor seems to have a significant performance cost, as it is mentioned at The True Cost of Containing: A gVisor Case Study:
Conclusion
- gVisor is arguably more secure than runc
- Unfortunately, our analysis shows that the true costs of effectively containing are high: system calls are 2.2× slower, memory allocations are 2.5× slower, large downloads are 2.8× slower, and file opens are 216× slower
Do I Need gVisor?
No. If you're running production workloads, don't even think about it! Right now, this is a metaphorical science experiment. That's not to say you may not want to use it as it matures. I don't have any problem with the way it's trying to solve process isolation and I think it's a good idea. There are also alternatives you should take the time to explore before adopting this technology in the future.
Where might I want to use it? As an operator, you'll want to use gVisor to isolate application containers that aren't entirely trusted. This could be a new version of an open source project your organization has trusted in the past. It could be a new project your team has yet to completely vet or anything else you aren't entirely sure can be trusted in your cluster. After all, if you're running an open source project you didn't write (all of us), your team certainly didn't write it so it would be good security and good engineering to properly isolate and protect your environment in case there may be a yet unknown vulnerability.
My answer has information from the following sources which are in quote sections when taken "as-is" and I recommend them for further reading:
- What is gVisor? from Rancher Blog
- Making Containers More Isolated: An Overview of Sandboxed Container Technologies
- Containers, Security, and Echo Chambers blog by Jessie Frazelle
- The True Cost of Containing: A gVisor Case Study
- Kata Containers vs gVisor?
- Firecracker: Lightweight Virtualization for Serverless Applications paper from AWS
Community Discussions, Code Snippets contain sources that include Stack Exchange Network