firecracker | Secure and fast microVMs for serverless computing | Continuous Deployment library
kandi X-RAY | firecracker Summary
The main component of Firecracker is a virtual machine monitor (VMM) that uses the Linux Kernel-based Virtual Machine (KVM) to create and run microVMs. Firecracker has a minimalist design: it excludes unnecessary devices and guest-facing functionality to reduce the memory footprint and attack surface of each microVM. This improves security, decreases startup time, and increases hardware utilization. Firecracker has also been integrated into container runtimes, for example Kata Containers and Weaveworks Ignite. Firecracker was developed at Amazon Web Services to improve the speed and efficiency of services like AWS Lambda and AWS Fargate. Firecracker is open-sourced under the Apache License, version 2.0. To read more about Firecracker, check out firecracker-microvm.io.
Community Discussions
Trending Discussions on firecracker
QUESTION
I am trying to get some information from a website using bs4 and requests.
I am trying to get to a specific div:
.....
however when I use the following code:
...ANSWER
Answered 2021-Oct-31 at 06:49
The website serves this part of the content dynamically, so you won't get it with requests that way.
Alternative approach: try Selenium; it will render the page and you will get your results.
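As a hedged sketch of that approach (the URL and CSS selector below are placeholders, not taken from the question): let Selenium render the page, then hand the rendered HTML to BeautifulSoup.

```python
# Sketch: Selenium renders the JavaScript-driven page; BeautifulSoup then
# parses the rendered HTML. The URL and selector are placeholders.
from bs4 import BeautifulSoup

def extract_div_text(html: str, selector: str):
    """Return the text of the first element matching a CSS selector, or None."""
    soup = BeautifulSoup(html, "html.parser")
    node = soup.select_one(selector)
    return node.get_text(strip=True) if node else None

# With Selenium (requires a local browser driver):
# from selenium import webdriver
# driver = webdriver.Chrome()
# driver.get("https://example.com/page")        # placeholder URL
# print(extract_div_text(driver.page_source, "div.target"))
# driver.quit()

# The parser itself works on any rendered HTML:
print(extract_div_text('<div class="target">hello</div>', "div.target"))  # hello
```

The key point is that `driver.page_source` gives you the DOM after JavaScript has run, which is what plain `requests` cannot provide.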
QUESTION
BACKGROUND:
I am trying to write a DPDK app which is supposed to handle packets coming from inside a Virtual Machine Monitor.
Basically, the VMM gets the packets from its guest and sends them to DPDK; DPDK then sends them out on the NIC.
Virtual Machine -> Virtual Machine Manager -> DPDK -> NIC
The architecture above is supposed to replace and outperform the original one, in which the VMM puts the packets on a TAP interface.
Original:
Virtual Machine -> Virtual Machine Manager -> TAP interface -> NIC
Problem:
I have implemented the new architecture, and the throughput is far worse than with the TAP interface (TAP: 300 MB/s in either direction; DPDK: 50 MB/s with the VM sending, 5 MB/s with the VM receiving).
I am suspecting that I am not configuring my DPDK Application properly. Could you give an opinion on my configurations?
Environment:
I have done all the testing inside a QEMU virtual machine, so both architectures described above were run inside this virtual machine:
3 logical CPUs (out of 8 on host)
4096 MB memory
OS: Ubuntu 20.04
2 NICs, one for SSH and one for DPDK
What I did so far:
2 GB of hugepages
Isolated the CPU that DPDK uses.
Here is the code: https://github.com/mihaidogaru2537/DpdkPlayground/blob/Strategy_1/primary_dpdk_firecracker/server.c
All functional logic is in "lcore_main", everything else is just configuration.
All the advice I could find about increasing performance involves hardware rather than configuration parameters, and I don't know whether the values I am using for things such as:
...ANSWER
Answered 2021-May-29 at 09:39
[Answer is based on the live debugging and configuration changes done to improve performance.]
The factors affecting performance for the kernel and DPDK interfaces were:
- The host system was not using CPUs that were isolated for the VM.
- The KVM/QEMU CPU threads were not pinned.
- QEMU was not using hugepage-backed memory.
- The emulator and I/O threads of QEMU were not pinned.
- Inside the VM, the kernel boot parameter was set to 1 GB pages, which was causing TLB misses on the host.
Corrected configuration:
- set up 4 × 1 GB hugepages on the host
- edit the QEMU XML to pin vCPU, iothread, and emulator threads to the desired host CPUs
- edit QEMU to use the host's 4 × 1 GB pages
- edit the VM's GRUB configuration to isolate CPUs and use 2 MB pages
- run the DPDK application on an isolated core in the VM
- taskset the firecracker thread on Ubuntu
We were able to achieve around 3.5x to 4x performance with the current DPDK code.
Note: there is also a lot of room to improve the code performance of the DPDK primary and secondary applications.
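The host-side steps above might look like the following (the core number, page count, and process name are illustrative assumptions, and the commands need root):

```shell
# Reserve 4 x 1 GB hugepages on the host (requires 1 GB hugepage support)
echo 4 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages

# Check what was actually reserved
grep Huge /proc/meminfo

# Pin an already-running firecracker (or QEMU) thread to host core 2
taskset -cp 2 "$(pgrep -f firecracker | head -n 1)"
```

The vCPU/iothread/emulator pinning itself is done in the libvirt/QEMU domain XML rather than from the shell.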
QUESTION
The architecture is as follows:
Secondary DPDK app ------> Primary DPDK app ------> Interface
Inside my secondary I have a vector of u8 bytes representing an L2 packet. I want to send this L2 packet to the primary app so the primary can send it to the internet. From what I understand, the L2 packet has to be wrapped in an mbuf in order to be put on a shared ring, but I have no clue how to do this wrapping.
What I don't know exactly: my packet is just a vector of bytes, so how do I extract useful information from it to fill the mbuf fields? And which mbuf fields must minimally be filled for this to work?
For better understanding, here is what should happen step by step:
- Vector of bytes gets in secondary (doesn't matter how)
- Secondary gets an mbuf from the shared mempool.
- Secondary puts the vector inside the mbuf (the vector is an L2 packet)
- mbuf has many fields representing many things, so I don't know which field to fill and with what.
- Secondary places the mbuf on a shared ring.
- Primary grabs the mbuf from shared ring.
- Primary sends the mbuf to the internet.
This is what I have coded so far; the secondary app is written in Rust and the primary app in C.
Secondary is here (github):
Remember, the L2 packet is just a Vec<u8>, i.e. something like [1, 222, 23, 34, ...], a simple array.
...ANSWER
Answered 2021-Apr-20 at 09:56
@Mihai, if one needs to create a DPDK buffer in the secondary and send it via an RTE_RING, the steps are:
- Start the secondary application.
- Get the mbuf pool pointer via rte_mempool_lookup.
- Allocate an mbuf from the mbuf pool via rte_pktmbuf_alloc.
- Set the minimum fields in the mbuf, such as pkt_len, data_len, next, and nb_segs, to appropriate values.
- Fetch the start of the data region to memcpy your custom packet into, with rte_pktmbuf_mtod_offset or rte_pktmbuf_mtod.
- Then memcpy the content from the user vector into the DPDK area.
Note: depending on checksum offload, the actual frame length, and chained-mbuf mode, other fields need to be updated as well.
code snippet
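The snippet itself was not captured above; the steps might look roughly like this on the secondary side. The pool and ring names "MBUF_POOL" and "PKT_RING" are assumptions, and the sketch presumes rte_eal_init() has already run with --proc-type=secondary against a live primary, so it cannot be executed standalone.

```c
/* Hedged sketch of the steps above, on the secondary-process side.
 * Assumptions: EAL is initialized as a secondary process, and the
 * primary created a pool and ring named "MBUF_POOL" / "PKT_RING". */
#include <stdint.h>
#include <string.h>
#include <rte_mempool.h>
#include <rte_mbuf.h>
#include <rte_ring.h>

static int send_l2_packet(const uint8_t *pkt, uint16_t len)
{
    struct rte_mempool *pool = rte_mempool_lookup("MBUF_POOL");
    struct rte_ring *ring = rte_ring_lookup("PKT_RING");
    if (pool == NULL || ring == NULL)
        return -1;

    struct rte_mbuf *m = rte_pktmbuf_alloc(pool);
    if (m == NULL)
        return -1;

    /* Minimal fields for a single, unchained segment. */
    m->pkt_len = len;
    m->data_len = len;
    m->nb_segs = 1;
    m->next = NULL;

    /* Copy the raw L2 frame into the mbuf data area
     * (assumes len fits within the mbuf's data room). */
    memcpy(rte_pktmbuf_mtod(m, uint8_t *), pkt, len);

    /* Hand the mbuf to the primary via the shared ring;
     * free it on failure so the pool does not leak. */
    if (rte_ring_enqueue(ring, m) != 0) {
        rte_pktmbuf_free(m);
        return -1;
    }
    return 0;
}
```

The primary then dequeues the mbuf from the same ring and transmits it with its usual TX path.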
QUESTION
Basically I am trying to cargo build a crate that has a build.rs file. This crate is located inside a bigger project and is supposed to be a lib crate. Inside this build.rs file, I am trying to compile a .c file that includes a couple of headers.
Fun fact: I got this build.rs file and crate structure from another little demo crate, and in that demo crate I had no problem compiling this exact C file with these headers.
FULL ERROR HERE:
Here is a link to github: https://github.com/mihaidogaru2537/FirecrackerPlayground/tree/dpdk_component/firecracker/src/dpdk_component
There is the crate I am talking about and you can also see the bigger project in which it resides. In the README file you can see the full error.
Whether I run cargo build from the root of the big project or from the root of this problematic crate, the error is the same.
"cargo:warning=/usr/include/asm-generic/errno.h:5:10: fatal error: asm-generic/errno-base.h: No such file or directory cargo:warning= 5 | #include "
The missing file might change depending on the .flag("-I/path/..") calls I am doing inside the build.rs
As you can see, right now it's unable to find errno-base.h, but I am including the path to asm-generic.
Here is the code of the build.rs file from the crate where the compilation of this C file works, as you can see, I did not have to add any include flags before calling compile.
...ANSWER
Answered 2021-Apr-12 at 09:37
I somehow managed to fix it by adjusting the cargo build command to use x86_64-unknown-linux-gnu as the target instead of x86_64-unknown-linux-musl (by default, cargo build was somehow using the musl target).
So if you are trying to build a rust app which is using DPDK libraries and you are getting missing headers, make sure to try the following:
cargo build --target=x86_64-unknown-linux-gnu
If you have to use musl and there is no alternative, I don't have an answer; but for me this was enough.
If someone has an explanation for why musl is not working in this scenario, please let us know.
Reddit Rust community helped me as well, check this link out if you are interested: https://www.reddit.com/r/rust/comments/mo3i08/unable_to_compile_c_file_inside_buildrs_headers/
So why was build.rs unable to find the C headers?
ANSWER
Because I was using x86_64-unknown-linux-musl as the target.
QUESTION
I'm currently working on a web application that has to function as some sort of webshop later on. I'm now working on an addToCart function that has to pick certain data from the clicked element (the name and the price of the product), add 1 to pcs, save everything to a session, paste these two values into a template I've made, and place this in the shopCart. I'm now trying to print out the two values I've just mentioned, but I'm stuck.
This is the current JavaScript code I've made for loading in all the products, and my attempt at showing some of the values of the clicked items:
...ANSWER
Answered 2020-Sep-28 at 08:12
Because addToCart() is a callback, you can use this to access its context (the calling element):
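A DOM-free stand-in for what happens: the dataset fields below are assumptions, and `fire` only imitates how addEventListener binds `this` to the clicked element when the handler is a regular function (not an arrow function).

```javascript
// What addEventListener does for `this`: invoke the handler with the
// element as its receiver. `fire` imitates that binding.
function fire(element, handler) {
  handler.call(element);
}

const clicked = { dataset: { name: "Rocket", price: "9.99" } }; // stand-in element
let cartEntry;

fire(clicked, function addToCart() {
  // In the real page this would be this.dataset / this.querySelector(...)
  cartEntry = { name: this.dataset.name, price: this.dataset.price, pcs: 1 };
});

console.log(cartEntry); // { name: 'Rocket', price: '9.99', pcs: 1 }
```

Note that an arrow function would not work here, because arrow functions do not receive their own `this` binding.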
QUESTION
I'm pretty new to programming, and right now I have a website that shows different fireworks from my JSON file on the frontend. I have made a categories menu that I would like to actually work, and I wondered what would be the most efficient way to make it work. I have a function that returns all existing JSON objects from my JSON file, and wondered whether it would be possible to use this existing function to filter on the different categories and only show the items with the designated category. The script I'm using to loop through all the JSON objects is:
...ANSWER
Answered 2020-Sep-22 at 16:19
You can use a click event here: when the user clicks on an a tag, get its id, which corresponds to a category; then loop through all the productCard divs and check whether the a tag inside each has a data-slug equal to the category the user selected, and depending on this, show that div.
Demo code:
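The demo itself was not captured here; a DOM-free sketch of the same filtering logic follows, with the field names (name, slug) assumed from the answer's description of the JSON shape.

```javascript
// Plain product data standing in for the productCard divs / JSON objects.
const products = [
  { name: "Rocket", slug: "rockets" },
  { name: "Fountain", slug: "fountains" },
  { name: "Big Rocket", slug: "rockets" },
];

// When the user clicks an <a> tag, take its category slug and keep only
// the products whose data-slug matches it (the rest would be hidden).
function filterByCategory(items, slug) {
  return items.filter((p) => p.slug === slug);
}

console.log(filterByCategory(products, "rockets").map((p) => p.name));
// [ 'Rocket', 'Big Rocket' ]
```

In the real page, the filter result decides which productCard divs get shown and which get hidden.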
QUESTION
I'm trying to make the functionality work that will eventually add items to my shopping cart. However, the tutorial I'm following makes use of data attributes in the HTML, set on the element itself, while I'm retrieving data from a JSON file. I would like to know how I can make this simple functionality work so I can continue building out the feature. At the moment I'm getting back "undefined" in my console.
Html code:
...ANSWER
Answered 2020-Sep-18 at 10:03
Take a good look at what dataset does (see "dataset vs .data - Difference?"), or data for that matter, and then take a look at your inner HTML in .productItem: there is not a single data attribute inside, so no wonder dataset cannot fetch any and is undefined. Please tell me what the wanted result of console.log(productInfo[0].name); is. What is expected to be inside? We will extract it.
You can fetch a name by simply doing:
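For instance, data-* attributes surface camel-cased on element.dataset; a plain object stands in for the DOM element here, and the attribute values are assumptions.

```javascript
// <div class="productItem" data-name="Rocket" data-price="9.99"> would expose:
const productItem = { dataset: { name: "Rocket", price: "9.99" } };

const name = productItem.dataset.name;   // "Rocket"
const missing = productItem.dataset.qty; // undefined: there is no data-qty attribute
console.log(name, missing); // Rocket undefined
```

This is exactly why the original code logged undefined: dataset can only return attributes that actually exist on the element.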
QUESTION
I am aware of the cold-start and warm-start in AWS Lambda.
However, I am not sure whether, during a warm start, the Lambda architecture reuses the Firecracker VM in the backend, or whether it performs the invocation in a fresh new VM.
Is there a way to enforce VM level isolation for every invocation through some other AWS solution?
...ANSWER
Answered 2020-Sep-13 at 21:41
Based on what is stated in the documentation for the Lambda execution context, Lambda tries to reuse the execution context between subsequent executions; this is what leads to cold starts (when the context is spun up) and warm starts (when an existing context is reused).
You typically see this latency when a Lambda function is invoked for the first time or after it has been updated because AWS Lambda tries to reuse the execution context for subsequent invocations of the Lambda function.
This is corroborated by the documentation for the Lambda runtime environment, which states:
When a Lambda function is invoked, the data plane allocates an execution environment to that function, or chooses an existing execution environment that has already been set up for that function, then runs the function code in that environment.
A later passage of the same page gives a bit more info on how environments/resources are shared among functions and executions in the same AWS Account:
Execution environments run on hardware virtualized virtual machines (microVMs). A microVM is dedicated to an AWS account, but can be reused by execution environments across functions within an account. [...] Execution environments are never shared across functions, and microVMs are never shared across AWS accounts.
Additionally, there's another doc page that gives some more details on isolation among environments, but again, no mention of the ability to enforce one execution per environment.
As far as I know, there's no way to force a new execution to use a new environment rather than an existing one. AWS doesn't provide much insight into this, but the wording around the topic seems to suggest that most people actually try to do the opposite of what you're looking for:
When you write your Lambda function code, do not assume that AWS Lambda automatically reuses the execution context for subsequent function invocations. Other factors may dictate a need for AWS Lambda to create a new execution context, which can lead to unexpected results, such as database connection failures.
I would say that if your concern is isolation from other customers/accounts, AWS guarantees isolation by means of virtualization, which, although not at the physical level, might be enough depending on their SLAs and your SLAs/requirements. If instead you're thinking of building some kind of multi-tenant infrastructure that requires Lambda executions to be isolated from one another, then this component might not be what you're looking for.
QUESTION
I have a function all_purch_spark() that sets up a SparkContext and an SQLContext for five different tables. The same function then successfully runs a SQL query against an AWS Redshift DB. It works great. I am including the entire function below (stripped of sensitive data, of course). Please forgive its length, but I wanted to show it as is, given the problem I am facing.
My problem is with the second function repurch_prep() and how it calls the first function all_purch_spark(). I can't figure out how to avoid errors such as this one: NameError: name 'sqlContext' is not defined
I will show the two functions and error below.
Here is the first function all_purch_spark(). Again I put the whole function here for reference. I know it is long but wasn't sure I could reduce it to a meaningful example.
...ANSWER
Answered 2020-Aug-19 at 10:55
The solution, per @Lamanus, was to place the variables outside the function, making them global, rather than defining them inside one function (as I did) and calling that function from another.
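The fix can be illustrated without Spark at all; the string below is just a stand-in for the real SQLContext object.

```python
# Hedged, Spark-free sketch of the fix: a name created inside
# all_purch_spark() is local to it, so repurch_prep() would hit
# NameError; declaring it global (or at module level) fixes that.
sqlContext = None  # module level, visible to both functions


def all_purch_spark():
    global sqlContext
    sqlContext = "sql-context"  # stand-in for the real SQLContext


def repurch_prep():
    all_purch_spark()
    return sqlContext  # defined now, no NameError


print(repurch_prep())  # sql-context
```

Without the module-level definition and the global statement, sqlContext would exist only inside all_purch_spark's local scope and repurch_prep would raise NameError, exactly as in the question.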
QUESTION
I'm hitting a lifetime error when compiling a change I made for Firecracker (on aarch64, but I doubt the issue is architecture-dependent):
ANSWER
Answered 2020-Apr-04 at 13:01
A simple, reproducible example of the problem (playground):
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported