ELF | game research with AlphaGoZero/AlphaZero reimplementation | Reinforcement Learning library
kandi X-RAY | ELF Summary
ELF: a platform for game research with AlphaGoZero/AlphaZero reimplementation
ELF Examples and Code Snippets
@Override
public String toString() {
return "The elf blacksmith";
}
Community Discussions
Trending Discussions on ELF
QUESTION
I was designing my kernel using C. While writing the kprintf function (a function like printf, but one that works inside the kernel) I saw that va_args is converting signed integers (specifically, values of type long) to unsigned long.
Here's a snippet of the code:
kPrint.c
ANSWER
Answered 2022-Apr-04 at 12:06
The call kprintf("Number: %d\n", -1234); is incorrect because %d extracts a long long. It must be kprintf("Number: %d\n", -1234LL).
-1234 is a 32-bit operand. The problem could be that it is being passed in a 64-bit aligned word, but not being sign-extended to 64 bits.
So that is to say, the -1234 value in 64 bits needs to be fffffffffffffb2e, but the 32-bit parameter is producing a 00000000fffffb2e image on the stack, which is 4294966062.
According to this hypothesis, however, we would have to pass -1000 to obtain the observed 4294966296; it bears no relation to -1234. Something else could be going on, like garbage bits being interpreted as data.
The behavior is not well-defined, after all: you're shoving an integer of one size into a completely typeless and unsafe parameter passing mechanism and pulling out an integer of a different size.
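To make that mismatch concrete, here is a minimal, self-contained sketch (an assumed stand-in, not the asker's actual kprintf): the formatter reads a long long from the variadic list, so the caller must supply a 64-bit argument.

#include <stdarg.h>
#include <stdio.h>

/* Hypothetical stand-in for a kernel formatter: like the %d discussed above,
 * it pulls a long long out of the variadic argument list. */
static void print_num(const char *prefix, ...)
{
    va_list ap;
    va_start(ap, prefix);
    long long v = va_arg(ap, long long);  /* reads a 64-bit value from the argument area */
    va_end(ap);
    printf("%s%lld\n", prefix, v);
}

int main(void)
{
    /* print_num("Number: ", -1234);  -- undefined: an int is passed, a long long is read */
    print_num("Number: ", -1234LL);          /* correct: 64-bit literal */
    print_num("Number: ", (long long)-1234); /* equivalent explicit cast */
    return 0;
}

On some ABIs the narrow argument can appear to work by accident, which is why the bug tends to surface as strange large values rather than an immediate crash.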
QUESTION
I'm trying to learn assembly through compiling Rust. I have found a way to compile Rust code to binary machine code and to objdump it to view the assembly. However, if I write the following:
ANSWER
Answered 2022-Mar-11 at 12:38
There is one compiler pass before the generation of LLVM-IR, which is the generation of MIR, the Rust intermediate representation. If you emit this for the given code with a command such as this one:
QUESTION
In my library (ELF arm64, Android) I see the same mangled symbol name twice (names changed):
...ANSWER
Answered 2022-Mar-09 at 12:27
Lower-case letters in the symbol type (in your example, d, b, and r) indicate local symbols. These are not subject to linkage and may hence appear multiple times in the same binary. There is nothing wrong with that.
The main source of such symbols are local symbols in object files. The linker just transfers the local symbols of all object files involved into the symbol table of the binary without linking them together. So most likely, multiple object files defined a local symbol named _ZL15s_symbolNameXYs.
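As a small illustration (the names below are invented, not taken from the asker's library), two translation units can each define a file-local object with the same name:

/* a.c -- hypothetical example */
static int s_state = 1;             /* file-local; nm typically shows it as 'd' (.data) */
int a_get(void) { return s_state; }

/* b.c -- a different file reusing the same name */
static int s_state;                 /* another, unrelated local symbol; 'b' (.bss) */
int b_get(void) { return s_state; }

After linking both objects into one binary, nm can list two lowercase s_state entries; because the symbols are local, the linker never tries to merge or deduplicate them.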
QUESTION
I am building a RISC-V emulator which basically loads a whole ELF file into memory.
Up to now, I used the pre-compiled test binaries that the RISC-V foundation provided, which conveniently had an entry point exactly at the start of the .text section.
For example:
...ANSWER
Answered 2022-Mar-06 at 16:08
My question is: what is the actual "formula" of how exactly you get the entry point address of the _start procedure as an offset from byte 0?
First, forget about sections. Only segments matter at runtime.
Second, use readelf -Wl to look at segments. They tell you exactly which chunk of the file ([.p_offset, .p_offset + .p_filesz)) goes into which in-memory region ([.p_vaddr, .p_vaddr + .p_memsz)).
The exact calculation of "at which offset in the file does _start reside" is:
- Find the Elf32_Phdr which "covers" the address contained in Elf32_Ehdr.e_entry.
- Using that phdr, the file offset of _start is: ehdr->e_entry - phdr->p_vaddr + phdr->p_offset.
Update:
So, am I always looking for the 1st program header?
No.
Also by "covers" you mean that the 1st phdr->p_vaddr is always equal to e_entry?
No.
You are looking for the program header (describing the relationship between in-memory and on-file data) which overlaps ehdr->e_entry in memory. That is, you are looking for the segment for which phdr->p_vaddr <= ehdr->e_entry && ehdr->e_entry < phdr->p_vaddr + phdr->p_memsz. This segment is often the first, but that is in no way guaranteed. See also this answer.
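A minimal sketch of that calculation in C, assuming a 32-bit ELF (to match the Elf32_* names above) that has already been read whole into a memory buffer:

#include <elf.h>
#include <stdint.h>

/* Returns the file offset of the entry point (_start), or -1 if no loadable
 * segment covers e_entry.  Header validation is omitted in this sketch. */
long entry_file_offset(const uint8_t *image)
{
    const Elf32_Ehdr *ehdr  = (const Elf32_Ehdr *)image;
    const Elf32_Phdr *phdrs = (const Elf32_Phdr *)(image + ehdr->e_phoff);

    for (int i = 0; i < ehdr->e_phnum; i++) {
        const Elf32_Phdr *p = &phdrs[i];
        if (p->p_type != PT_LOAD)
            continue;
        /* "covers": p_vaddr <= e_entry < p_vaddr + p_memsz */
        if (p->p_vaddr <= ehdr->e_entry &&
            ehdr->e_entry < p->p_vaddr + p->p_memsz)
            return (long)(ehdr->e_entry - p->p_vaddr + p->p_offset);
    }
    return -1;
}

Running readelf -Wl on the same file should agree with what this computes for the segment that contains the entry point.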
QUESTION
I have inherited some code that compiles fine under g++ 9 and 10, but gives a runtime error for both compilers when optimization is turned on (that is, compiling -O0 works, but compiling -Og gives a runtime error from the MMU.)
The problem is that there is a Meyers singleton defined in an inline static method of a class, and that object seems to be optimized away. There is a complication that the static object in the method is declared with a section attribute (this is the g++ language extension for placing objects in specific sections of the object file).
Here is a summary of the situation.
File c.hpp
...ANSWER
Answered 2022-Feb-19 at 11:16
GCC uses COMDAT section groups when available to implement vague linkage. Despite the section being explicitly named MY_C_SECTION, the compiler still emits a COMDAT group with _ZZN7my_prod1C8instanceEvE1c as the key symbol:
QUESTION
I am trying to implement my own binary loader for learning purposes, but cannot figure out the data segment.
...ANSWER
Answered 2022-Feb-14 at 07:56
QUESTION
I am running into an error that says "Invalid ELF header" for a package called "argon2" when uploading my code to AWS Lambda through the Serverless Framework. The code runs perfectly when running locally.
Development on MacOS Big Sur version 11.4
Image of the error I am getting
I've done a little research on the error, and people are saying to use Docker to compile the packages and then send them to Lambda. I haven't worked with Docker much and wanted to explore other options before attempting the Docker solution.
Any guidance or help would be much appreciated! I've been stuck on this bug for over a day or two.
...ANSWER
Answered 2021-Sep-24 at 19:59
What is going on?
The package you are using (argon2) contains bindings to a C implementation of the argon2 algorithm. In other words, the package itself wraps a library written in C and makes it accessible inside your node.js environment.
That C package is also shipped with your package, and compiled (or downloaded) when you run npm install argon2. That means that after installing argon2, there is a binary shared library on your file system that is interfacing with your node environment. Because you are installing on macOS, the binary will be compiled (or downloaded) for Mac. This means you end up with a Mach-O file (the format executables for macOS come in) instead of an ELF file (the format Linux uses for executables).
Now your Lambda (which runs Linux under the hood) effectively complains that the executable you've built on your Mac is not a Linux executable.
How do you fix this?
In simple terms, you need a way to npm install that will build or download the argon2 version for Linux. There are two different ways to achieve this. Pick one:
npm install on Linux
Pretty much as the title says: download and build your dependencies under Linux. You can use a virtual machine or a Docker container to do that. Another option would be to use AWS's build SaaS product (AWS CodeBuild) to do this.
npm install --target_arch=x64 --target_platform=linux --target_libc=glibc on Mac
Thankfully argon2 comes with support for node-pre-gyp. They effectively provide you with prebuilt binaries, which means that you can just pull the Linux binaries and do not have to compile them yourself. To do that, throw away your node_modules folder and run npm install --target_arch=x64 --target_platform=linux. This will download the Linux files instead of the macOS files. You can then push everything into your Lambda. Please note that this means your application will not run locally anymore, since your Mac cannot run the Linux executable (you would have to npm install again, leaving out the two parameters, to get back to the macOS version).
Please note that there might be packages apart from argon2 that do not support MacOS, in which case you would have to take the first option.
QUESTION
ANSWER
Answered 2021-Oct-13 at 04:16
The fs segment register is used in x86-64 Linux to point to thread-local storage. See How are the fs/gs registers used in Linux AMD64? So this instruction will xor the rdx register with the value found at offset 0x30 in the thread-local storage block.
This code is part of a pointer encryption mechanism in glibc to help harden against certain exploits. There is some explanation of it at https://sourceware.org/glibc/wiki/PointerEncryption. The value at fs:0x30 is a "key" for a trivial "encryption" algorithm; pointers are xor'ed with this value (and then rotated) when they are stored, and rotated back and xor'ed again when they are retrieved from memory, which recovers the original pointer.
There is no particular significance to the number 0x30; it just happens to be the offset where that value is stored. You can see in the inline assembly that this number comes from offsetof (tcbhead_t, pointer_guard); so the storage at the fs base address is laid out as a tcbhead_t struct, and given the other members that it contains, the pointer_guard member has ended up at offset 0x30. So looking at the name pointer_guard for the member is more informative than its numerical offset.
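For illustration, the scheme described above (XOR with the per-thread guard, then rotate) can be sketched in C roughly as follows. The rotate count of 17 matches what glibc uses on x86-64, but treat this as an approximation of the idea rather than the library's exact code:

#include <stdint.h>

/* Rough sketch of glibc-style pointer mangling (PTR_MANGLE / PTR_DEMANGLE). */
static uint64_t rotl64(uint64_t x, unsigned n) { return (x << n) | (x >> (64 - n)); }
static uint64_t rotr64(uint64_t x, unsigned n) { return (x >> n) | (x << (64 - n)); }

uint64_t mangle(uint64_t ptr, uint64_t guard)      /* guard = value at fs:0x30 */
{
    return rotl64(ptr ^ guard, 17);    /* xor with the key, then rotate */
}

uint64_t demangle(uint64_t stored, uint64_t guard)
{
    return rotr64(stored, 17) ^ guard; /* rotate back, then xor recovers the pointer */
}

An attacker who overwrites a stored function pointer without knowing the per-thread guard ends up redirecting execution to a garbage address instead of a chosen one, which is the hardening benefit mentioned above.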
QUESTION
I am doing baremetal development on ARM and emulating a Raspi 3 on QEMU. Below is my minimal assembly code:
...ANSWER
Answered 2022-Jan-06 at 10:11
The QEMU -kernel option treats the file it loads differently depending on whether it is an ELF file or not.
If it is an ELF file, it is loaded according to what the ELF file says it should be loaded as, and started by executing from the ELF entry point. If it is not an ELF file, it is assumed to be a Linux kernel, and started in the way that the Linux kernel's booting protocol requires.
In particular, for a multi-core board, if -kernel gets an ELF file it starts all the cores at once at the entry point. If it gets a non-ELF file then it will do whatever that hardware is supposed to do for loading a Linux kernel. For raspi3b this means emulating the firmware behaviour of "secondary cores sit in a loop waiting for the primary core to release them by writing to a 'mailbox' address". This is the behaviour you're seeing in gdb -- the 0x300 address that cores 1-3 are at is in the "spin in a loop waiting" code.
In general, unless your guest code is a Linux kernel or is expecting to be booted in the same way as a Linux kernel, don't use the -kernel option to load it. -kernel is specifically "try to do what Linux kernels want", and it also tends to have a lot of legacy "this seemed like a useful thing to somebody" behaviour that differs from board to board or between different guest CPU architectures. The "generic loader" is a good way to load ELF files if you want complete manual control for "bare metal" work.
For more info on the various QEMU options for loading guest code, see this answer.
QUESTION
I'm toying with ptrace using the code below. I found that the system call number for execve was 59 even when I compiled with the -m32 option. Since I'm using Ubuntu on a 64-bit machine, it could be understandable.
Soon, the question arose: "Does libc32 behave differently on a 32-bit machine and a 64-bit machine? Are they different?" So I checked what libc32 had on 64-bit. However, the execve system call number for libc was 11, which is identical to the execve system call number on 32-bit systems. So where does the magic happen? Thank you in advance.
Here's the code. It's originated from https://www.linuxjournal.com/article/6100
...ANSWER
Answered 2021-Dec-26 at 06:37
execve is special; it's the only one that has special interaction with PTRACE_TRACEME. The way strace works, other system calls do show the 32-bit call number. (And modern strace needs special help to know whether that's a 32-bit call number for int 0x80 / sysenter, or a 64-bit call number, since 64-bit processes can still invoke int 0x80, although they normally shouldn't. This support was only added in 2019, with PTRACE_GET_SYSCALL_INFO.)
You're right, when the kernel is actually invoked, EAX holds 11, __NR_execve from unistd_32.h. It's set by mov $0xb,%eax before glibc's execve wrapper jumps to the VDSO page to enter the kernel via whatever efficient method is supported on this hardware (normally sysenter).
But execution doesn't actually stop until it reaches some code in the main execve implementation that checks for PTRACE_TRACEME and raises SIGTRAP.
Apparently sometime before that happens, it calls void set_personality_64bit(void) in arch/x86/kernel/process_64.c, which includes
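Setting the kernel internals aside, a minimal user-space sketch of the tracing setup the question describes (in the spirit of the linked Linux Journal article, updated for an x86-64 build; this is an assumed reconstruction, not the asker's exact code):

#include <stdio.h>
#include <sys/ptrace.h>
#include <sys/reg.h>      /* ORIG_RAX */
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t child = fork();
    if (child == 0) {
        ptrace(PTRACE_TRACEME, 0, NULL, NULL);
        execl("/bin/ls", "ls", (char *)NULL);  /* PTRACE_TRACEME makes execve deliver SIGTRAP */
    } else {
        wait(NULL);  /* parent catches the stop at the execve */
        /* Read the syscall number recorded at the stop; on a 64-bit kernel this
         * reports the 64-bit number, 59 (__NR_execve). */
        long nr = ptrace(PTRACE_PEEKUSER, child, (void *)(8 * ORIG_RAX), NULL);
        printf("syscall number at execve stop: %ld\n", nr);
        ptrace(PTRACE_CONT, child, NULL, NULL);
    }
    return 0;
}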
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported