ductilej | A Java compiler plugin that turns Java into a mostly dynamically typed language | Bytecode library
kandi X-RAY | ductilej Summary
A Java compiler plugin that turns Java into a mostly dynamically typed language
Top functions reviewed by kandi - BETA
- Create a typed NULL expression from the given type.
ductilej Key Features
ductilej Examples and Code Snippets
Community Discussions
Trending Discussions on Bytecode
QUESTION
I am decompiling a Java application and have already finished 99% of the .class files. But I have a problem with a couple of them: errors during decompilation (all of the same type). Example:
Procyon: java.lang.IllegalArgumentException: Argument 'index' must be in the range [0, 63], but value was: 15873...
CFR:
...ANSWER
Answered 2022-Feb-25 at 12:29 There is nothing wrong with any of the decompilers I mentioned before.
It was a constant_pool_count issue, caused by some JPHP decompiler offset troubles. So, if you are trying to reverse JPHP applications, use your own software to split the .phb file into .class blocks, keeping in mind the couple of bytes that precede each of them.
QUESTION
I have the following class:
...ANSWER
Answered 2021-Nov-03 at 05:27 The MethodParameters attribute is used to indicate that parameters are final: https://docs.oracle.com/javase/specs/jvms/se17/html/jvms-4.html#jvms-4.7.24
In order for javac to add this attribute, you need to pass the -parameters option.
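As a small illustration (class and method names are invented for this example), the sketch below shows a final parameter compiled with javac -parameters and then inspected through the java.lang.reflect.Parameter API:

```java
import java.lang.reflect.Method;
import java.lang.reflect.Modifier;
import java.lang.reflect.Parameter;

public class ParamDemo {
    // The 'final' flag on this parameter only survives into the class file's
    // MethodParameters attribute when the class is compiled with -parameters.
    public void greet(final String name) { }

    public static void main(String[] args) throws Exception {
        Method m = ParamDemo.class.getMethod("greet", String.class);
        for (Parameter p : m.getParameters()) {
            // Without -parameters this prints a synthetic name like "arg0",
            // isNamePresent() returns false, and the final flag is absent.
            System.out.println(p.getName()
                    + " present=" + p.isNamePresent()
                    + " final=" + Modifier.isFinal(p.getModifiers()));
        }
    }
}
```

Compile with `javac -parameters ParamDemo.java` and run it; compiling without the flag makes the parameter name and the final modifier unavailable at runtime.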
QUESTION
When a method is called via invokevirtual, the calling method pops off the values to pass to the called method along with the objectref and places them in the new stack frame.
How does it know which stack entry is the objectref? My guess is that it does so by looking at the type signature of the called method and parsing it to determine how many values to pop off, but this seems extremely inefficient. Is there some other mechanism that I'm overlooking?
...ANSWER
Answered 2021-Nov-20 at 06:36 There's no one "right" way to do this, but the simplest strategy is to leave the values on the stack, and the called method refers to them via negative offsets. For example, if the called method has 3 params, they're referenced from the base stack offset minus 3, 2, and 1. Each is copied to a local variable and then referenced in the usual manner. The stack offset can be updated to reflect that the params have been consumed. Of course, each local param can also be initially assigned by a bunch of pops, one for each param.
Other tricks can be performed to speed things up. There's no reason that local variables need to be stored differently than the stack. They can be stored on the stack itself. The passed in params occupy their original locations on the stack, and then additional space is allocated for the remaining local variables by just updating the stack offset. A base stack offset is remembered, and all local variables are referenced via the base offset.
Essentially, a local variable is just like a stack slot, except it can be accessed at any time, regardless of what's currently been pushed on top.
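As a rough sketch of the idea described above (a toy model, not how any real JVM is implemented; all names are invented), a frame can treat the argument slots the caller pushed as its own first local variables simply by positioning its base pointer below the caller's stack pointer:

```java
/** Toy model only: the callee's locals alias the top of the caller's value stack. */
final class ToyFrame {
    final int[] values;  // one shared value stack for the whole toy interpreter
    int base;            // index where this frame's locals start
    int sp;              // next free slot (top of stack)

    ToyFrame(int[] values, int base, int sp) {
        this.values = values;
        this.base = base;
        this.sp = sp;
    }

    /** "Invoke" a callee whose nArgs arguments were already pushed by this frame. */
    ToyFrame invoke(int nArgs, int nLocals) {
        int calleeBase = sp - nArgs;          // the args sit at negative offsets from sp
        int calleeSp = calleeBase + nLocals;  // reserve space for the remaining locals
        return new ToyFrame(values, calleeBase, calleeSp);
    }

    int getLocal(int index) { return values[base + index]; }
    void push(int v) { values[sp++] = v; }
}
```

In this sketch, local 0 of the callee is literally the first argument slot the caller pushed, which is exactly the trick the answer describes.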
QUESTION
With gcc you can use -S to stop compilation after your code has been compiled into assembly. Is there a similar feature with Python/bytecode? I know of ways like:
...ANSWER
Answered 2021-Nov-09 at 22:25 If what you are looking for is the output of the disassembler, then you can run the module as a script:
QUESTION
I'm trying to understand python bytecode and I'm caught on CALL_FUNCTION and RETURN_VALUE.
Does a function have its own stack? If not, what does the documentation mean by "Returns TOS to the caller of the function"?
Sorry for the vagueness
...ANSWER
Answered 2021-Nov-03 at 09:06 In CPython every function gets its own stack; it's called a frame, and it's an implementation-specific detail (a very old one). Other implementations of Python, such as IronPython and Jython, don't have this functionality or implement it differently.
To clarify when we say stack there are multiple stacks involved:
- Python stack: The stack of frame objects
- Function values stack: The values in each frame object are stored in this stack to be operated on within the scope of this frame (see note 2 below)
- C stack: For C function calls
When a function is called, a new frame object is created first and placed on the Python stack. This frame object contains the code object of the function and the global variables the function has access to; the local variables defined in the function are also stored in the frame object.
You can get the current frames on the Python stack, as well as the current frame, using the utilities provided in the inspect module.
The issue with this is that a frame is a full Python object: it has its own type, PyFrame_Type, it is reference counted (it gets all the headers from PyVarObject), and it consumes some memory. If we have a chain of function calls, we end up creating these frame objects all over the heap each time.
In Python 3.11, the frame object will be replaced by an array of structs that won't have an object header. The frame objects will still be available, but only if we request them using inspect.currentframe() or sys._getframe().
Note 2: Function values stack. We can check the stack size of a function by accessing the co_stacksize attribute of the function's code object; this value is determined at compile time:
QUESTION
I compiled the following method:
...ANSWER
Answered 2021-Nov-02 at 16:39 Frontend compilers generate code using simple patterns, and they rely on optimization passes to clean things up. At the point that the x == y expression is generated, the compiler doesn't "know" that the very next thing is a return statement. It could potentially check this, but that extra step can be handled just as easily with some sort of peephole optimizer.
The benefit of a peephole optimizer is that it can perform cascading optimizations, that is, the result of one optimization can feed into the next one. The code that generated the x == y expression doesn't really have any way of performing anything more than one optimization step without adding more complexity.
The Java compiler used to have an optimization feature, but this was ditched in favor of HotSpot, which can perform even more powerful optimizations. Performing optimizations in the Java compiler would slow it down and not really improve things all that much.
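To make the pattern concrete, here is a hedged sketch (the class name is invented, and exact bytecode offsets may vary by javac version) of the kind of code javac typically emits for a method returning x == y; the comparison is materialized into a 0/1 value that flows to a single ireturn, rather than branching directly to two separate return instructions:

```java
public class CmpDemo {
    // Compile with: javac CmpDemo.java
    // Then inspect with: javap -c CmpDemo
    static boolean eq(int x, int y) {
        return x == y;
    }
    // Typical (unoptimized) javac output looks roughly like:
    //   0: iload_0
    //   1: iload_1
    //   2: if_icmpne     9
    //   5: iconst_1
    //   6: goto          10
    //   9: iconst_0
    //  10: ireturn
    // i.e. the boolean is built as a value first, and a later peephole pass
    // (or the JIT) would be the place to collapse the goto into a second ireturn.
}
```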
QUESTION
I know that compilers perform data structure alignment and padding according to 4-byte (for 32-bit systems) or 8-byte (for 64-bit systems) boundaries. But do interpreters align bytecode commands when they generate bytecode? If a command is encoded in 1 byte and operands are encoded in 1, 2, 4 or 8 bytes, it seems that this is not good for a processor fetching data if the bytecode is interpreted in a looped switch. What do you think?
P.S. I'm not asking about interpreters that perform JIT.
...ANSWER
Answered 2021-Oct-07 at 15:06 In general, the answer is no, but the JVM does require 32-bit alignment for the data portions of the lookupswitch and tableswitch instructions. Up to 3 bytes of padding (zeros) must be encoded to ensure proper alignment.
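As an illustration (the class name is invented for this example), a dense switch like the one below typically compiles to a tableswitch whose 32-bit operands are aligned with padding bytes:

```java
public class SwitchDemo {
    static int pick(int k) {
        switch (k) {            // dense case values -> javac emits tableswitch
            case 0: return 10;
            case 1: return 20;
            case 2: return 30;
            default: return -1;
        }
    }
    // In javap -c output, the tableswitch opcode is followed by 0-3 zero
    // padding bytes so that its 32-bit default/low/high operands start on a
    // 4-byte boundary relative to the start of the method's code array.
}
```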
QUESTION
I have this very simple class
...ANSWER
Answered 2021-Oct-05 at 18:44 Yes, it's because they are doubles. In the Java Virtual Machine Specification, section 2.6.1 "Local Variables", you can read:
A single local variable can hold a value of type boolean, byte, char, short, int, float, reference, or returnAddress. A pair of local variables can hold a value of type long or double.
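A small example (names invented for illustration) of how the slot numbering plays out for an instance method mixing int and double parameters:

```java
public class SlotDemo {
    // javap -v shows the LocalVariableTable for this method roughly as:
    //   this -> slot 0
    //   a    -> slot 1
    //   d    -> slots 2 and 3   (a double occupies a pair of slots)
    //   b    -> slot 4
    int sum(int a, double d, int b) {
        return a + (int) d + b;
    }
}
```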
QUESTION
I know that the bytecode specification allows classes to have methods with the same signature, differing only in the return type, unlike in the Java language. Some languages even make use of that under certain circumstances. My question is related to reflection:
if in a class I find a (non-private) method with the same name and parameter types as a (non-final, non-private) one in its superclass, and with a return type equal to or being a subtype of the return type of said method in the superclass, when can I assume that code invoking the 'supermethod' statically will always result in the execution of the 'overriding(?)' method (naturally assuming the call is made on an object of that class)? Even in cases of other languages compiled to JVM bytecode, or if runtime code generation is involved, or in hacked synthetic classes like the lambda forwarders?
My question was brought about by noticing how, in the Scala standard library, an Iterable[E] has a method:
ANSWER
Answered 2021-Sep-08 at 12:01 It all eventually depends on the JVM instruction used:
- invokespecial would invoke the method without doing dynamic resolution based on the type of the current object.
- invokevirtual would dispatch based on the class.
Related: Why invokeSpecial is needed when invokeVirtual exists
So the answer is it depends on the generated bytecode.
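To illustrate the distinction (class names are invented for this example), the sketch below shows a covariant override together with the instruction javac would typically emit at each call site:

```java
class Base {
    Object value() { return "base"; }
}

class Derived extends Base {
    @Override
    String value() { return "derived"; }   // covariant return: javac also emits a
                                           // synthetic bridge method Object value()
                                           // that forwards to this one

    String both() {
        String mine = value();             // invokevirtual -> dispatches on the runtime
                                           // class, so Derived.value() runs
        Object parents = super.value();    // invokespecial -> always Base.value(),
                                           // no dynamic dispatch
        return mine + "/" + parents;
    }
}
```

So a caller that was compiled against the superclass method still reaches the subclass implementation through invokevirtual, with the compiler-generated bridge method covering the covariant return type.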
QUESTION
I am compiling a simple language into JVM bytecode and having some issues with Java object method calls. The verifier gives the error below
java.lang.VerifyError: (class: Test_1, method: main signature: ()V) Expecting to find object/array on stack
and below is the Java source code that IntelliJ decompiled from my bytecode
...ANSWER
Answered 2021-Aug-24 at 12:44 The signature of the ArrayList.get method at 22 is wrong. The correct one is (I)Ljava/lang/Object;
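Assuming the bytecode is generated with the ASM library (an assumption; the question doesn't say which generator is used), a correct call site for ArrayList.get could be emitted roughly like this (requires ASM on the classpath):

```java
import org.objectweb.asm.MethodVisitor;
import org.objectweb.asm.Opcodes;

class ListGetEmitter {
    // Emits the equivalent of:  String s = (String) list.get(0);
    // assuming an ArrayList reference is already on the operand stack.
    static void emitGet(MethodVisitor mv) {
        mv.visitInsn(Opcodes.ICONST_0);                           // push the index
        mv.visitMethodInsn(Opcodes.INVOKEVIRTUAL,
                "java/util/ArrayList",
                "get",
                "(I)Ljava/lang/Object;",                          // erased descriptor: returns Object
                false);
        mv.visitTypeInsn(Opcodes.CHECKCAST, "java/lang/String");  // narrow to the expected element type
    }
}
```

Because the erased descriptor returns Object, a checkcast is normally needed before the result can be used as a more specific type.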
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install ductilej
You can use ductilej like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the ductilej component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.