sizeof | Configurable sizeOf engine for Ehcache | Dashboard library
kandi X-RAY | sizeof Summary
This library lets you size Java Object instances in bytes.
Top functions reviewed by kandi - BETA
- Determine the JVM information for the current JVM
- Detect the jvm information
- Checks whether the current JVM is 64-bit
- Detect the open JDK information
- Attempts to load the agent
- Checks if the agent is available
- Extracts the agent jar file from the jar
- Filter the collection of fields
- Search for an annotation on an element
- Checks if a custom annotation pattern matches the pattern
- Filter class
- Returns true if the annotation is present on the instance
- Checks if JRockit is enabled
- Retrieves the value of a platform MBean attribute
- Filter out classes
- Get a set of all keys contained in this map
- Returns the size of the object
- Guesses the size of the array
- Get the virtual machine class
- Returns a list of tools jars found in the JRE
- Returns the size of the specified object
sizeof Key Features
sizeof Examples and Code Snippets
Community Discussions
Trending Discussions on sizeof
QUESTION
Motivating background info: I maintain a C++ library, and I spent way too much time this weekend tracking down a mysterious memory-corruption problem in an application that links to this library. The problem eventually turned out to be caused by the fact that the C++ library was built with a particular -DBLAH_BLAH compiler-flag, while the application's code was being compiled without that -DBLAH_BLAH flag, and that led to the library-code and the application-code interpreting the classes declared in the library's header-files differently in terms of data-layout. That is: sizeof(ThisOneParticularClass) would return a different value when invoked from a .cpp file in the application than it would when invoked from a .cpp file in the library.
So far, so unfortunate -- I have addressed the immediate problem by making sure that the library and application are both built using the same preprocessor-flags, and I also modified the library so that the presence or absence of the -DBLAH_BLAH flag won't affect the sizeof() of its exported classes... but I feel like that wasn't really enough to address the more general problem of a library being compiled with different preprocessor-flags than the application that uses that library. Ideally I'd like to find a mechanism that would catch that sort of problem at compile-time, rather than allowing it to silently invoke undefined behavior at runtime. Is there a good technique for doing that? (All I can think of is to auto-generate a header file with #ifdef/#ifndef tests for the application code to #include, one that would deliberately #error out if the necessary #defines aren't set, or perhaps would automatically set the appropriate #defines right there... but that feels a lot like reinventing automake and similar, which seems like potentially opening a big can of worms.)
ANSWER
Answered 2022-Apr-04 at 16:07
One way of implementing such a check is to provide definition/declaration pairs for global variables that change, according to whether or not particular macros/tokens are defined. Doing so will cause a linker error if a declaration in a header, when included by a client source, does not match that used when building the library.
As a brief illustration, consider the following section, to be added to the "MyLibrary.h" header file (included both when building the library and when using it):
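The original snippet is not preserved here, but a minimal sketch of the idea might look like the following. The symbol and function names are hypothetical, and a real version would also need to ensure that every client translation unit actually references the chosen symbol (for example, by calling a helper that is already used), otherwise no link-time check happens:

```cpp
// ---- MyLibrary.h (hypothetical names) ------------------------------------
// The declared symbol's name encodes whether BLAH_BLAH was defined when the
// header was compiled. Any client code that calls myLibraryConfigCheck() (or
// otherwise references the variable) fails to link against a library that was
// built with the opposite setting, because the matching symbol is undefined.
#ifdef BLAH_BLAH
extern int myLibrary_built_with_BLAH_BLAH;
inline int myLibraryConfigCheck() { return myLibrary_built_with_BLAH_BLAH; }
#else
extern int myLibrary_built_without_BLAH_BLAH;
inline int myLibraryConfigCheck() { return myLibrary_built_without_BLAH_BLAH; }
#endif

// ---- MyLibrary.cpp --------------------------------------------------------
// Define only the symbol that matches the library's own build flags.
#ifdef BLAH_BLAH
int myLibrary_built_with_BLAH_BLAH = 1;
#else
int myLibrary_built_without_BLAH_BLAH = 1;
#endif
```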
QUESTION
Is use in a default member initializer still an odr-use, even if the default member initializer is not used by any constructor?
For example, is this program ill-formed because g is odr-used and therefore its definition implicitly instantiated?
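The program itself is not reproduced here; a hedged reconstruction of the kind of case being asked about (all names hypothetical) is one where instantiating g's definition would be ill-formed, yet no constructor ever uses the default member initializer that names it:

```cpp
struct Incomplete;                      // declared but never defined

template <class T>
int g() { return sizeof(T); }           // ill-formed if instantiated for Incomplete

struct B {
    int x = g<Incomplete>();            // default member initializer names g
    B(int x) : x(x) {}                  // this constructor never uses that initializer
};

int main() { B b{42}; }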
ANSWER
Answered 2022-Mar-13 at 00:25
As stated in the comments, g is odr-used. However, there is a definition for it available, so there is no non-diagnosable violation here; MSVC is wrong to accept it. (This is true even without the constructor declaration; the implicitly declared B::B() is never defined, but the default member initializer is still an odr-use like it is here.)
QUESTION
I made a bubble sort implementation in C, and was testing its performance when I noticed that the -O3 flag made it run even slower than no flags at all! Meanwhile -O2 was making it run a lot faster as expected.
Without optimisations:
...
ANSWER
Answered 2021-Oct-27 at 19:53
It looks like GCC's naïveté about store-forwarding stalls is hurting its auto-vectorization strategy here. See also Store forwarding by example for some practical benchmarks on Intel with hardware performance counters, and What are the costs of failed store-to-load forwarding on x86? Also Agner Fog's x86 optimization guides.
(gcc -O3 enables -ftree-vectorize and a few other options not included by -O2, e.g. if-conversion to branchless cmov, which is another way -O3 can hurt with data patterns GCC didn't expect. By comparison, Clang enables auto-vectorization even at -O2, although some of its optimizations are still only on at -O3.)
It's doing 64-bit loads (and branching to store or not) on pairs of ints. This means, if we swapped the last iteration, this load comes half from that store, half from fresh memory, so we get a store-forwarding stall after every swap. But bubble sort often has long chains of swapping every iteration as an element bubbles far, so this is really bad.
(Bubble sort is bad in general, especially if implemented naively without keeping the previous iteration's second element around in a register. It can be interesting to analyze the asm details of exactly why it sucks, so it is fair enough for wanting to try.)
Anyway, this is pretty clearly an anti-optimization you should report on GCC Bugzilla with the "missed-optimization" keyword. Scalar loads are cheap, and store-forwarding stalls are costly. (Can modern x86 implementations store-forward from more than one prior store? no, nor can microarchitectures other than in-order Atom efficiently load when it partially overlaps with one previous store, and partially from data that has to come from the L1d cache.)
Even better would be to keep buf[x+1] in a register and use it as buf[x] in the next iteration, avoiding a store and load, as in the sketch below. (Like good hand-written asm bubble sort examples, a few of which exist on Stack Overflow.)
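As a rough illustration of that idea (not code from the question or the answer), a scalar version that carries the running maximum in a local variable could look like this:

```cpp
#include <cstddef>

// Each inner-loop iteration loads only buf[x + 1] and carries the current
// maximum in `cur`, so buf[x] is never re-read from memory after a swap.
void bubble_sort(int* buf, std::size_t n) {
    if (n < 2) return;
    for (std::size_t end = n; end > 1; --end) {
        int cur = buf[0];
        for (std::size_t x = 0; x + 1 < end; ++x) {
            int next = buf[x + 1];
            if (cur > next) {
                buf[x] = next;          // store the smaller element back
                // cur keeps the larger value for the next comparison
            } else {
                buf[x] = cur;
                cur = next;
            }
        }
        buf[end - 1] = cur;             // write the carried maximum once per pass
    }
}
```

Note that this variant stores buf[x] on every iteration; the point it illustrates is avoiding the reload of buf[x] that causes the store-forwarding stalls described above.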
If it wasn't for the store-forwarding stalls (which AFAIK GCC doesn't know about in its cost model), this strategy might be about break-even. SSE 4.1 for a branchless pminsd / pmaxsd comparator might be interesting, but that would mean always storing, and the C source doesn't do that.
If this strategy of double-width load had any merit, it would be better implemented with pure integer on a 64-bit machine like x86-64, where you can operate on just the low 32 bits with garbage (or valuable data) in the upper half. E.g.,
QUESTION
Constraints in C++20 are normalized before being checked for satisfaction, by dividing them into atomic constraints. For example, the constraint E = E1 || E2 has two atomic constraints E1 and E2, and a substitution failure in an atomic constraint shall be considered a false value of that atomic constraint.
If we consider a sample program, where the concept Complete = sizeof(T)>0 checks whether the class T is defined:
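The sample program itself is not shown above; a hedged reconstruction consistent with the answer below (which refers to sizeof(void)>0 and sizeof(U)>0) could be:

```cpp
template <class T>
concept Complete = sizeof(T) > 0;       // satisfied only when T is a complete type

template <class T, class U>
    requires Complete<T> || Complete<U> // a disjunction of two atomic constraints
void f() {}

int main() {
    f<void, int>();   // Complete<void> fails by substitution failure,
                      // so satisfaction falls through to Complete<int>
}
```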
ANSWER
Answered 2022-Jan-07 at 15:39
This is Clang bug #49513; the situation and analysis is similar to this answer.
sizeof(T)>0 is an atomic constraint, so [temp.constr.atomic]/3 applies:
To determine if an atomic constraint is satisfied, the parameter mapping and template arguments are first substituted into its expression. If substitution results in an invalid type or expression, the constraint is not satisfied. [...]
sizeof(void)>0 is an invalid expression, so that constraint is not satisfied, and constraint evaluation proceeds to sizeof(U)>0.
As in the linked question, an alternative workaround is to use "requires requires requires"; demo:
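The demo itself is not reproduced here; a sketch of the workaround style being referred to (under the same assumptions as the reconstruction above) wraps the check in a requires-expression with a nested requirement, so three requires keywords end up adjacent and the substitution failure is localized:

```cpp
template <class T, class U>
    requires requires { requires sizeof(T) > 0; } ||
             requires { requires sizeof(U) > 0; }
void g() {}

int main() {
    g<void, int>();   // the nested requirement turns the substitution
                      // failure for void into "false" rather than an error
}
```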
QUESTION
From this comment in GCC bug #53119:
In C, ...{0} is the universal zero initializer, equivalent to C++'s {} (the latter being invalid in C). It is necessary to use whenever you want a zero-initialized object of a complete but conceptually-opaque or implementation-defined type. The classic example in the C standard library is mbstate_t:
ANSWER
Answered 2021-Nov-30 at 14:20
memset(p, 0, n) sets to all-bits-0.
An initializer of { 0 } sets to the value 0.
On just about any machine you've ever heard of, the two concepts are equivalent.
However, there have been machines where the floating-point value 0.0 was not represented by a bit pattern of all-bits-0. And there have been machines where a null pointer was not represented by a bit pattern of all-bits-0, either. On those machines, an initializer of { 0 } would always get you the zero initialization you wanted, while memset might not.
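A small self-contained illustration of the distinction (not taken from the answer; the struct and values are made up):

```cpp
#include <cstring>

struct S {
    double d;
    int*   p;
};

int main() {
    S a = { 0 };                  // value zero: a.d == 0.0 and a.p is a null
                                  // pointer, whatever their bit patterns are
    S b;
    std::memset(&b, 0, sizeof b); // all-bits-zero: the same thing on common
                                  // machines, but historically not guaranteed
                                  // to produce 0.0 or a null pointer
    return (a.p == nullptr) ? 0 : 1;
}
```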
See also question 7.31 and question 5.17 in the C FAQ list.
Postscript: It's not clear to me why mbstate_t would be a "classic example" of this issue, though.
P.P.S. One other difference, as pointed out by @ryker: memset will set any "holes" in a padded structure to 0, while setting that structure to { 0 } might not.
QUESTION
Ran the following in Visual Studio 2022 in release mode:
...
ANSWER
Answered 2021-Nov-19 at 15:51
TL;DR: an unfortunate combination of backward-compatibility and ABI-compatibility issues makes std::mutex bad until the next ABI break. OTOH, std::shared_mutex is good.
A decent implementation of std::mutex would try to use an atomic operation to acquire the lock; if it is busy, it would possibly spin in a read loop (with some pause on x86), and ultimately resort to an OS wait.
There are a couple of ways to implement such a std::mutex:
- Directly delegate to corresponding OS APIs that do all of above.
- Do spinning and atomic thing on its own, call OS APIs only for OS wait.
Sure, the first way is easier to implement, more friendly to debug, more robust. So it appears to be the way to go. The candidate APIs are:
- CRITICAL_SECTION APIs. A recursive mutex that lacks a static initializer and needs explicit destruction.
- SRWLOCK. A non-recursive shared mutex that has a static initializer and doesn't need explicit destruction.
- WaitOnAddress. An API to wait for a particular variable to change, similar to the Linux futex.
These primitives have OS version requirements:
- CRITICAL_SECTION has existed since, I think, Windows 95, though TryEnterCriticalSection was not present in Windows 9x, and the ability to use CRITICAL_SECTION with CONDITION_VARIABLE was added in Windows Vista, along with CONDITION_VARIABLE itself.
- SRWLOCK has existed since Windows Vista, but TryAcquireSRWLockExclusive only since Windows 7, so it can only directly implement std::mutex starting with Windows 7.
- WaitOnAddress was added in Windows 8.
By the time std::mutex was added, the Visual Studio C++ library still needed to support Windows XP, so it was implemented by doing these things on its own. In fact, std::mutex and the other sync primitives were delegated to ConCRT (the Concurrency Runtime).
For Visual Studio 2015, the implementation was switched to use the best available mechanism: SRWLOCK starting with Windows 7, and CRITICAL_SECTION starting with Windows Vista. ConCRT turned out not to be the best mechanism, but it was still used for Windows XP and 2003. The polymorphism was implemented by placement-new'ing classes with virtual functions into a buffer provided by std::mutex and the other primitives.
Note that this implementation breaks the requirement for the std::mutex constructor to be constexpr, because of the runtime detection, the placement new, and the inability of the pre-Windows 7 implementation to rely on a static initializer alone.
As time passed, support for Windows XP was finally dropped in VS 2019 and support for Windows Vista was dropped in VS 2022; a change was made to avoid ConCRT usage, and another change is planned to avoid even the runtime detection of SRWLOCK (disclosure: I've contributed these PRs). Still, due to ABI compatibility from VS 2015 through VS 2022, it is not possible to simplify the std::mutex implementation to avoid all this placement of classes with virtual functions.
What is more sad: though SRWLOCK has a static initializer, the said compatibility prevents us from having a constexpr mutex, since we have to placement-new the implementation there. It is not possible to avoid the placement new and make an implementation that constructs right inside std::mutex, because std::mutex has to be a standard-layout class (see Why is std::mutex a standard-layout class?).
So the size overhead comes from the size of ConCRT mutex.
And the runtime overhead comes from the chain of calls:
- a library function call to get to the standard library implementation
- a virtual function call to get to the SRWLOCK-based implementation
- finally, the Windows API call
The virtual function call is more expensive than usual due to the standard library DLLs being built with /guard:cf.
Some part of the runtime overhead is due to std::mutex filling in the ownership count and the locking thread, even though this information is not required for SRWLOCK; it comes from the internal structure shared with recursive_mutex. The extra information may be helpful for debugging, but it does take time to fill in.
std::shared_mutex was designed to support only systems starting with Windows 7, so it uses SRWLOCK directly.
The size of std::shared_mutex is the size of SRWLOCK, and SRWLOCK has the same size as a pointer (though internally it is not a pointer).
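A quick way to observe the size difference discussed above (the printed values depend on the toolchain and ABI; they are not specified by the standard):

```cpp
#include <iostream>
#include <mutex>
#include <shared_mutex>

int main() {
    // On recent MSVC, std::mutex still carries the legacy ConCRT-sized buffer,
    // while std::shared_mutex wraps a single pointer-sized SRWLOCK.
    std::cout << "sizeof(std::mutex)        = " << sizeof(std::mutex) << '\n'
              << "sizeof(std::shared_mutex) = " << sizeof(std::shared_mutex) << '\n';
}
```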
It still involves some avoidable overhead: it calls C++ runtime library, just to call Windows API, instead of calling Windows API directly. This looks fixable with the next ABI, though.
The std::shared_mutex constructor could be constexpr, as SRWLOCK does not need a dynamic initializer, but the standard prohibits voluntarily adding constexpr to standard classes.
QUESTION
If one defines a new variable in C++, then the name of the variable can be used in the initialization expression, for example:
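The snippet from the question is not preserved here; a minimal example of the behaviour being described (hypothetical, chosen to involve sizeof) would be:

```cpp
int main() {
    unsigned n = sizeof n;   // n is already in scope in its own initializer;
                             // sizeof does not read its (indeterminate) value
    return static_cast<int>(n) > 0 ? 0 : 1;
}
```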
...
ANSWER
Answered 2021-Oct-06 at 22:12
According to the C++17 standard (11.3.6 Default arguments):
9 A default argument is evaluated each time the function is called with no argument for the corresponding parameter. A parameter shall not appear as a potentially-evaluated expression in a default argument. Parameters of a function declared before a default argument are in scope and can hide namespace and class member names.
It provides the following example:
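The example given there reads approximately as follows (quoted from memory of [dcl.fct.default]; note the sizeof case, which is fine because it is an unevaluated operand):

```cpp
int a;
int f(int a, int b = a);             // error: parameter a
                                     // used as default argument
typedef int I;
int g(float I, int b = I(2));        // error: parameter I found
int h(int a, int b = sizeof(a));     // OK, unevaluated operand not
                                     // considered "potentially evaluated"
```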
QUESTION
An interesting discussion has arisen in the comments to this recent question. Now, although the language there is C, the discussion has drifted to what the C++ Standard specifies in terms of what constitutes undefined behaviour when accessing the elements of a multidimensional array using a function like std::memcpy.
First, here's the code from that question, converted to C++ and using const wherever possible:
ANSWER
Answered 2021-Sep-27 at 19:34
std::memcpy(arr_copy, arr, sizeof arr) (your example) is well-defined.
std::memcpy(arr_copy, arr[0], sizeof arr), on the other hand, causes undefined behavior (at least in C++; not entirely sure about C).
Multidimensional arrays are 1D arrays of arrays. As far as I know, they don't get much (if any) special treatment compared to true 1D arrays (i.e. arrays with elements of non-array type).
Consider an example with a 1D array:
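The example itself is not reproduced here; the following self-contained sketch (array names are assumptions) shows the contrast being drawn, with the 1D analogy in the comments:

```cpp
#include <cstring>

int main() {
    int arr[2][3]      = { {1, 2, 3}, {4, 5, 6} };
    int arr_copy[2][3] = {};

    // Well-defined: 'arr' decays to int(*)[3]; we copy the whole 2D object.
    std::memcpy(arr_copy, arr, sizeof arr);

    // Undefined (per the answer): 'arr[0]' decays to int*, a pointer into the
    // 3-element array object arr[0]; copying sizeof arr bytes through it would
    // read past the end of that array object, just as reading 6 ints through a
    // pointer into a plain int[3] would.
    // std::memcpy(arr_copy, arr[0], sizeof arr);

    return arr_copy[1][2] == 6 ? 0 : 1;
}
```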
QUESTION
According to cppref:
std::allocator::allocate_at_least
Allocates count * sizeof(T) bytes of uninitialized storage, where count is an unspecified integer value not less than n, by calling ::operator new (an additional std::align_val_t argument might be provided), but it is unspecified when and how this function is called.
Then, this function creates an array of type T[count] in the storage and starts its lifetime, but does not start the lifetime of any of its elements.
However, I think the already existing std::allocator::allocate can do the same thing. Why do we need std::allocator::allocate_at_least in C++23?
ANSWER
Answered 2021-Sep-08 at 07:18
This comes from the notes of cppref:
allocate_at_least is mainly provided for contiguous containers, e.g. std::vector and std::basic_string, in order to reduce reallocation by making their capacity match the actually allocated size when possible.
The "unspecified when and how" wording makes it possible to combine or optimize away heap allocations made by the standard library containers, even though such optimizations are disallowed for direct calls to ::operator new. For example, this is implemented by libc++.
After calling allocate_at_least and before construction of elements, pointer arithmetic of T* is well-defined within the allocated array, but the behavior is undefined if elements are accessed.
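A minimal usage sketch, assuming a C++23 standard library that ships std::allocator::allocate_at_least and std::allocation_result:

```cpp
#include <cstdio>
#include <memory>

int main() {
    std::allocator<int> alloc;

    // Ask for at least 100 ints; the implementation may round the request up
    // to a size class and reports how many elements actually fit.
    std::allocation_result<int*> r = alloc.allocate_at_least(100);
    std::printf("requested 100, usable count = %zu\n", r.count);

    // A vector-like container would set its capacity to r.count here, so the
    // capacity matches what was really allocated.
    alloc.deallocate(r.ptr, r.count);
}
```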
QUESTION
The C standard specifies:
A pointer to void shall have the same representation and alignment requirements as a pointer to a character type. Similarly, pointers to qualified or unqualified versions of compatible types shall have the same representation and alignment requirements. All pointers to structure types shall have the same representation and alignment requirements as each other. All pointers to union types shall have the same representation and alignment requirements as each other. Pointers to other types need not have the same representation or alignment requirements.
i.e. sizeof(int*) is not necessarily equal to sizeof(char*) - but sizeof(struct A*) is necessarily equal to sizeof(struct B*).
What is the rationale behind this requirement? As I understand it, the rationale behind differing sizes for basic types is to support use cases like near/far/huge pointers (edit: as was pointed out in comments and in the accepted answer, this is not the rationale) - but doesn't this same rationale apply to structs in different locations in memory?
ANSWER
Answered 2021-Aug-26 at 21:23
The answer is very simple: struct and union types can be declared as opaque types, i.e. without an actual definition of the struct or union details. If the representation of pointers were different depending on the structures' details, how would the compiler determine what representation to use for opaque pointers appearing as arguments, return values, or even just when reading them from or storing them to memory?
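A small, self-contained illustration of that point (all names are hypothetical):

```cpp
// "Client" side: Widget is an opaque type; only pointers to it are handled.
struct Widget;

Widget* widget_create();
void    widget_destroy(Widget*);

void client_use() {
    Widget* w = widget_create();   // the compiler must know how to pass, return
    widget_destroy(w);             // and store Widget* without seeing its members
}

// "Library" side: the full definition lives here.
struct Widget { int id; };
Widget* widget_create()           { return new Widget{7}; }
void    widget_destroy(Widget* w) { delete w; }

int main() { client_use(); }
```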
The natural consequence of the ability to manipulate opaque pointer types is that all such pointers must have the same representation. Note however that pointers to struct and pointers to union may have different representations, as may pointers to basic types such as char, int, double...
Another distinction regarding pointer representation is between pointers to data and pointers to functions, which may have a different size. Such a difference is more common in current architectures, albeit still rare outside operating system and device driver space. 64-bit for function pointers seems a waste as 4GB should be amply sufficient for code space, but modern architectures take advantage of this extra space to store pointer signatures to harden code against malicious attacks. Another use is to take advantage of hardware that ignores some of the pointer bits (eg: x86_64 ignores the top 16 bits) to store type information or to use NaN values unmodified as pointers.
Furthermore, the near/far/huge pointer attributes from legacy 16 bit code were not correctly addressed by this remark in the C Standard as all pointers could be near, far or huge. Yet the distinction between code pointers and data pointers in mixed model code was covered by it and seems still current on some OSes.
Finally, Posix mandates that all pointers have the same size and representation so mixed model code should quickly become a historical curiosity.
It is arguable that architectures where the representation is different for different data types are vanishingly rare nowadays, and that it is high time to clean up the standard and remove this option. The main objection is support for architectures where the addressable units are large words and 8-bit bytes are addressed using extra information, making char * and void * larger than regular pointers. Yet such architectures make pointer arithmetic very cumbersome and are quite rare too (I personally have never seen one).
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install sizeof
You can use sizeof like any standard Java library. Please include the jar files in your classpath. You can also use any IDE, and you can run and debug the sizeof component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org. For Gradle installation, please refer to gradle.org.