dictionary | High-performance dictionary coding | Runtime Environment library
kandi X-RAY | dictionary Summary
dictionary Examples and Code Snippets
def flatten_dict_items(dictionary):
  """Returns a dictionary with flattened keys and values.

  This function flattens the keys and values of a dictionary, which can be
  arbitrarily nested structures, and returns the flattened version of such
  structures:

  ```python
  example_dictionary = {(4, 5, (6, 8)): ("a", "b", ("c", "d"))}
  result = {4: "a", 5: "b", 6: "c", 8: "d"}
  flatten_dict_items(example_dictionary) == result
  ```

  The input dictionary must satisfy two properties:

  1. Its keys and values should have the same exact nested structure.
  2. The set of all flattened keys of the dictionary must not contain repeated
     keys.

  Args:
    dictionary: the dictionary to zip

  Returns:
    The zipped dictionary.

  Raises:
    TypeError: If the input is not a dictionary.
    ValueError: If any key and value do not have the same structure layout, or
      if keys are not unique.
  """
  return _pywrap_nest.FlattenDictItems(dictionary)
def write_csv_from_dict(filename, input_dict):
  """Writes out a `.csv` file from an input dictionary.

  After writing out the file, it checks the new list against the golden to
  make sure golden file is up-to-date.

  Args:
    filename: String that is the output file name.
    input_dict: Dictionary that is to be written out to a `.csv` file.
  """
  f = open(PATH_TO_DIR + "/data/" + six.ensure_str(filename), "w")
  for k, v in six.iteritems(input_dict):
    line = k
    for item in v:
      line += "," + item
    f.write(line + "\n")
  f.flush()
  print("Wrote to file %s" % filename)
  check_with_golden(filename)
def _pyval_update_fields(pyval, fields, depth):
  """Append the field values from `pyval` to `fields`.

  Args:
    pyval: A python `dict`, or nested list/tuple of `dict`, whose value(s)
      should be appended to `fields`.
    fields: A dictionary mapping string keys to field values.  Field values
      extracted from `pyval` are appended to this dictionary's values.
    depth: The depth at which `pyval` should be appended to the field values.
  """
  if not isinstance(pyval, (dict, list, tuple)):
    raise ValueError('Expected dict or nested list/tuple of dict')
  for (key, target) in fields.items():
    for _ in range(1, depth):
      target = target[-1]
    target.append(pyval[key] if isinstance(pyval, dict) else [])
  if isinstance(pyval, (list, tuple)):
    for child in pyval:
      _pyval_update_fields(child, fields, depth + 1)
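As a hedged illustration of what `flatten_dict_items` is documented to do (the real implementation lives in the native `_pywrap_nest` module), here is a pure-Python sketch; `_flatten` and `flatten_dict_items_py` are hypothetical helper names, not part of the library:

```python
def _flatten(structure):
    # Hypothetical helper: flatten an arbitrarily nested tuple/list into a flat list.
    if isinstance(structure, (list, tuple)):
        flat = []
        for item in structure:
            flat.extend(_flatten(item))
        return flat
    return [structure]

def flatten_dict_items_py(dictionary):
    # Pure-Python sketch of the documented behavior of flatten_dict_items.
    if not isinstance(dictionary, dict):
        raise TypeError("Expected a dictionary")
    result = {}
    for keys, values in dictionary.items():
        flat_keys, flat_values = _flatten(keys), _flatten(values)
        if len(flat_keys) != len(flat_values):
            raise ValueError("key and value do not have the same structure layout")
        for k, v in zip(flat_keys, flat_values):
            if k in result:
                raise ValueError("flattened keys must be unique, got duplicate %r" % (k,))
            result[k] = v
    return result

example_dictionary = {(4, 5, (6, 8)): ("a", "b", ("c", "d"))}
print(flatten_dict_items_py(example_dictionary))
# {4: 'a', 5: 'b', 6: 'c', 8: 'd'}
```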
Trending Discussions on dictionary
QUESTION
I've got a project that works fine on Windows, but when I switched laptops and opened the existing project on a MacBook Pro M1, I was unable to run it. First I was getting:
Execution failed for task ':app:kaptDevDebugKotlin'. > A failure occurred while executing org.jetbrains.kotlin.gradle.internal.KaptExecution > java.lang.reflect.InvocationTargetException (no error message)
This error was caused by the Room database. I applied a fix by adding the library below before the Room dependencies, and by changing the JDK location in the project structure from JRE to JDK:
kapt "org.xerial:sqlite-jdbc:3.34.0"
//Room components
kapt "org.xerial:sqlite-jdbc:3.34.0"
implementation "androidx.room:room-ktx:$rootProject.roomVersion"
kapt "androidx.room:room-compiler:$rootProject.roomVersion"
androidTestImplementation "androidx.room:room-testing:$rootProject.roomVersion"
After that, I'm now getting the issue Unknown host CPU architecture: arm64.
There is an SDK in my project that uses the line below:
android {
externalNativeBuild {
ndkBuild {
path 'Android.mk'
}
}
ndkVersion '21.4.7075529'
}
App Gradle
externalNativeBuild {
cmake {
path "src/main/cpp/CMakeLists.txt"
version "3.18.1"
//version "3.10.2"
}
}
[CXX1405] error when building with ndkBuild using /Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/Android.mk: Build command failed. Error while executing process /Users/mac/Library/Android/sdk/ndk/21.4.7075529/ndk-build with arguments {NDK_PROJECT_PATH=null APP_BUILD_SCRIPT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/Android.mk APP_ABI=arm64-v8a NDK_ALL_ABIS=arm64-v8a NDK_DEBUG=1 APP_PLATFORM=android-21 NDK_OUT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/build/intermediates/cxx/Debug/4k4s2lc6/obj NDK_LIBS_OUT=/Users/mac/Desktop/Consumer-Android/ime/dictionaries/jnidictionaryv2/build/intermediates/cxx/Debug/4k4s2lc6/lib APP_SHORT_COMMANDS=false LOCAL_SHORT_COMMANDS=false -B -n} ERROR: Unknown host CPU architecture: arm64
which is causing this issue, and whenever I comment out this line
path 'Android.mk'
it starts working fine. Is there any way to run this project with this piece of code without hitting the NDK issue?
Update: It seems that Room got fixed in the latest updates, so you may consider updating Room to the latest version (2.3.0-alpha01 / 2.4.0-alpha03 or above).
ANSWER
Answered 2022-Apr-04 at 18:41
To solve this on an Apple Silicon M1 I found three options.
A. Use NDK 24
android {
ndkVersion "24.0.8215888"
...
}
You can install it with
echo "y" | sudo ${ANDROID_HOME}/tools/bin/sdkmanager --install 'ndk;24.0.8215888'
or
echo "y" | sudo ${ANDROID_HOME}/sdk/cmdline-tools/latest/bin/sdkmanager --install 'ndk;24.0.8215888'
depending on where sdkmanager is located.
B. Change your ndk-build to use Rosetta x86. Search for your installed NDK with
find ~ -name ndk-build 2>/dev/null
e.g.
vi ~/Library/Android/sdk/ndk/22.1.7171670/ndk-build
and change
DIR="$(cd "$(dirname "$0")" && pwd)"
$DIR/build/ndk-build "$@"
to
DIR="$(cd "$(dirname "$0")" && pwd)"
arch -x86_64 /bin/bash $DIR/build/ndk-build "$@"
QUESTION
I have a dictionary of the form:
{"level": [1, 2, 3],
"conf": [-1, 1, 2],
"text": ["here", "hel", "llo"]}
I want to filter the lists, removing the item at every index i where the value at that index in "conf" is not > 0. So for the above dict, the output should be this:
{"level": [2, 3],
"conf": [1, 2],
"text": ["hel", "llo"]}
As the first value of conf was not > 0.
I have tried something like this:
new_dict = {i: [a for a in j if a >= min_conf] for i, j in my_dict.items()}
But that would work just for one key.
ANSWER
Answered 2022-Feb-21 at 05:50
I believe this will work: for each list, we filter out the values at positions where conf is negative, and after that we filter conf itself.
d = {"level": [1, 2, 3], "conf": [-1, 1, 2], "text": ["-1", "hel", "llo"]}
for key in d:
    if key != "conf":
        d[key] = [d[key][i] for i in range(len(d[key])) if d["conf"][i] >= 0]
d["conf"] = [i for i in d["conf"] if i >= 0]
print(d)
A simpler solution (exactly the same, but using a dict comprehension, so we don't need to handle conf separately from the rest):
d = {"level":[1,2,3], "conf":[-1,1,2], "text":["-1","hel","llo"]}
d = {i:[d[i][j] for j in range(len(d[i])) if d["conf"][j] >= 0] for i in d}
Output: {'level': [2, 3], 'conf': [1, 2], 'text': ['hel', 'llo']}
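Equivalently (hedged as a variant, not the accepted answer), a boolean mask can be built from conf once and reused for every key via zip:

```python
d = {"level": [1, 2, 3], "conf": [-1, 1, 2], "text": ["here", "hel", "llo"]}

keep = [c >= 0 for c in d["conf"]]          # one mask, computed from conf once
filtered = {key: [item for item, ok in zip(values, keep) if ok]
            for key, values in d.items()}

print(filtered)
# {'level': [2, 3], 'conf': [1, 2], 'text': ['hel', 'llo']}
```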
QUESTION
This is my schema:
class Userattribute(BaseModel):
name: str
value: str
user_id: str
id: str
This is my model:
class Userattribute(Base):
__tablename__ = "user_attribute"
name = Column(String)
value = Column(String)
user_id = Column(String)
id = Column(String, primary_key=True, index=True)
In crud.py I define a get_attributes method.
def get_attributes(db: Session, skip: int = 0, limit: int = 100):
return db.query(models.Userattribute).offset(skip).limit(limit).all()
This is my GET endpoint:
@app.get("/attributes/", response_model=List[schemas.Userattribute])
def read_attributes(skip: int = 0, limit: int = 100, db: Session = Depends(get_db)):
users = crud.get_attributes(db, skip=skip, limit=limit)
print(users)
return users
The connection to the database seems to work, but there is a problem with the data type:
pydantic.error_wrappers.ValidationError: 7 validation errors for Userattribute
response -> 0
value is not a valid dict (type=type_error.dict)
response -> 1
value is not a valid dict (type=type_error.dict)
response -> 2
value is not a valid dict (type=type_error.dict)
response -> 3
value is not a valid dict (type=type_error.dict)
response -> 4
value is not a valid dict (type=type_error.dict)
response -> 5
value is not a valid dict (type=type_error.dict)
response -> 6
value is not a valid dict (type=type_error.dict)
Why does FastAPI expect a dictionary here? I don't really understand it, since I am not even able to print the response. How can I fix this?
ANSWER
Answered 2022-Mar-23 at 22:19
SQLAlchemy does not return a dictionary, which is what pydantic expects by default. You can configure your model to also support loading from standard orm parameters (i.e. attributes on the object instead of dictionary lookups):
class Userattribute(BaseModel):
name: str
value: str
user_id: str
id: str
class Config:
orm_mode = True
You can also attach a debugger right before the call to return to see what's being returned.
Since this answer has become slightly popular, I'd like to also mention that you can make orm_mode = True the default for your schema classes by having a common parent class that inherits from BaseModel:
class OurBaseModel(BaseModel):
class Config:
orm_mode = True
class Userattribute(OurBaseModel):
name: str
value: str
user_id: str
id: str
This is useful if you want to support orm_mode for most of your classes (and for those where you don't, inherit from the regular BaseModel).
QUESTION
I'm experimenting with Hunspell and how to interact with it using Java Project Panama (Build 19-panama+1-13 (2022/1/18)). I was able to get some initial testing done, as in creating a handle to Hunspell and subsequently using that to perform a spell check. I'm now trying something more elaborate: letting Hunspell give me suggestions for a word not present in the dictionary. This is the code that I have for that now:
public class HelloHun {
public static void main(String[] args) {
MemoryAddress hunspellHandle = null;
try (ResourceScope scope = ResourceScope.newConfinedScope()) {
var allocator = SegmentAllocator.nativeAllocator(scope);
// Point it to US english dictionary and (so called) affix file
// Note #1: it is possible to add words to the dictionary if you like
// Note #2: it is possible to have separate/individual dictionaries and affix files (e.g. per user/doc type)
var en_US_aff = allocator.allocateUtf8String("/usr/share/hunspell/en_US.aff");
var en_US_dic = allocator.allocateUtf8String("/usr/share/hunspell/en_US.dic");
// Get a handle to the Hunspell shared library and load up the dictionary and affix
hunspellHandle = Hunspell_create(en_US_aff, en_US_dic);
// Feed it a wrong word
var javaWord = "koing";
// Do a simple spell check of the word
var word = allocator.allocateUtf8String(javaWord);
var spellingResult = Hunspell_spell(hunspellHandle, word);
System.out.println(String.format("%s is spelled %s", javaWord, (spellingResult == 0 ? "incorrect" : "correct")));
// Hunspell also supports giving suggestions for a word - which is what we do next
// Note #3: by testing this `koing` word in isolation - we know that there are 4 alternatives for this word
// Note #4: I'm still investigating how to access individual suggestions
var suggestions = allocator.allocate(10);
var suggestionCount = Hunspell_suggest(hunspellHandle, suggestions, word);
System.out.println(String.format("There are %d suggestions for %s", suggestionCount, javaWord));
// `suggestions` - according to the hunspell API - is a `pointer to an array of strings pointer`
// we know how many `strings` pointer there are, as that is the returned value from `suggest`
// Question: how to process `suggestions` to get individual suggestions
} finally {
if (hunspellHandle != null) {
Hunspell_destroy(hunspellHandle);
}
}
}
}
What I'm seeing is that a call to Hunspell_suggest (created from jextract) succeeds and gives me back (4) suggestions (which I verified using Hunspell from the command line) - so no problem there.
What is more challenging for me now is how to unpack the suggestions element that comes back from this call. I've been looking at various examples, but none of them seem to go into this level of detail (and even when I find examples, they seem to be using outdated Panama APIs).
So in essence, here is my question:
How do I unpack a structure that reportedly consists of a pointer to an array of strings pointer using panama JDK19 APIs to their respective collection of strings?
ANSWER
Answered 2022-Feb-24 at 21:41Looking at the header here: https://github.com/hunspell/hunspell/blob/master/src/hunspell/hunspell.h#L80
/* suggest(suggestions, word) - search suggestions
* input: pointer to an array of strings pointer and the (bad) word
* array of strings pointer (here *slst) may not be initialized
* output: number of suggestions in string array, and suggestions in
* a newly allocated array of strings (*slts will be NULL when number
* of suggestion equals 0.)
*/
LIBHUNSPELL_DLL_EXPORTED int Hunspell_suggest(Hunhandle* pHunspell,
char*** slst,
const char* word);
The slst is a classic 'out' parameter, i.e. we pass a pointer to some value (in this case a char**, i.e. an array of strings), and the function will set this pointer for us, as a way to return multiple results (the first result being the number of suggestions).
In Panama you use 'out' parameters by allocating a segment with the layout of the type the parameter is a pointer of. In this case char*** is a pointer to char**, so the layout is ADDRESS. We then pass the created segment to the function, and finally retrieve/use the value from that segment after the function call, which will have filled in the segment contents:
// char***
var suggestionsRef = allocator.allocate(ValueLayout.ADDRESS); // allocate space for an address
var suggestionCount = Hunspell_suggest(hunspellHandle, suggestionsRef, word);
// char** (the value set by the function)
MemoryAddress suggestions = suggestionsRef.get(ValueLayout.ADDRESS, 0);
After that, you can iterate over the array of strings:
for (int i = 0; i < suggestionCount; i++) {
// char* (an element in the array)
MemoryAddress suggestion = suggestions.getAtIndex(ValueLayout.ADDRESS, i);
// read the string
String javaSuggestion = suggestion.getUtf8String(0);
}
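The same unpacking pattern - walking an array of C string pointers and decoding one string per slot - can be sketched in Python with ctypes. The suggestion values below are made up and no Hunspell binding is involved; this only illustrates the char** traversal:

```python
import ctypes

# Simulate the char** a call like Hunspell_suggest would hand back:
# an array of C string pointers (hypothetical suggestion strings).
suggestion_bytes = [b"going", b"king", b"kong", b"koine"]
array_type = ctypes.c_char_p * len(suggestion_bytes)
suggestions = ctypes.cast(array_type(*suggestion_bytes),
                          ctypes.POINTER(ctypes.c_char_p))
suggestion_count = len(suggestion_bytes)

# Walk the pointer array, decoding one string per slot -- the analogue of
# getAtIndex(...) followed by getUtf8String(...) in the Panama answer.
decoded = [suggestions[i].decode() for i in range(suggestion_count)]
print(decoded)
# ['going', 'king', 'kong', 'koine']
```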
QUESTION
I have the following dataframe, where col2 is a dictionary with a list of tuples as values. The keys are consistently 'added' and 'deleted' across the whole dataframe.
Input df
col1    col2
value1  {'added': [(59, 'dep1_v2'), (60, 'dep2_v2')], 'deleted': [(59, 'dep1_v1'), (60, 'dep2_v1')]}
value2  {'added': [(61, 'dep3_v2')], 'deleted': [(61, 'dep3_v1')]}
Here's a copy-pasteable example dataframe:
jsons = ["{'added': [(59, 'dep1_v2'), (60, 'dep2_v2')], 'deleted': [(59, 'dep1_v1'), (60, 'dep2_v1')]}",
"{'added': [(61, 'dep3_v2')], 'deleted': [(61, 'dep3_v1')]}"]
df = pd.DataFrame({"col1": ["value1", "value2"], "col2": jsons})
Edit: col2 comes directly from the diff_parsed field of the pydriller output.
I want to "explode" col2 so that I obtain the following result:
Desired output
col1    number  added    deleted
value1  59      dep1_v2  dep1_v1
value1  60      dep2_v2  dep2_v1
value2  61      dep3_v2  dep3_v1
So far, I tried the following:
df = df.join(pd.json_normalize(df.col2))
df.drop(columns=['col2'], inplace=True)
The above code is simplified; I first manipulate the column to convert it to proper JSON. It was an attempt to first explode on 'added' and 'deleted' and then play around with the format to obtain what I want... but the list of tuples is not preserved, and I obtain the following:
col1    added                     deleted
value1  59, dep1_v2, 60, dep2_v2  59, dep1_v1, 60, dep2_v1
value2  61, dep3_v1               61, dep3_v2
Thanks
ANSWER
Answered 2022-Feb-03 at 01:47
Here's a solution. It's a little long, but it works:
tmp = (pd.concat([df, pd.json_normalize(df['col2'])], axis=1)
         .drop('col2', axis=1)
         .explode(['added', 'deleted']))
new_df = pd.concat(
    [tmp.drop(['added', 'deleted'], axis=1).reset_index(drop=True),
     pd.DataFrame(tmp['added'].tolist())
         .merge(pd.DataFrame(tmp['deleted'].tolist()), on=0)
         .set_axis(['number', 'added', 'deleted'], axis=1)],
    axis=1)
Output:
>>> new_df
col1 number added deleted
0 value1 59 dep1_v2 dep1_v1
1 value1 60 dep2_v2 dep2_v1
2 value2 61 dep3_v2 dep3_v1
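A shorter variant (a sketch, not the accepted answer): it assumes the string column is first parsed with ast.literal_eval, as the question mentions doing, and relies on pandas' multi-column explode (available since pandas 1.3), then pulls the shared number out of the tuples directly:

```python
import ast
import pandas as pd

jsons = ["{'added': [(59, 'dep1_v2'), (60, 'dep2_v2')], 'deleted': [(59, 'dep1_v1'), (60, 'dep2_v1')]}",
         "{'added': [(61, 'dep3_v2')], 'deleted': [(61, 'dep3_v1')]}"]
df = pd.DataFrame({"col1": ["value1", "value2"],
                   "col2": [ast.literal_eval(j) for j in jsons]})  # strings -> dicts

tmp = df.join(pd.json_normalize(df.pop("col2"))).explode(["added", "deleted"])
tmp["number"] = tmp["added"].str[0]   # first tuple element is the shared id
tmp["added"] = tmp["added"].str[1]
tmp["deleted"] = tmp["deleted"].str[1]
new_df = tmp[["col1", "number", "added", "deleted"]].reset_index(drop=True)
print(new_df)
```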
QUESTION
I encountered strange behavior with the Dict collection in Julia. A dictionary can be defined like this:
dictionary = Dict(1 => 77, 2 => 66, 3 => 1)
and you can access the keys using keys:
> keys(dictionary)
# [output]
KeySet for a Dict{Int64, Int64} with 3 entries. Keys:
2
3
1
# now I want to make sure Julia keeps the above order, so I use collect and then call the first element of it
> collect(keys(dictionary))[1]
# [output]
2
As you can see, the order of the keys in the keys(dictionary) output is strange. It seems Julia doesn't preserve the order of the key => value pairs from the input! It doesn't even appear to be sorted ascending or descending. How does Julia order the keys(dictionary) output?
Expected Output:
> keys(dictionary)
# [output]
KeySet for a Dict{Int64, Int64} with 3 entries. Keys:
1
2
3
> collect(keys(dictionary))[1]
# [output]
1
I expect keys(dictionary) to give me the keys in the order in which I entered them when defining dictionary.
ANSWER
Answered 2022-Jan-29 at 19:41
The key order in Dict is currently undefined (this might change in the future). If you want the order to be preserved, use OrderedDict from DataStructures.jl:
julia> using DataStructures
julia> dictionary = OrderedDict(1 => 77, 2 => 66, 3 => 1)
OrderedDict{Int64, Int64} with 3 entries:
1 => 77
2 => 66
3 => 1
julia> keys(dictionary)
KeySet for a OrderedDict{Int64, Int64} with 3 entries. Keys:
1
2
3
QUESTION
I have a list of Ids that I wish to associate with a property from another list: their 'rows'. I have found a way to do it by making smaller dictionaries and concatenating them together, which works, but I wondered if there is a more Pythonic way to do it.
Code
row1 = list(range(1, 6, 1))
row2 = list(range(6, 11, 1))
row3 = list(range(11, 16, 1))
row4 = list(range(16, 21, 1))
row1_dict = {}
row2_dict = {}
row3_dict = {}
row4_dict = {}
for n in row1:
row1_dict[n] = 1
for n in row2:
row2_dict[n] = 2
for n in row3:
row3_dict[n] = 3
for n in row4:
row4_dict[n] = 4
id_to_row_dict = {}
id_to_row_dict = {**row1_dict, **row2_dict, **row3_dict, **row4_dict}
print('\n')
for k, v in id_to_row_dict.items():
print(k, " : ", v)
Output of the dictionary, which I want to replicate more pythonically:
1 : 1
2 : 1
3 : 1
4 : 1
5 : 1
6 : 2
7 : 2
8 : 2
9 : 2
10 : 2
11 : 3
12 : 3
13 : 3
14 : 3
15 : 3
16 : 4
17 : 4
18 : 4
19 : 4
20 : 4
Desired output
Same as my output above; I just want to see if there is a better way to do it.
ANSWER
Answered 2021-Dec-17 at 08:09
This dict comprehension should do it:
rows = [row1, row2, row3, row4]
{k: v for v, row in enumerate(rows, 1) for k in row}
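Applied to the question's data, the comprehension reproduces the hand-built mapping (enumerate(rows, 1) pairs each row with its 1-based row number):

```python
row1 = list(range(1, 6))
row2 = list(range(6, 11))
row3 = list(range(11, 16))
row4 = list(range(16, 21))

rows = [row1, row2, row3, row4]
# each key k in a row maps to that row's 1-based number v
id_to_row_dict = {k: v for v, row in enumerate(rows, 1) for k in row}

print(id_to_row_dict[1], id_to_row_dict[10], id_to_row_dict[20])
# 1 2 4
```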
QUESTION
Let's say I have two dictionaries, and I now want to measure the time needed to check whether a key is in a dictionary. I tried to run this piece of code:
from timeit import timeit
dct1 = {str(i): 1 for i in range(10**7)}
dct2 = {i: 1 for i in range(10**7)}
print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8))
print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8))
Here are the results that I get:
2.529034548999334
2.212983401999736
Now, let's say I try to mix integers and strings in both dictionaries, and measure access time again:
dct1[7] = 1
dct2["7"] = 1
print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8))
print(timeit('7 in dct1', setup='from __main__ import dct1', number=10**8))
print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8))
print(timeit('"7" in dct2', setup='from __main__ import dct2', number=10**8))
I get something weird:
3.443614432000686
2.6335261530002754
2.1873921409987815
2.272667104998618
The first value is much higher than what I had before (3.44 vs 2.52). However, the third value is basically the same as before (2.18 vs 2.21). Why is this happening? Can you reproduce the same thing or is this only me? Also, I can't understand the big difference between the first and the second value: it looks like it's more difficult to access a string key, but the same thing seems to apply only slightly to the second dictionary. Why?
Update
You don't even need to actually add a new key. All you need to do to see an increase in lookup time is check whether a key of a different type exists! This is much weirder than I thought. Look at the example here:
from timeit import timeit
dct1 = {str(i): 1 for i in range(10**7)}
dct2 = {i: 1 for i in range(10**7)}
print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8))
# 2.55
print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8))
# 2.26
7 in dct1
"7" in dct2
print(timeit('"7" in dct1', setup='from __main__ import dct1', number=10**8))
# 3.34
print(timeit('7 in dct2', setup='from __main__ import dct2', number=10**8))
# 2.35
ANSWER
Answered 2021-Sep-21 at 10:20
Let me try to answer my own question. The dict implementation in CPython is optimised for lookups of str keys. Indeed, there are two different functions that are used to perform lookups:
- lookdict is a generic dictionary lookup function that is used with all types of keys
- lookdict_unicode is a specialised lookup function used for dictionaries composed of str-only keys
Python will use the string-optimised version until a search for non-string data, after which the more general function is used.
And it looks like you cannot even reverse the behaviour of a particular dict instance: once it starts using the generic function, you can't go back to using the specialised one!
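The switch can be reproduced with a smaller dictionary than the question's (timings are machine-dependent, so no expected numbers are given; the point is that the first lookup gets slower after a single non-str probe):

```python
from timeit import timeit

dct = {str(i): 1 for i in range(10**5)}   # str-only keys: lookdict_unicode applies

t_before = timeit('"7" in dct', globals=globals(), number=10**5)
7 in dct                                   # a single non-str probe switches the
                                           # dict to the generic lookdict path
t_after = timeit('"7" in dct', globals=globals(), number=10**5)

print(f"str-only: {t_before:.4f}s, after non-str probe: {t_after:.4f}s")
```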
QUESTION
I was reading about differences between threads and processes, and literally everywhere online, one difference is commonly written without much explanation:
If a process gets blocked, remaining processes can continue execution. If a user level thread gets blocked, all of its peer threads also get blocked.
It doesn't make any sense to me. What would be the point of concurrency if a scheduler could not switch between a blocked thread and a ready/runnable thread? The reason given is that since the OS doesn't differentiate between the various threads of a given parent process, it blocks all of them at once.
I find it very unconvincing, since all modern OS have thread control blocks with a thread ID, even if it is valid only within the memory space of the parent process. Like the example given in Galvin's Operating Systems book, I wouldn't want the thread which is handling my typing to be blocked if the spell checking thread cannot connect to some online dictionary, perhaps.
Either I am understanding this concept wrong, or all these websites have just copied some old thread differences over the years. Moreover, I cannot find this statement in books, like Galvin's or maybe in William Stalling's COA book where threads have been discussed.
These are the resources where I found the statements:
ANSWER
Answered 2021-Aug-30 at 11:12
There is a difference between kernel-level and user-level threads. In simple words:
- Kernel-level threads: Threads that are managed by the operating system, including scheduling. They are what is executed on the processor. That's what probably most of us think of threads.
- User-level threads: Threads that are managed by the program itself. They are also called fibers or coroutines in some contexts. In contrast to kernel-level threads, they need to "yield the execution", i.e. switching from one user-level to another user-level thread is done explicitly by the program. User-level threads are mapped to kernel-level threads.
As user-level threads need to be mapped to kernel-level threads, you need to choose a suitable mapping. You could map each user-level thread to a separate kernel-level thread, or map many user-level threads to one kernel-level thread. In the latter mapping, you let multiple concurrent execution paths be executed by a single thread "as we know it". If one of those paths blocks (recall that user-level threads need to yield the execution), then the executing (kernel-level) thread blocks, which causes all other assigned paths to also be effectively blocked. I think this is what the statement refers to. FYI: in Java, user-level threads - the multithreading you do in your programs - are mapped to kernel-level threads by the JVM, i.e. the runtime system.
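The many-to-one case can be sketched in Python with generators standing in for user-level threads (all names here are hypothetical, a toy model rather than a real threading API): one kernel-level thread runs a round-robin scheduler, so a single blocking call stalls every peer.

```python
import time

def worker(name, log):
    # a cooperative user-level thread: does a little work, then yields
    for i in range(2):
        log.append(f"{name}:{i}")
        yield

def blocking_worker(name, log):
    log.append(f"{name}:start")
    time.sleep(0.05)   # a blocking call: the whole kernel-level thread stalls here
    log.append(f"{name}:end")
    yield

log = []
tasks = [worker("a", log), blocking_worker("b", log), worker("c", log)]

# round-robin scheduler running on ONE kernel-level thread
while tasks:
    task = tasks.pop(0)
    try:
        next(task)          # run until the user-level thread yields
        tasks.append(task)
    except StopIteration:
        pass

print(log)
# ['a:0', 'b:start', 'b:end', 'c:0', 'a:1', 'c:1']
```

While "b" sleeps, neither "a" nor "c" makes progress: "b:start" and "b:end" are adjacent in the log, which is exactly the blocking behavior the quoted statement describes.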
Related stuff:
QUESTION
Consider the following metaclass/class definitions:
class Meta(type):
"""A python metaclass."""
def greet_user(cls):
"""Print a friendly greeting identifying the class's name."""
print(f"Hello, I'm the class '{cls.__name__}'!")
class UsesMeta(metaclass=Meta):
"""A class that uses `Meta` as its metaclass."""
As we know, defining a method in a metaclass means that it is inherited by the class, and can be used by the class. This means that the following code in the interactive console works fine:
>>> UsesMeta.greet_user()
Hello, I'm the class 'UsesMeta'!
However, one major downside of this approach is that any documentation that we might have included in the definition of the method is lost. If we type help(UsesMeta) into the interactive console, we see that there is no reference to the method greet_user, let alone the docstring that we put in the method definition:
Help on class UsesMeta in module __main__:
class UsesMeta(builtins.object)
| A class that uses `Meta` as its metaclass.
|
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
Now of course, the __doc__ attribute for a class is writable, so one solution would be to rewrite the metaclass/class definitions like so:
from pydoc import render_doc
from functools import cache
def get_documentation(func_or_cls):
"""Get the output printed by the `help` function as a string"""
return '\n'.join(render_doc(func_or_cls).splitlines()[2:])
class Meta(type):
"""A python metaclass."""
@classmethod
@cache
def _docs(metacls) -> str:
"""Get the documentation for all public methods and properties defined in the metaclass."""
divider = '\n\n----------------------------------------------\n\n'
metacls_name = metacls.__name__
metacls_dict = metacls.__dict__
methods_header = (
f'Classmethods inherited from metaclass `{metacls_name}`'
f'\n\n'
)
method_docstrings = '\n\n'.join(
get_documentation(method)
for method_name, method in metacls_dict.items()
if not (method_name.startswith('_') or isinstance(method, property))
)
properties_header = (
f'Classmethod properties inherited from metaclass `{metacls_name}`'
f'\n\n'
)
properties_docstrings = '\n\n'.join(
f'{property_name}\n{get_documentation(prop)}'
for property_name, prop in metacls_dict.items()
if isinstance(prop, property) and not property_name.startswith('_')
)
return ''.join((
divider,
methods_header,
method_docstrings,
divider,
properties_header,
properties_docstrings,
divider
))
def __new__(metacls, cls_name, cls_bases, cls_dict):
"""Make a new class, but tweak `.__doc__` so it includes information about the metaclass's methods."""
new = super().__new__(metacls, cls_name, cls_bases, cls_dict)
metacls_docs = metacls._docs()
if new.__doc__ is None:
new.__doc__ = metacls_docs
else:
new.__doc__ += metacls_docs
return new
def greet_user(cls):
"""Print a friendly greeting identifying the class's name."""
print(f"Hello, I'm the class '{cls.__name__}'!")
class UsesMeta(metaclass=Meta):
"""A class that uses `Meta` as its metaclass."""
This "solves" the problem; if we now type help(UsesMeta) into the interactive console, the methods inherited from Meta are now fully documented:
Help on class UsesMeta in module __main__:
class UsesMeta(builtins.object)
| A class that uses `Meta` as its metaclass.
|
| ----------------------------------------------
|
| Classmethods inherited from metaclass `Meta`
|
| greet_user(cls)
| Print a friendly greeting identifying the class's name.
|
| ----------------------------------------------
|
| Classmethod properties inherited from metaclass `Meta`
|
|
|
| ----------------------------------------------
|
| Data descriptors defined here:
|
| __dict__
| dictionary for instance variables (if defined)
|
| __weakref__
| list of weak references to the object (if defined)
That's an awful lot of code to achieve this goal, however. Is there a better way?
How does the standard library do it?
I'm also curious about the way certain classes in the standard library manage this. If we have an Enum definition like so:
from enum import Enum
class FooEnum(Enum):
BAR = 1
Then, typing help(FooEnum) into the interactive console includes this snippet:
| ----------------------------------------------------------------------
| Readonly properties inherited from enum.EnumMeta:
|
| __members__
| Returns a mapping of member name->value.
|
| This mapping lists all enum members, including aliases. Note that this
| is a read-only view of the internal mapping.
How exactly does the enum module achieve this?
The reason why I'm using metaclasses here, rather than just defining classmethods in the body of a class definition:
Some methods that you might write in a metaclass, such as __iter__, __getitem__ or __len__, can't be written as classmethods, but can lead to extremely expressive code if you define them in a metaclass. The enum module is an excellent example of this.
ANSWER
Answered 2021-Aug-22 at 18:43
I haven't looked at the rest of the stdlib, but EnumMeta accomplishes this by overriding the __dir__ method (i.e. specifying it in the EnumMeta class):
class EnumMeta(type):
.
.
.
def __dir__(self):
return (
['__class__', '__doc__', '__members__', '__module__']
+ self._member_names_
)
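The same trick can be applied to the question's own Meta. This is a minimal sketch: overriding __dir__ on the metaclass makes the inherited method show up in dir(UsesMeta) (which pydoc consults), though it does not by itself render the docstring the way the question's __doc__-rewriting approach does.

```python
class Meta(type):
    """A python metaclass."""

    def greet_user(cls):
        """Print a friendly greeting identifying the class's name."""
        print(f"Hello, I'm the class '{cls.__name__}'!")

    def __dir__(cls):
        # advertise the metaclass method alongside the default entries
        return sorted(set(super().__dir__()) | {"greet_user"})

class UsesMeta(metaclass=Meta):
    """A class that uses `Meta` as its metaclass."""

print("greet_user" in dir(UsesMeta))
# True
```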
Community Discussions, Code Snippets contain sources that include Stack Exchange Network