wiki | Starting point for finding getting started information | Blockchain library
kandi X-RAY | wiki Summary
SingularityNET is an open and decentralized network of AI services made accessible through the blockchain. AI developers publish their services onto the SingularityNET network, where they can be used by anyone with an internet connection. AI developers are able to charge for the use of their services using the native AGI token -- an ERC20 token hosted on Ethereum. AI services span the entire gamut of offerings in artificial intelligence and machine learning. AI developers can provide services for inference or model training across myriad domains such as image/video, speech, text, time-series, bio-AI, network analysis, etc. AI services can be as small as individual algorithms or as large as end-to-end solutions and standalone AI applications. AI developers can also deploy autonomous AI agents that interoperate with other services on the network.
Community Discussions
Trending Discussions on wiki
QUESTION
I am trying to run a CentOS 8 server through VirtualBox (6.1.30) (Vagrant), which worked just fine for me yesterday, but today running sudo yum update keeps giving me this error:
ANSWER
Answered 2022-Mar-26 at 20:59
Check out this article: CentOS Linux EOL
The below commands helped me:
QUESTION
I have an array of positive integers. For example:
...ANSWER
Answered 2022-Feb-27 at 22:44
This problem has a fun O(n) solution.
If you draw a graph of cumulative sum vs index, then:
The average value in the subarray between any two indexes is the slope of the line between those points on the graph.
The first highest-average-prefix will end at the point that makes the highest angle from 0. The next highest-average-prefix must then have a smaller average, and it will end at the point that makes the highest angle from the first ending. Continuing to the end of the array, we find that these segments of highest average are exactly the segments in the upper convex hull of the cumulative sum graph.
Find these segments using the monotone chain algorithm. Since the points are already sorted, it takes O(n) time.
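A minimal Python sketch of that idea (the function name highest_average_segments is made up for illustration, and the question's concrete array was elided above):

```python
def highest_average_segments(arr):
    """Split arr into consecutive segments of non-increasing average using
    the upper convex hull of the cumulative-sum graph (O(n) overall)."""
    # Cumulative-sum points (index, running total), starting at (0, 0).
    points = [(0, 0)]
    total = 0
    for i, v in enumerate(arr, start=1):
        total += v
        points.append((i, total))

    def cross(o, a, b):
        # > 0 means the turn o -> a -> b is counterclockwise.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    # Monotone chain, left to right: pop on counterclockwise or collinear
    # turns so that only upper-hull vertices survive.
    hull = []
    for p in points:
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)

    # Consecutive hull vertices delimit the segments; the slope between
    # them is that segment's average.
    return [(arr[i0:i1], (s1 - s0) / (i1 - i0))
            for (i0, s0), (i1, s1) in zip(hull, hull[1:])]

print(highest_average_segments([3, 1, 2]))  # [([3], 3.0), ([1, 2], 1.5)]
```

Each consecutive pair of hull vertices delimits one segment, and because the hull is convex the slopes (segment averages) come out non-increasing, matching the argument above.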
QUESTION
Haskell typeclasses often come with laws; for instance, instances of Monoid are expected to observe that x <> mempty = mempty <> x = x.
Typeclass laws are often written with single-equals (=) rather than double-equals (==). This suggests that the notion of equality used in typeclass laws is something other than that of Eq (which makes sense, since Eq is not a superclass of Monoid).
Searching around, I was unable to find any authoritative statement on the meaning of = in typeclass laws. For instance:
- The Haskell 2010 report does not even contain the word "law" in it.
- Speaking with other Haskell users, most people seem to believe that = usually means extensional equality or substitution but is fundamentally context-dependent. Nobody provided any authoritative source for this claim.
- The Haskell wiki article on monad laws states that = is extensional but, again, fails to provide a source, and I wasn't able to track down any way to contact the author of the relevant edit.
The question, then: Is there any authoritative source on or standard for the semantics of = in typeclass laws? If so, what is it? Additionally, are there examples where the intended meaning of = is particularly exotic?
(As a side note, treating = extensionally can get tricky. For instance, there is a Monoid (IO a) instance, but it's not really clear what extensional equality of IO values looks like.)
ANSWER
Answered 2022-Feb-24 at 22:30
Typeclass laws are not part of the Haskell language, so they are not subject to the same kind of language-theoretic semantic analysis as the language itself.
Instead, these laws are typically presented as an informal mathematical notation. Most presentations do not need a more detailed mathematical exposition, so they do not provide one.
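For what it's worth, the extensional reading most respondents gestured at can be stated precisely for functions. The following is an illustration of that convention, not a definition drawn from the Haskell Report:

$$ f = g \;\iff\; \forall x.\; f\,x = g\,x $$

That is, two functions are deemed equal when no argument distinguishes their results, regardless of how they are defined internally.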
QUESTION
In JavaScript, values of objects and arrays can be indexed like the following: objOrArray[index]. Is there an identity "index" value?
In other words:
Is there a value of x that makes the following always true?
ANSWER
Answered 2021-Oct-05 at 01:31
The indexing operation doesn't have an identity element. The domain and range of indexing are not necessarily the same -- the domain is arrays and objects, but the range is any type, since array elements and object properties can hold any type. If you have an array of integers, the domain is Array, while the range is Integer, so it's not possible for there to be an identity: a[x] will always be an integer, which can never be equal to the array itself.
And even if you have an array of arrays, there's no reason to expect any of the elements to be a reference to the array itself. It's possible to create self-referential arrays like this, but most are not. And even if it is, the self-reference could be in any index, so there's no unique identity value.
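The question concerns JavaScript, but the self-reference point is easy to demonstrate in Python (chosen here purely for illustration):

```python
a = [10, 20]
a.append(a)        # self-reference stored at index 2
assert a[2] is a   # holds for this particular list

b = []
b.insert(0, b)     # here the self-reference lands at index 0 instead
assert b[0] is b   # so no single x satisfies obj[x] is obj for every obj
```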
QUESTION
I'm trying to make a Discord bot that simply reports whether someone is online in the game. However, I keep getting this message:
[ERR_REQUIRE_ESM]: require() of ES Module from not supported. Instead change the require of index.js in... to a dynamic import() which is available in all CommonJS modules.
This is my code:
...ANSWER
Answered 2021-Sep-07 at 06:38
node-fetch v3 recently stopped support for the require way of importing it in favor of ES Modules. You'll need to use ESM imports now, like:
QUESTION
With regard to the Log4j JNDI remote code execution vulnerability that has been identified as CVE-2021-44228 (also see references), I wondered if Log4j v1.2 is also impacted, but the closest I got from source code review is the JMSAppender.
The question is, while the posts on the Internet indicate that Log4j 1.2 is also vulnerable, I am not able to find the relevant source code for it.
Am I missing something that others have identified?
Log4j 1.2 appears to have a vulnerability in the socket-server class, but my understanding is that it needs to be enabled in the first place for it to be applicable, and hence it is not a passive threat, unlike the JNDI-lookup vulnerability that was identified.
Is my understanding correct that Log4j v1.2 is not vulnerable to the JNDI remote-code-execution bug?
References
This blog post from Cloudflare also indicates the same point as AKX: that it was introduced in Log4j 2!
Update #1 - A fork of the (now-retired) apache-log4j-1.2.x with patch fixes for a few vulnerabilities identified in the older library is now available (from the original Log4j author). The site is https://reload4j.qos.ch/. As of 21-Jan-2022, version 1.2.18.2 has been released. Vulnerabilities addressed to date include those pertaining to JMSAppender, SocketServer, and Chainsaw. Note that I am simply relaying this information and have not verified the fixes from my end. Please refer to the link for additional details.
...ANSWER
Answered 2022-Jan-01 at 18:43
The JNDI feature was added into Log4j 2.0-beta9.
Log4j 1.x thus does not have the vulnerable code.
QUESTION
I was looking for the canonical implementation of MergeSort in Haskell to port to HOVM, and I found this StackOverflow answer. When porting the algorithm, I realized something looked silly: the algorithm has a "halve" function that does nothing but split a list in two, using half of the length, before recursing and merging. So I thought: why not make better use of this pass and use a pivot, making each half respectively smaller and bigger than that pivot? That would increase the odds that recursive merge calls are applied to already-sorted lists, which might speed up the algorithm!
I've made this change, resulting in the following code:
...ANSWER
Answered 2022-Jan-27 at 19:15
Your split splits the list in two ordered halves, so merge consumes its first argument first and then just produces the second half in full. In other words it is equivalent to ++, doing redundant comparisons on the first half which always turn out to be True.
In the true mergesort the merge actually does twice the work on random data because the two parts are not ordered.
The split though spends some work on the partitioning, whereas an online bottom-up mergesort would spend no work there at all. But the built-in sort tries to detect ordered runs in the input, and apparently that extra work is not negligible.
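The first point is easy to verify directly. Below is a plain Python translation of a textbook two-pointer merge (not the asker's Haskell code): when every element of the first half is <= every element of the second, each comparison is True and the merge degenerates into concatenation.

```python
def merge(a, b):
    """Standard two-pointer merge of two sorted lists."""
    out, i, j = [], 0, 0
    while i < len(a) and j < len(b):
        if a[i] <= b[j]:
            out.append(a[i]); i += 1
        else:
            out.append(b[j]); j += 1
    return out + a[i:] + b[j:]

# Pivot-partitioned halves: everything in a is <= everything in b, so
# a[i] <= b[j] holds on every iteration and the result is just a + b.
a, b = [1, 2, 3], [4, 5, 6]
assert merge(a, b) == a + b == [1, 2, 3, 4, 5, 6]
```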
QUESTION
I'm working on a React Native application. My Android builds began to fail in the CI environment (and locally) without any changes.
...ANSWER
Answered 2021-Sep-03 at 11:46
Go to your package.json file and delete as many dependencies as you can until the project builds successfully. Then start adding the dependencies back one by one to detect which ones cause trouble.
Then you can manually patch those dependencies by editing node_modules/[dependency]/android/build.gradle and setting androidx.core:core-ktx: or androidx.core:core: to a specific version (1.6.0 in my case).
QUESTION
I made a bubble sort implementation in C, and was testing its performance when I noticed that the -O3 flag made it run even slower than no flags at all! Meanwhile -O2 was making it run a lot faster as expected.
Without optimisations:
...ANSWER
Answered 2021-Oct-27 at 19:53
It looks like GCC's naïveté about store-forwarding stalls is hurting its auto-vectorization strategy here. See also Store forwarding by example for some practical benchmarks on Intel with hardware performance counters, and What are the costs of failed store-to-load forwarding on x86? Also see Agner Fog's x86 optimization guides.
(gcc -O3 enables -ftree-vectorize and a few other options not included by -O2, e.g. if-conversion to branchless cmov, which is another way -O3 can hurt with data patterns GCC didn't expect. By comparison, Clang enables auto-vectorization even at -O2, although some of its optimizations are still only on at -O3.)
It's doing 64-bit loads (and branching to store or not) on pairs of ints. This means, if we swapped the last iteration, this load comes half from that store, half from fresh memory, so we get a store-forwarding stall after every swap. But bubble sort often has long chains of swapping every iteration as an element bubbles far, so this is really bad.
(Bubble sort is bad in general, especially if implemented naively without keeping the previous iteration's second element around in a register. It can be interesting to analyze the asm details of exactly why it sucks, so it is fair enough for wanting to try.)
Anyway, this is pretty clearly an anti-optimization you should report on GCC Bugzilla with the "missed-optimization" keyword. Scalar loads are cheap, and store-forwarding stalls are costly. (Can modern x86 implementations store-forward from more than one prior store? no, nor can microarchitectures other than in-order Atom efficiently load when it partially overlaps with one previous store, and partially from data that has to come from the L1d cache.)
Even better would be to keep buf[x+1] in a register and use it as buf[x] in the next iteration, avoiding a store and load. (Like good hand-written asm bubble sort examples, a few of which exist on Stack Overflow.)
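To make that suggestion concrete, here is the idea as executable Python pseudocode (the payoff is only real in C/asm, where cur lives in a register and each iteration does one load and one store instead of two of each):

```python
def bubble_pass(buf):
    """One bubble-sort pass that carries the 'bubbling' element in a
    local variable instead of re-reading buf[x] every iteration."""
    cur = buf[0]
    for x in range(len(buf) - 1):
        nxt = buf[x + 1]        # the only load per iteration
        if cur > nxt:
            buf[x] = nxt        # the only store; cur keeps bubbling up
        else:
            buf[x] = cur
            cur = nxt
    buf[-1] = cur

data = [4, 2, 5, 1, 3]
for _ in range(len(data) - 1):  # n-1 passes complete the sort
    bubble_pass(data)
print(data)                     # [1, 2, 3, 4, 5]
```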
If it wasn't for the store-forwarding stalls (which AFAIK GCC doesn't know about in its cost model), this strategy might be about break-even. SSE 4.1 for a branchless pminsd / pmaxsd comparator might be interesting, but that would mean always storing, and the C source doesn't do that.
If this strategy of double-width load had any merit, it would be better implemented with pure integer on a 64-bit machine like x86-64, where you can operate on just the low 32 bits with garbage (or valuable data) in the upper half. E.g.,
QUESTION
I am trying to efficiently compute a summation of a summation in Python:
WolframAlpha is able to compute it to a high n value: sum of sum.
I have two approaches: a for-loop method and an np.sum method. I thought the np.sum approach would be faster. However, they are the same until a large n, after which np.sum has overflow errors and gives the wrong result.
I am trying to find the fastest way to compute this sum.
...ANSWER
Answered 2022-Jan-16 at 12:49
(fastest methods, 3 and 4, are at the end)
In a fast NumPy method you need to specify dtype=np.object so that NumPy does not convert Python int to its own dtypes (np.int64 or others). It will now give you correct results (checked it up to N=100000).
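A short sketch of that fix. The original summand was elided above, so the double sum of i times sum of j for j = 1..i is used purely as a hypothetical stand-in; also note that in NumPy >= 1.24 the np.object alias is gone, so plain object is used:

```python
import numpy as np

n = 100_000
i = np.arange(1, n + 1, dtype=object)  # object dtype keeps Python ints (arbitrary precision)
inner = i * (i + 1) // 2               # closed form of sum_{j=1..i} j
total = np.sum(i * inner)              # overall: sum_{i=1..n} i * sum_{j=1..i} j
print(total)                           # exact result, no int64 overflow
```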
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported