gammaln | Natural logarithm of the gamma function | Math library
kandi X-RAY | gammaln Summary
gammaln === [![NPM version][npm-image]][npm-url] [![Build Status][build-image]][build-url] [![Coverage Status][coverage-image]][coverage-url] [![Dependencies][dependencies-image]][dependencies-url].
Community Discussions
Trending Discussions on gammaln
QUESTION
I was trying to evaluate a customized function over every point on an n-dimensional grid, after which I can marginalize and do corner plots.
This is conceptually simple, but I'm struggling to find a way to do it efficiently. I tried a loop regardless, and it is indeed too slow, especially considering that I will be needing this for more parameters (a1, a2, a3...) as well. I was wondering if there is a faster way or any reliable package that could help?
EDITS: Sorry that my description of myfunction hasn't been very clear, since the function takes some specific external data. Nevertheless, here's a sample function that demonstrates it:
ANSWER
Answered 2021-Apr-28 at 14:53 Vectorising that loop won't save you any time and may in fact make things worse.
Instead of looping through a1_array and a2_array to create pairs, you can generate all the pairs from the get-go by putting them in a 100x100x2 array. That operation takes an insignificant amount of time compared to Python loops. But once you're inside the function and broadcasting your arrays so that you can do your calculations on the data, you're suddenly dealing with 100x100x2x500x500 arrays. You won't have the memory for this, and if you rely on swapping to disk it makes the operations dramatically slower.
Not only are you not saving any time (well, you do for very small arrays, but it's the difference between 0.03 s and 0.005 s), but with Python loops you're only using a few tens of MB of RAM, while with the vectorised approach it skyrockets into the GB.
But out of curiosity, this is how it could be vectorised.
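The answer's vectorised snippet isn't preserved on this page. As a rough sketch of the recommended loop-based approach (building the 100x100x2 pair array up front, then evaluating point by point), here is an illustration with assumed parameter ranges and a placeholder myfunction; neither comes from the original post:

```python
import numpy as np

# assumed parameter ranges; the real ones come from the asker's problem
a1_array = np.linspace(0.0, 1.0, 100)
a2_array = np.linspace(0.0, 1.0, 100)

# all (a1, a2) pairs in a single 100x100x2 array, built without Python loops
pairs = np.stack(np.meshgrid(a1_array, a2_array, indexing="ij"), axis=-1)

def myfunction(a1, a2):
    # placeholder for the real function, which uses external data
    return -((a1 - 0.3) ** 2) - ((a2 - 0.7) ** 2)

# evaluate point by point; memory stays small even if myfunction
# internally broadcasts against large external data arrays
grid = np.empty(pairs.shape[:2])
for i in range(pairs.shape[0]):
    for j in range(pairs.shape[1]):
        grid[i, j] = myfunction(pairs[i, j, 0], pairs[i, j, 1])
```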
QUESTION
The following
...ANSWER
Answered 2021-Mar-03 at 12:04 For real x you can use the series expansion of log(Ei(x)) for x → ∞ (see here), which is highly accurate for large x.
From some quick experimentation, 18 terms is enough for full float precision when x >= 50 (and that's around where scipy starts losing full precision). Also, the series expansion is really nice, the coefficients being the factorial numbers, so we can use precise evaluation without catastrophic cancellation:
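The answer's original snippet is not preserved on this page. As a hedged sketch of that idea, here is one way the asymptotic series Ei(x) ~ exp(x)/x * (1 + 1!/x + 2!/x^2 + ...) could be evaluated in log space; the function name and the comparison against scipy.special.expi are my own additions:

```python
import numpy as np
from scipy.special import expi

def log_ei_asymptotic(x, terms=18):
    # asymptotic series: Ei(x) ~ exp(x)/x * sum_{k=0}^{terms-1} k! / x**k,
    # so log(Ei(x)) ~ x - log(x) + log(sum); all terms are positive, so
    # there is no catastrophic cancellation, and exp(x) never overflows
    s = 0.0
    term = 1.0  # k! / x**k, starting at k = 0
    for k in range(terms):
        s += term
        term *= (k + 1) / x
    return x - np.log(x) + np.log(s)

# sanity check against scipy in a range where expi itself is still finite
print(log_ei_asymptotic(100.0), np.log(expi(100.0)))
```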
QUESTION
(This is a question that we (the pyhf dev team) recently got and thought was good and worth sharing. So we're posting a modified version of it here.)
I am trying to do a simple hypothesis test with pyhf v0.4.0. The model I am using has a small signal and so I need to scan signal strengths almost all the way out to mu=100. However, I am consistently getting a convergence problem. Why is the fit failing to converge?
The following is my environment, the code I'm using, and my error.
Environment
...ANSWER
Answered 2020-Feb-06 at 07:44 Looking at the model, the background estimate shouldn't be zero, so add an epsilon of 1e-7 to it and then a 1% background uncertainty. The real issue here, though, is that reasonable intervals for the signal strength are μ ∈ [0, 10]. If your model is such that you aren't sensitive to a signal strength in this range, then you should test a new signal model which is the original signal scaled by some scale factor.
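A minimal sketch of that workaround, assuming the pyhf v0.4.x API (pyhf.simplemodels.hepdata_like and pyhf.infer.hypotest); all yields, uncertainties and observed counts below are illustrative, not the asker's actual model:

```python
import pyhf

# illustrative two-bin model with a tiny signal and a (padded) background
raw_signal = [0.001, 0.002]
epsilon = 1e-7
bkg = [0.0 + epsilon, 0.0 + epsilon]
bkg_uncert = [0.01 * b for b in bkg]  # 1% background uncertainty

# rescale the signal so that interesting strengths fall inside mu in [0, 10]
scale = 50.0
signal = [s * scale for s in raw_signal]

model = pyhf.simplemodels.hepdata_like(
    signal_data=signal, bkg_data=bkg, bkg_uncerts=bkg_uncert
)
observations = [1.0, 2.0]  # illustrative observed counts
data = observations + model.config.auxdata

# a fitted strength mu' for the scaled signal corresponds to
# mu = scale * mu' for the original, unscaled signal
cls_obs = pyhf.infer.hypotest(1.0, data, model)
print(cls_obs)
```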
For visualization purposes let's extend the environment a bit
QUESTION
I'm trying to solve an integral equation using the following code (irrelevant parts removed):
...ANSWER
Answered 2019-Nov-05 at 10:54 As an example, I tried 1 / x for the integration between 1 and alpha to retrieve the target integral 2.0. This
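The answer's snippet itself is not preserved here. As a rough sketch of that kind of setup, one could pose it as a root-finding problem over the upper limit (the names and the brentq bracket below are my own, not from the original answer):

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

target = 2.0

def residual(alpha):
    # integral of 1/x from 1 to alpha, minus the desired value
    value, _ = quad(lambda x: 1.0 / x, 1.0, alpha)
    return value - target

# the root gives alpha; analytically alpha = exp(2) for this test integrand
alpha = brentq(residual, 1.0 + 1e-9, 100.0)
print(alpha, np.exp(2.0))
```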
QUESTION
I'm trying to use the lifetimes package to create Recency, Frequency and T from given data, but it keeps raising the following error: AttributeError: module 'scipy.misc' has no attribute 'logsumexp'
...ANSWER
Answered 2019-May-30 at 08:16 Downgrading to scipy==1.1.0 solves the issue.
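For context: scipy.misc.logsumexp was removed from SciPy's misc module in later releases and now lives in scipy.special. The downgrade from the answer is the first option below; the shim is my own workaround, not part of the original answer, and only works while a scipy.misc module still exists in your SciPy version:

```python
# option 1 (the answer): pin the older SciPy that still has scipy.misc.logsumexp
#   pip install "scipy==1.1.0"

# option 2 (workaround, not from the original answer): alias the old name
# to the new location before importing lifetimes
import scipy.misc
import scipy.special

scipy.misc.logsumexp = scipy.special.logsumexp

import lifetimes  # noqa: E402  (import after the patch on purpose)
```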
QUESTION
I am trying to call scipy.optimize.minimize to minimize a function poissonNegLogLikelihood, which is defined as follows:
ANSWER
Answered 2019-Sep-24 at 17:27 You are computing np.log(lam) with lam = 0 (because betas = [0]) in poissonNegLogLikelihood(), and log(0) is -inf. Therefore, it seems to me that your initial guess betas is the problem. You should test with adequate values.
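The asker's function isn't shown on this page, so the following is only a sketch of a common Poisson negative log-likelihood setup. Using an exponential link keeps the rate strictly positive for any betas, which is one way to avoid the log(0) problem; alternatively, keep the original function and simply start from an initial guess that doesn't produce a zero rate. All names and toy data below are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def poissonNegLogLikelihood(betas, y, X):
    # exponential link: lam = exp(X @ betas) > 0, so np.log(lam) is finite
    lam = np.exp(X @ betas)
    # negative log-likelihood up to a constant (the log(y!) term is dropped)
    return -np.sum(y * np.log(lam) - lam)

# toy data with an intercept-only design matrix
y = np.array([1.0, 3.0, 2.0, 4.0])
X = np.ones((4, 1))

result = minimize(poissonNegLogLikelihood, x0=np.array([0.1]), args=(y, X))
print(np.exp(result.x))  # should land near the sample mean of y
```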
QUESTION
I have a function to calculate the log gamma function that I am decorating with numba.njit.
ANSWER
Answered 2019-Mar-08 at 10:13 It can be quite some work to reimplement often-used functions, not only to reach the performance but also to get a well-defined level of precision. So the direct way would be to simply wrap a working implementation.
In the case of gammaln, scipy calls a C implementation of this function. Therefore the speed of the scipy implementation also depends on the compiler and compiler flags used when compiling the scipy dependencies.
It is also not very surprising that the performance results for a single value can differ quite a lot from the results for larger arrays. In the first case the calling overhead (including type conversions, input checking, ...) dominates; in the second case the performance of the implementation itself becomes more and more important.
Improving your implementation
- Write explicit loops. In Numba, vectorized operations are expanded to loops, after which Numba tries to join them. It is often better to write out and join these loops manually.
- Think of the differences between basic arithmetic implementations. Python always checks for division by 0 and raises an exception in such a case, which is very costly. Numba uses this behaviour by default too, but you can switch to Numpy error checking, where a division by 0 results in NaN instead. How NaN, Inf and -0/+0 are handled in further calculations is also influenced by the fast-math flag.
Code
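The answer's code block is not preserved on this page. As a rough sketch of the two bullet points above (an explicit loop plus Numpy-style error handling), here is a truncated Stirling-series implementation for arrays of positive values; this is my own illustration, not the answerer's code, and a few more series terms would be needed for full double precision:

```python
import numpy as np
from numba import njit

@njit(error_model="numpy", fastmath=True)
def gammaln_arr(x):
    # log(Gamma(x)) for x > 0 via a truncated Stirling series, using the
    # recurrence log(Gamma(x)) = log(Gamma(x + 1)) - log(x) to shift small
    # arguments into the range where the series is accurate
    out = np.empty_like(x)
    half_log_2pi = 0.9189385332046727  # 0.5 * log(2 * pi)
    for i in range(x.shape[0]):
        xi = x[i]
        shift = 0.0
        while xi < 8.0:
            shift += np.log(xi)
            xi += 1.0
        inv = 1.0 / xi
        inv2 = inv * inv
        series = inv / 12.0 - inv * inv2 / 360.0 + inv * inv2 * inv2 / 1260.0
        out[i] = (xi - 0.5) * np.log(xi) - xi + half_log_2pi + series - shift
    return out
```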
QUESTION
I am trying to write a fast algorithm to compute the log gamma function. Currently my implementation seems naive, and just iterates 10 million times to compute the log of the gamma function (I am also using numba to optimise the code).
...ANSWER
Answered 2019-Feb-24 at 12:02 The runtime of your function will scale linearly (up to some constant overhead) with the number of iterations. So getting the number of iterations down is key to speeding up the algorithm. Whilst computing HARMONIC_10MIL beforehand is a smart idea, it actually leads to worse accuracy when you truncate the series; computing only part of the series turns out to give higher accuracy.
The code below is a modified version of the code posted above (although using cython instead of numba).
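The answer's modified cython version is not reproduced on this page. For orientation only, the following plain-Python sketch shows the kind of slowly converging series involved (the Weierstrass-product form of log(Gamma); the function name and term count are my own illustration), which is why cutting the number of iterations dominates the runtime:

```python
import numpy as np

EULER_GAMMA = 0.5772156649015329

def gammaln_series(z, terms=1_000_000):
    # Weierstrass-product form:
    #   log(Gamma(z)) = -log(z) - gamma*z + sum_{k>=1} (z/k - log(1 + z/k))
    # the tail after N terms is roughly z**2 / (2*N), so accuracy is bought
    # almost entirely with more iterations
    k = np.arange(1, terms + 1, dtype=np.float64)
    return -np.log(z) - EULER_GAMMA * z + np.sum(z / k - np.log1p(z / k))
```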
QUESTION
I'm using the toolbox psignifit to plot psychometric functions from a dataset. My code looks essentially like the following:
...ANSWER
Answered 2019-Jan-30 at 12:00 I had the same problem while running the psignifit function with my 2AFC data. The reason was very simple: psignifit raises this error message if you try to process data that exceed the 0-to-1 limits, for example 1.0000000001.
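One way to guard against such floating-point drift before handing the data to psignifit is to clamp the proportions back into range; this is my own workaround, not part of the original answer:

```python
import numpy as np

# illustrative data: one proportion has drifted just past 1.0
proportions = np.array([0.25, 0.5, 1.0000000001])

# clamp back into the [0, 1] range psignifit expects
proportions = np.clip(proportions, 0.0, 1.0)
```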
QUESTION
My question is about numerical problems I am running into when implementing a probability function, and not about the probability/mathematics behind it. I'm also aware that my code below is probably not well optimized (e.g. I could vectorize the first function if I use exact=False in comb). So I'm open to optimization suggestions, but that's not really my main concern right now.
I am trying to numerically verify the formula given here for "the probability of getting m unique values from [0,k) when choosing n times".
To do this, in Python 3.6.5, I am using numpy.random.choice(k, n, replace=True) to obtain a multiset, then counting the unique values in the multiset and saving this number. And repeat.
For smallish values of k and n I get good agreement between the simulations and the formula, so I'm pretty happy that it is more-or-less correct. However, when k and n are slightly larger, I obtain negative values from the formula. I suspect this is because it includes products of tiny fractions and very large factorials, and so precision can be lost at some of these stages.
To try and combat this, I implemented the same formula but using logs wherever I could, before finally exponentiating. Annoyingly, it didn't really help, as can be seen in the output of my code given below.
My question is therefore, does anyone have a suggestion as to how I can continue to implement this formula for larger values of n and k? Am I right in thinking it's numerical weirdness introduced by the products of large and small numbers?
My code:
...ANSWER
Answered 2018-Jul-25 at 05:58 You were right, it was some odd numerical reason.
Change this line:
total += (-1)**i * comb(m, i, exact=True) * ((m-i)/k)**n
to this:
total += (-1)**i * comb(m, i, exact=True) * ((m-i)**n)/(k**n)
For some reason, if you force a different operation order, things come out nicely. (Most likely this is because comb(m, i, exact=True) and (m-i)**n are both exact Python integers, so each term is rounded only once, in the final division by k**n, rather than raising an already-rounded float to the power n.)
You might have to spend some more time figuring out how to modify your "log'd" version, but given that the change above fixes things, you might just want to discard the "log'd" version altogether.
Hope it helps!
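For reference, here is a self-contained sketch of the formula with the fixed operation order, plus a Monte Carlo cross-check built on the numpy.random.choice sampling described in the question; the function name and the particular k, n, m values are my own illustration:

```python
import numpy as np
from scipy.special import comb

def prob_unique(m, k, n):
    # P(exactly m unique values from [0, k) after n draws with replacement),
    # keeping every term as an exact Python integer and dividing only once
    total = 0
    for i in range(m + 1):
        total += (-1) ** i * comb(m, i, exact=True) * (m - i) ** n
    return comb(k, m, exact=True) * total / k ** n

# Monte Carlo cross-check with illustrative parameters
k, n, trials = 50, 100, 20_000
counts = np.array(
    [len(np.unique(np.random.choice(k, n, replace=True))) for _ in range(trials)]
)
m = 44
print(prob_unique(m, k, n), np.mean(counts == m))
```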
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install gammaln