gammaln | Natural logarithm of the gamma function | Natural Language Processing library

 by math-io | JavaScript | Version: Current | License: MIT

kandi X-RAY | gammaln Summary

gammaln is a JavaScript library typically used in Artificial Intelligence, Natural Language Processing applications. gammaln has no bugs, it has no vulnerabilities, it has a Permissive License and it has low support. You can install using 'npm i math-gammaln' or download it from GitHub, npm.

gammaln === [![NPM version][npm-image]][npm-url] [![Build Status][build-image]][build-url] [![Coverage Status][coverage-image]][coverage-url] [![Dependencies][dependencies-image]][dependencies-url]

            kandi-support Support

              gammaln has a low active ecosystem.
              It has 5 star(s) with 0 fork(s). There are 3 watchers for this library.
              It had no major release in the last 6 months.
              There are 0 open issues and 1 has been closed. On average, issues are closed in 32 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of gammaln is current.

            kandi-Quality Quality

              gammaln has no bugs reported.

            kandi-Security Security

              gammaln has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              gammaln is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              gammaln has no tagged releases; you will need to build from source code and install.
              A deployable package is available on npm.
              Installation instructions are not available. Examples and code snippets are available.


            gammaln Key Features

            No Key Features are available at this moment for gammaln.

            gammaln Examples and Code Snippets

            No Code Snippets are available at this moment for gammaln.

            Community Discussions

            QUESTION

            Fast way to calculate customized function on a multi-dimensional array?
            Asked 2021-Apr-28 at 14:53

            I was trying to evaluate a customized function over every point on an n-dimensional grid, after which I can marginalize and do corner plots.

            This is conceptually simple but I'm struggling to find a way to do it efficiently. I tried a loop regardless, and it is indeed too slow, especially considering that I will be needing this for more parameters (a1, a2, a3...) as well. I was wondering if there is a faster way or any reliable package that could help?

            EDIT: Sorry that my description of myfunction hasn't been very clear; the function takes some specific external data. Nevertheless, here's a sample function that demonstrates it:

            ...

            ANSWER

            Answered 2021-Apr-28 at 14:53

            Vectorising that loop won't save you any time and in fact may make things worse.

            Instead of looping through a1_array and a2_array to create pairs, you can generate all pairs from the get-go by putting them in a 100x100x2 array. This operation takes an insignificant amount of time compared to Python loops. But when you're actually in the function and you're broadcasting your arrays so that you can do your calculations on the data, you're suddenly dealing with 100x100x2x500x500 arrays. You won't have memory for this, and if you rely on file swapping it makes the operations exponentially slower.

            Not only are you not saving any time (well, you do for very small arrays, but it's the difference between 0.03 s and 0.005 s), but with Python loops you're only using a few tens of MB of RAM, while with the vectorised approach it skyrockets into the gigabytes.

            But out of curiosity, this is how it could be vectorised.
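            As a sketch of the pair-generation step described above (myfunction itself is not shown in the question, so a cheap toy stand-in is used in its place):

```python
import numpy as np

# Parameter grids (sizes taken from the answer's example).
a1 = np.linspace(0.0, 1.0, 100)
a2 = np.linspace(0.0, 1.0, 100)

# All (a1, a2) pairs at once: a 100x100x2 array, as described above.
pairs = np.stack(np.meshgrid(a1, a2, indexing="ij"), axis=-1)
print(pairs.shape)  # (100, 100, 2)

# Toy stand-in for myfunction (the real one uses external data).
# Broadcasting these pairs against a 500x500 data grid would create a
# 100x100x500x500 float64 intermediate, i.e. tens of GB: the memory
# blow-up the answer warns about.
values = pairs[..., 0] ** 2 + pairs[..., 1]
```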

            Source https://stackoverflow.com/questions/67257975

            QUESTION

            Log of exp1 in scipy
            Asked 2021-Apr-08 at 15:41

            The following

            ...

            ANSWER

            Answered 2021-Mar-03 at 12:04

            For real x you can use the series expansion of log(E1(x)) for x → ∞ (see here), which is highly accurate for large x.

            From some quick experimentation, 18 terms is enough for full float precision when x >= 50 (and that's around where scipy starts losing full precision). Also the series expansion is really nice, the coefficients being the factorial numbers, so we can use precise evaluation without catastrophic cancellation:
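            The answer's actual code is not shown above, but the expansion it describes can be sketched with the standard asymptotic series E1(x) ~ exp(-x)/x · Σ (-1)^k k!/x^k, whose coefficients are the factorials mentioned. A minimal stdlib version:

```python
import math

def log_exp1_asymptotic(x, terms=18):
    """log(E1(x)) via the large-x asymptotic series
    E1(x) ~ exp(-x)/x * sum_{k>=0} (-1)^k k! / x^k."""
    s, term = 0.0, 1.0
    for k in range(terms):
        s += term
        term *= -(k + 1) / x  # builds (-1)^k k!/x^k incrementally, no overflow
    return -x - math.log(x) + math.log(s)

print(log_exp1_asymptotic(50.0))  # approx -53.9315
```

Because the log of the exponential factor is taken analytically (-x - log x), there is no catastrophic cancellation; only the slowly varying series is summed in floating point.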

            Source https://stackoverflow.com/questions/66447134

            QUESTION

            Fit convergence failure in pyhf for small signal model
            Asked 2020-Feb-06 at 17:12

            (This is a question that we (the pyhf dev team) recently got and thought was good and worth sharing. So we're posting a modified version of it here.)

            I am trying to do a simple hypothesis test with pyhf v0.4.0. The model I am using has a small signal and so I need to scan signal strengths almost all the way out to mu=100. However, I am consistently getting a convergence problem. Why is the fit failing to converge?

            The following is my environment, the code I'm using, and my error.

            Environment ...

            ANSWER

            Answered 2020-Feb-06 at 07:44

            Looking at the model, the background estimate shouldn't be zero, so add an epsilon of 1e-7 to it and then a 1% background uncertainty. Though the issue here is that reasonable intervals for signal strength are between μ ∈ [0,10]. If your model is such that you aren't sensitive to a signal strength in this range, then you should test a new signal model which is the original signal scaled by some scale factor.
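            The actual model is not shown above, so the numbers below are hypothetical, but the two adjustments the answer suggests look roughly like this:

```python
# Hypothetical bin contents for illustration only.
signal = [0.001, 0.002]          # tiny signal: forces mu scans out to ~100
background = [0.0, 50.0]         # a zero bin breaks the Poisson likelihood

eps = 1e-7
background = [b + eps for b in background]        # avoid zero background
bkg_uncertainty = [0.01 * b for b in background]  # 1% background uncertainty

# Rescale the signal so reasonable mu values fall in [0, 10]:
scale = 50.0
scaled_signal = [scale * s for s in signal]
# A best-fit mu' for scaled_signal corresponds to mu = scale * mu'
# for the original, unscaled signal.
```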

            Environment

            For visualization purposes let's extend the environment a bit

            Source https://stackoverflow.com/questions/60089405

            QUESTION

            Improving accuracy in scipy.optimize.fsolve with equations involving integration
            Asked 2019-Nov-05 at 10:54

            I'm trying to solve an integral equation using the following code (irrelevant parts removed):

            ...

            ANSWER

            Answered 2019-Nov-05 at 10:54

            As an example, I tried 1 / x for the integration between 1 and alpha to retrieve the target integral 2.0.
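            The answer's code is not reproduced above, but the 1/x example can be sketched with the stdlib alone (trapezoidal integration plus bisection instead of scipy.integrate/fsolve); since ∫₁^α dx/x = log(α), the target 2.0 should give α = e² ≈ 7.389:

```python
import math

def integral_1_over_x(alpha, n=10_000):
    # Composite trapezoidal rule for the integral of 1/x over [1, alpha]
    # (exact value: log(alpha)).
    h = (alpha - 1.0) / n
    interior = sum(1.0 / (1.0 + i * h) for i in range(1, n))
    return h * (0.5 * (1.0 + 1.0 / alpha) + interior)

def solve_alpha(target=2.0, lo=1.0 + 1e-9, hi=100.0, tol=1e-10):
    # Bisection on f(alpha) = integral - target; a derivative-free solver
    # is robust when the integration error would otherwise mislead fsolve.
    f = lambda a: integral_1_over_x(a) - target
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

print(solve_alpha())  # approx e**2 = 7.389...
```

The residual accuracy is limited by the quadrature error, not the root finder, which is the point the question is circling: tighten the integration tolerance before blaming fsolve.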

            Source https://stackoverflow.com/questions/42964776

            QUESTION

            Python error "AttributeError: module 'scipy.misc' has no attribute 'logsumexp' "
            Asked 2019-Oct-16 at 08:36

            I'm trying to use the lifetimes package to create the Recency, Frequency and T columns from given data, but it keeps showing the following error: AttributeError: module 'scipy.misc' has no attribute 'logsumexp'

            ...

            ANSWER

            Answered 2019-May-30 at 08:16

            Downgrading to scipy==1.1.0 solves the issue
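            (In newer SciPy versions the function lives in scipy.special rather than scipy.misc, so importing from there is an alternative to downgrading.) For reference, the function itself is small; a pure-stdlib version that mirrors its 1-D behaviour:

```python
import math

def logsumexp(xs):
    """log(sum(exp(x) for x in xs)) computed without overflow, via the
    usual max-shift trick: m + log(sum(exp(x - m)))."""
    m = max(xs)
    if math.isinf(m) and m < 0:   # all inputs are -inf
        return float("-inf")
    return m + math.log(sum(math.exp(x - m) for x in xs))

print(logsumexp([1000.0, 1000.0]))  # approx 1000.6931; naive exp would overflow
```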

            Source https://stackoverflow.com/questions/56324165

            QUESTION

            scipy.optimize.minimize returning [inf]
            Asked 2019-Sep-24 at 17:27

            I am trying to call scipy.optimize.minimize to minimize a function poissonNegLogLikelihood, which is defined as follows:

            ...

            ANSWER

            Answered 2019-Sep-24 at 17:27

            You are computing np.log(lam) with lam = 0 (since betas = [0]) in poissonNegLogLikelihood(), and log(0) is -inf. Therefore, it seems to me that your initial guess betas is the problem. You should test with adequate starting values.
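            The question's function body is elided above, so the following is a hypothetical reconstruction with an identity link (lam = X @ betas), just to show how betas = [0] poisons the objective while a positive start point does not:

```python
import numpy as np

def poisson_neg_log_likelihood(betas, X, y):
    # Hypothetical sketch: lam = X @ betas, so betas = [0] gives lam = 0
    # everywhere and np.log(0) = -inf.
    lam = X @ np.asarray(betas, dtype=float)
    return np.sum(lam - y * np.log(lam))

X = np.ones((5, 1))
y = np.array([1.0, 2.0, 3.0, 1.0, 2.0])

bad = poisson_neg_log_likelihood([0.0], X, y)   # +inf: -y * log(0)
good = poisson_neg_log_likelihood([1.5], X, y)  # finite: a usable start point
```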

            Source https://stackoverflow.com/questions/58085095

            QUESTION

            Why is this log gamma numba function slower than scipy for large arrays, but faster for single values?
            Asked 2019-Mar-08 at 10:13

            I have a function to calculate the log gamma function that I am decorating with numba.njit.

            ...

            ANSWER

            Answered 2019-Mar-08 at 10:13
            Implementing gammaln in Numba

            It can be quite some work to reimplement often-used functions, not only to reach the performance but also to get a well-defined level of precision. So the direct way would be to simply wrap a working implementation.

            In the case of gammaln, scipy calls a C implementation of this function. Therefore the speed of the scipy implementation also depends on the compiler and compiler flags used when compiling the scipy dependencies.

            It is also not very surprising that the performance results for one value can differ quite a lot from the results for larger arrays. In the first case the calling overhead (including type conversions, input checking, ...) dominates; in the second case the performance of the implementation becomes more and more important.

            Improving your implementation

            • Write explicit loops. In Numba, vectorized operations are expanded to loops, after which Numba tries to join the loops. It is often better to write out and join these loops manually.
            • Think of the differences between basic arithmetic implementations. Python always checks for division by zero and raises an exception in such a case, which is very costly. Numba uses this behaviour by default, but you can also switch to NumPy error checking. In that case a division by zero results in NaN. How NaN, Inf, and -0/+0 are handled in further calculations is also influenced by the fast-math flag.
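            The answer's Numba-specific wrapping code is not shown above, but the "well-defined level of precision" point can be illustrated with a standard Lanczos approximation (g = 7, 9 coefficients) in plain Python, checked against math.lgamma:

```python
import math

# Standard Lanczos coefficients (g = 7, n = 9).
_G = 7
_COEF = [
    0.99999999999980993, 676.5203681218851, -1259.1392167224028,
    771.32342877765313, -176.61502916214059, 12.507343278686905,
    -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7,
]

def gammaln(x):
    """Lanczos approximation of log(Gamma(x)) for x > 0.5,
    accurate to roughly 1e-13 relative error in double precision."""
    x -= 1.0
    a = _COEF[0]
    for i in range(1, len(_COEF)):
        a += _COEF[i] / (x + i)
    t = x + _G + 0.5
    return 0.5 * math.log(2.0 * math.pi) + (x + 0.5) * math.log(t) - t + math.log(a)

print(gammaln(5.0))  # approx log(4!) = 3.1780538
```

A function body like this is also the kind of explicit-loop, division-light code that Numba's @njit compiles well, per the bullet points above.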

            Code

            Source https://stackoverflow.com/questions/55048299

            QUESTION

            Fast algorithm for log gamma function
            Asked 2019-Feb-24 at 19:53

            I am trying to write a fast algorithm to compute the log gamma function. Currently my implementation seems naive, and just iterates 10 million times to compute the log of the gamma function (I am also using numba to optimise the code).

            ...

            ANSWER

            Answered 2019-Feb-24 at 12:02

            The runtime of your function will scale linearly (up to some constant overhead) with the number of iterations. So getting the number of iterations down is key to speeding up the algorithm. Whilst computing the HARMONIC_10MIL beforehand is a smart idea, it actually leads to worse accuracy when you truncate the series; computing only part of the series turns out to give higher accuracy.
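            (The answer's own code, a cython version, is not reproduced here; as a separate illustration of how few terms a well-chosen series needs, a truncated Stirling series in plain Python already reaches near machine precision for moderately large x, with no 10-million-iteration loop:)

```python
import math

def gammaln_stirling(x):
    """Truncated Stirling series for log(Gamma(x)); the error is bounded
    by the first omitted term, ~1/(1680 x^7), so a handful of terms
    suffices once x is 20 or so."""
    return ((x - 0.5) * math.log(x) - x + 0.5 * math.log(2.0 * math.pi)
            + 1.0 / (12.0 * x) - 1.0 / (360.0 * x**3) + 1.0 / (1260.0 * x**5))

print(gammaln_stirling(20.0), math.lgamma(20.0))  # agree to about 12 decimals
```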

            The code below is a modified version of the code posted above (although using cython instead of numba).

            Source https://stackoverflow.com/questions/54850985

            QUESTION

            Matlab: Error using gammaln... while plotting psychometric functions
            Asked 2019-Jan-30 at 12:00

            I'm using the toolbox psignifit to plot psychometric functions from a dataset. My code looks essentially like the following:

            ...

            ANSWER

            Answered 2019-Jan-30 at 12:00

            I had the same problem while running the psignifit function with my 2AFC data.

            The reason was very simple: the psignifit function produces this error message if you try to process data exceeding the 0 to 1 limits, for example 1.0000000001.

            Source https://stackoverflow.com/questions/48175531

            QUESTION

            Implementing the generalized birthday paradox in Python
            Asked 2018-Jul-25 at 06:37

            My question is about numerical problems I am running into when implementing a probability function, and not about the probability/mathematics behind it. I'm also aware that my code below is probably not well-optimized (e.g. I could vectorize the first function if I use exact=False in comb). So I'm open to optimization suggestions, but it's not really my main concern right now.

            I am trying to numerically verify the formula given here for "the probability of getting m unique values from [0,k) when choosing n times".

            To do this, in Python 3.6.5, I am using numpy.random.choice(k, n, replace=True) to obtain a multiset, then counting the unique values in the multiset and saving this number. And repeat.

            For smallish values of k and n I get good agreement between the simulations and the formula, so I'm pretty happy that it is more-or-less correct. However, when k and n are slightly larger, I obtain negative values from the formula. I suspect this is because it includes products of tiny fractions and very large factorials, and so precision can be lost at some of these stages.

            To try and combat this, I implemented the same formula but using logs wherever I could, before finally exponentiating. Annoyingly, it didn't really help, as can be seen in the output of my code given below.

            My question is therefore, does anyone have a suggestion as to how I can continue to implement this formula for larger values of n and k? Am I right in thinking it's numerical weirdness introduced by the products of large and small numbers?

            My code:

            ...

            ANSWER

            Answered 2018-Jul-25 at 05:58

            You were right, it was some odd numeric reason.

            Change this line:

            total += (-1)**i * comb(m, i, exact=True) * ((m-i)/k)**n

            to this:

            total += (-1)**i * comb(m, i, exact=True) * ((m-i)**n)/(k**n)

            For some reason, if you force a different operation order, things come out nicely.

            You might have to spend some more time figuring out how to modify your "log'd" version, but given that the change above fixes things, you might just want to discard the "log'd" version altogether.
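            A minimal sketch of the fixed formula, using the stdlib's math.comb (Python 3.8+) in place of scipy's comb(..., exact=True); the loop body is the corrected line above, and summing over all m is a quick sanity check, since the probabilities of getting exactly m unique values must total 1:

```python
from math import comb

def prob_m_unique(m, k, n):
    """P(exactly m unique values when drawing n times uniformly from [0, k)),
    with the answer's ((m-i)**n)/(k**n) operation order and exact integers."""
    total = 0
    for i in range(m + 1):
        total += (-1) ** i * comb(m, i) * (m - i) ** n
    return comb(k, m) * total / k ** n

k, n = 10, 20
probs = [prob_m_unique(m, k, n) for m in range(k + 1)]
print(sum(probs))  # should be very close to 1.0
```

Keeping the numerator as an exact integer and performing a single float division at the end is what avoids the sign-flipping cancellation the question describes.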

            Hope it helps!

            Source https://stackoverflow.com/questions/51509892

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install gammaln

            You can install using 'npm i math-gammaln' or download it from GitHub, npm.

            Support

            This repository uses [Testling][testling] for browser testing. The tests can be run in a (headless) local web browser, or viewed in a local web browser, from the top-level application directory.
            CLONE
          • HTTPS

            https://github.com/math-io/gammaln.git

          • CLI

            gh repo clone math-io/gammaln

          • sshUrl

            git@github.com:math-io/gammaln.git



            Try Top Libraries by math-io

            erfc by math-io (JavaScript)

            riemann-zeta by math-io (JavaScript)

            float64-copysign by math-io (JavaScript)

            power by math-io (JavaScript)

            gammaincinv by math-io (JavaScript)