fastglm | Fast glm fitting via RcppEigen
kandi X-RAY | fastglm Summary
The ‘fastglm’ package is a re-write of glm() using RcppEigen designed to be computationally efficient and algorithmically stable.
fastglm Examples and Code Snippets
set.seed(1)
x <- matrix(rnorm(10000 * 100), ncol = 100)
y <- (exp(0.25 * x[,1] - 0.25 * x[,3] + 0.5 * x[,4] - 0.5 * x[,5] + rnorm(10000)) ) + 0.1
system.time(gfit1 <- fastglm(cbind(1, x), y, family = Gamma(link = "sqrt")))
## user sys
library(fastglm)
library(speedglm)
library(microbenchmark)
library(ggplot2)
set.seed(123)
n.obs <- 10000
n.vars <- 100
x <- matrix(rnorm(n.obs * n.vars, sd = 3), n.obs, n.vars)
Sigma <- 0.99 ^ abs(outer(1:n.vars, 1:n.vars, FUN = "-"))
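The snippet above is cut off after constructing the covariance matrix. A hedged sketch of how such a benchmark might continue, using the setup already defined (the correlated-predictor transform and the logistic response below are illustrative, not the original author's code):

```r
# Impose the AR(1)-style correlation structure from Sigma on the predictors:
# chol(Sigma) gives an upper-triangular R with Sigma = t(R) %*% R,
# so x %*% R has correlated columns.
x <- x %*% chol(Sigma)

# Simulate a binary response for a logistic-regression benchmark
y <- rbinom(n.obs, 1, prob = plogis(0.25 * x[, 1] - 0.25 * x[, 3]))

# Compare the three fitters on the same design matrix
res <- microbenchmark(
  glm      = glm.fit(cbind(1, x), y, family = binomial()),
  speedglm = speedglm.wfit(y, cbind(1, x), family = binomial()),
  fastglm  = fastglm(cbind(1, x), y, family = binomial()),
  times = 10L
)
print(res)
autoplot(res)  # ggplot2 visualization of the timing distributions
```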
Community Discussions
Trending Discussions on fastglm
QUESTION
I am writing a function to fit many glm models. To give you an idea of the function, I include a small section of my code. With the help of several SO users, the function now works for my analysis purposes. However, sometimes, particularly when the sample size is relatively small, it can take quite a long time to finish.

To reduce the time, I am considering changing some details of the iterative maximization, such as the maximum number of iterations. I have not found a way to do this, perhaps because I am still not familiar with R terminology. Any suggestions for doing this, or other ways to reduce the run time, would be appreciated.
ANSWER
Answered 2019-Nov-03 at 14:35

The IRLS algorithm typically used for fitting GLMs requires a matrix inversion/decomposition at each iteration. fastglm offers several options for this decomposition, and the default is a slower but more stable choice (QR with column pivoting). If your only interest is speed, then either of the two available Cholesky-type decompositions will improve it dramatically, which is more advisable than simply changing the number of IRLS iterations. Another notable difference between fastglm and standard IRLS implementations is its careful use of half-steps to prevent divergence (in practice, IRLS can diverge in a number of cases).

The method argument of fastglm allows one to change the decomposition: option 2 gives the vanilla Cholesky decomposition and option 3 gives a slightly more stable version of it. On my computer, the timings for your provided example are:
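A minimal sketch of switching decompositions via the method argument, reusing the simulated Gamma example from the snippets above (the method codes follow the fastglm documentation: 0 = column-pivoted QR, 2 = LLT Cholesky, 3 = LDLT Cholesky):

```r
library(fastglm)

set.seed(1)
x <- matrix(rnorm(10000 * 100), ncol = 100)
y <- (exp(0.25 * x[, 1] - 0.25 * x[, 3] + 0.5 * x[, 4] -
          0.5 * x[, 5] + rnorm(10000))) + 0.1
X <- cbind(1, x)  # add an intercept column

# method = 0: column-pivoted QR (the default; slower but most stable)
system.time(fit_qr   <- fastglm(X, y, family = Gamma(link = "sqrt"), method = 0))

# method = 2: vanilla (LLT) Cholesky -- fastest, least stable
system.time(fit_llt  <- fastglm(X, y, family = Gamma(link = "sqrt"), method = 2))

# method = 3: LDLT Cholesky -- fast, slightly more stable than LLT
system.time(fit_ldlt <- fastglm(X, y, family = Gamma(link = "sqrt"), method = 3))

# On well-conditioned problems the coefficients should agree closely
max(abs(coef(fit_qr) - coef(fit_llt)))
```

The Cholesky options trade numerical stability for speed, so the QR default remains the safer choice for ill-conditioned design matrices.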
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install fastglm
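The page gives no installation steps; fastglm is distributed on CRAN, so the usual options apply (the GitHub path below assumes Jared Huling's repository, which hosts the package source):

```r
# Release version from CRAN
install.packages("fastglm")

# Or the development version from GitHub
# (assumes the jaredhuling/fastglm repository; requires devtools)
# install.packages("devtools")
devtools::install_github("jaredhuling/fastglm")
```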