CausalImpact | An R package for causal inference in time series | Time Series Database library
kandi X-RAY | CausalImpact Summary
An R package for causal inference in time series
Trending Discussions on CausalImpact
QUESTION
I have the following dataset.
...ANSWER
Answered 2020-Dec-10 at 21:39
The problem is caused by having too many identical values in your X1 array. If you change any of the 10.0 values to, say, 11.0, the problem disappears.
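The answer's point can be seen without the causal-impact library at all: a covariate column whose values are (almost) all identical is collinear with the model intercept, so the regression design matrix becomes rank-deficient. A minimal numpy sketch (the column values here are made up to match the answer's example):

```python
import numpy as np

# Hypothetical covariate column in which every value is 10.0
x1_constant = np.full(20, 10.0)
x1_varied = x1_constant.copy()
x1_varied[3] = 11.0  # perturb a single value, as the answer suggests

# Design matrix: intercept column plus the covariate
X_constant = np.column_stack([np.ones(20), x1_constant])
X_varied = np.column_stack([np.ones(20), x1_varied])

# A constant covariate is a multiple of the intercept column, so the
# design matrix loses a rank -- which is what trips up the fit.
print(np.linalg.matrix_rank(X_constant))  # 1
print(np.linalg.matrix_rank(X_varied))    # 2
```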
QUESTION
I am doing a causal impact analysis in Python. This kind of analysis helps in measuring the impact in the Treatment group post intervention when compared to a control group (A/B Testing). I read some literature from here: https://www.analytics-link.com/post/2017/11/03/causal-impact-analysis-in-r-and-now-python
Let's say my data is in following format:
The following code works perfectly:
...ANSWER
Answered 2020-Aug-06 at 22:29
It looks like your data is a dataframe, but you are providing dates in the pre_period and post_period objects, which require your data to be a time series object instead. This is explained in the original R package documentation.
To sum up: provide indices for dataframes, provide dates for time series.
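In pandas terms, the rule of thumb amounts to matching the format of pre_period/post_period to the frame's index type. A small sketch (the date range and split points are invented for illustration; the actual CausalImpact call is omitted):

```python
import numpy as np
import pandas as pd

y = np.random.randn(100).cumsum()

# Plain RangeIndex: the periods are given as integer positions.
df = pd.DataFrame({"y": y})
pre_period = [0, 69]
post_period = [70, 99]

# DatetimeIndex: the periods are given as date strings instead.
ts = pd.DataFrame({"y": y}, index=pd.date_range("2020-01-01", periods=100))
pre_period_ts = ["2020-01-01", "2020-03-10"]   # same rows as positions 0..69
post_period_ts = ["2020-03-11", "2020-04-09"]  # same rows as positions 70..99
```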
QUESTION
Using the demo as per the causal impact documentation at https://google.github.io/CausalImpact/CausalImpact.html:
I have a time series generated by:
...ANSWER
Answered 2020-Jan-14 at 10:14
This turned out to be a bug in CausalImpact version 1.2.3. Since posting this question, a new version of CausalImpact (1.2.4) has become available on CRAN. The bug can be fixed by updating:
install.packages('CausalImpact')
It turned out that the impact_plot.R file was using
QUESTION
Using CausalImpact in R
When I use max(impact$series$point.effect) it returns the max effect, like so
...ANSWER
Answered 2020-Jan-03 at 19:17
There are many ways to do that; the simplest is to convert the data into a data frame and work with it directly. I did not use additional packages (i.e. tidyverse), only base R, so as not to complicate the solution. I have also reproduced an example using the MarketMatching data:
QUESTION
I may not be using the terminology correctly here so please forgive me...
I have a case of one package 'overwriting' a function with the same name loaded by another package and thus changing the behavior (breaking) of a function.
The specific case:
...ANSWER
Answered 2019-Nov-23 at 04:47
From R 3.6.0 onwards, there is a new option called "conflicts.policy" to handle this within an established framework. For small issues like this, you can use the new arguments to library(). If you aren't yet on 3.6, the easiest solution might be to explicitly namespace CausalImpact when you need it, i.e. CausalImpact::CausalImpact. That's a mouthful, so you could do causal_impact <- CausalImpact::CausalImpact and use that alias.
QUESTION
We used a local level model to fit this data:
...ANSWER
Answered 2019-Aug-01 at 19:54
This is an interesting question and comparison between the two packages. The difference is apparently coming from the different estimation methods: maximum likelihood in Statsmodels and Bayesian MCMC in bsts. It is not surprising that a difference would show up in a case like this, since the time series is so short.
The reason I say this is that, given sigma.obs, sigma.level, and coefficients from the bsts output, for any iteration of their MCMC algorithm, I can replicate their one.step.prediction.errors and log.likelihood for that iteration by applying the Kalman filter to the local level + exog model using the parameter values from that iteration.
They do have one difference from Statsmodels, which is that they set the prior for the unobserved state based on the first observation of the dataset and the variance of the dataset, which is probably not optimal (but shouldn't be causing any major problems). Statsmodels instead uses a diffuse prior, which again shouldn't cause any major discrepancies. As I mentioned above, when I use their prior, I can match their filtering output.
So the difference must be in the estimation method, and this could have to do with the details of their MCMC algorithm and the priors that they set. You could follow up with them to see if they have any intuition about how their setup might be affecting results.
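The replication the answer describes can be sketched in a few lines of numpy: a Kalman filter for the plain local level model (exogenous regressors omitted for brevity) that returns the one-step prediction errors and Gaussian log-likelihood for a given set of variances. The variance values and series below are made up; a0/p0 mirror the answer's note that bsts initializes the state from the first observation and the sample variance.

```python
import numpy as np

def local_level_loglike(y, sigma_obs, sigma_level, a0, p0):
    """Kalman filter for the local level model
         y[t]  = mu[t] + eps[t],    eps ~ N(0, sigma_obs^2)
         mu[t] = mu[t-1] + eta[t],  eta ~ N(0, sigma_level^2)
       a0/p0 are the prior mean/variance of the initial state.
       Returns the one-step prediction errors and the log-likelihood."""
    h, q = sigma_obs**2, sigma_level**2
    a, p = a0, p0
    errs, loglik = [], 0.0
    for yt in y:
        f = p + h                                        # prediction-error variance
        v = yt - a                                       # one-step prediction error
        loglik += -0.5 * (np.log(2 * np.pi * f) + v**2 / f)
        k = p / f                                        # Kalman gain
        a = a + k * v                                    # filtered state mean
        p = p * (1.0 - k) + q                            # next predicted state variance
        errs.append(v)
    return np.array(errs), loglik

# Toy data and made-up parameter values, standing in for one MCMC draw
y = np.array([1.0, 1.2, 0.9, 1.1, 1.3])
errors, loglik = local_level_loglike(y, sigma_obs=0.5, sigma_level=0.1,
                                     a0=y[0], p0=y.var())
```

With a0 set to the first observation, the first prediction error is exactly zero, matching the initialization described in the answer.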
Discussion of residuals and loglikelihood computation
We decided then to compute the loglikelihood of the residuals while ignoring the level component (as its variance is relatively low) to see what would be the most appropriate value for irregular, like so:
QUESTION
I'm unable to install Boom 0.9 on Ubuntu 18.04, Boom 0.8 installs without issue. However, we need 0.9 as a pre-req for CausalImpact.
...ANSWER
Answered 2019-Jun-18 at 21:31
I think someone else in your org may have already contacted me about this. We are mid-flight debugging.
Boom is a large package and can time out when building. The first thing to check is that you are able to build with multiple cores (i.e. you can pass the -j x flag to make).
As a diagnostic you can try building the package without involving R. Clone https://github.com/steve-the-bayesian/BOOM and build with either bazel (up to date) or make (not too far out of date). If this build succeeds then compare flags passed to the R build vs the native build.
To better understand where R is failing, download the Boom package from CRAN (https://cran.r-project.org/src/contrib/Boom_0.9.1.tar.gz) and try the following from the command line: R CMD check Boom_0.9.1.tar.gz
This will probably fail, but it will generate a directory called Boom.Rcheck, which contains a file 00install.out containing all the compiler output.
It is suspicious that the build above fails on the poisson_mixture_approximation_table, which is a large file that might be overflowing your stack. Or that might be a coincidence.
QUESTION
I'm currently trying to build a large Docker image and run a shiny application off of it so I can eventually deploy it to a Unix server. The image builds successfully; however, when I go to run the image, the app runs and totally ignores the specified port.
What's even more strange is I first built a small test app, and the instructions from this SO post (Shiny app docker container not loading in browser) worked. I copied the same style I used in the test app into the other Shiny application and now it is not working.
The structure of my Docker image follows a similar structure to what ShinyProxy used on their Github page: https://github.com/openanalytics/shinyproxy-template:
...ANSWER
Answered 2019-May-15 at 14:10
Port 3838 is the default port for Shiny Server, but runApp() chooses an available port. It appears R is not picking up your Rprofile.site, so I would just specify the port in your call to runApp():
QUESTION
After fitting a local level model using UnobservedComponents from statsmodels, we are trying to find ways to simulate new time series with the results. Something like:
...ANSWER
Answered 2018-Aug-28 at 10:06
@Josef is correct and you did the right thing with:
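The snippet the answer refers to is not shown above, but the underlying idea — drawing new series from the fitted local level equations — can be sketched with numpy alone. The parameter values below are made up stand-ins for a model's fitted standard deviations and initial level:

```python
import numpy as np

def simulate_local_level(n, sigma_obs, sigma_level, mu0, seed=None):
    """Draw one series of length n from a local level model:
       a Gaussian random walk level plus Gaussian observation noise."""
    rng = np.random.default_rng(seed)
    level = mu0 + np.cumsum(rng.normal(0.0, sigma_level, n))
    return level + rng.normal(0.0, sigma_obs, n)

# Hypothetical fitted values, e.g. sigma.obs = 1.0, sigma.level = 0.1
sim = simulate_local_level(100, sigma_obs=1.0, sigma_level=0.1, mu0=5.0, seed=42)
```

Passing a seed makes draws reproducible, which is convenient when comparing simulated paths across parameter settings.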
QUESTION
In the CausalImpact package, the supplied covariates are independently selected with some prior probability M/J, where M is the expected model size and J is the number of covariates. However, on page 11 of the paper, they say they get the values by "asking about the expected model size M." I checked the documentation for CausalImpact but was unable to find any more information. Where is this done in the package? Is there a parameter I can set in a function call to specify my desired M?
...ANSWER
Answered 2018-May-28 at 05:29
You are right, this is not directly possible through CausalImpact itself, but it can be done. CausalImpact uses bsts behind the scenes, and that package allows you to set the parameter. So you have to define your model using bsts first, set the parameter, and then provide the model to your CausalImpact call (modified example from the CausalImpact manual):
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.