
URT | Fast Unit Root Tests and OLS regression in C++ with wrappers for R and Python | GPU library

by olmallet81 | C++ Version: Current | License: MIT


kandi X-RAY | URT Summary

URT is a C++ library typically used in Hardware, GPU applications. URT has no reported bugs or vulnerabilities, carries a permissive license, and has low support. You can download it from GitHub.
All URT classes and functions live in the namespace urt. Because URT can be built against three different linear algebra libraries, convenient typedefs Vector and Matrix are defined for manipulating arrays; the backend-specific definitions are shown in the Key Features section below.

Support

  • URT has a low active ecosystem.
  • It has 63 stars, 12 forks, and 3 watchers.
  • It had no major release in the last 12 months.
  • There are 8 open issues, none of which have been closed, and no pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of URT is current.

Quality

  • URT has no bugs reported.

Security

  • URT has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

  • URT is licensed under the MIT License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • URT releases are not available. You will need to build from source code and install.
  • Installation instructions are not available. Examples and code snippets are available.

URT Key Features


Example

$ cd URT/build
$ make USE_OPENMP=1 USE_ARMA=1 USE_MKL=1
$ cd ../../

with Armadillo

namespace urt {
   template <typename T>
   using Matrix = arma::Mat<T>;
   template <typename T>
   using Vector = arma::Col<T>;
}

with Blaze

namespace urt {
   template <typename T>
   using Matrix = blaze::DynamicMatrix<T, blaze::columnMajor>;
   template <typename T>
   using CMatrix = blaze::CustomMatrix<T, blaze::unaligned, blaze::unpadded, blaze::columnMajor>;
   template <typename T>
   using Vector = blaze::DynamicVector<T>;
   template <typename T>
   using CVector = blaze::CustomVector<T, blaze::unaligned, blaze::unpadded>;
}

with Eigen

namespace urt {
   template <typename T>
   using Matrix = Eigen::Matrix<T, Eigen::Dynamic, Eigen::Dynamic>;
   template <typename T>
   using Vector = Eigen::Matrix<T, Eigen::Dynamic, 1>;
}

Constructor

OLS<T>::OLS(const Vector<T>& y, const Matrix<T>& x, bool stats = false)

Code example:

// ./URT/examples/example1.cpp
#include "../include/URT.hpp"

int main()
{
   int nrows = 1000;
   int ncols = 10;

   // generating random arrays
   urt::Vector<double> y = urt::wiener_process<double>(nrows);
   urt::Matrix<double> x = urt::wiener_process<double>(nrows, ncols);

   // adding intercept to matrix of independent variables
   urt::add_intercept(x);

   // writing data to CSV files
   urt::WriteToCSV("./URT/data/y.csv", y);
   urt::WriteToCSV("./URT/data/x.csv", x);

   // running OLS regression
   urt::OLS<double> fit(y, x, true);

   // outputting regression results
   fit.show();
}   

Output:

OLS Regression Results
======================

Coefficients
--------------------------------------------------
            Estimate  Std Error   t value  P(>|t|)
Intercept    4.51898     0.4968     9.097    0.000
       x1    0.10240     0.0304     3.366    0.001
       x2   -0.14263     0.0205    -6.959    0.000
       x3   -0.04654     0.0232    -2.003    0.046
       x4    0.00560     0.0367     0.153    0.879
       x5   -0.27260     0.0306    -8.895    0.000
       x6    0.41597     0.0270     15.42    0.000
       x7   -0.02540     0.0247    -1.027    0.305
       x8    0.18289     0.0217     8.433    0.000
       x9    0.07821     0.0303     2.581    0.010
      x10    0.15704     0.0243     6.466    0.000

Residuals
----------------------------------------
     Min      1Q  Median      3Q     Max
  -12.04   -2.19   -0.06    2.40    9.45

Dimensions
----------------------------------------
Number of observations              1000
Number of degrees of freedom         989
Number of variables                   10

Analysis
----------------------------------------
Residual mean                    1.3e-14
Residual standard error            3.594
Multiple R-squared               0.79603
Adjusted R-squared               0.79397
F-statistic (p-value)             385.98 (0.00)
Durbin-Watson statistic            0.106
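The report above can be reproduced in spirit with a few lines of linear algebra. This is not URT's implementation, just a minimal numpy sketch of how the headline quantities (coefficients, R-squared, Durbin-Watson) are computed from a design matrix that, like the example, already carries an intercept column:

```python
import numpy as np

def ols_summary(y, X):
    """Minimal OLS: coefficients, R^2 and Durbin-Watson statistic.

    X is expected to already contain an intercept column, as
    urt::add_intercept provides in the example above.
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # least-squares fit
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1.0 - ss_res / ss_tot
    dw = (np.diff(resid) ** 2).sum() / ss_res      # Durbin-Watson statistic
    return beta, r2, dw

# toy data with known coefficients (values are illustrative only)
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([4.5, 0.1, -0.14]) + rng.normal(size=n)

beta, r2, dw = ols_summary(y, X)
print(beta.round(2), round(r2, 3), round(dw, 2))
```

For independent residuals the Durbin-Watson statistic sits near 2; the very low 0.106 in the URT output above signals strongly autocorrelated residuals, as expected when regressing one random walk on others.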

Constructor for computing ADF test for a given number of lags

ADF(const Vector<T>& data, int lags, const std::string& trend = "c", bool regression = false)

Constructor for computing ADF test with lag length optimization

ADF(const Vector<T>& data, const std::string& method, const std::string& trend = "c", bool regression = false)

Code example:

// ./URT/examples/example2.cpp
#include "../include/URT.hpp"

int main()
{
   int nobs = 1000;

   // generating non-stationary random data
   urt::Vector<double> data = urt::wiener_process<double>(nobs);

   // initializing ADF test with 10 lags and constant trend
   urt::ADF<double> test(data, 10, "ct");

   // outputting test results
   test.show();

   // switching to test with lag length optimization and p-value computation by bootstrap with 10000 iterations
   test.method = "AIC";
   test.bootstrap = true;
   test.niter = 10000;

   // outputting test results
   test.show();  
}

Output:

Augmented Dickey-Fuller Test Results
====================================
Statistic                     -2.949
P-value                        0.152
Lags                              10
Trend                 constant trend
------------------------------------

Test Hypothesis
------------------------------------
H0: The process contains a unit root
H1: The process is weakly stationary

Critical Values
---------------
 1%      -3.956
 5%      -3.405
10%      -3.121

Test Conclusion
---------------
We cannot reject H0
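The bootstrap switch in the example above replaces the asymptotic p-value with a simulated one. As a rough illustration of the idea (not URT's algorithm, which resamples the fitted residuals), here is a toy Monte Carlo sketch with a simplified Dickey-Fuller t-statistic: simulate the statistic under H0 (a pure random walk) many times, then take the left-tail fraction at or below the observed value:

```python
import numpy as np

def df_tstat(y):
    """t-statistic of rho in: diff(y)_t = rho * y_{t-1} + e_t (no constant, no lags)."""
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    s2 = (resid @ resid) / (len(dy) - 1)
    se = np.sqrt(s2 / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(42)
data = np.cumsum(rng.normal(size=500))   # unit-root series, like urt::wiener_process
stat = df_tstat(data)

# Monte Carlo null distribution: regenerate random walks under H0
null = [df_tstat(np.cumsum(rng.normal(size=500))) for _ in range(2000)]
pval = np.mean(np.array(null) <= stat)   # left-tailed test
print(round(stat, 3), round(pval, 3))
```

The test is left-tailed because large negative statistics are evidence against the unit root, which is also why the critical values in the tables above are negative.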

Constructor for computing Dickey-Fuller GLS test for a given number of lags

DFGLS(const Vector<T>& data, int lags, const std::string& trend = "c", bool regression = false)

Constructor for computing Dickey-Fuller GLS test with lag length optimization

DFGLS(const Vector<T>& data, const std::string& method, const std::string& trend = "c", bool regression = false)

Code example:

// ./URT/examples/example3.cpp
#include "../include/URT.hpp"

int main()
{
   int nobs = 1000;

   // generating non-stationary random data
   urt::Vector<double> data = urt::wiener_process<double>(nobs);

   // initializing DFGLS test with lag length optimization using BIC and constant term
   urt::DFGLS<double> test(data, "BIC");

   // outputting test results
   test.show();

   // switching to test with 10 lags and constant trend
   test.trend = "ct";
   test.lags = 10;

   // outputting test results
   test.show();
}

Output:

    Dickey-Fuller GLS Test Results
====================================
Statistic                     -1.655
P-value                        0.098
Optimal Lags                       0
Criterion                        BIC
Trend                       constant
------------------------------------

Test Hypothesis
------------------------------------
H0: The process contains a unit root
H1: The process is weakly stationary

Critical Values
---------------
 1%      -2.586
 5%      -1.963
10%      -1.642

Test Conclusion
---------------
We can reject H0 at the 10% significance level

Constructor for computing Phillips-Perron test for a given number of lags

PP(const Vector<T>& data, int lags, const std::string& trend = "c", const std::string& test_type = "tau", bool regression = false)

Constructor for computing Phillips-Perron test with a default number of lags (long or short)

PP(const Vector<T>& data, const std::string& lags_type, const std::string& trend = "c", const std::string& test_type = "tau", bool regression = false)

Code example:

// ./URT/examples/example4.cpp
#include "../include/URT.hpp"

int main()
{
   int nobs = 1000;

   // generating non-stationary random data
   urt::Vector<float> data = urt::wiener_process<float>(nobs);

   // initializing Phillips-Perron normalized test with lags of type long and constant term
   urt::PP<float> test(data, "long", "c", "rho");

   // outputting test results
   test.show();

   // switching to t-statistic test 
   test.test_type = "tau";

   // outputting test results
   test.show();  
}

Output:

Phillips-Perron Test Results (Z-rho)
====================================
Statistic                     -6.077
P-value                        0.371
Lags                              21
Trend                       constant
------------------------------------

Test Hypothesis
------------------------------------
H0: The process contains a unit root
H1: The process is weakly stationary

Critical Values
---------------
 1%     -20.548
 5%     -14.058
10%     -11.225

Test Conclusion
---------------
We cannot reject H0

Constructor for computing KPSS test for a given number of lags

KPSS(const Vector<T>& data, int lags, const std::string& trend = "c")

Constructor for computing KPSS test with a default number of lags (long or short)

KPSS(const Vector<T>& data, const std::string& lags_type, const std::string& trend = "c")

Code example:

// ./URT/examples/example5.cpp
#include "../include/URT.hpp"

int main()
{
   int nobs = 1000;

   // generating stationary random data
   urt::Vector<float> data = urt::gaussian_noise<float>(nobs);

   // initializing KPSS test with lags of type short and constant trend
   urt::KPSS<float> test(data, "short", "ct");

   // outputting test results
   test.show();

   // switching to test with 5 lags and constant term
   test.lags = 5;
   test.trend = "c";

   // outputting test results
   test.show();  
}

Output:

    KPSS Test Results
====================================
Statistic                      0.029
P-value                        0.900
Lags                               7
Trend                 constant trend
------------------------------------

Test Hypothesis
------------------------------------
H0: The process is weakly stationary
H1: The process contains a unit root

Critical Values
---------------
 1%       0.213
 5%       0.147
10%       0.119

Test Conclusion
---------------
We cannot reject H0

CyURT: URT for Python

$ make USE_BLAZE=1

RcppURT: URT for R

$ R CMD build RcppURT

C++

#include "../include/URT.hpp"

#ifndef USE_ARMA
  #include <armadillo>
#endif

// define USE_FLOAT when compiling to switch to single precision
#ifdef USE_FLOAT
  using T = float;
#else
  using T = double;
#endif

int main()
{
   int niter = 0;

   arma::wall_clock timer;

   std::vector<int> sizes = {100,150,200,250,300,350,400,450,500,1000,1500,2000,2500,3000,3500,4000,4500,5000};

   std::cout << std::fixed << std::setprecision(1);

   for (std::size_t i = 0; i < sizes.size(); ++i) {

      urt::Vector<T> data = urt::wiener_process<T>(sizes[i]);

      niter = (sizes[i] < 1000) ? 10000 : 1000;

      timer.tic();
      for (int k = 0; k < niter; ++k) {
         urt::ADF<T> test(data, "AIC");
         test.statistic();
      }

      auto duration = timer.toc();

      std::cout << std::setw(8) << sizes[i];
      std::cout << std::setw(8) << duration << "\n";
   }
}

Cython wrapper

import numpy as np
import CyURT as urt
from timeit import default_timer as timer 

if __name__ == "__main__":

    sizes = [100,150,200,250,300,350,400,450,500,1000,1500,2000,2500,3000,3500,4000,4500,5000]

    for i in range(len(sizes)):

        # generating Wiener process
        data = np.cumsum(np.random.normal(size=sizes[i]))
        # uncomment this line and comment the one above to switch to single precision
        #data = np.cumsum(np.random.normal(size=sizes[i])).astype(np.float32)

        if sizes[i] < 1000: niter = 10000
        else: niter = 1000
        
        start = timer()
        for k in range(niter):
            test = urt.ADF_d(data, method='AIC')
            # uncomment this line and comment the one above to switch to single precision
            #test = urt.ADF_f(data, method='AIC')
            test.statistic()
        end = timer()

        print('{:8d}'.format(sizes[i]), '{:8.1f}'.format(end - start))

Rcpp wrapper

suppressMessages(library(RcppURT))

run <- function()
{
  sizes = c(100,150,200,250,300,350,400,450,500,1000,1500,2000,2500,3000,3500,4000,4500,5000)

  for (i in 1:length(sizes)) {

    # generating Wiener process
    data = cumsum(rnorm(n=sizes[i]))

    if (sizes[i] < 1000) niter = 10000
    else niter = 1000
        
    # with R6 classes
    start1 = Sys.time()
    for (k in 1:niter) {
        test = ADF_d$new(data, method='AIC')
        # uncomment this line and comment the one above to switch to single precision
        #test = ADF_f$new(data, method='AIC')
        test$statistic()
    }
    end1 = Sys.time()

    # with Rcpp functions
    start2 = Sys.time()
    for (k in 1:niter) {
        test = ADFtest_d(data, method='AIC')
        # uncomment this line and comment the one above to switch to single precision
        #test = ADFtest_f(data, method='AIC')
    }
    end2 = Sys.time()

    cat(sprintf("%8d", sizes[i]))
    cat(sprintf("%8.1f", end1 - start1))
    cat(sprintf("%8.1f\n", end2 - start2))
  }
}

Arc length reparameterization of a parametric curve represented as an expression in R

f <- eval(parse(text=paste0("function(t) c(t*0+",paste(inputCurve,collapse=",t*0+"),")")))

Parse <script type="text/javascript"> twitter python

for script in b.find_all('script'):
    if 'window.__INITIAL_STATE__=' not in script.contents[0]:
        continue

    wis = script.contents[0].split('window.__INITIAL_STATE__=')
    data = json.loads(wis[1].split(';window.__META_DATA__')[0])
    print(data["settings"]["remote"]["settings"]["screen_name"])
    break
b = BeautifulSoup(html, 'html.parser')

for script in b.find_all('script'):
    if 'window.__INITIAL_STATE__=' not in script.contents[0]:
        continue

    wis = script.contents[0].split('window.__INITIAL_STATE__=')
    data = json.loads(wis[1].split(',"devices"')[0])
    print(data['featureSwitch']['config']['screen_name'])
    break
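The trick in the answer above is: grab the script tag's text, split off the window.__INITIAL_STATE__= assignment, trim the JavaScript tail, and json.loads what remains. A self-contained, stdlib-only sketch of that technique (the HTML, key names, and screen name here are made up for illustration):

```python
import json
from html.parser import HTMLParser  # stdlib stand-in for BeautifulSoup

# illustrative page: real pages embed a much larger state object
html = """<html><head>
<script>window.__INITIAL_STATE__={"settings":{"screen_name":"demo"}};window.__META_DATA__={}</script>
</head></html>"""

class ScriptGrabber(HTMLParser):
    """Collects the text content of every <script> element."""
    def __init__(self):
        super().__init__()
        self.in_script = False
        self.scripts = []
    def handle_starttag(self, tag, attrs):
        self.in_script = (tag == "script")
    def handle_endtag(self, tag):
        if tag == "script":
            self.in_script = False
    def handle_data(self, data):
        if self.in_script:
            self.scripts.append(data)

p = ScriptGrabber()
p.feed(html)

for script in p.scripts:
    if 'window.__INITIAL_STATE__=' not in script:
        continue
    # same technique as above: split off the assignment, trim the tail, parse
    wis = script.split('window.__INITIAL_STATE__=')
    state = json.loads(wis[1].split(';window.__META_DATA__')[0])
    print(state["settings"]["screen_name"])
    break
```

The fragile part is the tail delimiter (`;window.__META_DATA__` vs `,"devices"` in the two variants above): it must match whatever immediately follows the JSON object on the page being scraped.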

FFTW vs fourn subroutine result does not match

...
urt(:) = 0  !<--- clear the entire urt(:) by zero...

do i=1,n
    urt(2*i-1) = real(inp(i))
    urt(2*i)   = aimag(inp(i))
enddo

! forward
!call fourn(urt, [n], 1, 1)  !<--- uses exp(+i ...) for "forward" transform in NR
call fourn(urt, [n], 1, -1)  !<--- uses exp(-i ...) for "backward" transform in NR
...
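The fix above hinges on the sign convention in the transform's exponent: Numerical Recipes' fourn uses exp(+i...) for isign = +1, while FFTW's FFTW_FORWARD (like numpy's fft) uses exp(-i...). A quick numpy check of the identity used to bridge the two conventions: an exp(+i...) transform equals the conjugate of the exp(-i...) transform of the conjugated input, which is also n times the inverse FFT:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=8) + 1j * rng.normal(size=8)

# numpy's fft uses the exp(-i...) convention (same as FFTW_FORWARD)
minus = np.fft.fft(x)

# an exp(+i...) "forward" transform (the NR fourn isign=+1 convention),
# obtained two equivalent ways:
plus_a = np.conj(np.fft.fft(np.conj(x)))
plus_b = len(x) * np.fft.ifft(x)

print(np.allclose(plus_a, plus_b))   # the two constructions agree
print(np.allclose(minus, plus_a))    # differs from exp(-i...) in general
```

So passing isign = -1 to fourn, as in the corrected call above, reproduces what FFTW and numpy call the forward transform.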

How to search value into a database after concatenate two attributes in Laravel

<link href="https://cdn.jsdelivr.net/npm/select2@4.0.13/dist/css/select2.min.css" rel="stylesheet" />
<script src="https://cdn.jsdelivr.net/npm/select2@4.0.13/dist/js/select2.min.js"></script>
<select name="organizations" id="myorgselect" class="form-control" multiple="multiple">
    @foreach($organizations as $organization)
         <option value="{{$organization->organization_id}}">
              {{$organization->organization_name}}
         </option>
    @endforeach
</select>
$("#myorgselect").select2({
tags: true
})

Regular Expression to validate a string containing '=' and '~' as end character

^(?!~)(?:(?:^|~)[^=~]+=[^=~]*)+$
^            Match beginning-of-input, i.e. matching must start at beginning
(?!~)        Input cannot start with `~`
(?:          Repeat 1 or more times:
  (?:^|~)      Match beginning of input or match '~', i.e. match nothing on
               first repetition, and match `~` on each subsequent repetition
  [^=~]+       Match KEY
  =            Match '='
  [^=~]*       Match VALUE (may be blank)
)+
$           Match end-of-input, i.e. matching must cover all input
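A quick Python sanity check of the pattern explained above (the sample strings are illustrative):

```python
import re

# the KEY=VALUE[~KEY=VALUE...] pattern from the annotation above
PAT = re.compile(r'^(?!~)(?:(?:^|~)[^=~]+=[^=~]*)+$')

cases = {
    "key=value":       True,   # single pair
    "k1=v1~k2=~k3=v3": True,   # blank VALUE is allowed
    "~k1=v1":          False,  # must not start with '~'
    "k1=v1~":          False,  # no dangling '~' at the end
    "k1==v1":          False,  # '=' not allowed inside VALUE
    "k1":              False,  # missing '='
}

for s, expected in cases.items():
    assert bool(PAT.match(s)) == expected, s
print("all cases pass")
```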

Logstash COMMONAPACHELOG pattern parsing problem

"%{IPORHOST:clientip} %{HTTPDUSER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] \[%{NOTSPACE:referrer}\] \"(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})\" %{NUMBER:response} (?:%{NUMBER:bytes}|-)"
{
"@version":"1",
"auth":"-",
"host":"******",
"message":"111.22.333.444 - - [08/Jan/2020:11:50:15 +0100] [https://awdasfe.asfeaf.cas:111] \"POST /VFQ3P/asfiheasfhe/v2/safiehjafe/check HTTP/1.1\" 204 0 \"-\" \"-\" (rt=0.555 urt=0.555 uct=0.122 uht=0.11)\r",
"timestamp":"08/Jan/2020:11:50:15 +0100",
"httpversion":"1.1",
"@timestamp":"2020-01-09T13:32:27.442Z",
"verb":"POST",
"response":"204",
"clientip":"111.22.333.444",
"referrer":"https://awdasfe.asfeaf.cas:111",
"ident":"-",
"request":"/VFQ3P/asfiheasfhe/v2/safiehjafe/check",
"bytes":"0"
}

how to create two independent drill down plot using Highcharter?

cate<-c("Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","drinks","drinks","groceries","groceries","groceries","dairy","dairy","dairy","dairy","groceries","technology","technology","technology","technology","technology","technology","technology","technology","groceries","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","drinks","drinks","groceries","groceries","groceries","dairy","dairy","dairy","dairy","groceries","technology","technology","technology","technology","technology","technology","technology","technology","groceries","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","drinks","drinks","groceries","groceries","groceries","dairy","dairy","dairy","dairy","groceries","technology","technology","technology","technology","technology","technology","technology","technology","groceries","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","Furniture","drinks","drinks","groceries","groceries","groceries","dairy","dairy","dairy","dairy","groceries","technology","technology","technology","technology","technology","technology","technology","technology","groceries")
Sub_Product<-c("nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","nov","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","oct","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","sept","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug","aug")
Main_Product<-c("outdoor","indoor","outdoor","indoor","indoor","outdoor","indoor","indoor","indoor","indoor","outdoor","outdoor","n&o","n&o","indoor","indoor","indoor","indoor","outdoor","indoor","outdoor","outdoor","outdoor","indoor","outdoor","indoor","outdoor","outdoor","indoor","outdoor","n&o","outdoor","indoor","outdoor","indoor","indoor","outdoor","indoor","indoor","indoor","indoor","outdoor","outdoor","n&o","n&o","indoor","indoor","indoor","indoor","outdoor","indoor","outdoor","outdoor","outdoor","indoor","outdoor","indoor","outdoor","outdoor","indoor","outdoor","n&o","outdoor","indoor","outdoor","indoor","indoor","outdoor","indoor","indoor","indoor","indoor","outdoor","outdoor","n&o","n&o","indoor","indoor","indoor","indoor","outdoor","indoor","outdoor","outdoor","outdoor","indoor","outdoor","indoor","outdoor","outdoor","indoor","outdoor","n&o","outdoor","indoor","outdoor","indoor","indoor","outdoor","indoor","indoor","indoor","indoor","outdoor","outdoor","n&o","n&o","indoor","indoor","indoor","indoor","outdoor","indoor","outdoor","outdoor","outdoor","indoor","outdoor","indoor","outdoor","outdoor","indoor","outdoor","n&o")
Product<-c("abc","def","ghh","hig","lmn","opk","cba","dfw","ewr","csad","wer","casd","were","csad","rt","hgf","qeq","hgf","qer","qer2","erqerq","qdq","dwqer","qerqe","erqererq","e2342","ererq","qewrw","qrerqr","qreqw","qerqe","abc","def","ghh","hig","lmn","opk","cba","dfw","ewr","csad","wer","casd","were","csad","rt","hgf","qeq","hgf","qer","qer2","erqerq","qdq","dwqer","qerqe","erqererq","e2342","ererq","qewrw","qrerqr","qreqw","qerqe","abc","def","ghh","hig","lmn","opk","cba","dfw","ewr","csad","wer","casd","were","csad","rt","hgf","qeq","hgf","qer","qer2","erqerq","qdq","dwqer","qerqe","erqererq","e2342","ererq","qewrw","qrerqr","qreqw","qerqe","abc","def","ghh","hig","lmn","opk","cba","dfw","ewr","csad","wer","casd","were","csad","rt","hgf","qeq","hgf","qer","qer2","erqerq","qdq","dwqer","qerqe","erqererq","e2342","ererq","qewrw","qrerqr","qreqw","qerqe")
sum1<-c(43,90,135,125,87,4,23,120,4,127,70,68,129,63,131,90,67,110,90,119,81,68,15,29,49,11,76,82,65,83,25,43,90,135,125,87,4,23,120,4,127,70,68,129,63,131,90,67,110,90,119,81,68,15,29,49,11,76,82,65,83,25,43,90,135,125,87,4,23,120,4,127,70,68,129,63,131,90,67,110,90,119,81,68,15,29,49,11,76,82,65,83,25,43,90,135,125,87,4,23,120,4,127,70,68,129,63,131,90,67,110,90,119,81,68,15,29,49,11,76,82,65,83,25)
sum2<-c(14567,11111,3287,3563,9633,11162,3044,8437,4382,11250,3932,5587,4175,9708,4970,8388,10673,4301,12475,13494,12519,5632,3898,12472,4381,14085,10041,4276,12953,11143,12905,14567,11111,3287,3563,9633,11162,3044,8437,4382,11250,3932,5587,4175,9708,4970,8388,10673,4301,12475,13494,12519,5632,3898,12472,4381,14085,10041,4276,12953,11143,12905,14567,11111,3287,3563,9633,11162,3044,8437,4382,11250,3932,5587,4175,9708,4970,8388,10673,4301,12475,13494,12519,5632,3898,12472,4381,14085,10041,4276,12953,11143,12905,14567,11111,3287,3563,9633,11162,3044,8437,4382,11250,3932,5587,4175,9708,4970,8388,10673,4301,12475,13494,12519,5632,3898,12472,4381,14085,10041,4276,12953,11143,12905)
avg1<-c(48,132,115,83,84,77,111,102,113,96,136,97,89,97,66,18,123,29,37,118,66,87,52,11,97,25,144,21,40,6,36,48,132,115,83,84,77,111,102,113,96,136,97,89,97,66,18,123,29,37,118,66,87,52,11,97,25,144,21,40,6,36,48,132,115,83,84,77,111,102,113,96,136,97,89,97,66,18,123,29,37,118,66,87,52,11,97,25,144,21,40,6,36,48,132,115,83,84,77,111,102,113,96,136,97,89,97,66,18,123,29,37,118,66,87,52,11,97,25,144,21,40,6,36)
avg2<-c(6775,3142,3916,12828,9889,4025,11374,10594,4263,8871,11229,4787,7478,5316,5299,14068,3981,12993,12435,13845,4320,7472,14285,10221,11883,7783,13980,11426,13120,8632,14540,6775,3142,3916,12828,9889,4025,11374,10594,4263,8871,11229,4787,7478,5316,5299,14068,3981,12993,12435,13845,4320,7472,14285,10221,11883,7783,13980,11426,13120,8632,14540,6775,3142,3916,12828,9889,4025,11374,10594,4263,8871,11229,4787,7478,5316,5299,14068,3981,12993,12435,13845,4320,7472,14285,10221,11883,7783,13980,11426,13120,8632,14540,6775,3142,3916,12828,9889,4025,11374,10594,4263,8871,11229,4787,7478,5316,5299,14068,3981,12993,12435,13845,4320,7472,14285,10221,11883,7783,13980,11426,13120,8632,14540)

dat<-data.frame(cate,Sub_Product,Main_Product,Product,sum1,sum2,avg1,avg2, stringsAsFactors = FALSE)

ACClist<-c("sum1","sum2")
AVGlist<-c("avg1","avg2")

library (shinyjs)
library (tidyr)
library (data.table)
library (highcharter)
library (dplyr)
library (shinydashboard)
library (shiny)
library (shinyWidgets)

header <-dashboardHeader()
body <- dashboardBody(fluidRow(
  column(width = 12,
         radioGroupButtons(
           inputId = "l1PAD", label = NULL,size = "lg",
           choices = unique(dat$cate), justified = TRUE,
           individual = TRUE)
  )),
  fluidRow(
    box(
      title = "Summation of dataset", highchartOutput("accuPA",height = "300px")
    ),
    box(
      title = "Mean of dataset", highchartOutput("avgPA",height = "300px")
    )
  ))
sidebar <- dashboardSidebar(collapsed = T,
                            radioGroupButtons(
                              "accuselectPA","sum",choices=ACClist,
                              direction = "vertical",width = "100%",justified = TRUE
                            ),
                            br(),
                            radioGroupButtons(
                              "avgselectPA","Average ",choices=AVGlist,
                              direction = "vertical",width = "100%",justified = TRUE
                            ))
ui <- dashboardPage(header, sidebar, body)
server <- function(input, output, session) {

    #data set
    dat_filtered <- reactive({

      dat[dat$cate == input$l1PAD,]

    })

    #Acc/sum graph
    output$accuPA<-renderHighchart({

      #LEVEL 1
      datSum <- dat_filtered() %>%
        group_by(Main_Product) %>%
        summarize(Quantity = mean(get(input$accuselectPA)))

      datSum <- arrange(datSum,desc(Quantity))
      Lvl1dfStatus <- tibble(name = datSum$Main_Product, y = datSum$Quantity, drilldown = tolower(name))

      #LEVEL 2
      Level_2_Drilldowns <- lapply(unique(dat_filtered()$Main_Product), function(x_level) {

        datSum2 <- dat_filtered()[dat_filtered()$Main_Product == x_level,]

        datSum2 <- datSum2 %>%
          group_by(Product) %>%
          summarize(Quantity = mean(get(input$accuselectPA)))
        datSum2 <- arrange(datSum2,desc(Quantity))

        Lvl2dfStatus <- tibble(name = datSum2$Product,y = datSum2$Quantity, drilldown = tolower(paste(x_level, name, sep = "_")))
        list(id = tolower(x_level), type = "column", data = list_parse(Lvl2dfStatus))
      })

      #LEVEL 3
      Level_3_Drilldowns <- lapply(unique(dat_filtered()$Main_Product), function(x_level) {

        datSum2 <- dat_filtered()[dat_filtered()$Main_Product == x_level,]

        lapply(unique(datSum2$Product), function(y_level) {

          datSum3 <- datSum2[datSum2$Product == y_level,]

          datSum3 <- datSum3 %>%
            group_by(Sub_Product) %>%
            summarize(Quantity = mean(get(input$accuselectPA)))
          datSum3 <- arrange(datSum3,desc(Quantity))

          Lvl3dfStatus <- tibble(name = datSum3$Sub_Product,y = datSum3$Quantity)
          list(id = tolower(paste(x_level, y_level, sep = "_")), type = "column", data = list_parse2(Lvl3dfStatus))
        })
      }) %>% unlist(recursive = FALSE)

      highchart() %>%
        hc_xAxis(type = "category") %>%
        hc_add_series(Lvl1dfStatus, "column", hcaes(x = name, y = y), color = "#E4551F") %>%
        hc_plotOptions(column = list(stacking = "normal")) %>%
        hc_drilldown(
          allowPointDrilldown = TRUE,
          series = c(Level_2_Drilldowns, Level_3_Drilldowns)
        )
    })

    #Avg/Avg graph
    output$avgPA<-renderHighchart({

      #LEVEL 1
      datSum <- dat_filtered() %>%
        group_by(Main_Product) %>%
        summarize(Quantity = mean(get(input$avgselectPA)))

      datSum <- arrange(datSum,desc(Quantity))
      Lvl1dfStatus <- tibble(name = datSum$Main_Product, y = datSum$Quantity, drilldown = tolower(name))

      #LEVEL 2
      Level_2_Drilldowns <- lapply(unique(dat_filtered()$Main_Product), function(x_level) {

        datSum2 <- dat_filtered()[dat_filtered()$Main_Product == x_level,]

        datSum2 <- datSum2 %>%
          group_by(Product) %>%
          summarize(Quantity = mean(get(input$avgselectPA)))
        datSum2 <- arrange(datSum2,desc(Quantity))

        Lvl2dfStatus <- tibble(name = datSum2$Product,y = datSum2$Quantity, drilldown = tolower(paste(x_level, name, sep = "_")))
        list(id = tolower(x_level), type = "column", data = list_parse(Lvl2dfStatus))
      })

      #LEVEL 3
      Level_3_Drilldowns <- lapply(unique(dat_filtered()$Main_Product), function(x_level) {

        datSum2 <- dat_filtered()[dat_filtered()$Main_Product == x_level,]

        lapply(unique(datSum2$Product), function(y_level) {

          datSum3 <- datSum2[datSum2$Product == y_level,]

          datSum3 <- datSum3 %>%
            group_by(Sub_Product) %>%
            summarize(Quantity = mean(get(input$avgselectPA)))
          datSum3 <- arrange(datSum3,desc(Quantity))

          Lvl3dfStatus <- tibble(name = datSum3$Sub_Product,y = datSum3$Quantity)
          list(id = tolower(paste(x_level, y_level, sep = "_")), type = "column", data = list_parse2(Lvl3dfStatus))
        })
      }) %>% unlist(recursive = FALSE)

      highchart() %>%
        hc_xAxis(type = "category") %>%
        hc_add_series(Lvl1dfStatus, "column", hcaes(x = name, y = y), color = "#E4551F") %>%
        hc_plotOptions(column = list(stacking = "normal")) %>%
        hc_drilldown(
          allowPointDrilldown = TRUE,
          series = c(Level_2_Drilldowns, Level_3_Drilldowns)
        )
    })

  }
shinyApp(ui, server)

How to use dplyr to create a new variable based on a function result in grouped data?

library(dplyr)
library(urca)

long %>%
  group_by(variable) %>%
  mutate(ur.df_obj = list(summary(ur.df(value, type = "trend", selectlags = "BIC"))),
         URT = +(purrr::map_lgl(ur.df_obj, ~ .x@teststat[1] < .x@cval[1, 3])))


#   variable  value ur.df_obj   URT
#   <fct>     <dbl> <list>    <int>
# 1 a         2.29  <sumurca>     1
# 2 a        -1.20  <sumurca>     1
# 3 a        -0.694 <sumurca>     1
# 4 a        -0.412 <sumurca>     1
# 5 a        -0.971 <sumurca>     1
# 6 a        -0.947 <sumurca>     1
# 7 a         0.748 <sumurca>     1
# 8 a        -0.117 <sumurca>     1
# 9 a         0.153 <sumurca>     1
#10 a         2.19  <sumurca>     1
# … with 290 more rows
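The dplyr pattern above computes one object per group and broadcasts a derived flag to every row of that group. The same shape can be sketched in plain Python with `itertools.groupby`; note the per-group test here is a deliberate stand-in (group mean below 1), not a real unit root test, and the data are made up to mirror the long-format frame:

```python
from itertools import groupby
from operator import itemgetter

# Hypothetical long-format data: (variable, value) rows.
rows = [("a", 2.29), ("a", -1.20), ("a", -0.69),
        ("b", 0.10), ("b", 0.15), ("b", 0.20)]

def flag_rows(rows):
    """Compute one statistic per group and attach it to every row of
    that group, like mutate() inside group_by(). The per-group test
    (mean < 1) is a stand-in for the unit root test."""
    out = []
    for key, grp in groupby(sorted(rows, key=itemgetter(0)), key=itemgetter(0)):
        values = [v for _, v in grp]
        flag = int(sum(values) / len(values) < 1)  # one result per group
        out.extend((key, v, flag) for v in values)
    return out

print(flag_rows(rows))
```

As in the dplyr version, the expensive computation runs once per group rather than once per row.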

I want to dynamically change the Django form according to user information

class RecordCreateForm(BaseModelForm):

    class Meta:
        model = URC
        fields = ('UPRC','URN','UET','URT',)

    def __init__(self, *args, **kwargs):
        user = kwargs.pop('user')
        super(RecordCreateForm,self).__init__(*args, **kwargs)
        for field in self.fields.values():
            field.widget.attrs['class'] = 'form-control'
        self.fields['URN'].queryset = UPRM.objects.filter(user=user)
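The key move in the answer is popping the custom `user` keyword argument before delegating to the parent constructor, which does not expect it. A minimal, framework-free sketch of that pattern (the class names are hypothetical stand-ins, not Django APIs):

```python
class BaseForm:
    def __init__(self, *args, **kwargs):
        # Stand-in for a parent __init__ (e.g. forms.ModelForm) that
        # rejects unexpected keyword arguments.
        if kwargs:
            raise TypeError(f"unexpected kwargs: {kwargs}")

class RecordForm(BaseForm):
    def __init__(self, *args, **kwargs):
        # Pop the custom argument *before* calling super().__init__,
        # so the parent never sees it.
        self.user = kwargs.pop("user", None)
        super().__init__(*args, **kwargs)

form = RecordForm(user="alice")
print(form.user)  # the popped value is kept on the instance
```

In a Django view this typically looks like `RecordCreateForm(request.POST, user=request.user)`, matching the `kwargs.pop('user')` line in the answer above.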

Performance Comparison on Regex vs. String Operations for below specific case in python

import re

# A raw string avoids invalid escape-sequence warnings for \d, \s, etc.
pattern = r"((?:\d+[.]){3}\d+)\s*-\s*(\w+)\s*\[([^\]]*)\]\s*\"([^\"]*)\"\s*(\d+)[^\"]*\"-\"\s*\"([^\"]*)\".*$"

def process_line(line):
    r = re.search(pattern, line)
    if r:
        # (ip, client_id, time_stamp, url, response_code, user_agent)
        return r.groups()
    return "Error!"

# Example fields extracted from a matching log line:
# ip            -> 206.92.168.224
# client_id     -> defcyfefydeecgqwfcecyqw
# time_stamp    -> 11/Jul/2016:00:17:07 -0700
# url           -> POST /token? HTTP/1.1
# response_code -> 200
# user_agent    -> Java/1.8.0_201
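The question compares regular expressions against plain string operations for this kind of parsing. For a rigid, fixed-delimiter field, `str` methods are usually faster, and compiling the pattern once narrows the gap. A small self-contained sketch (the log line is a made-up example in the same shape as above):

```python
import re
import timeit

line = ('206.92.168.224 - defcyfefydeecgqwfcecyqw '
        '[11/Jul/2016:00:17:07 -0700] "POST /token? HTTP/1.1" '
        '200 0 "-" "Java/1.8.0_201"')

# Compiling once amortizes the pattern-parsing cost across calls.
ip_re = re.compile(r"^(\d+\.\d+\.\d+\.\d+)\s")

def ip_via_regex(s):
    m = ip_re.match(s)
    return m.group(1) if m else None

def ip_via_split(s):
    # For a fixed format, a plain split avoids the regex engine entirely.
    return s.split(" ", 1)[0]

assert ip_via_regex(line) == ip_via_split(line) == "206.92.168.224"

for fn in (ip_via_regex, ip_via_split):
    t = timeit.timeit(lambda: fn(line), number=100_000)
    print(f"{fn.__name__}: {t:.3f}s")
```

The regex remains the better tool when fields can vary in shape (optional quotes, variable whitespace), which is why the full pattern above is still worthwhile.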

Community Discussions

Trending Discussions on URT
  • Arc length reparameterization of a parametric curve represented as an expression in R
  • nginx: [emerg] unknown "upstream_connect_time" variable
  • Parse <script type="text/javascript" twitter python
  • FFTW vs fourn subroutine result does not match
  • How to search value into a database after concatenate two attributes in Laravel
  • Regular Expression to validate a string containing '=' and '~' as end character
  • Logstash COMMONAPACHELOG pattern parsing problem
  • How to create two independent drill-down plots using Highcharter?
  • How to use dplyr to create a new variable based on a function result in grouped data?
  • I want to dynamically change the Django form according to user information

QUESTION

Arc length reparameterization of a parametric curve represented as an expression in R

Asked 2021-May-20 at 05:09

I have been struggling for quite a while now with trying to implement pracma's arclength() function. The two errors I am getting, based on what I've tried so far, are:

Error in arclength(f, t1, t2) : 
  Argument 'f' must be a parametrized function. 

or

Error in arclength(f, t1, t2) : 
  Argument 'f' must be a vectorized function.

I have a parametric curve defined as a vector of 3 expressions. The curve is an ellipse lying on the plane z=-1 given by:

inputCurve = expression(0.5*cos(t), sin(t),-1)

My code, which was taken directly from the documentation, is below. Ideally it should reparametrize inputCurve with respect to arc length:

arcLengthUtil$arcLengthParametrize <- function(inputCurve){

  f <- function(t) c(eval(quote(inputCurve)))

  t1 <- 0; t2 <- 2*pi
  a <- 0; b <- arclength(f, t1, t2)$length

  fParam <- function(s){
    fct <- function(u) arclength(f, a, u)$length - s
    urt <- uniroot(fct, c(a, 2*pi))
    urt$root
  }

  return(fParam)
}

I am passing the expression into f as a vector, so I am not sure why I am getting this error. From my understanding, eval() should return an expression that can be called within f. I have tried using the Vectorize() function on f and then passing it to arclength(), but received the error that 'f' must be a parametrized function. I feel like this is a fairly straightforward problem, but if anyone could offer some advice it would be much appreciated. Thank you.

##  Example: parametrized 3D-curve with t in 0..3*pi
f <- function(t) c(sin(2*t), cos(t), t)
arclength(f, 0, 3*pi)

This is one of the examples from the documentation, I am just wondering how I can pass my input curve defined by the vector of expressions into my function 'f' since I need the declaration of the curve to be in the form stated above.

ANSWER

Answered 2021-May-20 at 05:09

Somewhat wonky code, but this works if you need to use a vector of expressions.

f <- eval(parse(text=paste0("function(t) c(t*0+",paste(inputCurve,collapse=",t*0+"),")")))

This creates a function f for use in the arclength function with the same syntax as in the question. The t*0+ portion of the collapse argument is required for f to return a vector the same length as the input when not all of the expressions contain a variable input.
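The underlying reparameterization idea, integrate the curve's speed to get arc length, then invert that map with a root finder (as `arclength` plus `uniroot` do in the answer), can be sketched numerically in Python with only the standard library. The ellipse is the curve from the question; the step count and tolerance are arbitrary choices:

```python
import math

def curve(t):
    # The question's curve: an ellipse lying on the plane z = -1.
    return (0.5 * math.cos(t), math.sin(t), -1.0)

def arc_length(t0, t1, n=2000):
    """Polyline approximation of the arc length between t0 and t1."""
    total, prev = 0.0, curve(t0)
    for i in range(1, n + 1):
        p = curve(t0 + (t1 - t0) * i / n)
        total += math.dist(prev, p)
        prev = p
    return total

def t_of_arclength(s, t_max=2 * math.pi, tol=1e-8):
    """Invert s = arc_length(0, t) by bisection, like uniroot()."""
    lo, hi = 0.0, t_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if arc_length(0.0, mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

total = arc_length(0.0, 2 * math.pi)
print(total)                       # full perimeter of the ellipse
print(t_of_arclength(total / 2))   # parameter value at half the perimeter
```

By symmetry, half the perimeter should map back to t = pi, which makes a convenient sanity check for the inversion.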

Source https://stackoverflow.com/questions/67613616

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install URT

You can download it from GitHub.

Support

I have been developing algorithmic trading tools for a while, and it is no secret that unit root tests are widely used in this domain to decide whether a time series is (weakly) stationary or not, and to build a profitable mean-reversion strategy on that idea. Nowadays you often have to look at smaller and smaller time frames, such as minute data, to find such trading opportunities, which means using more and more historical data on the back-testing side to test whether a strategy can be profitable over the long term. I found it frustrating that the available libraries under R and Python, interpreted languages commonly used in the first steps of building a trading algorithm, were too slow or did not offer enough flexibility. I therefore wanted to develop a library that could be used under higher-level languages to get a first idea of a strategy's profitability, and also when developing a more serious back-tester on a larger amount of historical data under a lower-level language such as C++.

In algorithmic trading we have to find the right sample size to test for stationarity. If we use too short a sample of historical data on a rolling window, the back-testing will be faster but the test precision will be lower and the results less reliable; conversely, if we use too large a sample, the back-testing will be slower but the test precision will be higher and the results more reliable. Hence, when testing for stationarity, we always have to keep this tradeoff in mind. Sample sizes are usually between 100 and 5000, leading to relatively small arrays. I therefore decided not to use parallelism for matrix and vector operations, as it would bring no speed improvement and would, on the contrary, slow down the code on such small dimensions. Although Armadillo does not support parallelism yet, Blaze and Eigen do, so I made sure to turn this feature off.

However, parallelism is used to speed up the lag length optimization by information criterion minimization in the ADF and DF-GLS tests, by enabling OpenMP. All of these libraries now use vectorization (from SSE to AVX), and activating this feature greatly improves overall performance. During my experiments I tried to find the right setup for each C++ linear algebra library (Armadillo, Blaze and Eigen, compiled with either Intel MKL or OpenBLAS) in order to get the fastest results on a standard sample size of 1000. If anyone can find a faster configuration for any of them, or, more generally, has anything to propose that could make the C++ code or the Cython and Rcpp wrappers faster, contributions to this project are more than welcome.
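The window-size tradeoff described above can be made concrete with a rolling-window loop. The stationarity check below is a deliberately crude stand-in, an OLS estimate of the AR(1) coefficient, flagged stationary when well below 1, and not one of URT's actual tests; the window size and threshold are illustrative:

```python
import random

def ar1_coefficient(x):
    """OLS estimate of phi in x[t] = phi * x[t-1] + noise."""
    num = sum(a * b for a, b in zip(x, x[1:]))
    den = sum(a * a for a in x[:-1])
    return num / den if den else 0.0

def rolling_stationarity(series, window=200, threshold=0.95):
    """Slide a fixed window over the series. Smaller windows run
    faster but give noisier estimates; larger windows are slower
    but more reliable, as discussed above."""
    flags = []
    for start in range(len(series) - window + 1):
        phi = ar1_coefficient(series[start:start + window])
        flags.append(abs(phi) < threshold)
    return flags

random.seed(42)
# White noise is stationary: phi should estimate near 0 in every window.
noise = [random.gauss(0.0, 1.0) for _ in range(1000)]
flags = rolling_stationarity(noise)
print(sum(flags), "of", len(flags), "windows flagged stationary")
```

In practice the stand-in check would be replaced by a proper ADF or DF-GLS test, which is exactly the per-window cost URT is designed to keep low.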


  • © 2022 Open Weaver Inc.