
rstudio | RStudio is an integrated development environment for R

by rstudio | Java Version: Current | License: Non-SPDX

kandi X-RAY | rstudio Summary

rstudio is a Java library typically used in Editor applications. rstudio has high support. However, rstudio has 11906 bugs and 1 vulnerability, its build file is not available, and it has a Non-SPDX License. You can download it from GitHub.
RStudio is an integrated development environment (IDE) for R

Support

  • rstudio has a highly active ecosystem.
  • It has 3927 stars, 965 forks, and 249 watchers.
  • It had no major release in the last 12 months.
  • There are 1188 open issues and 5254 closed issues. On average, issues are closed in 105 days. There are 10 open pull requests and 0 closed pull requests.
  • It has a negative sentiment in the developer community.
  • The latest version of rstudio is current.

Quality

  • rstudio has 11906 bugs (0 blocker, 3 critical, 5585 major, 6318 minor) and 27695 code smells.

Security

  • rstudio and its dependent libraries have no vulnerabilities reported.
  • rstudio code analysis shows 1 unresolved vulnerability (1 blocker, 0 critical, 0 major, 0 minor).
  • There are 11 security hotspots that need review.

License

  • rstudio has a Non-SPDX License.
  • A Non-SPDX license may be an open-source license that is simply not SPDX-compliant, or it may not be open source at all; review it closely before use.

Reuse

  • rstudio releases are not available. You will need to build from source code and install it.
  • rstudio has no build file, so you will need to create the build yourself to build the component from source.
  • Installation instructions are not available, but examples and code snippets are; a minimal build sketch follows this list.
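
Since there is no packaged release or build file listed here, the usual route is to build from the GitHub sources. The lines below are only a minimal sketch assuming the CMake-based setup described in the repository's INSTALL file; the dependency script name and CMake options shown are assumptions based on that layout and should be verified against INSTALL for your platform and version.

# Minimal sketch: build the desktop IDE from source (verify details against INSTALL)
git clone https://github.com/rstudio/rstudio.git
cd rstudio
# install third-party dependencies; the script name and location vary by platform/version
(cd dependencies/linux && sudo ./install-dependencies-debian)
# configure and build out of tree with CMake
mkdir -p build && cd build
cmake .. -DRSTUDIO_TARGET=Desktop -DCMAKE_BUILD_TYPE=Release
cmake --build . -- -j4
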
Top functions reviewed by kandi - BETA

kandi has reviewed rstudio and discovered the below as its top functions. This is intended to give you an instant insight into rstudio's implemented functionality and help you decide whether it suits your requirements.

  • Sets a JSONArray value to a JSONArray object.
  • Called when a chunk was received.
  • Synchronize preferences from a layer.
  • Create a new project.
  • Called when a line widget is being displayed.
  • Gets the autocompletion context.
  • Parse a multi-line expression.
  • Initialize the workbench.
  • Return null if invalid.
  • Collects keyboard shortcuts.

rstudio Key Features

Customizable workbench with all of the tools required to work with R in one place (console, source, plots, workspace, help, history, etc.).

Syntax highlighting editor with code completion.

Execute code directly from the source editor (line, selection, or file).

Full support for authoring Sweave and TeX documents.

Runs on Windows, Mac, and Linux, and has a community-maintained [FreeBSD port](https://www.freshports.org/devel/RStudio/).

Can also be run as a server, enabling multiple users to access the RStudio IDE using a web browser.
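
For the server scenario, a quick way to try it is the community rocker/rstudio Docker image, or the rstudio-server service command on a native install. The snippet below is a hedged sketch: the image name, port 8787, and the PASSWORD variable follow rocker's conventions and are not part of this repository.

# Sketch: RStudio Server in a container, then browse to http://localhost:8787
# (login user is "rstudio"; PASSWORD is required by the rocker image)
docker run --rm -d -p 8787:8787 -e PASSWORD=choose-a-password rocker/rstudio

# On a native RStudio Server install, the bundled service wrapper can be used instead:
sudo rstudio-server start    # also: stop, restart, status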

COPYING - RStudio license (AGPLv3)

NOTICE - Additional open source software included with RStudio

SOURCE - How to obtain the source code for RStudio

INSTALL - How to build and install RStudio from source

VSCODE.md - How to get started with development using Visual Studio Code

RStudio is licensed under the AGPLv3, the terms of which are included in
the file COPYING. You can find our source code repository on GitHub at [https://github.com/rstudio/rstudio](https://github.com/rstudio/rstudio).

Documentation

Convert multiple lines to single line using RStudio text editor

dput(c(
"097",
"085",
"041",
"055"
))
#> c("097", "085", "041", "055")

Is it possible to break 1 line of code into multiple in the Atom IDE, just like in RStudio

julia> println("Hello \
       world")
Hello world


julia> 1 + 
       2 + 
       3 * 
       4
15

julia> DEPOT_PATH |>
       first |> 
       readdir
16-element Vector{String}:
 "artifacts"
 "bin"
  ⋮

julia> 1,
       2,
       3
(1, 2, 3)

-----------------------
julia> println("""This
       is
       a multi-line \
       text""")
This
is
a multi-line text

How can I make a Shiny app W3C compliant?

# Reprex adapted from https://shiny.rstudio.com/gallery/tabsets.html
library(shiny)
library(htmltools)

# Define UI for random distribution app ----
ui <- fluidPage(
  
  # App title ----
  titlePanel("Tabsets"),
  
  # Sidebar layout with input and output definitions ----
  sidebarLayout(
    
    # Sidebar panel for inputs ----
    {querySidebarPanel <- tagQuery(sidebarPanel(
      # Input: Select the random distribution type ----
      radioButtons("dist", "Distribution type:",
                   c("Normal" = "norm",
                     "Uniform" = "unif",
                     "Log-normal" = "lnorm",
                     "Exponential" = "exp")),
      
      # br() element to introduce extra vertical spacing ----
      br(),
      
      # Input: Slider for the number of observations to generate ----
      sliderInput("n",
                  "Number of observations:",
                  value = 500,
                  min = 1,
                  max = 1000)
    ))
    querySidebarPanel$find(".well")$removeAttrs("role")$addAttrs("role" = "none")$allTags()},
    
    # Main panel for displaying outputs ----
    mainPanel(
      
      # Output: Tabset w/ plot, summary, and table ----
      tabsetPanel(type = "tabs",
                  tabPanel("Plot", plotOutput("plot")),
                  tabPanel("Summary", verbatimTextOutput("summary")),
                  tabPanel("Table", tableOutput("table"))
      )
      
    )
  )
)

# Define server logic for random distribution app ----
server <- function(input, output) {
  
  # Reactive expression to generate the requested distribution ----
  d <- reactive({
    dist <- switch(input$dist,
                   norm = rnorm,
                   unif = runif,
                   lnorm = rlnorm,
                   exp = rexp,
                   rnorm)
    
    dist(input$n)
  })
  
  # Generate a plot of the data ----
  output$plot <- renderPlot({
    dist <- input$dist
    n <- input$n
    
    hist(d(),
         main = paste("r", dist, "(", n, ")", sep = ""),
         col = "#75AADB", border = "white")
  })
}

# Create Shiny app ----
shinyApp(ui, server)

Fast method of getting all the descendants of a parent

library(igraph)
df <- data.frame(parent_id = 1:3, child_id = 2:4)
g <- graph_from_data_frame(df)

setNames(
  rev(
    stack(
      Map(
        names,
        setNames(
          ego(g,
            order = vcount(g),
            mode = "out"
          ),
          names(V(g))
        )
      )
    )
  ),
  names(df)
)
   parent_id child_id
1          1        1
2          1        2
3          1        3
4          1        4
5          2        2
6          2        3
7          2        4
8          3        3
9          3        4
10         4        4
set.seed(23423)

microbenchmark::microbenchmark(
  sqldf = sqldf(sqlQuery),
  tidyigraph = map(V(df_g), ~ names(subcomponent(df_g, .x, mode = "out"))) %>%
    map_df(~ data.frame(child_id = .x), .id = "parent_id"),
  ego = setNames(
    rev(
      stack(
        Map(
          names,
          setNames(
            ego(df_g,
              order = vcount(df_g),
              mode = "out"
            ),
            names(V(df_g))
          )
        )
      )
    ),
    names(df)
  ),
  times = 5
)
Unit: milliseconds
       expr       min       lq      mean    median         uq        max neval
      sqldf 7156.2753 9072.155 9402.6904 9518.2796 10206.3683 11060.3738     5
 tidyigraph 2483.9943 2623.558 3136.7490 2689.8388  2879.5688  5006.7853     5
        ego  182.5941  219.151  307.2481  253.2171   325.8721   555.4064     5
g |>
  ego(order = vcount(g), mode = "out") |>
  setNames(names(V(g))) |>
  Map(f = names) |>
  stack() |>
  rev() |>
  setNames(names(df))
-----------------------
all_nodes <- unique(c(parent_id, child_id))  # all nodes
uid <- match(all_nodes, all_nodes)
pid <- match(parent_id, all_nodes)
cid <- match(child_id, all_nodes)
edge_list <- unname(split(cid, factor(pid, levels = uid)))
edge_lengths <- lengths(edge_list)
while (length(pid)) {
    pid <- rep(pid, edge_lengths[cid])
    cid <- unlist(edge_list[cid])
}
visitor <- function(uid, n_max = 3000) {
    n <- length(uid)
    if (n <= n_max) {
        ## over-allocate, to support `key = pid * n + cid`
        visited <- logical((n + 1L) * n) # FALSE on construction
    } else {
        stop("length(uid) greater than n_max = ", n_max)
    }
    function(pid, cid) {
        key <- pid * n + cid
        to_visit <- !(duplicated(key) | visited[key])
        visited[key[to_visit]] <<- TRUE  # update nodes that we will now visit
        to_visit
    }
}
> visit = visitor(1:10)
> visit(1:3, 2:4)
[1] TRUE TRUE TRUE
> visit(2:4, 3:5)
[1] FALSE FALSE  TRUE
visitor <- function(uid, n_max = 3000) {
    n <- length(uid)
    if (n <= n_max) {
        ## over-allocate, to support `key = pid * n + cid`
        visited <- logical((n + 1L) * n) # FALSE on construction
    } else {
        stop("length(uid) greater than n_max = ", n_max)
    }
    function(pid, cid) {
        key <- pid * n + cid
        to_visit <- !(duplicated(key) | visited[key])
        visited[key[to_visit]] <<- TRUE
        to_visit
    }
}

ancestor_descendant <- function(df) {
    ## encode parent and child to unique integer values
    ids <- unique(c(df$parent_id, df$child_id))
    uid <- match(ids, ids)
    pid <- match(df$parent_id, ids)
    cid <- match(df$child_id, ids)
    n <- length(uid)

    ## edge list of parent-offspring relationships, based on unique
    ## integer values; list is ordered by id, all ids are present, ids
    ## without children have zero-length elements. Use `unname()` so
    ## that edge_list is always indexed by integer
    edge_list <- unname(split(cid, factor(pid, levels = uid), drop = FALSE))
    edge_lengths <- lengths(edge_list)

    visit <- visitor(uid)
    keep <- visit(uid, uid) # all TRUE
    aid = did = list(uid) # results -- all uids are their own ancestor / descendant
    i = 1L
   
    while (length(pid)) {
        ## only add new edges
        keep <- visit(pid, cid)
        ## record current generation ancestors and descendants
        pid <- pid[keep]
        cid <- cid[keep]
        i <- i + 1L
        aid[[i]] <- pid
        did[[i]] <- cid

        ## calculate next generation pid and cid.
        pid <- rep(pid, edge_lengths[cid])
        cid <- unlist(edge_list[cid])
    }
    ## decode results to original ids and clean up return value
    df <- data.frame(
        ancestor_id = ids[unlist(aid)],
        descendant_id = ids[unlist(did)]
    )
    df <- df[order(df$ancestor_id, df$descendant_id),]
    rownames(df) <- NULL
    df
}
## Original example
df <- data.frame(parent_id = 1:1000L)
df$child_id <- df$parent_id + 1L
df = df[sample(nrow(df)),]
system.time(result <- ancestor_descendant(df))
##  user  system elapsed 
## 0.243   0.001   0.245 
dim(result)
## [1] 501501      2

## updated example from comments
df <- data.frame(parent_id = 1:1000L)
df$child_id <- df$parent_id + 1L
df <- rbind(df, data.frame(parent_id = 1000L, child_id = 1002L))
system.time(result <- ancestor_descendant(df))
##  user  system elapsed 
## 0.195   0.001   0.195 
dim(result)
## [1] 502502      2

## problematic case from @jblood94
df <- data.frame(
    parent_id=c(1, 1, 2),
    child_id = c(2, 3, 3)
)
ancestor_descendant(df)
##   ancestor_id descendant_id
## 1           1             1
## 2           1             2
## 3           1             3
## 4           2             2
## 5           2             3
## 6           3             3

## previously failed without filtering re-visited nodes
df <- data.frame(
    parent_id = rep(1:100, each = 2),
    child_id = c(2, rep(3:101, each = 2), 102)
)
system.time(result <- ancestor_descendant(df))
##  user  system elapsed 
## 0.005   0.000   0.006 
dim(result)
## [1] 5252    2

How to extract all code from an RMarkdown (.Rmd) file?

knitr::purl(input = "Report.Rmd", output = "Report.R", documentation = 0)
knitr::opts_chunk$set(echo = TRUE)

a = 1
print(a)
summary(cars)

plot(pressure)
-----------------------
```{r setup, include=FALSE}
knitr::knit_hooks$set(purl = knitr::hook_purl)
knitr::opts_chunk$set(echo = TRUE)
```

Convincing R that the .dbf file associated with a .shp file is not an executable during command checks

fn = "myfile.dbf"
sz = file.info(fn)$size
r = readBin(fn, raw(), sz)
r[2] = as.raw(121) ## make it 2021 instead of 2022
writeBin(r, fn)

Equivalent of the Rstudio `browser()` function in Julia for debugging

julia> using Debugger

julia> @run for i in 1:5
           println(i)
           if i > 3
               @bp
           end
       end
1
2
3
4
Hit breakpoint:
In ##thunk#257() at REPL[4]:1
  9  │         Base.println(i)
 10  │   %10 = i > 3
 11  └──       goto #4 if not %10
●12  3 ─       nothing
>13  4 ┄       @_2 = Base.iterate(%1, %8)
 14  │   %14 = @_2 === nothing
 15  │   %15 = ($(QuoteNode(Core.Intrinsics.not_int)))(%14)
 16  └──       goto #6 if not %15
 17  5 ─       goto #2

About to run: (iterate)(1:5, 4)
1|debug>
-----------------------
julia> using Infiltrator

julia> for i in 1:5
         println(i)
         if i > 3
           @infiltrate
         end
       end
1
2
3
4
Infiltrating top-level scope at REPL[1]:4:

infil> i
4

TensorFlow setup on RStudio/R | CentOS

sudo yum install epel-release
sudo yum install R
sudo yum install libxml2-devel
sudo yum install openssl-devel
sudo yum install libcurl-devel
sudo yum install libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
conda init
conda create --name tf
conda activate tf
conda install -c conda-forge tensorflow
sudo yum install centos-release-scl
sudo yum install devtoolset-7-gcc*
scl enable devtoolset-7 R
install.packages("remotes")
remotes::install_github('rstudio/reticulate')
reticulate::use_condaenv("tf", conda = "~/anaconda3/bin/conda")
reticulate::repl_python()
# This works as expected but the command "import tensorflow" crashes R
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'

# Also tried:
install.packages("devtools")
devtools::install_github('rstudio/tensorflow')
devtools::install_github('rstudio/keras')
library(tensorflow)
install_tensorflow() # "successful"
tensorflow::tf_config()
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'
devtools::install_github('rstudio/tensorflow@v2.4.0')
devtools::install_github('rstudio/keras@v2.4.0')
library(tensorflow)
tf_config()
# Error: *** caught segfault *** address 0xf8, cause 'memory not mapped'
# deactivate conda
sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm 
export R_VERSION=4.0.0
curl -O https://cdn.rstudio.com/r/centos-7/pkgs/R-${R_VERSION}-1-1.x86_64.rpm
sudo yum install R-${R_VERSION}-1-1.x86_64.rpm

scl enable devtoolset-7 /opt/R/4.0.0/bin/R
install.packages("devtools")
devtools::install_github('rstudio/reticulate')
reticulate::use_condaenv("tf", conda = "~/anaconda3/bin/conda")
reticulate::repl_python()
# 'import tensorflow' resulted in "core dumped"
-----------------------
yum install epel-release
yum install R
yum install libxml2-devel
yum install openssl-devel
yum install libcurl-devel
yum install libXcomposite libXcursor libXi libXtst libXrandr alsa-lib mesa-libEGL libXdamage mesa-libGL libXScrnSaver
yum install conda
conda clean -a     # Clean cache and remove old packages, if you already have conda installed
# Install all the packages together and let conda handle versioning. It is important to give a Python version while setting up the environment. Since Tensorflow supports python 3.9.0, I have used this version 
conda create -y -n "tf" python=3.9.0 ipython tensorflow keras r-essentials r-reticulate r-tensorflow
conda activate tf
iptables -A INPUT -p tcp --dport 7878 -j ACCEPT
/sbin/service iptables save
/usr/lib/rstudio-server/bin/rserver \
   --server-daemonize=0 \
   --www-port 7878 \
   --rsession-which-r=$(which R) \
   --rsession-ld-library-path=$CONDA_PREFIX/lib
install.packages("reticulate")
install.packages("tensorflow")
library(reticulate)
library(tensorflow)
ts <- reticulate::import("tensorflow")
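# Once rserver is running with the conda library path above, a quick sanity check in the
# R console can confirm that the environment's TensorFlow really loads. A minimal sketch,
# assuming the "tf" environment created by the conda create call above:
library(reticulate)
use_condaenv("tf", required = TRUE)   # error early if the env is missing
py_config()                           # should report the env's python and libpython
tf <- import("tensorflow")
tf$`__version__`                      # a version string here means the import worked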

Configuring compilers on Mac M1 (Big Sur, Monterey) for Rcpp and other tools

$ sudo xcode-select --install
$ wget https://mac.r-project.org/libs-arm64/gfortran-f51f1da0-darwin20.0-arm64.tar.gz
$ sudo tar xvf gfortran-f51f1da0-darwin20.0-arm64.tar.gz -C /
$ sudo ln -sfn $(xcrun --show-sdk-path) /opt/R/arm64/gfortran/SDK
$ wget https://mac.r-project.org/openmp/openmp-12.0.1-darwin20-Release.tar.gz
$ sudo tar xvf openmp-12.0.1-darwin20-Release.tar.gz -C /
# Files installed by the OpenMP tarball:
/usr/local/lib/libomp.dylib
/usr/local/include/ompt.h
/usr/local/include/omp.h
/usr/local/include/omp-tools.h
# Compiler flags to add to ~/.R/Makevars:
CPPFLAGS+=-I/usr/local/include -Xclang -fopenmp
LDFLAGS+=-L/usr/local/lib -lomp

FC=/opt/R/arm64/gfortran/bin/gfortran -mtune=native
FLIBS=-L/opt/R/arm64/gfortran/lib/gcc/aarch64-apple-darwin20.2.0/11.0.0 -L/opt/R/arm64/gfortran/lib -lgfortran -lemutls_w -lm
if (!requireNamespace("RcppArmadillo", quietly = TRUE)) {
    install.packages("RcppArmadillo")
}
Rcpp::sourceCpp(code = '
#include <RcppArmadillo.h>
#ifdef _OPENMP
# include <omp.h>
#endif

// [[Rcpp::depends(RcppArmadillo)]]
// [[Rcpp::export]]
void omp_test()
{
#ifdef _OPENMP
    Rprintf("OpenMP threads available: %d\\n", omp_get_max_threads());
#else
    Rprintf("OpenMP not supported\\n");
#endif
}
')
omp_test()
OpenMP threads available: 8
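# A minimal sketch of writing the flags above into ~/.R/Makevars from R, assuming the
# gfortran and OpenMP tarballs were unpacked to the default locations shown; it replaces
# any existing Makevars, so append by hand instead if you already keep one:
makevars <- path.expand("~/.R/Makevars")
dir.create(dirname(makevars), showWarnings = FALSE)
writeLines(c(
  "CPPFLAGS+=-I/usr/local/include -Xclang -fopenmp",
  "LDFLAGS+=-L/usr/local/lib -lomp",
  "FC=/opt/R/arm64/gfortran/bin/gfortran -mtune=native",
  "FLIBS=-L/opt/R/arm64/gfortran/lib/gcc/aarch64-apple-darwin20.2.0/11.0.0 -L/opt/R/arm64/gfortran/lib -lgfortran -lemutls_w -lm"
), makevars)  # note: this overwrites the whole file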

Why is this task faster in Python than Julia?

arr = reduce(vcat, eachrow(Matrix(string.(df[2:570, 1:3198]))))
-----------------------
# Faster alternative: preallocate the output vector and fill it column by column
# through a reshaped view, converting each column with string.()
function tostringvector(d)
    r, c = size(d)
    result = Vector{String}(undef, r * c)
    v = reshape(result, r, c)          # view sharing memory with `result`
    for (rcol, dcol) in zip(eachcol(v), eachcol(d))
        @inbounds rcol .= string.(dcol)
    end
    return result
end

# Simpler one-liner alternative (defining it replaces the method above):
tostringvector(d) = vec(Matrix(string.(d)))

Community Discussions

Trending Discussions on rstudio
  • Fastest way to edit multiple lines of code at the same time
  • Convert multiple lines to single line using RStudio text editor
  • Is it possible to break 1 line of code into multiple in atom IDE, just like in Rstudio
  • How can I make a Shiny app W3C compliant?
  • Fast method of getting all the descendants of a parent
  • How to extract all code from an RMarkdown (.Rmd) file?
  • Convincing R that the .dbf file associated with a .shp file is not an executable during command checks
  • Equivalent of the Rstudio `browser()` function in Julia for debugging
  • Tensorflow setup on RStudio/ R | CentOS
  • Configuring compilers on Mac M1 (Big Sur, Monterey) for Rcpp and other tools

QUESTION

Fastest way to edit multiple lines of code at the same time

Asked 2022-Mar-18 at 15:52

What is the best way to do the same action across multiple lines of code in the RStudio source editor?

Example 1

Let's say that I copy a list from a text file and paste it into R (like the list below). Then, I want to add quotation marks around each name and a comma to each line, so that I can make a character vector.

Krista Hicks
Miriam Cummings
Ralph Lamb
Jaylene Gilbert
Jordon Sparks
Kenna Melton

Expected Output

"Krista Hicks",
"Miriam Cummings",
"Ralph Lamb",
"Jaylene Gilbert",
"Jordon Sparks",
"Kenna Melton"

Example 2

How can I add missing parentheses on multiple lines? For example, if I have an if statement, how can I add the missing opening parenthesis after names on lines 1 and 4?

if (!is.null(names pattern))) {
  vec <- FALSE
  replacement <- unname(pattern)
  pattern[] <- names pattern)
}

Expected Output

if (!is.null(names(pattern))) {
  vec <- FALSE
  replacement <- unname(pattern)
  pattern[] <- names(pattern)
}

*Note: These names are just from a random name generator.

ANSWER

Answered 2022-Mar-16 at 16:20

RStudio has support for multiple cursors, which allows you to write and edit multiple lines at the same time.

Example 1

You can simply hold Alt on Windows/Linux (or Option on Mac) and drag your mouse to make your selection, or use Alt+Shift to create a rectangular selection from the current cursor location to a clicked position.
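If you would rather do this transformation programmatically than with multiple cursors, a small R sketch works too (the file name names.txt is hypothetical; on Windows you could read the clipboard instead):

nm <- readLines("names.txt")   # one name per line, as pasted above
dput(nm)                       # prints c("Krista Hicks", "Miriam Cummings", ...) ready to paste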



Example 2

Another multiple-cursor option selects all matching instances of a term: select names and press Ctrl+Alt+Shift+M, then use the arrow keys to move the cursors, delete the space, and add the parentheses.
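For Example 2, a plain find-and-replace from R is another option; a hypothetical sketch over a script file named script.R:

code <- readLines("script.R")
code <- gsub("names pattern)", "names(pattern)", code, fixed = TRUE)  # restore the missing "("
writeLines(code, "script.R")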


Source https://stackoverflow.com/questions/71472412

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install rstudio

You can download it from GitHub.
You can use rstudio like any standard Java library. Include the jar files in your classpath. You can also use any IDE to run and debug the rstudio component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, refer to maven.apache.org; for Gradle installation, refer to gradle.org.

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have questions, check and ask on the Stack Overflow community page.


  • © 2022 Open Weaver Inc.