testthat | An R 📦 to make testing 😀 | Testing library
kandi X-RAY | testthat Summary
Testing your code can be painful and tedious, but it greatly increases the quality of your code. testthat tries to make testing as fun as possible, so that you get a visceral satisfaction from writing tests. Testing should be addictive, so you do it all the time. testthat draws inspiration from the xUnit family of testing packages, as well as from many of the innovative Ruby testing libraries, like rspec, testy, bacon and cucumber. testthat is the most popular unit testing package for R and is used by thousands of CRAN packages. If you’re not familiar with testthat, the testing chapter in R Packages gives a good overview, along with workflow advice and concrete examples.
Community Discussions
Trending Discussions on testthat
QUESTION
I use {cli} messages in one of my packages. I would like to hide these messages in my tests because they clutter the testthat results. Is there a way to do that?
I've seen that {cli} has a TESTTHAT environment variable but I don't know if it exists for this purpose, and I don't know how to use it. Note that I would prefer a solution that is simple to implement, such as a test global option. I don't want to manually edit all my tests or messages.
Reproducible example:
...ANSWER
Answered 2022-Mar-11 at 09:46

One solution was given in this GitHub issue: using withr::with_options() or withr::local_options() with the option cli.default_handler = function(...) { } seems to work.
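A minimal sketch of applying this inside a single test; the function under test, my_noisy_function(), is a placeholder, not from the question:

```r
library(testthat)

test_that("noisy function runs quietly", {
  # Swallow all {cli} output for the duration of this test only
  withr::local_options(cli.default_handler = function(...) { })
  expect_no_error(my_noisy_function())  # my_noisy_function() is hypothetical
})
```

For the test-global option the asker prefers, the same setting can go once in tests/testthat/helper.R via options(cli.default_handler = function(...) { }), so no individual test or message needs editing.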
QUESTION
I am trying to install factoextra, but it gets stuck during the CMake part, with an error like:

CMake Error: The source directory "/tmp/..." does not exist.

(The same happens when I try to install its dependencies: nloptr, pbkrtest, lme4, car, rstatix, FactoMineR, ggpubr.)

Any ideas?
thanks
PS:
- R version 4.0.0
- CentOS 7
last part of logs:
...ANSWER
Answered 2022-Mar-08 at 22:50

I solved this problem with sudo apt-get install libnlopt-dev.
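Note that apt-get is a Debian/Ubuntu tool, while the asker is on CentOS 7, where the equivalent goes through yum. The CentOS package name below (nlopt-devel, from the EPEL repo) is an assumption, not taken from the answer:

```shell
# Debian/Ubuntu, as in the accepted answer:
sudo apt-get install libnlopt-dev

# CentOS 7 equivalent (assumed package name, provided by the EPEL repo):
sudo yum install epel-release
sudo yum install nlopt-devel
```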
QUESTION
I've written an import function that gets a single file from an AWS S3 bucket. That function itself is a wrapper around aws.s3::s3read_using(), which takes a reading function as its first argument.

Why do I wrap around aws.s3::s3read_using()? Because I need to do some special error handling and want the wrapping function to do some Recall() up to a limit... but that's a different story.

Now that I've successfully built and tested my wrapping function, I want to put another wrapper around it: I want to iterate n times over my wrapper to bind the downloaded files together. The difficulty is handing the 'reading_function' to the FUN argument of aws.s3::s3read_using().

I could do that by simply using ... - BUT! I want to make clear to the USER of my wrapping wrapper that he needs to specify that argument. So I've decided to use rlang's rlang::enexpr() to capture the argument and hand it over to my first wrapper via !!, which in turn captures that argument again with rlang::enexpr() and hands it over - finally - to aws.s3::s3read_using() via rlang::expr(aws.s3::s3read_using(FUN = !!reading_fn, object = s3_object)).

That works perfectly fine and smoothly. My problem is with testing that function construct using testthat and mockery.

Here is some broadly simplified code:
...ANSWER
Answered 2021-Dec-09 at 20:11

I think you're complicating things here, although maybe I'm not fully understanding your end goal. You can directly pass functions through arguments without any issue. Your example code above can be easily simplified to (keeping the loop just to match your test_that() call):
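A sketch of what passing the function directly might look like here; the wrapper names and argument names are placeholders, not from the question:

```r
library(aws.s3)

# Wrapper: take the reading function as an ordinary argument with no default,
# so the caller is forced to supply it -- no rlang capture needed.
read_one <- function(reading_fn, s3_object) {
  aws.s3::s3read_using(FUN = reading_fn, object = s3_object)
}

# Wrapping wrapper: iterate over several objects and bind the results.
read_many <- function(reading_fn, s3_objects) {
  do.call(rbind, lapply(s3_objects, function(obj) read_one(reading_fn, obj)))
}
```

Because reading_fn has no default, calling read_many() without it fails immediately with a missing-argument error, which communicates the requirement to the user just as clearly as quasiquotation would.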
QUESTION
I'm writing unit tests using testthat to check some operations, and these operations need objects with a srcref attribute. The srcref attribute is added if the option keep.source was set to TRUE during package installation/building.

I see all my tests fail when using devtools::check() if those tests expect a srcref attribute on an object. This does not happen when tests are run interactively, i.e. using devtools::test(). What could I do to keep these tests, run them, and make them pass using devtools::check()? I have tried devtools::check(args = "--with-keep.source"), but this argument is not recognized.

I'm using rlang::pkg_env("my-package") to get objects with the srcref attribute, so tests look like this:
ANSWER
Answered 2022-Jan-23 at 14:48

The option --with-keep.source is for R CMD INSTALL, not R CMD check. To make sure that check preserves sources when it installs your package, you need to run
QUESTION
I want to run all the tests and obtain the test results and produced warnings, to programmatically create a markdown report showing test outcomes and any warnings that occurred in the tested code. But it seems there is no way to obtain or capture warnings during the test run! I understand that tests are executed in a closed environment, but is there really no way to have testthat provide the thrown warnings?
In the following setup, the warn_list variable is always empty.
Three files for the minimal example:
./tests/testthat.R
ANSWER
Answered 2021-Nov-29 at 15:17

It seems the SummaryReporter reporter object records warnings. As you mention in your comment, these are very minimally documented, but this seems to do it:
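A sketch of that approach; the warnings field and its accessor are part of testthat's minimally documented reporter internals, so treat the exact names as an assumption:

```r
library(testthat)

# Run the suite with an explicit SummaryReporter and keep a handle on it
reporter <- SummaryReporter$new()
test_dir("tests/testthat", reporter = reporter)

# Inspect the warnings the reporter recorded during the run
warn_list <- reporter$warnings$as_list()  # assumed internal Stack API
```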
QUESTION
I work for an org that has a number of internal packages that were created many years ago. These are in the form of package zip archives that were compiled on Windows on R 3.x. Therefore, they can't be installed on R 4.x, and can't be used on Macs or Linux either without being recompiled. So everyone in the entire org is stuck on R 3.6 until this is resolved. I don't have access to the original package source files. They are lost to time...
I want to take these packages, extract the code and data, and update them for modern best practices (roxygen, GitHub repos, testthat, etc.). What is the best way of doing this? I have a fair amount of experience with package development, and I have already tackled one: I started a new RStudio package project and went function by function, copying the function code to a new script file and reformatting the help from the help browser as roxygen docs. I've done the same for any internal hidden functions that I could find (via pkg_name::: mostly), and also the internal datasets. That is all fairly straightforward, but very time consuming. It builds OK, but I haven't yet tested the actual functionality of the code.
I'm currently stuck because there are a couple of standardGeneric method functions for custom S4 class objects. I am completely unfamiliar with these and haven't been able to figure out how to copy them over. Viewing the source code, they are wrapped in new() with "standardGeneric" as the first argument (plus a lot more, obviously), as opposed to just being a simple function definition like all the other functions. Any help with how to recreate or copy these over would be very welcome.
But maybe I am going about this the wrong way in the first place. I haven't been able to find any helpful suggestions about how to "back engineer" R package source files from a compiled version.
Anyone any ideas?
...ANSWER
Answered 2021-Nov-15 at 15:23

Check whether this works in R 3.6. The script below can automate at least part of your problem by writing all function sources into separate, appropriately named .R files. This code will also take care of hidden functions.
QUESTION
According to Carl Boettiger in this thread, "...when tests fail. Like Solaris, some of these failures can occur when an upstream dependency installs on the platform but does not actually run." My code fails numerically on M1 Mac, but not on other platforms, when using stats::integrate on functions returning very small values.

Should I skip the test on M1 Mac (arm64)?
...ANSWER
Answered 2021-Nov-18 at 19:42

I hesitate to call any package canonical, but there's a handful of pretty recognizable packages that skip certain tests on particular operating systems:
A couple of examples from the repositories above:
Example 1. Skips core functionality tests on Solaris platform.
Example 2. Skips installation tests for Linux (other tests in this project seemingly cover all operating systems).
This is an official article of the testthat package. It clearly states the circumstances under which you might better skip a test; however, those statements tend to be recommendations rather than imperatives:
You’re testing a web service that occasionally fails, and you don’t want to run the tests on CRAN. Or maybe the API requires authentication, and you can only run the tests when you’ve securely distributed some secrets.
You’re relying on features that not all operating systems possess, and want to make sure your code doesn’t run on a platform where it doesn’t work. This platform tends to be Windows, since amongst other things, it lacks full utf8 support.
You’re writing your tests for multiple versions of R or multiple versions of a dependency and you want to skip when a feature isn’t available. You generally don’t need to skip tests if a suggested package is not installed. This is only needed in exceptional circumstances, e.g. when a package is not available on some operating system.
I've highlighted everything that was written regarding operating systems. From your question, I conclude that your situation falls under "You’re relying on features that not all operating systems possess": rather than a bug, you've most likely hit a platform difference, since the native M1 Mac build calculates extended-precision floating-point numbers slightly differently[1]:
The ‘native’ build is a little faster (and for some tasks, considerably so) but may give different numerical results from the far more common ‘x86_64’ platforms (on macOS and other OSes), as ARM hardware lacks extended-precision floating-point operations.

What the ideology adheres to
It's worth our time to recall why unit tests were invented in the first place.
In spite of all the talk about Wikipedia's unreliability, I believe this source is considered "canonical" by the vast majority of my colleagues. It states:
Unit tests are typically automated tests written and run by software developers to ensure that a section of an application (known as the "unit") meets its design and behaves as intended.
In order to answer your question from the ideological perspective, we have to answer only one question: does your code currently work as intended?
If you consider your functionality to be comprehensive enough for the end user, even though it partially doesn't work properly on a particular OS, feel free to skip the test.
If not, it implies that your code contains a bug that should be fixed before going into production. In this case, once you fix it, the tests should succeed on your problematic OS.
My own (probably biased) opinion

"My code fails numerically on M1 Mac, but not on other platforms, while using stats::integrate on functions returning very small values."
Based on your words, it's hard to tell which case applies, but I believe that if your package is viable for 99.99% of the audience, go ahead and skip this annoying test for the remaining 0.01% of possible environments. Maybe you should note somewhere in the README.md that your package has this issue.
This way other developers will be aware of it, and those using an M1 Mac will most likely find a workaround or fix it themselves - in case you're creating an open-source project.
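If you do decide to skip, testthat's skip_on_os() supports this directly. A sketch, assuming a reasonably recent testthat (the arch argument is newer than the base function) and an illustrative integral:

```r
library(testthat)

test_that("integrate() is accurate on tiny values", {
  skip_on_os("mac", arch = "aarch64")  # skip only on Apple-silicon macOS
  expect_equal(stats::integrate(stats::dnorm, -Inf, Inf)$value, 1,
               tolerance = 1e-6)
})
```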
Notes:
[1]. Thanks to Roland's comment, I've updated my answer.
QUESTION
For the past few months, ggplot2 has been saving png files with a transparent background. The plot output in RStudio, and when saved as a pdf, looks great. It happens mainly when I use themes that omit the gray panel background. I tested it on my MacBook with Preview and on a Windows computer with the photo viewer there.
...ANSWER
Answered 2021-Nov-11 at 14:11

Maybe indeed worth an answer for posterity... Specify ggsave("test.png", dpi = 300, bg = "white").

Background (pun intended): the argument will be passed to grDevices::png via the ... argument, and bg controls the background of the device.
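A minimal sketch reproducing the fix; the plot itself is illustrative:

```r
library(ggplot2)

# theme_minimal() removes the gray panel background, which is what exposes
# the transparent device background in the saved png
p <- ggplot(mtcars, aes(wt, mpg)) + geom_point() + theme_minimal()

# bg = "white" is forwarded through ... to grDevices::png()
ggsave("test.png", plot = p, dpi = 300, bg = "white")
```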
QUESTION
I have a dataset like the one below (actual dataset has 5M+ rows with no gaps), where I am trying to filter out rows where the sum of all numeric columns for the row itself and its previous and next rows is equal to zero.
N.B. Time is a dttm column in the actual data. The number of consecutive zeros can be more than 3 rows, in which case multiple rows will be filtered out.
ANSWER
Answered 2021-Nov-01 at 05:11

library(tidyverse)
df0 %>%
arrange(group, Time) %>% # EDIT to arrange by time (and group for clarity)
rowwise() %>%
mutate(sum = sum(c_across(Val1:Val2))) %>%
group_by(group) %>%
filter( !(sum == 0 & lag(sum, default = 1) == 0 & lead(sum, default = 1) == 0)) %>%
ungroup()
# A tibble: 11 x 5
group Time Val1 Val2 sum
1 A 1 0 0 0
2 A 3 0 0 0
3 A 4 0 0.1 0.1
4 A 5 0 0 0
5 A 7 0 0 0
6 B 1 0.1 0 0.1
7 B 2 0.2 0.2 0.4
8 B 3 0 0 0
9 B 4 0 0 0
10 B 5 0.1 0.2 0.3
11 B 6 0.1 0.5 0.6
QUESTION
I am writing some unit tests for an R package using testthat. I would like to compare two objects where not all the details need to match, but they must maintain equivalence with respect to a set of functions of interest.
For a simple example, I want to use something like
...ANSWER
Answered 2021-Sep-17 at 08:53

Looking at the documentation, {testthat} currently (third edition) has no function like expect_equal_applied. But, as you already mention, we can construct such a function easily:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network