Non-standard evaluation is really convenient when using dplyr verbs, but it can be problematic when using these verbs with function arguments. For example, let's say I want to create a function that returns the number of rows for a given species.
```r
# Load packages and prepare data
library(dplyr)
library(lazyeval)
```
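As a quick illustration of what makes non-standard evaluation convenient (this snippet is my own addition, using only the built-in iris data): column names can be used directly inside dplyr verbs, without quoting and without repeating the name of the data frame.

```r
# With NSE, the Species column is referenced directly by name
iris %>% filter(Species == "versicolor") %>% nrow()
# [1] 50

# The base R equivalent has to repeat the data frame name
nrow(iris[iris$Species == "versicolor", ])
# [1] 50
```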
An example that does not work
This function does not work as expected, because Species is interpreted in the context of the iris data frame, where it is a column name, instead of being interpreted in the context of the function arguments:
```r
nrowspecies0 <- function(dtf, Species){
    dtf %>% filter(Species == Species) %>% nrow()
}
nrowspecies0(iris, Species = "versicolor")
# [1] 150
```
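The lookup order explains the three workarounds below (this check is my own addition, with target as a hypothetical variable name): dplyr resolves a name against the columns of the data frame first, and only falls back to the calling environment when the name does not match any column. In nrowspecies0 both sides of the comparison match the column, so the condition is TRUE for every row and all 150 rows are kept.

```r
# "target" is not a column of iris, so dplyr falls back to
# the calling environment and finds the variable defined here:
target <- "versicolor"
iris %>% filter(Species == target) %>% nrow()
# [1] 50
```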
3 implementation examples
To work around the non-standard evaluation, I usually append an underscore to the argument name:
```r
nrowspecies1 <- function(dtf, species_){
    dtf %>% filter(Species == species_) %>% nrow()
}
nrowspecies1(iris, species_ = "versicolor")
# [1] 50
```
This is not entirely satisfactory, since it changes the name of the function argument to something less user friendly, or it relies on autocompletion, which, I'm afraid, is not good practice for programming. To keep a good argument name, I could do:
```r
nrowspecies2 <- function(dtf, species){
    # Copy the argument to a name that cannot clash with a column
    species_ <- species
    dtf %>% filter(Species == species_) %>% nrow()
}
nrowspecies2(iris, species = "versicolor")
# [1] 50
```
Another way to work around non-standard evaluation, based on this answer, is interp(): it evaluates species in the function's environment and substitutes its value into the formula passed to filter_():
```r
nrowspecies3 <- function(dtf, species){
    dtf %>%
        filter_(interp(~Species == with_species, with_species = species)) %>%
        nrow()
}
nrowspecies3(iris, species = "versicolor")
# [1] 50
```
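To see what interp() builds, here is a standalone check (my own addition, with a literal value in place of the function argument): it returns an ordinary one-sided formula with the value already substituted in, which is what the standard evaluation verb filter_() expects.

```r
# interp() replaces with_species by the supplied value:
interp(~Species == with_species, with_species = "versicolor")
# ~Species == "versicolor"
```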
Of the 3 implementations above, which one is the most reliable way to implement this filter function? Are there any other ways to do it?