I seem to get inconsistent results when I use the R p.adjust function to compute false discovery rate (FDR) adjusted p-values. Based on the documentation, the adjusted p-value should be calculated as follows, where the p-values are sorted in ascending order, i is the rank, and n is the total number of tests:

adjusted_p[i] = p[i] * (n / i)
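Here is a minimal R sketch of that formula as I understand it (the name naive_fdr is mine, just for illustration):

    # Naive FDR adjustment as I read the documentation:
    # adjusted_p[i] = p[i] * n / i, with p sorted in ascending order
    naive_fdr <- function(p) {
      n <- length(p)
      o <- order(p)                 # ascending order of the p-values
      adj <- p[o] * n / seq_len(n)  # p_(i) * n / i
      adj[order(o)]                 # restore the original input order
    }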
Now when I run p.adjust(c(0.0001, 0.0004, 0.0019), "fdr"), I get the result I expect:
c(0.0003, 0.0006, 0.0019)
But when I run p.adjust(c(0.517479039, 0.003657195, 0.006080152), "fdr"), I get
c(0.517479039, 0.009120228, 0.009120228)
instead of the result I calculate by hand:
c(0.517479039, 0.010971585, 0.009120228)
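For reference, running my naive_fdr sketch from above next to p.adjust shows where the two disagree:

    naive_fdr(c(0.0001, 0.0004, 0.0019))
    # c(0.0003, 0.0006, 0.0019)                -> same as p.adjust
    naive_fdr(c(0.517479039, 0.003657195, 0.006080152))
    # c(0.517479039, 0.010971585, 0.009120228) -> my hand calculation
    p.adjust(c(0.517479039, 0.003657195, 0.006080152), "fdr")
    # c(0.517479039, 0.009120228, 0.009120228) -> what R actually returns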
What is R actually doing that can account for both of these results?