How R calculates the false discovery rate

I seem to get inconsistent results when I use R's p.adjust function to calculate the false discovery rate (FDR). According to the documentation, the adjusted p-value should be calculated as follows (with the p-values sorted in ascending order):

adjusted_p_at_index_i = p_at_index_i * (total_number_of_tests / i)

Now when I run p.adjust(c(0.0001, 0.0004, 0.0019), "fdr"), I get the expected result:

c(0.0003, 0.0006, 0.0019)

but when I run p.adjust(c(0.517479039, 0.003657195, 0.006080152), "fdr") I get this:

c(0.517479039, 0.009120228, 0.009120228)

instead of the result I calculate by hand:

c(0.517479039, 0.010971585, 0.009120228)
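
(For reference, this is how I arrive at those numbers; naive_fdr is just a throwaway name for my reading of the formula above.)

naive_fdr <- function(p) {
  m <- length(p)
  o <- order(p)                  # ascending order of the raw p-values
  adj <- p[o] * m / seq_len(m)   # p_(i) * (total_number_of_tests / i)
  adj[order(o)]                  # back to the original input order
}
naive_fdr(c(0.517479039, 0.003657195, 0.006080152))
# [1] 0.517479039 0.010971585 0.009120228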

What is R doing to the data that can account for both of these results?

1 answer

This happens because the FDR (Benjamini-Hochberg) adjustment forces the adjusted p-values to be monotone: they must stay in the same order as the raw p-values, so a smaller raw p-value can never receive a larger FDR-adjusted value than a bigger one.

In your second example, the naive formula would give the smallest p-value, 0.003657195, an adjusted value of 0.010971585, which is larger than the adjusted value of a bigger p-value: 0.006080152 gets 0.009120228. To keep the adjusted values monotone, the smaller p-value is therefore also assigned 0.009120228.
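
You can check that monotonicity property directly with any vector of p-values, for example:

set.seed(1)
pp  <- runif(10)                 # some arbitrary p-values
adj <- p.adjust(pp, "fdr")
all(diff(adj[order(pp)]) >= 0)   # TRUE: adjusted values never decrease as the raw p-values grow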

First, look at the relevant branch in the source code of p.adjust:

...
}, BH = {
    i <- lp:1L                        # i = m, m-1, ..., 1 (lp is the number of p-values)
    o <- order(p, decreasing = TRUE)  # sort the p-values from largest to smallest
    ro <- order(o)                    # permutation that restores the original order
    pmin(1, cummin(n/i * p[o]))[ro]   # n/i * p, then a running minimum, capped at 1
}, ...

The cummin (cumulative minimum) is applied while walking from the largest p-value down to the smallest, and that running minimum is exactly what enforces the monotonicity described above.
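
Tracing that branch on your second vector shows where 0.010971585 disappears (n and lp are both 3 here, and the names mirror the p.adjust source above):

p <- c(0.517479039, 0.003657195, 0.006080152)
n <- length(p); lp <- n
i <- lp:1L                            # 3 2 1
o <- order(p, decreasing = TRUE)      # 1 3 2  (largest p-value first)
n/i * p[o]                            # 0.517479039 0.009120228 0.010971585
cummin(n/i * p[o])                    # 0.517479039 0.009120228 0.009120228
pmin(1, cummin(n/i * p[o]))[order(o)] # 0.517479039 0.009120228 0.009120228  (back in input order)

The running minimum replaces 0.010971585 (the value coming from the smallest raw p-value) with 0.009120228, which is why your two smaller p-values end up with the same adjusted value.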

Second, compare this with the original Benjamini-Hochberg (1995) paper, p. 293, where the procedure is defined (paraphrasing):

let k be the largest i for which P_(i) <= (i/m) * q*;

then reject all H_(i) for i = 1, 2, ..., k.
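
The adjusted p-values returned by p.adjust are just a convenient encoding of that step-up rule: H_(i) is rejected at FDR level q* exactly when its adjusted p-value is at most q*. A quick check on your data (0.01 is an arbitrary level picked for illustration):

p <- c(0.517479039, 0.003657195, 0.006080152)
q <- 0.01                                           # arbitrary FDR level for illustration
m <- length(p)
k <- max(c(0, which(sort(p) <= seq_len(m)/m * q)))  # largest i with P_(i) <= (i/m) * q
stepup_reject <- rank(p) <= k                       # reject H_(i) for i = 1, ..., k
adjust_reject <- p.adjust(p, "fdr") <= q
identical(stepup_reject, adjust_reject)             # TRUE

Note that k = 2 here even though the smallest p-value fails its own threshold (0.003657195 > (1/3) * 0.01); the step-up rule still rejects it, which is the same behaviour the cummin capping expresses.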

