I would like to implement ongoing training of my neural network as new data keeps arriving. However, when I get new data, the normalized values will change over time. Say that over time I receive:
df <- "Factor1 Factor2 Factor3 Response 10 10000 0.4 99 15 10200 0 88 11 9200 1 99 13 10300 0.3 120" df <- read.table(text=df, header=TRUE) normalize <- function(x) { return ((x - min(x)) / (max(x) - min(x))) } dfNorm <- as.data.frame(lapply(df, normalize))
Then, when the next batch arrives:
df2 <- "Factor1 Factor2 Factor3 Response 12 10100 0.2 101 14 10900 -0.7 108 11 9800 0.8 120 11 10300 0.3 113" df2 <- read.table(text=df2, header=TRUE) Normalize all-time data in one shot dfNorm <- as.data.frame(lapply(df, normalize))
This is how I would retrain over time. However, I was wondering whether there is any graceful way to reduce the bias this introduces into continuous learning, since the normalized values inevitably change over time: each time the full history is renormalized, the same raw observation maps to a different normalized value. Here, I assume these shifting normalized values may bias the network.
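For instance, one workaround I can think of (I am not sure it is a good one) is to freeze the min/max from the initial data and keep applying them to every new batch. The sketch below shows that idea; `train_min`, `train_max`, and `normalize_with` are names I made up for illustration, not part of my code above.

```r
# Sketch: store normalization parameters from the initial batch and reuse them
train_min <- sapply(df, min)
train_max <- sapply(df, max)

normalize_with <- function(data, mins, maxs) {
  as.data.frame(Map(function(x, lo, hi) (x - lo) / (hi - lo), data, mins, maxs))
}

df2Norm <- normalize_with(df2, train_min, train_max)
# Drawback: new values outside the original range (e.g. Factor3 = -0.7)
# fall outside [0, 1], so the network sees inputs it was never trained on.
```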