The simplest approach is probably to keep some external state whose contents the implementation of f modifies.
(define x 0)
(define (f n) (let ((tmp x)) (set! x n) tmp))
So x is initially 0, and each call to f returns the current value of x and then saves its argument as the new value of x. Thus (f 0) followed by (f 1) returns 0 both times, leaving the final value of x equal to 1. Evaluating (f 1) followed by (f 0) instead yields 0 and then 1, with a final x of 0.
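The two call orders can be checked directly at a REPL; a minimal trace sketch, resetting x between the two runs so each starts from the same state:

```scheme
(define x 0)
(define (f n) (let ((tmp x)) (set! x n) tmp))

;; First order: (f 0) then (f 1)
(f 0)        ; => 0   (x is now 0)
(f 1)        ; => 0   (x is now 1)

(set! x 0)   ; reset the external state

;; Second order: (f 1) then (f 0)
(f 1)        ; => 0   (x is now 1)
(f 0)        ; => 1   (x is now 0)
```

Because the two orders produce different sequences of return values, a context that combines the results, such as summing them, exposes the evaluation order of its operands.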