Define the offset of a sequence as the difference between its maximum and minimum elements. Given a sequence of integers, find the maximum offset over all contiguous subsequences (windows) of length m.
For example, if our sequence is [1, 5, 7, 0, 2, -4] and m = 3,
- [1, 5, 7] has an offset of 6.
- [5, 7, 0] has an offset of 7.
- [7, 0, 2] has an offset of 7.
- [0, 2, -4] has an offset of 6.
- Thus, the maximum offset is 7.
If we denote by n the length of the input sequence, then my solution below runs in O(n log m) time. Is there any way to do better? I feel that there should be a linear-time algorithm that I am missing. For the purposes of this question, all I care about is the asymptotic time complexity.
#include <vector>
#include <set>
#include <iostream>
int find_max_displacement(const std::vector<int>& seq, int m){
    // Seed the multiset with the first window of m elements.
    std::multiset<int> subseq;
    for (int i = 0; i < m; i++){
        subseq.insert(seq[i]);
    }
    int max_disp = *subseq.rbegin() - *subseq.begin();
    // Slide the window one step at a time: remove the element that leaves
    // and insert the one that enters, each in O(log m).
    int n = static_cast<int>(seq.size());
    for (int i = 0; i < n - m; i++){
        subseq.erase(subseq.find(seq[i]));  // erase one copy, not all duplicates
        subseq.insert(seq[i+m]);
        int new_disp = *subseq.rbegin() - *subseq.begin();
        if (new_disp > max_disp){
            max_disp = new_disp;
        }
    }
    return max_disp;
}
int main(){
    std::vector<int> arr {1, 5, 7, 0, 2, -4};
    int max_disp = find_max_displacement(arr, 3);
    std::cout << max_disp << std::endl;  // prints 7
    return 0;
}