I have run into this problem myself. An ANN is built for a fixed feature-vector length, as are many other classifiers (KNN, SVM, Bayesian classifiers, etc.): the input layer must be defined up front and cannot change, so this is a design constraint. Some researchers prefer to pad the missing values with zeros. I personally do not think this is a good solution, because these zeros (unrealistic values) will affect the weights to which the network converges; besides, a real signal may genuinely end in zeros.
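A minimal sketch of the zero-padding approach described above, which also illustrates why it is problematic: after padding, a short signal that happens to end in a real zero becomes indistinguishable from a longer one. (The function name `zero_pad` is my own; it is not from any particular library.)

```python
def zero_pad(signals):
    """Pad every signal with trailing zeros up to the length of the longest."""
    L = max(len(s) for s in signals)
    return [list(s) + [0.0] * (L - len(s)) for s in signals]

# Two genuinely different signals...
padded = zero_pad([[1, 0], [1]])
# ...become identical after padding, so the classifier cannot tell them apart.
```

This ambiguity between "padding zero" and "measured zero" is exactly the objection raised above.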
An ANN is not the only classifier; there are others, and arguably better ones, for example the random forest. Many researchers consider this classifier among the best. It builds hundreds of decision trees using bagging (bootstrap aggregation), each tree trained on a small random subset of the features; the number of features selected per tree is usually the square root of the feature-vector size. Each decision tree reaches its own decision, and the forest then picks the most likely class by majority vote.
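To make the three ingredients above concrete (bootstrap samples, a random sqrt-sized feature subset per tree, majority voting), here is a toy sketch using depth-1 decision stumps in place of full trees. This is only an illustration of the mechanism; for real work one would use a library implementation such as scikit-learn's `RandomForestClassifier`.

```python
import random
from collections import Counter
from math import isqrt

def train_stump(X, y, feats):
    """Find the (feature, threshold) split over `feats` with the best accuracy."""
    best_acc, best = -1.0, None
    for f in feats:
        for t in sorted({row[f] for row in X}):
            left = [v for row, v in zip(X, y) if row[f] < t]
            right = [v for row, v in zip(X, y) if row[f] >= t]
            if not left or not right:
                continue
            lmaj = Counter(left).most_common(1)[0][0]   # majority class, left side
            rmaj = Counter(right).most_common(1)[0][0]  # majority class, right side
            acc = (sum(v == lmaj for v in left)
                   + sum(v == rmaj for v in right)) / len(y)
            if acc > best_acc:
                best_acc, best = acc, (f, t, lmaj, rmaj)
    return best

def train_forest(X, y, n_trees=25, seed=0):
    rng = random.Random(seed)
    d = len(X[0])
    k = max(1, isqrt(d))                     # sqrt(d) random features per tree
    trees = []
    for _ in range(n_trees):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        feats = rng.sample(range(d), k)
        stump = train_stump([X[i] for i in idx], [y[i] for i in idx], feats)
        if stump is not None:
            trees.append(stump)
    return trees

def predict(trees, x):
    """Majority vote over all trees."""
    votes = [(lmaj if x[f] < t else rmaj) for f, t, lmaj, rmaj in trees]
    return Counter(votes).most_common(1)[0][0]
```

For example, on two well-separated clusters, `train_forest` followed by `predict` recovers the correct class for new points near each cluster.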
Another solution is DTW (dynamic time warping), which aligns signals of different lengths directly, or, better still, a hidden Markov model (HMM).
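The classic DTW distance can be sketched in a few lines of dynamic programming; it compares two sequences of different lengths without any padding or resampling, which is exactly what makes it attractive here:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = cost of the best alignment of a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # a step may advance either sequence or both (insert/delete/match)
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Note that a sequence and a time-stretched copy of it (repeated samples) have DTW distance zero, whereas their Euclidean distance after zero padding would not be.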
Another solution is interpolation (filling in the missing values of the shorter signals), so that every short signal is stretched to the size of the longest signal. Interpolation methods include, but are not limited to, averaging, B-spline, cubic, etc.
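A minimal sketch of this resampling idea using linear interpolation via `numpy.interp`; the B-spline and cubic variants mentioned above would use `scipy.interpolate` instead. (The function name `resample` is my own.)

```python
import numpy as np

def resample(signals):
    """Stretch every signal to the length of the longest one by linear interpolation."""
    L = max(len(s) for s in signals)
    out = []
    for s in signals:
        x_old = np.linspace(0.0, 1.0, num=len(s))  # original sample positions
        x_new = np.linspace(0.0, 1.0, num=L)       # target sample positions
        out.append(np.interp(x_new, x_old, s))
    return out
```

After this step all signals have the same length, so any fixed-input classifier (ANN, SVM, ...) can be used, and no artificial zeros are introduced.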
Another solution is feature extraction: derive the best (most discriminative) features, which this time have a fixed size. Methods include PCA, LDA, etc.
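As an illustration of the PCA option, here is a bare-bones projection onto the top-k principal components via the SVD of the centered data; in practice one would reach for a library class such as scikit-learn's `PCA`.

```python
import numpy as np

def pca(X, k):
    """Project X onto its top-k principal directions."""
    Xc = X - X.mean(axis=0)                          # center the data
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                             # fixed-size representation
```

The output always has exactly k columns regardless of how many raw features went in, which is the "fixed size" property the paragraph above refers to.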
Another solution is feature selection (usually applied after feature extraction): an easy way to keep only the subset of features that gives the best accuracy.
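A toy univariate selector in that spirit: score each feature by how far apart the two class means are relative to the feature's spread (a Fisher-style score), then keep the top k. The function name `select_k_best` is my own, echoing scikit-learn's `SelectKBest`, which is the standard tool for this.

```python
import numpy as np

def select_k_best(X, y, k):
    """Return the indices of the k features that best separate class 0 from class 1."""
    X, y = np.asarray(X, float), np.asarray(y)
    m0 = X[y == 0].mean(axis=0)          # per-feature mean, class 0
    m1 = X[y == 1].mean(axis=0)          # per-feature mean, class 1
    spread = X.std(axis=0) + 1e-12       # avoid division by zero
    scores = np.abs(m0 - m1) / spread
    keep = np.argsort(scores)[::-1][:k]  # highest-scoring features first
    return np.sort(keep)
```

A feature that is constant across both classes gets score zero and is discarded first.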
That is what I have for the moment; if none of these work for you, contact me.