I'll take a stab at answering your question: I have limited knowledge of Parsey McParseface, but since no one else has answered, I hope I can add some value.
I think the main problem with most machine learning models is their lack of interpretability, which relates to your first question: "Why is this happening?" It is very difficult to say, because the tool is built on a black-box model, namely a neural network. That said, it seems extremely surprising, given the strong claims made about Parsey, that an ordinary word like "is" trips it up consistently. Perhaps you made a mistake? It's hard to say without the code.
Assuming you did not make a mistake, I think you could solve this problem (or at least mitigate it) using your observation that the word "is" seems to trip up the model: simply check each sentence for the word "is" and use GCloud (or another parser) in that case. Conveniently, once you are running both parsers anyway, you can also use GCloud as a fallback for any other cases where Parsey seems to fail, if you find more of them in the future.
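Here is a minimal sketch of that routing idea in Python. The two wrapper functions (`parse_with_parsey`, `parse_with_gcloud`) are hypothetical placeholders, since I don't know how you invoke each tool; substitute your actual calls.

```python
# Minimal routing sketch: send sentences containing a known trigger word
# to a fallback parser instead of Parsey McParseface.
# parse_with_parsey / parse_with_gcloud are hypothetical wrappers --
# replace their bodies with however you actually invoke each parser.

FALLBACK_WORDS = {"is"}  # extend this set as you find more failure cases

def parse_with_parsey(sentence: str) -> dict:
    raise NotImplementedError("wrap your Parsey McParseface invocation here")

def parse_with_gcloud(sentence: str) -> dict:
    raise NotImplementedError("wrap your GCloud (or other parser) call here")

def parse(sentence: str) -> dict:
    # Route to the fallback parser if any known trigger word appears.
    tokens = sentence.lower().split()
    if any(word in FALLBACK_WORDS for word in tokens):
        return parse_with_gcloud(sentence)
    return parse_with_parsey(sentence)
```

The set-based check keeps it cheap to add new trigger words later without touching the routing logic.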
As for improving the underlying model: if you are up for it, you could recreate it following the original paper and perhaps tune the training setup for your situation.