I have four tests in my Capybara / RSpec suite that keep failing (a real problem for getting CI up and running).
Worse still, the failures are intermittent, and often only happen when the full suite is run, which makes debugging difficult.
All of them involve AJAX requests: either a remote form is submitted or a record is deleted via a remote link, and then expect(page).to have_content 'My Flash Message'.
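For context, a stripped-down version of one of these specs looks roughly like this (the model, factory, route and link names are made up for illustration):

    # Illustrative only -- model, factory, route and link names are invented.
    require 'rails_helper'

    RSpec.feature 'Deleting a country', js: true do
      let!(:country) { create(:country, name: 'Narnia') }

      scenario 'removes the record via a remote link and shows a flash' do
        visit countries_path
        click_link 'Delete'   # data-remote link, submits via AJAX
        expect(page).to have_content 'My Flash Message'   # the assertion that flakes
      end
    end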
These tests even fail intermittently within the same test run. For example, I have several models that behave the same way, so I loop over them in the specs, e.g.:
    ['Country', 'State', 'City'].each do |object|
      let(:target) { create object.downcase.to_sym }

      it 'runs my frustrating test' do
        # ...
      end
    end
Sometimes Country fails, sometimes State does, sometimes they all pass.
I tried adding wait: 30 to the have_content matcher. I tried adding sleep 30 before the expect statement. I still get intermittent failures.
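Concretely, the two workarounds I tried looked like this (both still flaked):

    # Raising Capybara's wait time for this one matcher:
    expect(page).to have_content('My Flash Message', wait: 30)

    # Or a blunt pause before the assertion:
    sleep 30
    expect(page).to have_content 'My Flash Message'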
There is quite a lot of information out there about finicky AJAX tests, but I have not found much on how to debug and fix these kinds of problems.
I would be very grateful for any advice or pointers from others who have been through this, before I pull all my hair out!
UPDATE
Thanks for all the excellent answers. It was helpful to see that others have faced similar problems and that I was not alone.
So is there a solution?
The recommendations to use debugging tools such as pry, byebug and Poltergeist's debugging options (thanks @Jay-Ar Polidario, @TomWalpole) were useful for confirming what I thought I already knew: namely, as @BM5K suggested, that the features work consistently in the browser and the errors lie in the tests themselves.
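In case it helps anyone else, the debugging setup was roughly this (Poltergeist's inspector option and page.driver.debug come from its README; pry/byebug are the usual REPL calls, and the spec body is invented):

    # rails_helper.rb -- register a driver with the remote inspector enabled
    require 'capybara/poltergeist'

    Capybara.register_driver :poltergeist_debug do |app|
      Capybara::Poltergeist::Driver.new(app, inspector: true)
    end
    Capybara.javascript_driver = :poltergeist_debug

    # Inside a flaky spec:
    it 'runs my frustrating test' do
      visit countries_path
      page.driver.debug   # pauses the test and opens the WebKit inspector
      binding.pry         # or `byebug`, to poke around the test state
      click_link 'Delete'
      expect(page).to have_content 'My Flash Message'
    end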
I experimented with adjusting timeouts and adding retries (@Jay-Ar Polidario, @BM5K), and although things improved it still wasn't consistent. More importantly, this approach felt like patching over the holes rather than properly fixing them, so I wasn't entirely happy with it.
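For reference, the timeout/retry experiments looked roughly like this (using the rspec-retry gem for retries and Poltergeist's timeout option; treat it as a sketch rather than a recommendation, since for me it only papered over the problem):

    # Gemfile (test group)
    gem 'rspec-retry'

    # rails_helper.rb
    require 'rspec/retry'
    require 'capybara/poltergeist'

    Capybara.register_driver :poltergeist do |app|
      Capybara::Poltergeist::Driver.new(app, timeout: 60)   # raise the driver timeout
    end

    RSpec.configure do |config|
      config.verbose_retry = true
      config.around :each, :js do |example|
        example.run_with_retry retry: 3   # re-run flaky JS specs up to 3 times
      end
    end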
I ended up going with a major rewrite of these tests. This involved breaking up the multi-step features and setting up and testing each step individually. Purists may argue that this is no longer really testing from the user's point of view, but there is enough overlap between the individual tests that I'm comfortable with the result.
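As a sketch of what that rewrite looked like (names invented for illustration), one long multi-step scenario became several small ones, each with its own setup:

    # Before: one long journey, flaking somewhere in the middle
    scenario 'creates and then deletes a city', js: true do
      visit new_city_path
      fill_in 'Name', with: 'Springfield'
      click_button 'Create City'
      click_link 'Delete'
      expect(page).to have_content 'My Flash Message'
    end

    # After: each step gets its own example and its own setup
    scenario 'creates a city', js: true do
      visit new_city_path
      fill_in 'Name', with: 'Springfield'
      click_button 'Create City'
      expect(page).to have_content 'My Flash Message'
    end

    scenario 'deletes a city', js: true do
      city = create(:city, name: 'Springfield')
      visit cities_path
      click_link 'Delete'
      expect(page).to have_content 'My Flash Message'
    end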
Having gone through this process, I noticed that all of these failures were related to "clicking things or filling in forms", as @BoraMa suggested. Although it went against that advice in this case, we had adopted the .trigger('click') syntax because Capybara + Poltergeist reported errors when clicking elements with click_link or find(object).click, and these were exactly the tests that were problematic.
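To make the difference concrete, this is the pattern we had adopted versus the standard Capybara calls (.trigger is a Poltergeist-specific escape hatch that bypasses Capybara's visibility and overlap checks; the selector is invented):

    # What we had switched to in order to silence Poltergeist's click errors:
    find('a.delete-country').trigger('click')

    # The standard Capybara calls that had been raising those errors for us:
    click_link 'Delete'
    find('a.delete-country').click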
To avoid these problems, I removed JS from the tests as far as possible; that is, testing most of the feature without JS, and then writing very short, targeted JS specs to test specific JS responses, behaviours, or user feedback.
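In practice the split looks something like this (names invented again): most of the feature runs on the default rack_test driver, and only the remote behaviour and user feedback get a tiny js: true spec:

    # Most of the feature: no JS driver needed
    scenario 'creating a country' do
      visit new_country_path
      fill_in 'Name', with: 'Narnia'
      click_button 'Create Country'
      expect(page).to have_content 'Narnia'
    end

    # A very short, targeted spec just for the remote behaviour and user feedback
    scenario 'deleting a country shows a flash without a full page reload', js: true do
      create(:country, name: 'Narnia')
      visit countries_path
      click_link 'Delete'
      expect(page).to have_content 'My Flash Message'
      expect(page).not_to have_content 'Narnia'
    end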
So, in truth, there was no single fix. The major refactoring, which frankly probably needed to happen anyway, was a valuable exercise. The tests lost a little of the end-to-end flow by being broken down into separate examples, but overall they became easier to read and maintain.
There are still a few tests that occasionally go red and will need further work, but overall it is a big improvement.
Thank you all for the great guidance, and for reassuring me that the way the test environment handles these interactions can be the root cause.