Puppeteer stochasticity and timely pauses
Donna Hogan
Posted on February 21, 2019
When only the watched tests are passing...
I'd written a few tests with Jest and Puppeteer for my SPA built with React. Very simple stuff: launch the browser, navigate to the page at localhost, enter some text and click a button, then wait for the next page to load and find a particular selector, that kinda thing.
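For anyone picturing the setup, here's a minimal sketch of that kind of test. The URL and selectors are placeholders, not from my actual project:

```js
const puppeteer = require('puppeteer');

let browser;
let page;

beforeAll(async () => {
  browser = await puppeteer.launch(); // headless by default
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

test('submitting the form loads the next view', async () => {
  await page.goto('http://localhost:3000');
  await page.type('#name-input', 'Donna');
  await page.click('#submit-button');
  // the test passes once the next view's content shows up
  await page.waitForSelector('#next-view-heading');
});
```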
I added a small feature to one of the views that caused some unexpected chaos with my tests. I hadn't even written any new tests to cover the new component, but I noticed that my tests were suddenly only passing when run with {headless: false}. Like it needed me to be watching in order for it to work.
I assumed it was a timing issue (and I was close!), so I kept bumping the timeout ever higher on different waitForSelectors and on the suite as a whole, and throwing pauses in there (I was getting so close!), but still no luck. When it ran in headless mode, it would fail inconsistently: not in the same place each time, but in one of a few different places at random.
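For reference, the timeout bumping looked roughly like this (continuing the sketch above; the numbers and selector are just examples):

```js
// raise the whole-suite timeout from Jest's 5-second default
jest.setTimeout(30000);

test('waits (much) longer for the next view', async () => {
  await page.goto('http://localhost:3000');
  await page.click('#submit-button');
  // and a per-call timeout on the wait itself
  await page.waitForSelector('#next-view-heading', { timeout: 60000 });
});
```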
I googled (one of my searches yielded only 9 hits!), frantically read the docs, joined the Puppeteer Slack community (which is actually very quiet), and read all (like, 3) questions in the GitHub forum from people who were seeing similar results. They all seemed to be running into issues with headless browsers being blocked by some websites. Interesting, but definitely not the issue I was having. I'd hoped to find something like a {stochastic: false} mode 😆
Reflecting on what I'd had to change to bring in that extra little component, I remembered I had needed to add a lifecycle method (componentDidUpdate) to the same view where the tests tended to start failing.
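This isn't my actual component, but the shape of the problem was roughly this hypothetical sketch, where componentDidUpdate does a little extra work after the view has already rendered:

```jsx
import React from 'react';

// Hypothetical illustration only -- not my real component. The point is that
// componentDidUpdate runs *after* the DOM is painted, so whatever it does
// finishes a beat later than the page "looks" ready.
class ResultsView extends React.Component {
  componentDidUpdate(prevProps) {
    if (prevProps.items !== this.props.items) {
      this.recalculateSummary(); // follow-up work the tests were racing past
    }
  }

  recalculateSummary() {
    // placeholder for whatever the real view recomputes
  }

  render() {
    return <div id="results">{/* links the tests click on */}</div>;
  }
}

export default ResultsView;
```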
So, in the tests, after landing on that page/view and identifying all the selectors and whatever (I had already confirmed with screenshots that the page had enough time to load), but before simulating the click event on one of the links, I added a 3-second pause in both places where that step occurs... and suddenly the tests were passing consistently! In only 9 seconds!
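Concretely, the fix looked something like this (again with placeholder selectors):

```js
// small helper so the intent reads clearly in the test
const pause = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

test('clicks through after the view has settled', async () => {
  await page.waitForSelector('#results'); // the view is visually ready...
  await pause(3000); // ...but give componentDidUpdate a moment to finish
  await page.click('#some-link');
  await page.waitForSelector('#next-view-heading');
});
```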
I know pauses aren't best practice in tests, and if I were writing these tests with Enzyme instead I probably wouldn't need to worry about this, but in this case it seems to be the best way to let that lifecycle method resolve before interacting with the links on the page. And it actually emulates the user's experience more closely: every time it was working with {headless: false}, I also had {slowMo: 250}, which was incidentally just enough time for the lifecycle method to resolve. This means I could probably safely decrease those pauses to a half-second or less, but the 3 seconds builds in plenty of margin for slowness on Travis CI's end.
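For what it's worth, the "watched" configuration that always passed looked roughly like this:

```js
const puppeteer = require('puppeteer');

// A visible browser window, with every Puppeteer operation slowed by 250ms.
// Flip headless back to true (and drop slowMo) for headless CI runs.
async function launchWatchedBrowser() {
  return puppeteer.launch({
    headless: false,
    slowMo: 250, // milliseconds of delay before each Puppeteer operation
  });
}
```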
What seemed like chaotic behavior of the tests was really a much simpler process (that I wasn't seeing) resolving just a bit slower than the observation period.
Maybe this blog post will be search result No. 10 and save someone else a little time debugging tests!