A failure in the get_global_report tests causes pytest to wait forever
The broken API URLs (#94 (closed)) have caused a bunch of tests related to the arkindex process
subcommands to fail (see this job for an example). Pytest states the following at the end of the logs:
```
================== 4 failed, 120 passed, 1 warning in 16.86s ===================
```
Despite pytest reporting that the run took only 17 seconds, GitLab says the job ran for an hour, which is the timeout on CI jobs. I can also reproduce this locally. I captured some traces using py-spy and found that there are multiple threads: the main thread is waiting for the other threads to stop, and those other threads were started by rich to display progress bars.
I reduced the issue to this script:

```python
#!/usr/bin/env python3
from rich.progress import track

a = track([42])
next(a)
raise ZeroDivisionError
```
Run this script and you will see the expected `ZeroDivisionError` traceback, but the script never exits: rich's progress thread is still waiting for a progress update to happen. To avoid this issue, which could also occur when running the CLI normally if something goes wrong while fetching the ML reports, the generator iterator returned by `track()` needs to always be closed using its `close()` method.
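A minimal sketch of the mechanism, without depending on rich (`track_sketch` is a hypothetical stand-in for `rich.progress.track`, not rich's actual implementation): the generator starts a background thread that is only stopped in its `finally` block, which runs either when the iterable is exhausted or when `close()` raises `GeneratorExit` inside the generator. Calling `close()` in our own `finally` clause guarantees the cleanup happens even when an exception interrupts the loop.

```python
import threading


def track_sketch(iterable):
    """Hypothetical stand-in for rich.progress.track: a generator that
    starts a worker thread which only stops on generator cleanup."""
    stop = threading.Event()
    worker = threading.Thread(target=stop.wait)
    worker.start()
    try:
        yield from iterable
    finally:
        # Runs on exhaustion OR when close() injects GeneratorExit.
        # Without this running, the worker thread would wait forever.
        stop.set()
        worker.join()


progress = track_sketch([42])
try:
    next(progress)
    raise ZeroDivisionError
except ZeroDivisionError:
    pass
finally:
    # Always close the generator iterator, even on error paths,
    # so the background thread is stopped and the process can exit.
    progress.close()
```

The same pattern applies to the real `track()` iterator in the subcommand code: wrap the iteration in `try`/`finally` and call `close()` in the `finally` clause.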