Since the new way of uploading results into BigQuery happens as part of the crawl and no longer processes HARs after the fact, we are less able to recover from a bad upload than we were before.
In theory we could run the old Python pipeline (or even the Java one before it). In reality that code is no longer maintained, it's likely to break (and is already broken for the more recent, larger datasets), may not work on older datasets, and isn't on the tech stack we know or want to support.
It would be nice to have a "mini-crawler" that doesn't actually run the tests, but instead just uses the WPTAgent upload code to save a batch of historical HARs into BigQuery.
This would also allow us to handle gaps caused by code issues we never got around to fixing.
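A minimal sketch of what such a backfill could look like, purely under assumptions: it supposes the historical HARs sit in a GCS bucket, and the bucket, prefix, and target table names are hypothetical. The real mini-crawler would call into the actual WPTAgent upload/transform code rather than the placeholder row-building shown here.

```python
# Hypothetical backfill sketch. Bucket, prefix, and table names are illustrative
# assumptions; the row transform is a stub standing in for the WPTAgent upload code.
import gzip
import json

from google.cloud import bigquery, storage


def backfill_hars(bucket_name: str, prefix: str, table_id: str) -> None:
    """Read historical HAR files from GCS and load them into a BigQuery table."""
    gcs = storage.Client()
    bq = bigquery.Client()

    rows = []
    for blob in gcs.list_blobs(bucket_name, prefix=prefix):
        if not blob.name.endswith(".har.gz"):
            continue
        har = json.loads(gzip.decompress(blob.download_as_bytes()))
        # The real mini-crawler would run the WPTAgent upload/transform logic here;
        # this just stages the HAR's page entries as a placeholder row.
        pages = har.get("log", {}).get("pages", [])
        rows.append({"page": blob.name, "payload": json.dumps(pages)})

    # Assumes the destination table already exists with a matching schema.
    errors = bq.insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError(f"BigQuery insert errors: {errors}")


if __name__ == "__main__":
    backfill_hars("my-har-bucket", "crawls/2023_06_01/", "my-project.httparchive.pages_backfill")
```

A run like this could target one crawl's prefix at a time, so a bad batch can be deleted and re-uploaded without touching other dates.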