Hydra usage question: jobsets vs. evaluations?

My situation is that I have a repo with a machine-generated flake that will be locked and tagged many times per day. I would like to build each of these tags on my Hydra instance. As far as I can tell, there are roughly two ways this could happen:

  • Each time I tag the repo, I could create a new jobset for it, and then trigger that single jobset to evaluate once (a rough sketch of what I mean is just after this list). I would also need a corresponding process to retire the jobsets for old tags that are no longer needed.
  • I could have a single jobset for a whole branch of these tags, and trigger an evaluation whenever I update the branch to point at the newest tag.
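To make the first option concrete, here is roughly the kind of automation I have in mind. Everything in it is a guess on my part: the host, project, credentials, flake ref, and tag are placeholders, the endpoints (/login, PUT /jobset/&lt;project&gt;/&lt;name&gt;, DELETE /jobset/…, /api/push) and field names come from my reading of Hydra’s REST API and the declarative-jobset JSON format, and I haven’t verified the exact HTTP methods. So please treat it as a sketch of the idea, not something known to work:

```python
#!/usr/bin/env python3
"""Sketch of option 1: one short-lived Hydra jobset per tag.

All names here are placeholders, and the endpoints/field names are my best
guess from Hydra's REST API and the declarative-jobset JSON format.
"""
import json
import urllib.request
from http.cookiejar import CookieJar

HYDRA_URL = "https://hydra.example.org"  # placeholder
PROJECT = "my-project"                   # placeholder

# One opener so the session cookie from /login is reused on later requests.
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(CookieJar()))


def api(method, path, body=None):
    req = urllib.request.Request(
        f"{HYDRA_URL}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            # I believe newer Hydras check Origin/Referer on state-changing
            # requests, so send one just in case.
            "Origin": HYDRA_URL,
        },
    )
    with opener.open(req) as resp:
        return resp.read()


def build_tag(tag):
    """Create a flake jobset pinned to one tag and ask for a single evaluation."""
    api("POST", "/login", {"username": "automation", "password": "..."})
    api("PUT", f"/jobset/{PROJECT}/{tag}", {
        "name": tag,
        "description": f"One-shot build of tag {tag}",
        "type": 1,                              # 1 = flake jobset, I think
        "flake": f"github:example/repo/{tag}",  # placeholder flake ref
        "enabled": 1,       # 2 ("one-shot") might actually fit better here
        "visible": True,
        "checkinterval": 0,                     # never poll; trigger manually
        "keepnr": 1,
    })
    # Ask Hydra to evaluate just this jobset now.  (Not sure whether the
    # method should be PUT or POST here.)
    api("PUT", f"/api/push?jobsets={PROJECT}:{tag}")
    # Retiring an old tag would presumably be:
    #   api("DELETE", f"/jobset/{PROJECT}/{old_tag}")


if __name__ == "__main__":
    build_tag("v1.2.3")  # placeholder tag name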

The first option seems “purer” in the sense that each tag is explicitly its own thing in Hydra, and there’s no race condition where I’m depending on Hydra to see the state of a branch at a particular time. But it’s also clear that the vision for Hydra is not to have a zillion jobsets created dynamically like this: the “Couldn’t find an evaluation to compare to” message makes it clear that the expectation is that a jobset is evaluated many times and changes over time, not that it’s a one-and-done thing like it would be if it were tied to a single tag.

So my main questions are:

  • Is the first approach actually bad and going to lead to issues, or should I just stop worrying and learn to love it? How feasible would it be to do things like patch out the “Couldn’t find” message, or to give each jobset, at creation time, a specific other jobset to use as its default comparison point?
  • If the second approach is the way to go, is there a way I could trigger an evaluation (even via the API) for a specific tag/hash, or is an evaluation always just going to reflect whatever is true at the moment it runs? (A rough sketch of what I’m imagining is below.) If I went this route, would there be a way to have the evaluation identified in the web UI by the tag rather than by an arbitrary integer?
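For completeness, here is roughly what I picture the second option looking like, if the jobset’s “flake” field can be re-pointed at an exact tag through the same PUT endpoint before hitting /api/push. Same placeholders and the same api() helper as in the sketch above, and again none of this is verified:

```python
#!/usr/bin/env python3
"""Sketch of option 2: one long-lived jobset, re-pinned to each tag.

Same placeholders and caveats as the previous sketch.  The idea is that the
jobset's "flake" field always points at an exact tag rather than the moving
branch, so an evaluation can't race against a later push.
"""
import json
import urllib.request
from http.cookiejar import CookieJar

HYDRA_URL = "https://hydra.example.org"     # placeholder
PROJECT, JOBSET = "my-project", "releases"  # placeholders

opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(CookieJar()))


def api(method, path, body=None):
    req = urllib.request.Request(
        f"{HYDRA_URL}{path}",
        data=json.dumps(body).encode() if body is not None else None,
        method=method,
        headers={"Content-Type": "application/json",
                 "Accept": "application/json",
                 "Origin": HYDRA_URL},
    )
    with opener.open(req) as resp:
        return resp.read()


def evaluate_tag(tag):
    """Point the existing jobset at an exact tag, then request an evaluation."""
    api("POST", "/login", {"username": "automation", "password": "..."})
    api("PUT", f"/jobset/{PROJECT}/{JOBSET}", {
        "name": JOBSET,
        "type": 1,                              # flake jobset
        "flake": f"github:example/repo/{tag}",  # placeholder flake ref
        "enabled": 1,
        "visible": True,
        "checkinterval": 0,  # never poll; evaluations only happen on request
        "keepnr": 10,
    })
    api("PUT", f"/api/push?jobsets={PROJECT}:{JOBSET}")  # or POST?


if __name__ == "__main__":
    evaluate_tag("v1.2.3")  # placeholder tag name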

Thanks, Hydra gurus!
