Not sure I understand your comment - why not let the user start with an empty comparison page and add experiments via the "Add Experiment" button as well?
I'm not sure what you mean by "entity", but honestly anything works. We're already monkey-patching our way through 😄
Thanks! I'll wait for the release note/docs update 😁
Uhhh, not really, unfortunately ☹️. I have ~20 tasks running in a single file, and it's quite random if/when this happens. I've just noticed it tends to happen with the shorter tasks.
I'm guessing that's not on pypi yet?
Task.init is called at a later stage of the process, so I think this relates again to the whole setup flow we've been discussing both here and in #340... I promise to try ;)
What's new in 1.1.6rc0?
Any updates @<1523701087100473344:profile|SuccessfulKoala55> ? 🫣
Dynamic pipelines in a notebook, so I don’t have to recreate a pipeline every time a step is changed 🤔
It's given as the second form you suggested in the mini config ( http://${...}:8080 ). The quotation marks are added later by pyhocon.
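For reference, a minimal standalone repro of that quoting behavior (not the actual clearml.conf, just pyhocon round-tripping a string value):

```python
from pyhocon import ConfigFactory
from pyhocon.converter import HOCONConverter

# Standalone repro (placeholder URL, not the real clearml.conf):
conf = ConfigFactory.parse_string('web_server = "http://127.0.0.1:8080"')
print(conf["web_server"])             # -> http://127.0.0.1:8080 (value itself has no quotes)
# Serializing the tree back to HOCON re-adds the quotation marks around the string:
print(HOCONConverter.to_hocon(conf))  # -> web_server = "http://127.0.0.1:8080"
```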
This could be relevant SuccessfulKoala55; it might point to a serious bug in ClearML's multiprocessing handling too - https://stackoverflow.com/questions/45665991/multiprocessing-returns-too-many-open-files-but-using-with-as-fixes-it-wh
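For anyone following along, the gist of that SO thread is roughly this (a minimal sketch, not ClearML-specific):

```python
from multiprocessing import Pool

def work(x):
    return x * x

if __name__ == "__main__":
    # Creating Pools repeatedly without closing them leaks the pipes/semaphores
    # each pool opens, eventually hitting "Too many open files".
    # The context manager terminates the pool and releases its descriptors:
    with Pool(processes=4) as pool:
        results = pool.map(work, range(20))
    print(results)
```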
Heh, well, John wrote that in the first reply in this thread 🙂
And the main Task.init documentation page (nowhere near the code) says the following -
Any follow up thoughts SuccessfulKoala55 or CostlyOstrich36 ?
FWIW it’s also listed in other places @<1523704157695905792:profile|VivaciousBadger56>, e.g. the docs say:
In order to make sure we also automatically upload the model snapshot (instead of saving its local path), we need to pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket…
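In code, that corresponds to the output_uri argument of Task.init - a minimal sketch (project/task/bucket names here are placeholders):

```python
from clearml import Task

task = Task.init(
    project_name="examples",              # placeholder project
    task_name="train-with-uploads",       # placeholder task name
    # output_uri makes ClearML upload model snapshots to remote storage
    # instead of only recording their local paths:
    output_uri="s3://my-bucket/clearml-models",
)
```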
Thanks Alon. The clearml-data CLI is not mentioned anywhere in the full/official documentation, so perhaps the docs should be refreshed 😉
I think we're referring to different things here.
I won't be using the UI (and neither will my team).
But as mentioned, we've used DVC before and it adds a lot of junk metadata files to each GitHub PR (many dvc.yaml, dvc.lock, and .gitignore files). We're trying to avoid that as much as possible, hence my question about GitHub pull...
Thanks! To clarify: all the agent then does is spawn new nodes to cover the tasks?
Is Task.create the way to go here? 🤔
I can also do this via Mongo directly, but I was hoping to skip the K8S interaction there.
I wouldn't mind going the requests route if I could find the API endpoint from the SDK?
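One possible route here, assuming the SDK's bundled APIClient is acceptable (project ID below is a placeholder):

```python
from clearml.backend_api.session.client import APIClient

# APIClient reads credentials from clearml.conf, so no raw requests
# or direct Mongo access should be needed.
client = APIClient()
# Endpoint names mirror the REST API, e.g. tasks.get_all:
tasks = client.tasks.get_all(project=["<project-id>"])  # placeholder project ID
for t in tasks:
    print(t.id, t.name)
```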
Hmmm, what 😄
Hey @<1537605940121964544:profile|EnthusiasticShrimp49>! You’re mostly correct. The Step classes will be predefined (of course developers are encouraged to add/modify as needed), but as in the DataTransformationStep, user-defined functions may be specified. That’s not a problem though - I can provide these functions via the helper_functions argument.
- The `.add_function_step` is indeed a failing point. I can’t really create a task from the notebook because calling `Ta...
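For context, the shape I have in mind is roughly this (hypothetical step and function names; helper_functions is the add_function_step parameter mentioned above):

```python
from clearml import PipelineController

def normalize(values):
    # user-defined helper referenced inside the step body
    m = max(values)
    return [v / m for v in values]

def transform_step(raw):
    # step body; helper_functions below packs `normalize` into the
    # standalone task generated for this step
    return normalize(raw)

pipe = PipelineController(name="demo-pipeline", project="examples", version="0.0.1")
pipe.add_function_step(
    name="transform",
    function=transform_step,
    function_kwargs=dict(raw=[1, 2, 3]),
    function_return=["normalized"],
    helper_functions=[normalize],
)
```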
The tl;dr is that some of our users like poetry and others prefer pip. Since pip install git+.... stores the git data, it seems sensible to first try installing with pip, and only fall back to poetry afterwards, since pip would crash on a poetry project as poetry stores the git data elsewhere (in poetry.lock).
I guess it's mixed. If #340 is resolved, then this initializer task will be a no-op: detach it, and init/close new tasks as needed.
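i.e. something along these lines (a rough sketch; project/task names are placeholders):

```python
from clearml import Task

# Rough sketch: open a fresh task per unit of work, then close it,
# instead of keeping one long-lived initializer task around.
for i in range(3):
    task = Task.init(
        project_name="examples",      # placeholder
        task_name=f"unit-{i}",        # placeholder
        reuse_last_task_id=False,     # force a new task each iteration
    )
    # ... actual work goes here ...
    task.close()
```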
It's pulled from the remote repository; my best guess is that the uncommitted changes are applied only after the environment is set up?
The network is configured correctly 🙂 But the newly spun-up instances need to be placed in the same VPC/subnet somehow.