AgitatedDove14 - any thoughts?
Is it not possible to just say: look at my requirements.txt file and the imports in the script?
I was having this confusion as well. Did the behavior of execute_remotely change? The task used to end up in Draft, but now it's Aborted?
Ok, I did a pip install -r requirements.txt and NOW it picks them up correctly
Any specific use case for the required “draft” mode?
Nothing, except that Draft makes sense: it feels like the task is being prepped, while Aborted feels like something went wrong
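For context, a minimal sketch of the call being discussed (a sketch only: it assumes a configured ClearML server/agent and a queue named "default", which are placeholders here; the resulting local task state is exactly what the question above is about):

```python
from clearml import Task

# Not runnable without a ClearML server; project/task/queue names are
# assumptions for this sketch.
task = Task.init(project_name="examples", task_name="remote-run")

# Stops local execution and enqueues the task for an agent to pick up.
task.execute_remotely(queue_name="default", exit_process=True)
```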
This actually ties well with the next version of pipelines we are working on
Is there a way to see a roadmap on such things AgitatedDove14 ?
Chance of recording being available?
Yeah concerns make sense.
The underlying root issue is unnecessary models being added (or at least what I think are unnecessary), and this even happens when you load a model just to test.
Do people use ClearML with huggingface transformers? The code is std transformers code.
Will create an issue.
AgitatedDove14 - yeah, wanted to see what's happening before disabling, as I wasn't sure if this is what's expected.
Ref of dvc doing about the same - https://github.com/iterative/dvc/blob/master/dvc/fs/s3.py#L127-L134
What could those triggers be?
How do you see things being used? What's the most common way?
It completed after the max_job limit (10)
Doesn’t set the parent though 🤔
Would like to get to the Maturity Level 2 here
Sagemaker will make that easy, especially if I have Sagemaker as the long-tail choice. Granted, at a higher cost.
Got it. Never ran GPU workload in EKS before. Do you have any experience and things to watch out for?
So packages have to be installed and not just be mentioned in requirements / imported?
I am running from a notebook and the cell has returned
GrumpyPenguin23 both in general and clearml 🙂
I am doing Task.init, but it's not adding the expected libraries imported in the script or from requirements.txt
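In case it's useful, my understanding is that the requirements can be registered explicitly before Task.init; a sketch under the assumption that add_requirements accepts a requirements file path in the clearml version being used (worth double-checking against the docs):

```python
from clearml import Task

# Must be called BEFORE Task.init; the file path and names here are
# placeholders for this sketch, and a ClearML server is required to run it.
Task.add_requirements("requirements.txt")

task = Task.init(project_name="examples", task_name="reqs-test")
```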
More interested in some way of doing this system-wide
Does a pipeline step behave differently?
Thanks for the confirmation.
Yes using clearml-data.
Can I pass an s3 path to ds.add_files(), essentially so that I can store a dataset directly without having to pull the files locally and then upload them again? Does that make sense?
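For reference, a sketch of the approach I ended up looking at: registering the S3 objects by reference rather than copying them locally first (assumes a recent clearml version with add_external_files, plus configured S3 credentials; all names and the bucket path below are placeholders):

```python
from clearml import Dataset

# Not runnable without a ClearML server and S3 access; dataset/project
# names and the s3:// path are assumptions for this sketch.
ds = Dataset.create(dataset_name="my-dataset", dataset_project="examples")

# Records the remote URLs in the dataset instead of uploading local copies.
ds.add_external_files(source_url="s3://my-bucket/path/to/files/")
ds.upload()
ds.finalize()
```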
Will try it out. Pretty impressed 🙂