I'd like to set up both with and without GPUs. I can use any region, preferably some EU one.
So the pipeline runs successfully, I can find all the different tasks, but I cannot see them in the Pipelines tab…
The title is specified in the plot (see the example, even if small).
I'm just creating a figure normally with matplotlib and saving it to disk.
nevermind! Found and answered (solution in the issue linked above)
Also, full disclosure: I'm not part of the ClearML team and have only recently started using pipelines myself, so all of the above is just learnings from my own trials.
Great, thanks! Any idea about environment variables and/or other files (CSV)? I suppose I could use task.upload_artifact for the CSVs, but I'm still unsure about the environment variables.
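For the CSV side of this, a minimal sketch of producing a file that could then be attached to a task. The file name, columns, and values here are all illustrative; the ClearML calls are shown only in comments since they need a live task:

```python
import csv
import os

# Hypothetical CSV we'd want to attach to a ClearML task
path = "metrics.csv"
with open(path, "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["epoch", "loss"])
    writer.writerow([1, 0.42])

# With an initialized Task, the file could then be attached via
#   task.upload_artifact(name="metrics", artifact_object=path)
# and environment-like values recorded as parameters with
#   task.connect({"DATA_DIR": "/mnt/data"}, name="env")
print(os.path.exists(path))
```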
No task, no dataset, just an empty container with no reference to the task it's attached to.
It seems to me that it should not move the task if use_current_task=True?
Sure CostlyOstrich36, sorry it took me so long to reply. I minimized the window a bit here so everything will fill in nicely. Worth mentioning this happens on all pages of course, but I went to the profile page so you can also see the clearml server version.
Different AMI image/installing older Python instances that don't enforce this...
For future reference though, the environment variable should be PIP_USE_PEP517=false
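A minimal shell sketch of setting this. The variable name and value come from the message above; where you export it (agent config, Dockerfile, shell profile) depends on your setup:

```shell
# Ask pip to skip PEP 517 builds (value per the thread above)
export PIP_USE_PEP517=false
echo "PIP_USE_PEP517=$PIP_USE_PEP517"
```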
Thanks for your help SuccessfulKoala55 ! Appreciate the patience.
Thanks AgitatedDove14 , I'll first have to prove viability with the free version :)
StorageManager.download_folder(remote_url='s3://some_ip:9000/clearml/my_folder_of_interest', local_folder='./') yields a new folder structure, ./clearml/my_folder_of_interest, rather than just ./my_folder_of_interest
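One way to work around this, sketched with local stand-in paths instead of a real download: move the nested folder up one level after the download finishes. All names here are illustrative:

```python
import os
import shutil
import tempfile

# download_folder recreates the full remote path under local_folder, so
# 's3://some_ip:9000/clearml/my_folder_of_interest' lands at
# '<local_folder>/clearml/my_folder_of_interest'. Simulate that layout:
base = tempfile.mkdtemp()
downloaded = os.path.join(base, "clearml", "my_folder_of_interest")
os.makedirs(downloaded)

# Move the inner folder up so it sits directly under local_folder
target = os.path.join(base, "my_folder_of_interest")
shutil.move(downloaded, target)
print(os.path.isdir(target))  # True
```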
It is installed on the pipeline creating the machine.
I have no idea why it did not automatically detect it.
I know, that should indeed be the default behaviour, but at least from my tests the use of --python ... was consistent, whereas for some reason this old virtualenv decided to use python2.7 otherwise.
SuccessfulKoala55 TimelyPenguin76
After looking into it, I think it's because our AMI does not have docker, and that the default instance suggested by ClearML auto scaler example is outdated
Sorry AgitatedDove14 , forgot to get back to this.
I've been trying to convince my team to drop poetry.
Yes, thanks AgitatedDove14 ! It's just that the configuration object passed onwards was a bit confusing.
Is there a planned documentation overhaul?
We have an internal mono-repo and some of the packages are required - they're all available correctly for the controller, only some are required for the individual tasks, but the "magic" doesn't happen.
That is, the controller does not identify them as a requirement, so they're not installed in the tasks environment.
Ah, it already exists ( https://github.com/allegroai/clearml-server/issues/134 ), so I commented on it
Using an on-prem clearml server, latest published version
I mean, if I search for "model", will it automatically search for tasks containing "model" in their name?
I'll have a look, at least it seems to only use from clearml import Task, so unless mlflow changed their SDK, it might still work!
Sounds like a nice idea!
Follow-up: any ideas how to avoid PEP 517 with the auto scaler? It takes a long time to build the wheels.
Hah. Now it worked.
It's self-hosted TimelyPenguin76
SuccessfulKoala55 WebApp: 1.4.0-175 • Server: 1.4.0-175 • API: 2.18
DeterminedCrab71 not in this scenario, but I do have it occasionally, see my earlier thread asking how to increase session timeout time