FWIW It's also listed in other places @<1523704157695905792:profile|VivaciousBadger56>, e.g. None says:
In order to make sure we also automatically upload the model snapshot (instead of saving its local path), we need to pass a storage location for the model files to be uploaded to.
For example, upload all snapshots to an S3 bucket…
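For reference, a minimal sketch of what that looks like (the bucket, project, and task names here are placeholders, not from this thread):

```python
from clearml import Task

# Passing output_uri tells ClearML to upload model snapshots (and artifacts)
# to this storage location instead of only recording their local path.
task = Task.init(
    project_name="examples",                       # placeholder project name
    task_name="training with uploaded snapshots",  # placeholder task name
    output_uri="s3://my-bucket/clearml-models/",   # placeholder S3 bucket/prefix
)
```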
I'm also getting the following warning, I guess it's some ClearML dependency?
IPython could not be loaded!
AgitatedDove14
hmmm... they are important, but only when starting the process. Any specific suggestion?
(and they are deleted after the Task is done, so they are temp)
Ah, then no, sounds temporary. If they're only relevant when starting the process though, I would suggest deleting them immediately once they're no longer needed, rather than waiting for the end of the task (if possible, of course)
Is it currently broken?
Removing the PVC is just setting the state to absent AFAIK
Thanks for the reply CostlyOstrich36 !
Does the task read/use the cache_dir directly? It's fine for it to be a cache and then removed from the fileserver; if users want the data to stay, they will use the ClearML Dataset
The S3 solution is bad for us since we have to create a folder for each task (before the task is created), and hope it doesn't get overwritten by the time it executes.
Argument augmentation - say I run my code with python train.py my_config.yaml -e admin.env ...
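To make the scenario concrete, a minimal sketch of that kind of entry point (the argument names and defaults are placeholders, not my real script); when argparse is used, ClearML should pick up the parsed arguments via Task.init():

```python
# train.py -- minimal sketch; argument names and paths are placeholders
import argparse
from clearml import Task

# Task.init() hooks into argparse, so the parsed arguments show up
# under the task's hyperparameters and can be overridden on re-runs.
task = Task.init(project_name="examples", task_name="train")

parser = argparse.ArgumentParser()
parser.add_argument("config", help="YAML config path, e.g. my_config.yaml")
parser.add_argument("-e", "--env-file", help="env file path, e.g. admin.env")
args = parser.parse_args()
```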
Any updates @<1523701087100473344:profile|SuccessfulKoala55>?
I'm running tests with pytest; it consumes/owns the stream
That's fine for the current use-case I believe.
Once the team is happy with the logging functionality, we'll move on to remote execution and things will update.
Yes; I tried running it both outside a venv and inside one. No idea why it uses Python 2.7.
Any simple ways around this for now? @<1523701070390366208:profile|CostlyOstrich36>
Actually TimelyPenguin76 I only get the following as a "preview" -- I thought the preview for an image would be... the image itself...?
That's probably in the newer ClearML server pages then; I'll have to wait still
Yes, a lot of moving pieces here as we're trying to migrate to AWS, set up the autoscaler, and more
Yes, I want ClearML to load and parse the config before that. But now I'm not sure those settings in the config are even exposed as environment variables?
I will! (once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile wondering where I got a random worker from
Can I query where the worker is running (IP)?
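For example, something along these lines (a sketch using the APIClient; I'm assuming the worker entries expose an ip field, which may differ between server versions):

```python
from clearml.backend_api.session.client import APIClient

# Sketch: list registered workers and print where each one reports from.
client = APIClient()
for worker in client.workers.get_all():
    # "ip" is assumed to be present on the worker entry; fall back gracefully if not.
    print(worker.id, getattr(worker, "ip", "<no ip reported>"))
```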
Thanks AgitatedDove14 , I'll first have to prove viability with the free version :)
Indeed. I'll open an issue, sure!
One more UI question TimelyPenguin76, if I may -- it seems one cannot simply report single integers. The report_scalar feature creates a plot of a single data point (or single iteration).
For example, if I want to report a scalar "final MAE" for easier comparison, it's kinda impossible
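What I tried, roughly (a sketch; the project/task names and the metric value are made up) -- report_scalar always attaches the value to an iteration, so it renders as a one-point plot:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="report final MAE")  # placeholder names
logger = task.get_logger()

# This shows up as a plot with a single point at iteration 0,
# not as a plain number next to the experiment.
logger.report_scalar(title="summary", series="final MAE", value=0.042, iteration=0)
```

(Newer SDK versions also seem to have Logger.report_single_value(name, value) for exactly this, if it's available in the installed version.)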
We're not using the docker setup though. The CLI run by the autoscaler is python -m clearml_agent --config-file /root/clearml.conf daemon --queue aws_small, so no docker.
Using the PipelineController with add_function_step
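Roughly like this (a minimal sketch; the pipeline/project names, the step function, and its inputs are placeholders):

```python
from clearml import PipelineController

def prepare_data(source_url):  # placeholder step function
    # ... load and transform data ...
    return {"rows": 100}

pipe = PipelineController(
    name="example pipeline",  # placeholder pipeline/project names
    project="examples",
    version="0.0.1",
)
pipe.add_function_step(
    name="prepare_data",
    function=prepare_data,
    function_kwargs={"source_url": "s3://my-bucket/data.csv"},  # placeholder input
    function_return=["dataset_stats"],
)
pipe.start_locally(run_pipeline_steps_locally=True)
```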
-
I guess? I mean the same filter option one has for e.g. tags in the table view. In the "all experiments" project I think it would make sense for one to be able to select the projects of interest, or even filter for textual matches.
-
Sorry I meant the cards indeed :)
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Also (sorry for all of these!) - it could be nice to have a direct "task comparison" link somewhere in the UI that opens a comparison with no tasks, letting the user add them manually via the "add experiments" button. :)
Unfortunately I can't take a photo of not being able to compare tasks by navigating around the WebUI...
Does that clarify the issue CostlyOstrich36 ?