You can also increase the limit here:
https://github.com/allegroai/clearml/blob/2e95881c76119964944eaa0289549617e8afeee9/docs/clearml.conf#L32
Ohh sorry, weights_url=path
Basically the url can be the local path to the weights file 🙂
Try: task.update_requirements('\n'.join([".", ]))
I mean this blob is then saved on the fs
It can if you do:
temp_file = task.connect_configuration('/path/to/config/file', name='configuration object is a config file')
Then temp_file is actually a local copy of the text coming from the Task.
When running in manual mode, the content of '/path/to/config/file' is stored on the Task. When running remotely by the agent, the content from the Task is dumped into a temp file, and the path to that file is returned in temp_file.
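To illustrate the two modes, here is a minimal sketch (not clearml internals; connect_configuration_sketch and running_remotely are hypothetical stand-ins for what the SDK manages for you):

```python
import os
import tempfile

# Simplified sketch of the connect_configuration return-path behavior.
def connect_configuration_sketch(local_path, task_stored_text, running_remotely):
    if not running_remotely:
        # Manual mode: the file content is stored on the Task,
        # and you keep working with your original local file.
        return local_path
    # Remote mode: the content stored on the Task is dumped into a
    # temp file, and that temp path is returned instead.
    fd, temp_path = tempfile.mkstemp(suffix='.cfg')
    with os.fdopen(fd, 'w') as f:
        f.write(task_stored_text)
    return temp_path
```

Either way, your code just reads whatever path comes back, so it works unchanged in both modes.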
Hmm, so the way the configuration works is: it loads the default configuration (equivalent to the example in the docs), then it adds the ~/clearml.conf on top. That means you can tell your users to just copy-paste the credentials from the UI into a template you make. How is that?
Anyhow, if the StorageManager.upload was fast, then upload_artifact is calling that exact function, so I don't think we actually have an issue here. What do you think?
I was just able to reproduce with "localhost"
Ohh, clearml is designed so that you should not worry about that:
download_dataset = StorageManager.get_local_copy()
this is cached, meaning the machine that runs that line the second time will not re-download the data.
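The caching idea can be sketched roughly like this (a simplified illustration, not the actual clearml implementation; get_local_copy_sketch and download_fn are hypothetical names):

```python
import hashlib
import os
import tempfile

CACHE_DIR = os.path.join(tempfile.gettempdir(), 'sketch_cache')

# Simplified sketch: the remote URL is hashed into a deterministic cache
# path, so a second call finds the file already there and skips the download.
def get_local_copy_sketch(remote_url, download_fn):
    os.makedirs(CACHE_DIR, exist_ok=True)
    cache_path = os.path.join(
        CACHE_DIR, hashlib.sha1(remote_url.encode()).hexdigest())
    if not os.path.exists(cache_path):  # only download on a cache miss
        download_fn(remote_url, cache_path)
    return cache_path
```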
This means step 1 is redundant, no?
Usually when data is passed between components it is automatically uploaded as artifact to the Task (stored on the files server or object storage etc.) then downloaded and passed to the next steps.
How large is the data that you are wo...
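The artifact hand-off between steps can be sketched like this (a local pickle file stands in for the files server / object storage; the helper names are hypothetical, not clearml API):

```python
import os
import pickle
import tempfile

# Producing step: serialize the output as an "artifact" and return its
# location (in clearml this would be a URL on the files server / object storage).
def upload_artifact_sketch(obj):
    fd, path = tempfile.mkstemp(suffix='.pkl')
    with os.fdopen(fd, 'wb') as f:
        pickle.dump(obj, f)
    return path

# Consuming step: fetch the artifact and deserialize it.
def get_artifact_sketch(path):
    with open(path, 'rb') as f:
        return pickle.load(f)
```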
Hi DrabCockroach54
Do we know if gpu_0_mem_usage and gpu_0_mem_used_gb both show current GPU usage?
The first is percentage used (memory % used at any specific moment) and the second is memory used in GiB, both for the video memory.
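In other words, the two scalars are related like this (a sketch; the 16 GiB total used in the example is just an assumed card size, not something reported here):

```python
# Sketch relating the two metrics: one is absolute GiB in use, the other
# that same usage as a percentage of the card's total video memory.
def gpu_mem_metrics(used_gib, total_gib):
    gpu_0_mem_used_gb = used_gib                    # absolute GiB currently in use
    gpu_0_mem_usage = 100.0 * used_gib / total_gib  # percent of total video memory
    return gpu_0_mem_usage, gpu_0_mem_used_gb
```

For example, 4 GiB used on an assumed 16 GiB card gives gpu_0_mem_usage of 25.0 and gpu_0_mem_used_gb of 4.0.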
How to know from this how much GPU is reserved for the task if this task is in progress?
What do you mean by how much is reserved? Are you running with an agent?
Hi VastShells9
2022-12-20 12:48:02,560 - clearml.automation.optimization - WARNING - Could not find requested hyper-parameters ['duration'] on base task a6262a151f3b454cba9e22a77f4861e3
Basically it is telling you it is setting a parameter it never found on the original Task you want to run the HPO on.
The parameter name should be (based on the screenshot) "Args/duration" (you have to add the section name to the HPO params). Make sense?
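A sketch of why the warning appears: hyper-parameters on a Task are keyed by "<section>/<name>", so the HPO has to be given the section-qualified name (the parameter dict and helper below are hypothetical, just to illustrate the lookup):

```python
# Hypothetical flattened parameter dict of the base task.
task_params = {'Args/duration': 30, 'Args/lr': 0.01}

# The lookup only succeeds when the section prefix is included.
def find_hpo_param(task_params, name):
    if name in task_params:
        return task_params[name]
    raise KeyError(
        "Could not find requested hyper-parameters [%r] on base task" % name)
```

So 'duration' alone raises the warning above, while 'Args/duration' resolves.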
Hi CourageousWhale20
Most documentation is here https://allegro.ai/docs
Oh,
Task.get_project_object().default_output_destination = None
This has no effect on the backend, meaning this does not actually change the value. Instead:
from clearml.backend_api.session.client import APIClient
c = APIClient()
c.projects.update(project="<project_id_here>", default_output_destination="s3://")
btw: how/what it is used for in your workflow ?
BroadSeaturtle49 agent RC is out with a fix: pip3 install clearml-agent==1.5.0rc0
Let me know if it solved the issue
So it is the automagic that is not working.
Can you print the following before calling both Task.debug_simulate_remote_task and Task.init? Notice you have to call it before Task.init:
print(os.environ)
Yeah. Curious - are a lot of clearml use cases not geared toward notebooks?
That is somewhat correct; notebooks are not actually used with a lot of deep-learning projects, as those require the entire repository to run.
I guess generally speaking the workflow is, "test your code" (i.e. small scale with limited data), then clone and enqueue for remote execution.
That said, I think it will be great to expand the support.
TrickySheep9 I like the idea of context for Tasks, can you expand on how...
pip install clearml==1.0.6rc2
Did not work?!
Also, How do I make the files other than entry script visible to the job?
The assumption for clearml (regardless of how you create a Task) is that your code is either a standalone script (or jupyter notebook) or inside a git repository. In the case of a git repository, clearml-agent will clone the git repository of the code, apply the uncommitted changes, and run your code.
SubstantialElk6 Ohh okay I see.
Let's start with background on how the agent works:
When the agent pulls a job (Task), it will clone the code based on the git credentials available on the host itself, or based on the git_user/git_pass configured in ~/clearml.conf
https://github.com/allegroai/clearml-agent/blob/77d6ff6630e97ec9a322e6d265cd874d0ab00c87/docs/clearml.conf#L18
The agent can work in two modes:
Virtual environment mode, where it will create a new venv for each experiment ba...
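For reference, the relevant ~/clearml.conf section looks roughly like this (values are placeholders, following the linked example file):

```
agent {
    # git credentials used when cloning the repository (placeholders)
    git_user: "username"
    git_pass: "password_or_personal_access_token"
}
```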
CleanPigeon16 Can you send also the "Configuration Object" "Pipeline" section ?
After it finishes the 1st Optimization task, what's the next job which will be pulled?
The one in the highest queue (if you have multiple queues)
If you use fairness it will pull in round robin from all queues (obviously, inside every queue it is based on the order of jobs).
fyi, you can reorder the jobs inside the queue from the UI 🙂
DeliciousBluewhale87 wdyt?
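The fairness behavior above can be sketched as a simple round robin over the queues (a simplified illustration, not the agent's actual scheduler):

```python
from collections import deque

# Fair pulling sketch: take one job from each non-empty queue in turn
# (round robin); within a queue, the original job order is preserved.
def pull_round_robin(queues):
    queues = {name: deque(jobs) for name, jobs in queues.items()}
    order = []
    while any(queues.values()):
        for name, q in queues.items():
            if q:
                order.append(q.popleft())
    return order
```

For example, with queues {'high': [h1, h2], 'low': [l1]} the pull order interleaves the queues instead of draining one first.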
ImmensePenguin78 this is probably for a different python version ...
Essentially the example provided just prints out ids to the log file.
What do you mean?
Hi LudicrousParrot69
Not sure I follow, is this pyfunc running remotely ?
Or are you looking for interfacing with previously executed Tasks ?
Okay how do I reproduce it ?
Thanks GrievingTurkey78 !
It seems that under the hood they use argparse.
See here:
https://github.com/google/python-fire/blob/c507c093fa6622ab5efee21709ffbf25974e4cf7/fire/parser.py
Which means it might just work?!
What do you think?
Hi OddAlligator72
It seems that they do not support PBT.
The optimization algorithms themselves are usually external (although the trivial ones are included within Trains).
Do you have a specific PBT implementation you are considering ?
That said, it might be different backend, I'll test with the demoserver
So what you're saying is to first kick off a new run and then rename the underlying Pipeline Task, which will cause that particular run to become a new pipeline name?
Correct, basically you are not changing the "pipeline" per-se but the execution name of the pipeline, if that makes sense
What would be most ideal would be to be able to right-click on a pipeline run and have a "clone" option, like you can with a task, where you can start a new run with a new name in a single step.
...