Hi QuaintJellyfish58
This is odd, this "undefined" project is also marked as "Example" which would explain why you cannot delete it, but not how you ended up with one
Any idea what changed on your server?
AbruptWorm50 my apologies, I think I misled you. Yes, you can pass generic arguments to the optimizer class, but specifically for Optuna this is disabled (not sure why)
Specifically to your case, the way it works is:
Your code logs to TensorBoard, ClearML catches the data and moves it to the Task (on clearml-server). The Optuna optimization is running on another machine, and trial values are manually updated (i.e. the clearml optimization pulls the Task reported metric from the server and updat...
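For context, a minimal sketch of wiring up the Optuna-backed optimizer described above (the template task ID, hyperparameter name, metric title/series and queue name are placeholders, not taken from your setup):
```
from clearml import Task
from clearml.automation import HyperParameterOptimizer, UniformParameterRange
from clearml.automation.optuna import OptimizerOptuna

# Controller task for the optimization (names are placeholders)
task = Task.init(project_name="examples", task_name="HPO controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<template_task_id>",   # placeholder: the task cloned for every trial
    hyper_parameters=[
        UniformParameterRange("General/learning_rate", min_value=1e-4, max_value=1e-1),
    ],
    # The optimizer pulls this reported metric from the server for every trial
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",
    max_number_of_concurrent_tasks=2,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```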
Hi TeenyFly97
Can I super-impose the graphs while comparing experiments?
Hmm not at the moment, I think someone asked for the option to control it, in both comparison mode and "standalone" mode.
There is a long discussion on this feature here:
https://github.com/allegroai/trains/issues/81#issuecomment-645425450
Feel free to chime in 🙂
I think that the latest agreement is a switch in the UI, separating or collecting (super-imposing) those graphs.
I am struggling with configuring ssh authentication in docker mode
GentleSwallow91 Basically the agent will automatically mount the .ssh into the container, just make sure you set the following in the clearml.conf:
force_git_ssh_protocol: true
https://github.com/allegroai/clearml-agent/blob/178af0dee84e22becb9eec8f81f343b9f2022630/docs/clearml.conf#L30
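For reference, a minimal sketch of that clearml.conf entry, assuming the agent section layout of the linked reference config:
```
agent {
    # clone git repositories over SSH so the ~/.ssh keys mounted into the container are used
    force_git_ssh_protocol: true
}
```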
Hi SmallDeer34
Is the Dataset in clearml-data ? If it is then Dataset.get().get_local_copy() will get you a cached local copy of the entire dataset.
If it is not, then you can use StorageManager.get_local_copy(url_here) to download the dataset.
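A minimal sketch of both options (project/dataset names and the remote URL are placeholders):
```
from clearml import Dataset, StorageManager

# Option 1: the dataset is managed by clearml-data, get a cached local copy of it
dataset = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")
local_folder = dataset.get_local_copy()

# Option 2: it is just a file/archive on storage (s3/gs/azure/http), download it directly
local_file = StorageManager.get_local_copy(remote_url="s3://my-bucket/path/to/dataset.zip")
```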
- Any Argparser is automatically logged (and later can be overridden from the UI). Specifically HfArgumentParser will be automatically logged https://github.com/huggingface/transformers/blob/e43e11260ff3c0a1b3cb0f4f39782d71a51c0191/examples/pytorc...
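A minimal sketch of that auto-logging behaviour (project, task and argument names are placeholders):
```
from argparse import ArgumentParser
from clearml import Task

# Once Task.init is called, argparse is hooked automatically and the parsed
# arguments become editable hyperparameters in the UI
task = Task.init(project_name="examples", task_name="argparse auto-logging")

parser = ArgumentParser()
parser.add_argument("--learning_rate", type=float, default=0.001)
args = parser.parse_args()
```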
Hi SmallDeer34
Did you call Task.init ?
Damn, JitteryCoyote63 seems like a bug in the backend, it will not allow you to change the task type to the new types
the time taken to upload halved. It is puzzling because as you say it's not that much to upload.
Maybe it was the load on the server? Meaning that dealing with multiple requests at the same time delayed the requests?
For now I've whittled down the number of entries to a more select but useful few and that has solved the issue. If it crops up again I will try connect_configuration properly.
Thanks for your help!
My pleasure 🙂
I guess we should have obfuscated the name better 🙂
What do you mean? every Model has a unique ID, what do you consider a version?
Perhaps this is something that can be made clearer when updating the docu?
Hmm that is a good point, let's open a git issue and explain there, then update the docs, wdyt?
Is there an option to do this from a pipeline, from within the add_step method? Can you link a reference to cloning and editing a task programmatically?
Hmm, I think there is an open GitHub issue requesting a similar ability, let me check on the progress...
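In the meantime, a rough sketch of cloning and editing a task with the SDK (the task ID, parameter section/name and queue name are placeholders):
```
from clearml import Task

# Clone an existing template task, tweak its hyperparameters, then enqueue the copy
template = Task.get_task(task_id="<template_task_id>")
cloned = Task.clone(source_task=template, name="cloned copy")

# Override hyperparameters on the clone (section/name depend on how they were logged)
cloned.set_parameters({"General/learning_rate": 0.01})

# Send the clone to an execution queue for an agent to pick up
Task.enqueue(cloned, queue_name="default")
```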
nope, it works well for the pipeline when I don't choose to continue_pipeline
Could you send the full log please?
I can probably have a python script that checks if there are any tasks running/pending, and if not, run docker-compose down to stop the clearml-server, then use boto3 to trigger the creating of a snapshot of the EBS, then wait until it is finished, then restarts the clearml-server, wdyt?
I'm pretty sure there is a nice way, let me check something
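For reference, a rough sketch of the flow you describe (the status filter, EBS volume ID and docker-compose location are my assumptions, adjust to your deployment):
```
import subprocess
import time

import boto3
from clearml import Task

VOLUME_ID = "vol-0123456789abcdef0"   # assumed: EBS volume backing the clearml-server data
COMPOSE_DIR = "/opt/clearml"          # assumed: folder containing docker-compose.yml

# Wait until no tasks are running or queued
while Task.get_tasks(task_filter={"status": ["in_progress", "queued"]}):
    time.sleep(60)

# Stop the server, snapshot the volume, wait for it to finish, then restart the server
subprocess.run(["docker-compose", "down"], cwd=COMPOSE_DIR, check=True)
ec2 = boto3.client("ec2")
snapshot = ec2.create_snapshot(VolumeId=VOLUME_ID, Description="clearml-server backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot["SnapshotId"]])
subprocess.run(["docker-compose", "up", "-d"], cwd=COMPOSE_DIR, check=True)
```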
@<1524922424720625664:profile|TartLeopard58> @<1545216070686609408:profile|EnthusiasticCow4>
Notice that when you are spinning multiple agents on the same GPU, the Tasks should request the "correct" fractional GPU container, i.e. if they pick a "regular" container there will be no memory limit.
So something like
```
CLEARML_WORKER_NAME=host-gpu0a clearml-agent daemon --gpus 0 clearml/fractional-gpu:u22-cu12.3-2gb
CLEARML_WORKER_NAME=host-gpu0b clearml-agent daemon --gpus 0 clearml/fractional-gpu:u22-cu12.3-2gb
```
...
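A minimal sketch of pointing a Task at the matching container (project/task names are placeholders, and it assumes a recent clearml SDK where set_base_docker accepts a docker_image argument; the container image can also be set from the UI):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="fractional gpu job")

# Request the matching fractional-GPU container so the memory limit is actually enforced
task.set_base_docker(docker_image="clearml/fractional-gpu:u22-cu12.3-2gb")
```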
VivaciousWalrus21 I took a look at your example from the github issue:
https://github.com/allegroai/clearml/issues/762#issuecomment-1237353476
It seems to do exactly what you expect, and stores its own last iteration as part of the checkpoint. When running the example with continue_last_task=int(0) you get exactly what you expect
(Do notice that TB visualizes these graphs in a very odd way, and it took me a few clicks to verify it...)
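For reference, a minimal sketch of the call used in that example (project/task names are placeholders):
```
from clearml import Task

# continue_last_task=int(0) resumes the previous task and starts the iteration offset at 0
task = Task.init(
    project_name="examples",
    task_name="training",
    continue_last_task=int(0),
)
```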
Hi CleanPigeon16
You need to be able to access the machine running the agent, usually the default port will be 10022.
If you need further debug messages, add --debug at the beginning of the clearml-session command:
clearml-session --debug ...
To get all the debug prints, please upgrade to clearml-session==0.3.3
ShallowCat10 so you mean like meta-data on top of the image? or another level of title/series?
because, the iteration field itself is an integer...
Okay, wait, I'll see if I can come up with something.
Oh if this is the case you can probably do
```
import os
import subprocess
from time import sleep

from clearml import Task
from clearml.backend_api.session.client import APIClient

client = APIClient()
# Look up the queue by name (returns a list of matching queue objects)
queue_ids = client.queues.get_all(name="queue_name_here")

while True:
    # Pull the next pending task from the queue
    result = client.queues.get_next_task(queue=queue_ids[0].id)
    if not result or not result.entry:
        sleep(5)
        continue
    task_id = result.entry.task
    # Mark the task as started, then expose it to the subprocess via the environment
    client.tasks.started(task=task_id)
    env = dict(**os.environ)
    env['CLEARML_TASK_ID'] = ta...
```
Was going crazy for a short amount of time yelling to myself: I just installed clear-agent init!
oh noooooooooooooooooo
I can relate so much, happens to me too often that copy pasting into bash just uses the unicode character instead of the regular ascii one
I'll let the front-end guys know, so we do not make ppl go crazy 🙂
Should work in all cases, plotly/matplotlib/scalar reports
Hi GrievingTurkey78
How can I check the server dashboard to make sure everything is working? I have tried to access the external ip through https but the browser is not able to connect.
What do you mean by the server dashboard ?
regarding (2) see here: https://allegro.ai/docs/faq/faq/#web-auth
(BTW: you can disable the auto-logging feature of joblib)
Task.init(..., auto_connect_frameworks={'scikit': False})
I think it should be treated as failed,
I'm not sure where I stand on default behavior, it could easily be an argument for the pipeline controller
Are you asking regarding the k8s integration?
(This is not a must, you can run the clearml-agent bare-metal on any OS)