Hi @<1845635622748819456:profile|PetiteBat98> , metrics/scalars/console logs are not stored on the files server; they are all stored in Elastic/Mongo. The files server is not required at all. Setting default_output_uri will point all artifacts to your Azure blob storage.
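For reference, a minimal clearml.conf sketch (the account/container names are placeholders, replace them with your own):
```
sdk {
    development {
        # All artifacts and models will be uploaded here by default
        default_output_uri: "azure://<account>.blob.core.windows.net/<container>"
    }
}
```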
Hi @<1794901326925139968:profile|SpicyShark55> , you have it as part of the open source, as code - None
Why are you manually using set_repo?
RotundHedgehog76 ,
What do you mean regarding language? If I'm not mistaken, ClearML should include the Optuna args as well.
Also, what do you mean by commit hash? ClearML logs the commit itself, but this can be changed by editing the task's Execution section in the UI.
You mean you want the new task created by add_step to take in certain parameters? Provided where, and by whom?
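If that's the case, here's a minimal sketch of overriding a step's parameters (the project/task names and the parameter key are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="training template",
    # Overrides hyperparameters on the cloned task before it is enqueued
    parameter_override={"General/learning_rate": "0.01"},
)
pipe.start()
```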
I think this is what you're looking for - the agent integration
None
I think if you copy all the data from the original server into the new server, it should transfer everything. Otherwise, I think you would need to extract it through the API or copy the MongoDB documents directly.
@<1654294834359308288:profile|DistressedCentipede23> , can you please elaborate on the exact workflow you want to build?
When the agent starts running a task, it will print out where the logs are being saved.
In that case, yes - install the agent directly on the machine with the 8 A100 GPUs.
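Something like this (the queue name is a placeholder; by default the daemon exposes all of the machine's GPUs to the tasks it runs):
```
clearml-agent daemon --queue default --docker
```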
What do you mean by signature?
Can you add a screenshot of the experiment's Execution section, both when you add task.set_repo and when you don't? (Just put some sample script in the same folder, run Task.init, and print hello world.)
Hi TimelyCrab1 , directing all your outputs to S3 is actually pretty easy. You simply need to configure api.files_server: <S3_BUCKET/SOME_DIR> in the clearml.conf of all machines working on it.
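For example, a sketch of the relevant clearml.conf section (bucket/path are placeholders):
```
api {
    files_server: "s3://my-bucket/some-dir"
}
```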
Migrating existing data is more difficult, since everything in the system is saved as links. I guess you could rewrite the links in MongoDB, but I would advise against it.
I suggest reading the full doc page on this 🙂
You can clone it via the UI and enqueue it to a queue that has a worker running against it. You should get a perfect 1:1 reproduction.
Hi @<1523701062857396224:profile|AttractiveShrimp45> , I'm afraid not. But you can always export these tables and plots into a report and add your custom data into the ClearML report as well
Please try the following:
```
In [1]: from clearml.backend_api.session.client import APIClient

In [2]: client = APIClient()

In [3]: tasks = client.tasks.get_all()

In [4]: tasks[0]
Out[4]: <Task: id=0a27ca578723479a9d146358f6ad3abe, name="2D plots reporting">

In [5]: tasks[0].data
Out[5]:
<tasks.Task: {
    "id": "0a27ca578723479a9d146358f6ad3abe",
    "name": "2D plots reporting",
    "user": "JohnC",
    "company": "",
    "type": "training",
    "status": "published",
    "comment": "Aut...
```
Hi @<1570583237065969664:profile|AdorableCrocodile14> , you can export a report as a PDF 🙂
Hi @<1556812506238816256:profile|LargeCormorant97> , I think you would need to dig deeper and investigate each container's environment - what runs inside each one and what its entrypoint is - since there are several containers, each in charge of something else.
Is there a specific reason you need to deploy it without docker?
UnevenDolphin73 , how can you tell that ClearML is trying to initialize a task when get_task is called?
CluelessElephant89 , Hi 🙂
For ClearML to treat your artifact as a model, you'd have to register it using the Model class, as shown here:
https://clear.ml/docs/latest/docs/references/sdk/model_model
I'm guessing you'd want it as an output model, correct?
Do you want to register this artifact as both a model AND an artifact, or would having it only as a model be enough?
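Either way, a minimal sketch of registering an output model (the project/task names and the weights file are placeholders):
```python
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="register model")

# Attach an existing weights file to the task as an OutputModel
output_model = OutputModel(task=task, framework="PyTorch")
output_model.update_weights(weights_filename="model.pt")  # uploads & registers the file
```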
Can you provide a code snippet that makes the agent hang?
@<1523701137134325760:profile|CharmingStarfish14> , interesting, so what are you suggesting? Creating Jira tasks from special tags on ClearML?
Before anything can be injected into the instances, they need to be spun up somehow. That is done by the application that is running, using the credentials provided - so the credentials do need to be supplied to the AWS application somehow.
Hi @<1784754456546512896:profile|ConfusedSealion46> , in that case you can simply use add_external_files to register the files that are already in your storage. Or am I missing something?
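Something along these lines (dataset names and the URL are placeholders):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="my-dataset", dataset_project="examples")
# Registers links to the files in your storage - nothing is copied
ds.add_external_files(source_url="s3://my-bucket/data/")
ds.upload()
ds.finalize()
```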
I would also suggest using pipelines if you want to chain several actions, with a controller task tracking the progress.