Hi @<1566596960691949568:profile|UpsetWalrus59> , I don't think you can pass it to clearml-agent init since this doesn't come up as any of the prompts, BUT, you can always create the file manually with all the fields filled in. What do you think?
Hi DefeatedMoth52 , where have you been using the --find-links flag? When you run the experiment, how does the package show up in the ClearML UI?
Can you copy paste the error you got?
Hi SmugTurtle78 ,
Hi, Is there any manifest for the relevant policies needed for the AWS account (if we are using autoscaling)?
I'm not sure. From my experience, the autoscaler needs permissions to spin instances up & down, list instances, check tags and read machine logs.
You should try running with those permissions. If something is missing, you'll see it in the log 🙂
Also, Is there a way to use Github deploy key instead of personal token?
Do you mean git user/...
GrievingTurkey78 , I'm not sure. Let me check.
Do you have cpu/gpu tracking through both pytorch lightning AND ClearML reported in your task?
Hi ShortElephant92 , how are you adding files currently? Code or CLI? You can specify the storage in both cases.
Via the CLI, use the --output-uri flag.
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_cli#create
In code, use the output_url parameter in the Dataset.create() call.
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_sdk#datasetcreate
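For example, a rough sketch (project/dataset names, paths and the bucket are placeholders; depending on your SDK version the destination may also be passed directly to Dataset.create() as in the docs above):
```python
from clearml import Dataset

# Create a new dataset version (names are placeholders)
dataset = Dataset.create(
    dataset_name="my_dataset",
    dataset_project="my_project",
)

# Add local files to the dataset
dataset.add_files(path="./data")

# Upload to your own storage instead of the default fileserver
dataset.upload(output_url="s3://your_bucket/datasets")

dataset.finalize()
```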
JitteryCoyote63 , well I'm happy to hear that everything worked out in the end 🙂
And as I mentioned, the only difference is basically that it's being cloned & enqueued by different users?
@<1774245260931633152:profile|GloriousGoldfish63> , you can simply use Task.set_name
- None
I think this is what you're looking for
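For example (a minimal sketch; project and task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="initial name")

# Rename the task after it has been created
task.set_name("my new task name")
```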
Hi. I'm having a bit of trouble understanding here. To have something pulled from the queue you need to have a worker running against that queue.
I'm not quite familiar with IBM LSF, is it kinda like Google Cloud Run?
What do you mean by 'submitting new agents' to the lsf system? Do you mean running new agents on the platform?
You can execute specific tasks via the command clearml-agent execute --id <TASK_ID>
Otherwise is there an option to start task locally that submit the task to the...
However, it would also be advisable to add the following argument to your code: Task.init(..., output_uri=True)
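Something along these lines (a sketch; with output_uri=True, model checkpoints and artifacts are uploaded to the configured files server instead of only being referenced by their local path):
```python
from clearml import Task

# output_uri=True uploads models/artifacts to the default files server;
# you can also pass a specific destination, e.g. output_uri="s3://your_bucket/models"
task = Task.init(
    project_name="my_project",
    task_name="training",
    output_uri=True,
)
```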
SmugTurtle78 , I'll take a look at it shortly 🙂
BoredPigeon26 , you can scroll through iterations 🙂
Hi UnevenDolphin73 , when you say the pipeline itself, do you mean the controller? The controller is only in charge of handling the components. Let's say you have a pipeline with many parts: if you force a single global environment, you get a lot of redundant installations throughout the pipeline. What is your use case?
Hi GiganticMole91 ,
Can you please elaborate on this part?
"I can easily make ClearML treat encoder_layers as a hyperparameter, but the variables are no longer linked when they hit ClearML. I would then see Args/model.encoder_layers: 12 and Args/model.decoder_layers: 12. Is there any way to link hyperparameters in the HyperParameterOptimizer?"
What are you seeing in the UI and what were you expecting to see?
Yeah, what is the version of the ClearML server? You can see it at the bottom right if you go into Settings.
Hi @<1748153283605696512:profile|GreasyPenguin24> , you certainly can. CLEARML_CONFIG_FILE is the environment variable that allows you to use different configuration files
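For example (the paths are just placeholders):
CLEARML_CONFIG_FILE=/path/to/staging_clearml.conf python train.py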
Hi RattyLouse61 , what do you see in the log of the run?
I think that by default debug samples are usually saved on the fileserver. The following configuration should force the debug samples to upload to s3.
In clearml.conf change:
api.files_server: s3://your_bucket
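In the sectioned (HOCON) form of clearml.conf, that would look roughly like this (bucket name is a placeholder; you still need your S3 credentials configured under sdk.aws.s3):
```
api {
    files_server: "s3://your_bucket"
}
```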
Hi PreciousParrot26 ,
Why are you running from a GitLab runner - are you interested in specific action triggers?
I'm not sure. But you can access the clearml.conf file through code.
Hi @<1790915053747179520:profile|KindParrot86> , currently Slack alerts are available as an example in the open source version - None
You can write an adapter for it to send emails instead of Slack alerts
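As a very rough sketch of the email side (purely illustrative - the SMTP host, addresses, credentials and the send_email name are all hypothetical, and you'd call it wherever the Slack example currently posts a message):
```python
import smtplib
from email.message import EmailMessage

def send_email(subject: str, body: str) -> None:
    # Hypothetical SMTP settings - replace with your own mail server details
    msg = EmailMessage()
    msg["Subject"] = subject
    msg["From"] = "alerts@example.com"
    msg["To"] = "team@example.com"
    msg.set_content(body)

    with smtplib.SMTP("smtp.example.com", 587) as server:
        server.starttls()
        server.login("alerts@example.com", "app-password")  # placeholder credentials
        server.send_message(msg)
```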
Hi @<1727497172041076736:profile|TightSheep99> , the allegroai package is part of the enterprise offering and isn't available publicly. You must be looking at documentation related to HyperDatasets.
Hi @<1523701949617147904:profile|PricklyRaven28> , which code do you mean - the Slack alerts example? You can also use standalone scripts if you don't want to run from a repo. Also, I don't think you should be running the agent in services mode yourself, as it is part of the ClearML setup.
What is the use case for accessing clearml.conf during runtime?
Does curl https://<WEBSITE>.<DOMAIN>/v2.14/debug/ping work for you?
ScaryLeopard77 , Hi! Is there a specific reason for the aversion to pipelines? What is the use case?
"continue with this already created pipeline and add the currently run task to it"
I'm not sure I understand, can you please elaborate? (I'm pretty sure it's a pipelines feature)
Hi @<1614069770586427392:profile|FlutteringFrog26> , if I'm not mistaken, ClearML doesn't support running from different repos. You can only clone one code repository per task. Is there a specific reason these repos are separate?
Hi @<1797800418953138176:profile|ScrawnyCrocodile51> , not sure what you're trying to do. Can you please elaborate?
And easier to manage without the need for such 'hacks' 😛
Hi @<1523701083040387072:profile|UnevenDolphin73> , can you please elaborate?