Thanks @<1523701601770934272:profile|GiganticMole91> !
(As usual MS decided to invent a new "standard")
I'll make sure the guys look at it and get an RC out with a fix
Another possible issue: since I'm on Ubuntu, some of the packages might've been built for Windows, hence the different versions not existing
Usually this is not the case; the version numbers match (implementation-wise it might be a different file, but it is almost always a matching version)
@<1523711619815706624:profile|StrangePelican34> are you saying that after the "with" block the task is marked completed? How is that possible? Is this done manually?
😂
I'm trying to create a task that is not in the repository root folder.
JuicyFox94 if the Task is not in a repo folder, you mean it is in a remote repository, right?
This means the repo should be in the form of "https://github.com/" or "ssh://"
It failed in deducing this is a remote repository (maybe we can improve the auto detection?!)
If this is how the repo links look, do not set anything in the clearml.conf
It "should" use the ssh for the ssh links, and http for the http links.
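To illustrate the auto-detection idea above, here is a minimal sketch of what a remote-repo check might look like (the function name and rules are hypothetical, not ClearML's actual implementation):

```python
# Hypothetical helper: decide whether a repo link is a remote repository.
# Real detection in clearml-agent may differ; this only illustrates the idea.
def is_remote_repo(url: str) -> bool:
    remote_prefixes = ("https://", "http://", "ssh://", "git@")
    return url.startswith(remote_prefixes)

print(is_remote_repo("https://github.com/allegroai/clearml.git"))  # True
print(is_remote_repo("/home/user/my_repo"))                        # False
```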
What would be the best way to get all the models trained using a certain Task? I know we can use query_models to filter models based on Project and Task, but is it the best way?
On the Task object itself you have all the models:
```
Task.get_task(task_id='aabb').models['output']
```
Hi ScaryLeopard77
Could that be solved with this PR?
https://github.com/allegroai/clearml/pull/548
(Just a thought, maybe we just need to combine Kedro-Viz ?)
Which works for my purposes. Not sure if there's a good way to automate it
Interesting, so if we bind to hydra.compose
it should solve the issue (and of course verify we are running in a Jupyter notebook)
wdyt?
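The binding idea could look roughly like this: a generic monkey-patch sketch with a stand-in for `hydra.compose` (not Hydra's real internals), where the wrapper captures the composed config so it could be logged:

```python
# Sketch: wrap a function (stand-in for hydra.compose) so we can capture
# the composed config, similar to how a binding/patch could intercept it.
import functools

def compose(config_name, overrides=None):   # stand-in for hydra.compose
    return {"config_name": config_name, "overrides": overrides or []}

captured = []

def bind(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        result = fn(*args, **kwargs)
        captured.append(result)          # e.g. connect it to the Task here
        return result
    return wrapper

compose = bind(compose)
cfg = compose("train", overrides=["lr=0.1"])
```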
I see... In the triton pod, when you run it, it should print the combined pbtxt. Can you print both the before/after ones, so we can compare?
Yes, that makes sense. Then you would need to use either the AWS vault features, or the ClearML vault features ...
SuperiorDucks36 from code? Or from the UI?
(You can always clone an experiment and change the entire thing, the question is how will you get the data to fill in the experiment, i.e. repo / arguments / configuration etc)
There is a discussion here, I would love to hear another angle.
https://github.com/allegroai/trains/issues/230
I set up the alert rule on this metric by defining a threshold to trigger the alert. Did I understand correctly?
Yes exactly!
Or the new metric should...
basically combining the two, yes looks good.
Hi ConvolutedSealion94
Yes 🙂
```
Task.set_random_seed(my_seed=123)  # disable setting random number generators by passing None
task = Task.init(...)
```
PompousBeetle71 the code is executed without arguments; at run-time trains / trains-agent will pass the arguments (as defined on the task) to the argparser. This means that you get the ability to change them and also type checking 🙂
PompousBeetle71 if you are not using argparse, how do you parse the arguments from sys.argv? Manually?
If that's the case, post parsing, you can connect a dictionary to the Task and you will have the desired behavior
```
task.connect(dict_with_arguments...
```
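A minimal sketch of the manual-parsing approach, assuming a simple `--key value` convention (the helper name is made up; the resulting dict is what you would hand to `task.connect`):

```python
# Hypothetical manual parsing of sys.argv into a dict, which could then be
# connected to the Task so the agent can override the values.
def parse_argv(argv):
    """Turn ['--lr', '0.1', '--epochs', '10'] into {'lr': '0.1', 'epochs': '10'}."""
    args = {}
    key = None
    for token in argv:
        if token.startswith("--"):
            key = token[2:]
            args[key] = True            # bare flag with no value
        elif key is not None:
            args[key] = token
            key = None
    return args

dict_with_arguments = parse_argv(["--lr", "0.1", "--epochs", "10"])
# task.connect(dict_with_arguments)  # ClearML call, needs a live Task
```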
Hi CurvedHedgehog15
I would like to optimize hparams saved in Configuration objects.
Yes, this is a tough one.
Basically, the easiest way to optimize is with hyperparameter sections, as they are key/value pairs you can control from the outside (see the HPO process)
Configuration objects are, well, blobs of data that "someone" can parse. There is no real restriction on them, since there are many standards to store them (YAML, JSON, INI, dot notation, etc.)
The quickest way is to add...
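To illustrate the key/value idea above: a sketch (names illustrative, not a ClearML API) of flattening a nested configuration blob into flat key/value pairs that an HPO process could control:

```python
# Flatten a nested configuration dict into hyperparameter-style key/value
# pairs, using "/" to join nested keys (purely illustrative convention).
def flatten(cfg, prefix=""):
    flat = {}
    for key, value in cfg.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=name + "/"))
        else:
            flat[name] = value
    return flat

config = {"optimizer": {"lr": 0.01, "momentum": 0.9}, "epochs": 10}
print(flatten(config))
# {'optimizer/lr': 0.01, 'optimizer/momentum': 0.9, 'epochs': 10}
```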
Hi QuaintJellyfish58
You can always set it inside the function, with:
```
Task.current_task().output_uri = "s3://"
```
I have to ask: I would assume the agents are pre-configured with "default_output_uri" in the clearml.conf, so why would you need to set it manually?
Hi @<1545216070686609408:profile|EnthusiasticCow4>
Many of the datasets we work with are generated by SQL queries.
The main question in these scenarios is: are those DBs stable?
By that I mean, generally speaking DBs serve applications, and from time to time they undergo migration (i.e. change in schema, more/less data, etc.).
The most stable way is to create a script that runs the SQL query and creates a clearml dataset from it (that script becomes part of the Dataset, to have full tracta...
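A sketch of that "query script" idea, with `sqlite3` standing in for the real database; the ClearML `Dataset` calls are commented out since they need a server (those names follow the clearml SDK but are untested here):

```python
# Sketch of the "SQL query -> dataset" script: run the query, dump the
# result set to CSV, then register the file as a clearml Dataset.
import csv
import sqlite3

def export_query(db_path, sql, out_csv):
    """Run the SQL query and dump the result set to a CSV file."""
    conn = sqlite3.connect(db_path)
    cur = conn.execute(sql)
    headers = [col[0] for col in cur.description]
    rows = cur.fetchall()
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(headers)
        writer.writerows(rows)
    conn.close()
    return len(rows)

# In real use, something like (needs a ClearML server):
# from clearml import Dataset
# ds = Dataset.create(dataset_name="sql-snapshot", dataset_project="data")
# ds.add_files("snapshot.csv")
# ds.upload()
# ds.finalize()
```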
If you need to change the values:
```
config_obj.set(...)
```
You might want to edit the object on a copy, not the original 🙂
BTW: latest PyCharm plugin with 2022 support was just released:
https://github.com/allegroai/clearml-pycharm-plugin/releases/tag/1.1.0
It should all be logged at the end, as I understand it
Hmm let me check the code for a minute
Should I map the poetry cache volume to a location on the host?
Yes, this will solve it! (maybe we should have that automatically if using poetry as package manager)
Could you maybe add a github issue, so we do not forget ?
Meanwhile you can add the mapping here:
https://github.com/allegroai/clearml-agent/blob/bd411a19843fbb1e063b131e830a4515233bdf04/docs/clearml.conf#L137
```
extra_docker_arguments: ["-v", "/mnt/cache/poetry:/root/poetry_cache_here"]
```
Hmmm that is a good use case to have (maybe we should have --stop get an argument ?)
Meanwhile you can do:
```
$ clearml-agent daemon --gpus 0 --queue default
$ clearml-agent daemon --gpus 1 --queue default
```
then to stop only the second one:
```
$ clearml-agent daemon --gpus 1 --queue default --stop
```
wdyt?
Sure, just set up clearml-agent
on any machine 🙂
(The app.community server is the control plane)
Maybe this one?
https://github.com/allegroai/clearml/issues/448
I think it is already there (i.e. 1.1.1)