Hi UpsetTurkey67
"General/my_parameter_name" so that only this part of the configuration will be updated?
I'm assuming this is a hyperparameter, not a configuration object (i.e. task.connect, not task.connect_configuration). If that is the case, then yes 🙂
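For reference, a minimal sketch of that flow (project/task names are placeholders, with "my_parameter_name" taken from your question):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="connect a parameter")

# connecting a dict stores it under the "General" section,
# i.e. it shows up as "General/my_parameter_name" in the UI
params = {"my_parameter_name": 0.001}
task.connect(params)

# when an agent re-runs the task, only the values edited in the UI
# (e.g. "General/my_parameter_name") override the dict in place
print(params["my_parameter_name"])
```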
Hi @<1687653458951278592:profile|StrangeStork48>
I have good news, v1.0 is out with hashed password support.
Hmm, might be, check if your files server is running and configured properly
Follow up: I see that if I move an Experiment to a new project, it does not copy the associated model files and must be done manually. Once I moved the models to the new project, the query works as expected.
Correct 🙂
Nice catch!
I think it would be nicer if the CLI had a subcommand to show the content of ~/.clearml_data.json.
Actually, it only stores the last dataset id at the moment, so there is not much in it 🙂
But maybe we should have a command-line option that just outputs the current dataset id; that would make it easier to grab and pipe
WDYT?
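In the meantime, a minimal sketch of grabbing whatever that state file holds so it can be piped (assuming it is the small JSON mentioned above; the exact key names are not guaranteed here):
```
import json
from pathlib import Path

# the clearml-data CLI state file discussed above
state_file = Path.home() / ".clearml_data.json"

with state_file.open() as f:
    state = json.load(f)

# dump the raw content; pick the dataset id field once you see the actual keys
print(json.dumps(state, indent=2))
```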
CooperativeFox72 a bit of info on how it works:
In "manual" execution (i.e. without an agent)
path = task.connect_configuration(local_path, name=name
path = local_path , and the content of local_path is stored on the Task
In "remote" execution (i.e. agent)
path = task.connect_configuration(local_path, name=name
"local_path" is ignored, path is a temp file, and the content of the temp file is the content that is stored (or edited) on the Task configuration.
Make sense ?
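A minimal sketch of that pattern (file and task names here are placeholders):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="config example")

# manual run: returns "config.yaml" itself and stores its content on the Task.
# agent run: ignores the local file and returns a temp file holding the
# (possibly edited) configuration taken from the Task.
config_path = task.connect_configuration("config.yaml", name="my config")

with open(config_path, "rt") as f:
    config_text = f.read()
```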
oh dear ...
ScrawnyLion96 let me check with front-end guys 😞
Okay, now let's try the final lines:
```
$LOCAL_PYTHON -m virtualenv /root/venv
/root/venv/bin/python3 -m pip install git+
```
AttractiveCockroach17 can I assume you are working with the hydra local launcher ?
I find it quite difficult to explain these ideas succinctly, did I make any sense to you?
Yep, I think we are totally on the same wavelength 🙂
However, it also seems to be not too prescriptive,
One last question: what do you mean by that?
Hi DilapidatedCow43
I'm assuming the returned object cannot be pickled (which is ClearML's way of serializing it)
You can upload it as a model with
```
uploaded_model_url = Task.current_task().update_output_model(model_path="/path/to/local/model")
...
return uploaded_model_url
```
wdyt?
Yes, thanks. But if I do this, the packages will be installed again for each step; is it possible to use a single venv?
Notice that the venv is cached on the clearml-agent host machine (if this is the k8s glue, make sure to set up the cache as a PV to achieve the same).
This means there is no need to worry about it, and it is stable.
That said, if you have an existing venv inside the container, just add docker_args="-e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/path/to/bin/python"
Se...
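As a hedged sketch, one way to wire that env var from code is via Task.set_base_docker (the image name and python path below are placeholders; the docker_args form above belongs wherever your pipeline step or clearml-task call accepts docker arguments):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="use container venv")

# base docker image plus extra docker run arguments, pointing the agent
# at the python binary of the venv that already exists inside the container
task.set_base_docker(
    "my-image:latest -e CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/path/to/bin/python"
)
```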
Hi @<1541954607595393024:profile|BattyCrocodile47>
This looks like a docker issue running on mac m2
wdyt?
Thank you @<1523720500038078464:profile|MotionlessSeagull22> always great to hear 🙂
btw, if you feel like sharing your thoughts with us, consider filling out our survey; it should not take more than 5 minutes
Hi RipeGoose2
I just tested the hydra example; it seems to work when you add the offline mode right after the import:
```
from clearml import Task
Task.set_offline(True)
```
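And once you are back online, a hedged sketch of importing the offline run (the zip path is a placeholder; the actual file is written under the offline cache folder when the offline task closes):
```
from clearml import Task

# import the session zip created by the offline run back into the server
Task.import_offline_session("/path/to/offline_session.zip")
```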
Ohhh, okay, as long as you know; they might fail on memory...
Hi ExcitedFish86
Good question: how do you "connect" the 3 nodes? (i.e. what framework are you using?)
Looking at the supervisor method of the base AutoScaler class, where are the worker IDs kept? Is it in the class attribute queues?
Actually, the supervisor passes a fixed prefix, then it asks the clearml-server for workers whose names start with it.
This way we can have a fixed init script for all agents, while still being able to differentiate them from the other agent instances in the system. Make sense ?
Also, for a single parameter you can use:
```
cloned_task.set_parameter(
    name="Args/artifact_name",
    value="test-artifact",
    description="my help text that will appear in the UI next to the value",
)
```
This way, you are not overwriting the other parameters, you are adding to them.
(Similar to update_parameters, only for a single parameter)
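For completeness, a quick sketch of the multi-parameter variant (parameter names and values here are just examples):
```
# update several parameters at once, leaving the rest untouched
cloned_task.update_parameters({
    "Args/artifact_name": "test-artifact",
    "Args/batch_size": 64,
})
```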
so if the node went down and then some other node came up, the data is lost
That might be the case. Where is the k8s running? A cloud service?
PlainSquid19 No worries 🙂
btw: If you could check whether the mangling of the working dir / script path happens with the latest trains, that would be appreciated, because if you were running the script from "stages/" in the first place, then trains should have caught it ...
however setting up the interpreter on PyCharm is different on Mac for some reason, and the video just didn't match what I see
MiniatureCrocodile39 Are you running on a remote machine (i.e. PyCharm + remote ssh) ?
FYI: if you need to query stuff you can always look directly in the RestAPI:
https://github.com/allegroai/clearml/blob/master/clearml/backend_api/services/v2_9/projects.py
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html
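For example, a quick sketch using the Python APIClient wrapper around those endpoints (the project name is a placeholder):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()

# list projects matching a (placeholder) name pattern and print their ids
projects = client.projects.get_all(name="my project")
for p in projects:
    print(p.id, p.name)
```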
clearml.conf is the file that clearml-init is supposed to create, right?
Correct, specifically ~/clearml.conf
I guess that was never the intention of the function, it just returns the internal representation. Actually my question would be, how do you use it, and why? :)
Thanks GiganticTurtle0 !
I will try to reproduce with the example you provided. regardless I already took a look at the code, and I'm pretty sure I know what the issue is. We will be pushing a few fixes after the weekend, I'm hoping this one will be included as well 🙂
Hi GrievingTurkey78
How are you getting a different version than what is used at runtime? It analyzes the PYTHONPATH just as Python does. How can I reproduce it?
Currently you can use Task.add_requirements(package_name, package_version=None)
This will not force it though, it is a recommendation (used only if it fails to find the package itself). Maybe we can add a force option?! What do you think?
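A minimal sketch of that call (package name and version are just examples); note it has to run before Task.init:
```
from clearml import Task

# recommend a specific package version for the environment the agent installs
Task.add_requirements("tensorflow", "2.4.0")

task = Task.init(project_name="examples", task_name="pin a requirement")
```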