I think it should have an id attribute, so either model.id or maybe model.data.id if you fetch the model object
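For example, a rough sketch (the task ID is a placeholder, and the exact attribute layout may differ between client versions):

```python
from clearml import Task

# Hypothetical task ID, used only for illustration
task = Task.get_task(task_id="<your_task_id>")

# Output models registered on the task; each Model object carries an id
for model in task.models["output"]:
    print(model.id, model.name)
```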
Take a look at the examples here:
https://github.com/allegroai/clearml/tree/master/examples/pipeline
Hi @<1523702439230836736:profile|HomelyShells16> , I'm afraid that's not really possible since the links themselves are saved on the backend
DeliciousBluewhale87 , Hi 🙂
You mean you created a dataset task on a certain server and you want to move that dataset task to another server?
TimelyPenguin76 , what do you think?
Hi @<1820993257639776256:profile|DeepOwl31> , are all workers on the same machine?
Hi @<1523707653782507520:profile|MelancholyElk85> , in Task.init() you have the auto_connect_frameworks parameter.
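For instance, a sketch of disabling auto-logging for specific frameworks (project/task names are placeholders):

```python
from clearml import Task

# Passing False disables all framework auto-logging;
# a dict gives per-framework control
task = Task.init(
    project_name="examples",      # placeholder
    task_name="frameworks demo",  # placeholder
    auto_connect_frameworks={"matplotlib": False, "tensorboard": False},
)
```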
Can you try specifying the IP explicitly in clearml.conf?
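Something along these lines in clearml.conf (the IP below is just an example; the ports are the server defaults):

```
api {
    # Replace with the actual address of your ClearML server
    web_server: http://192.168.1.10:8080
    api_server: http://192.168.1.10:8008
    files_server: http://192.168.1.10:8081
}
```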
I'm guessing that you've deployed ClearML server on http://unicorn , correct?
Hi @<1691983266761936896:profile|AstonishingOx62> , I'm not sure I understand what you're trying to do. You have some python code unrelated to ClearML. Does it run without issues? Did you afterwards add Task.init() to that code?
Are you using the PRO version or a self-hosted server?
Hi @<1523701868901961728:profile|ReassuredTiger98> , you can simply set up the token in clearml.conf of the agent so the agent will have the rights to clone. What do you think?
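For example, in the agent's clearml.conf (the values are placeholders; for token-based auth the token usually goes in git_pass):

```
agent {
    # Git credentials the agent uses when cloning repositories
    git_user: "your-git-username"
    git_pass: "your-personal-access-token"
}
```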
The WebUI is constantly being upgraded, so there's a good chance a newer version will fix it
No, it wouldn't, since something would actually still be going on and the python script hasn't finished
Not one that I know of. Also, it's good practice to implement (think of automation) 🙂
Hi @<1635813046947418112:profile|FriendlyHedgehong10> , the pipeline basically creates tasks and pushes them into execution. You can click on each step and view the full details. In the info section you can see into which queue each step was pushed. I'm assuming there are no agents listening to the queue
GiganticTurtle0 let me check up on that for you then, thanks for the info 🙂
Hi @<1576381444509405184:profile|ManiacalLizard2> , it will be part of the Task object. It should be part of the task.data.runtime attribute
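Something like this, assuming it lives under task.data.runtime (the task ID is a placeholder):

```python
from clearml import Task

# Hypothetical task ID, for illustration only
task = Task.get_task(task_id="<your_task_id>")

# runtime should be a dict-like attribute on the task's backend data object
print(task.data.runtime)
```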
What is your use case though? I think the point of local/remote is that you can debug locally
Hi @<1558624430622511104:profile|PanickyBee11> , how are you doing the multi node training?
Hi @<1845635622748819456:profile|PetiteBat98> , metrics/scalars/console logs are not stored on the files server. They are all stored in Elastic/Mongo. Using the files server is not required. default_output_uri will point all artifacts to your Azure blob
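For example, pointing artifacts at Azure per task (the names and the azure:// URI below are placeholders; the exact URI format depends on your storage setup). The same URI can also go into sdk.development.default_output_uri in clearml.conf:

```python
from clearml import Task

task = Task.init(
    project_name="examples",     # placeholder
    task_name="azure artifacts", # placeholder
    # Placeholder URI -- adjust account/container/path to your blob storage
    output_uri="azure://your-account/your-container/path",
)
```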
How would you use the user properties as part of an experiment?
To read the properties back, I'm guessing. This really depends on your needs / use case
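For example, a sketch of setting user properties and reading them back (project/task names and values are placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="props demo")  # placeholders

# Store arbitrary key/value pairs on the task
task.set_user_properties(dataset_version="v2", owner="data-team")

# ...and read them back later, e.g. from an automation script
print(task.get_user_properties())
```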
SuperficialDolphin93 , looks like a strange issue. Can you maybe open a GitHub issue for better tracking?
Hi OutrageousSheep60 , can you elaborate on how/when this happens?
whenever previewing the dataset (which is in a parquet tabular format) the browser automatically downloads a copy of the preview file as a text file
Hi @<1759749707573235712:profile|PungentMouse21> , you should be able to access machine logs from the autoscaler, this should give you a place to search
JuicyFox94 , can you please assist? 🙂
Can you check the logs of the apiserver? Maybe something caused an internal error
Can you please add a larger chunk of the autoscaler log?