but does that mean I have to unpack all the dictionary values as parameters of the pipeline function?
I was just suggesting a hack 🙂 The fix itself is transparent (I'm expecting it to be pushed tomorrow); basically it will make sure the sample pipeline works as expected.
Regardless, and out of curiosity: if you only have one dict passed to the pipeline function, why not use named arguments?
Hi TightElk12
One option would be to call task.close() at the end of each step and Task.init() at the beginning of the next.
Will that do?
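A minimal sketch of that pattern (project/task names are placeholders):
```
from clearml import Task

# step 1: create a task, do the work, then close it
task = Task.init(project_name="examples", task_name="step 1")
# ... step 1 logic ...
task.close()

# step 2: after close(), Task.init() starts a fresh task in the same process
task = Task.init(project_name="examples", task_name="step 2")
# ... step 2 logic ...
task.close()
```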
Yes, exactly like a Task (a pipeline is a type of task):
```
from clearml import Task

cloned_pipeline = Task.clone(source_task=pipeline_uid_here)
Task.enqueue(cloned_pipeline, queue_name="default")  # queue name is an example
```
Hi AttractiveCockroach17
In your "Installed Packages" (when the task is in draft mode, you can edit it like any requirements.txt), you need to add:package @ git+
You can also make sure you have in in the first place bu addingTask.add_requirements("package", "@ git+
") task = Task.init(...)
There are also "completed", "aborted", and "queued".
Archived is actually a tag (system tag, not user tag). There is a "state machine" for moving from one state to the other. The special case is "published", which we probably should have called "locked". The idea is that if a Task/Model is published, you cannot reset it (and even deleting requires a force flag).
I would use additional user tags (or even system tags) to mark the "deployed" state, wdyt?
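For example, a minimal sketch (the task id is a placeholder):
```
from clearml import Task

task = Task.get_task(task_id="<task_id>")  # placeholder id
task.add_tags(["deployed"])  # user tag marking the deployment state
```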
Basically a PVC for all the DBs 🙂
CostlyOstrich36 did you manage to reproduce it?
I tried conda with python 3.9 on a clean Windows VM, and it worked as expected...
I think CostlyOstrich36 managed to reproduce?!
Oh, no need to specify one; this is an optional configuration.
Basically follow these steps only:
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_linux_mac
No, it should be fine... Let me see if I can get a Windows box 🙂
GrittyHawk31 by default any user can login (i.e. no need for password), if you want user/password access:
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_config/#web-login-authentication
Notice there is no need to have anything else in the apiserver.conf, just the user/pass section; everything else will keep the default values.
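Roughly the shape of that section, based on the linked docs (usernames/passwords here are placeholders):
```
auth {
    # fixed users login credentials; only these users will be able to login
    fixed_users {
        enabled: true
        users: [
            {
                username: "jane"
                password: "12345678"
                name: "Jane Doe"
            }
        ]
    }
}
```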
DefiantHippopotamus88 you can create a custom endpoint and do that, but it will be running in the same instance. Is this what you are after? Notice that Triton actually supports it already; you can check the pytorch example.
My use case: when I have a merge request for a model modification, I need to provide several pieces of information for our Quality Management System. One is to show that the experiment is a success and the model has some improvement over the previous iteration.
Sounds like a good approach 🙂
Obviously I don't want the reviewer to see all ...
Maybe publish the experiment and move it to a dedicated folder? Then even if they see all other experiments, they are under "development" p...
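For instance, a minimal sketch of moving an experiment into a dedicated project (the id and project name are placeholders):
```
from clearml import Task

task = Task.get_task(task_id="<task_id>")  # placeholder id
# hypothetical project name for reviewed/approved experiments
task.move_to_project(new_project_name="QMS/approved")
```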
PricklyRaven28 did you set the IAM role support in the conf?
https://github.com/allegroai/clearml/blob/0397f2b41e41325db2a191070e01b218251bc8b2/docs/clearml.conf#L86
Hi UnsightlyHorse88
Hmm, try adding to your clearml.conf file:
agent.cpu_only = true
If that does not work, try setting it in the OS environment:
export CLEARML_CPU_ONLY=1
This looks good to me...
I will have to look into it, because it should not download it...
Hi JitteryRaven85
I have also deleted some hyper-params but they appear again when training starts.
Yes, you cannot "delete" parameters; any missing parameter is synced back (making sure you have a full log).
The problem is that when I clone an experiment and change the hyper-params, some change and some remain the same
Could you expand on which parameters stay the same? (Obviously this should not happen.)
Now, in case I need to do it, can I add new parameters to a cloned experiment, or will these get deleted?
Adding new parameters is supported 🙂
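A minimal sketch, assuming a clone/enqueue flow with placeholder ids and names:
```
from clearml import Task

# clone an existing experiment (the source task id is a placeholder)
cloned = Task.clone(source_task="<original_task_id>")
# add a brand-new parameter to the clone; section/name are illustrative
cloned.set_parameter("Args/new_param", 42)
Task.enqueue(cloned, queue_name="default")  # queue name is an example
```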
We just don't want to pollute the server when debugging.
Why not? You can always remove it later (with Task.delete).
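For example (the task id is a placeholder):
```
from clearml import Task

task = Task.get_task(task_id="<task_id>")  # placeholder id
task.delete()  # removes the task (and by default its artifacts/models) from the server
```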
TenseOstrich47 as long as the machine running the agent has credentials for your ECR, when the agent runs any docker container it will be able to pull it. There is no need to manually change anything; notice the Task itself contains the name of the image it will use.
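For reference, a sketch of pointing a Task at an image (the ECR URI is a made-up placeholder):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="ecr example")
# hypothetical ECR image URI; the agent pulls it using the machine's ECR credentials
task.set_base_docker("123456789.dkr.ecr.us-east-1.amazonaws.com/my-image:latest")
```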
but who exactly executes the agent in this case?
With both the execute / build commands, you execute it on your machine, for debugging purposes. Make sense?
works seamlessly throughout and in our current on-premise servers...
I'm assuming via something close to what I suggested above with .netrc?
Hmm that is odd.
Can you verify with the latest from GitHub?
Is this reproducible with the pipeline example code?
MelancholyElk85 I'm assuming you have the agent set up and everything in the example code works, is that correct?
Where is it failing in your pipelines?
It will store everything locally; later you can import it back to the server, if you want.
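Assuming this refers to ClearML's offline mode, a minimal sketch (names and the session path are placeholders):
```
from clearml import Task

# run in offline mode: everything is stored locally instead of on the server
Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="offline run")
# ... training ...
task.close()

# later, import the local session back to the server
# Task.import_offline_session("/path/to/offline_session.zip")  # path is a placeholder
```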
1.0.1 is only for the clearml python client; no need for a server (or agent) upgrade.