If you run an agent in docker mode (--docker), the agent will run a docker run command and the task will be executed inside a container. In that scenario, I think, if you kill the daemon the container will stay up and finish the job (I think, haven't tested)
So, I went to the link
in order to use it like Postman and test the API without using Python. It was ChatGPT that directed me there, and it is kind of a nice way to validate the API
I would ignore anything that ChatGPT says about ClearML (and most other things too)
SparklingHedgehong28 , have you tried upgrading to pro? That is the easiest way to evaluate 🙂
Hi @<1797075640948625408:profile|MotionlessSeagull29> , you can get it with the following:
from clearml import Dataset
ds = Dataset.get(dataset_id="<SOME_ID>")
print(ds.project)
You can always use
dir(<PYTHON_OBJECT>)
to see its different attributes/methods
Hi HarebrainedBaldeagle11 , not that I know of. Did you encounter any issues?
Does your image have ssh installed? Can you run ssh from inside the container?
From the log it looks like there is no ssh installed on the image:
cloning: git@bitbucket.org:pendulum-systems-inc/repo.git
ssh -oBatchMode=yes: 1: ssh: not found
fatal: Could not read from remote repository.
Hi @<1654294828365647872:profile|GorgeousShrimp11> , are you running in docker mode?
FreshKangaroo33 ,
On the top right of the experiments view you have a cog wheel; if you click on it, it will give you the option to add hyperparameters as columns to the table. I think from the API calls made there you can figure something out 🙂
FreshKangaroo33 , what do you mean by syntax examples?
I think this should give you some context on usage 🙂
https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py
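For reference, a minimal sketch of how hyperparameters are typically logged there (project/task names and values are placeholders):
from clearml import Task

task = Task.init(project_name="examples", task_name="hyper-parameters example")
params = {"learning_rate": 0.001, "batch_size": 32}
# connect() logs the dict as hyperparameters and returns the (possibly overridden) values
params = task.connect(params)
print(params)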
Hi @<1546303277010784256:profile|LivelyBadger26> , I think this is what you are looking for
None
Can you check the logs of the apiserver? Maybe something caused an internal error
Hi NervousFrog58 , version 1.1.1 seems to be quite old. I would suggest upgrading your server. Please note that since then there have been a couple of DB migrations, so make sure to follow all the steps 🙂
I am not familiar with that. In the SDK you have the StorageManager to help you with downloading files.
None
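For example, something along these lines (the remote URL is a placeholder):
from clearml import StorageManager

# download the remote object into the local cache and get the local path
local_path = StorageManager.get_local_copy(remote_url="s3://my-bucket/path/to/file.zip")
print(local_path)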
GrievingTurkey78 , please try Task.init(auto_resource_monitoring=False, ...)
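Roughly like this (project/task names are placeholders):
from clearml import Task

# disables the automatic CPU/GPU/memory/network usage reporting
task = Task.init(
    project_name="examples",
    task_name="no resource monitoring",
    auto_resource_monitoring=False,
)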
SubstantialElk6 ,
We were trying with 'from task' at the moment. But the question applies to all methods.
You can specify this using add_function_step(..., execution_queue="<QUEUE>")
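Something along these lines (the queue name and step function are placeholders):
from clearml import PipelineController

def step_one(value):
    return value * 2

pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")
# run this specific step on agents listening to "gpu_queue"
pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_kwargs=dict(value=1),
    execution_queue="gpu_queue",
)
pipe.start()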
Make certain tasks in the pipeline run in the same container session, instead of spawning new container sessions? (To improve efficiency)
I'm not sure this is possible currently. This could be a nice feature request. Maybe open a GitHub issue?
@<1719524641879363584:profile|ThankfulClams64> , can you provide a small code snippet that reproduces this behaviour? Can you also test with the latest version of clearml?
GiganticTurtle0 Hi 🙂
You could try saving them as OutputModel ( https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel ), thus saving them 'outside' of the task object. Regarding whether it's considered good practice, maybe AnxiousSeal95 can weigh in on that.
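A rough sketch of what that could look like (model/file names are placeholders):
from clearml import Task, OutputModel

task = Task.init(project_name="examples", task_name="output model example")
# register a standalone model entry attached to the current task
output_model = OutputModel(task=task, name="my_model", framework="PyTorch")
# upload the weights file and link it to the model entry
output_model.update_weights(weights_filename="model.pt")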
OddShrimp85 Hi 🙂
I think ClearML detects the packages that were in use during the script's run. Regarding the global packages, that's what the docker image is for, so it all comes pre-installed
Hi ScantCrab97 , please update if it worked 🙂
Hi NastySeahorse61 ,
It looks like deleting smaller tasks didn't make much of a dent. Do you have any tasks that ran for very long or were very intensive on reporting to the server?
Also try with !pip3 install clearml
Hi ScantCrab97 ,
What version of ClearML server are you using?
Hi 🙂
Are you asking if you can share experiments between a self hosted server and http://app.clear.ml ?
I mean code-wise. Also, where is it saved locally?
Hi @<1729309137944186880:profile|GrittyBee73> , it has a Python API used under the hood indeed:
None
examples on how to use it: None
[None](https://github.com/allegroai/clearml-serving/blob/724c99c605540cdae25e4ef504c09f705cd53503/clearml_serving/serving/model_request_proces...
Can you try it with clearml==1.6.0 please?
Also, can you list the exact commands you ran?
Hi SubstantialElk6 ,
If I'm not mistaken, the order of precedence is:
1. output_uri (both code and CLI)
2. Configuration vault
3. default_output_uri in clearml.conf
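So, for example, setting it in code should take precedence over the vault and clearml.conf (the bucket is a placeholder):
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="custom output destination",
    # overrides the configuration vault and default_output_uri
    output_uri="s3://my-bucket/artifacts",
)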