Hi @<1544853721739956224:profile|QuizzicalFox36> ,
You can use StorageManager.download_file() to easily fetch files.
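For reference, a minimal sketch of what that can look like (the bucket URL and local folder below are placeholders, not from your setup):
from clearml import StorageManager

# download a single remote object into a local folder (placeholder paths)
local_path = StorageManager.download_file(
    remote_url='s3://my-bucket/data/file.csv',
    local_folder='/tmp/downloads',
)
print(local_path)  # local path of the downloaded copy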
Hi @<1561885921379356672:profile|GorgeousPuppy74> , you can get the key/secret pair from the ClearML UI.
I think this is the documentation you were looking for: None
You can access the settings page via the profile icon at the top right.
Hi JitteryCoyote63 , you can get around it using the auto_connect_frameworks parameter in Task.init()
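A quick sketch of what that can look like (the project/task names here are just placeholders):
from clearml import Task

# turn off automatic framework logging entirely, or pass a dict to disable only specific frameworks
task = Task.init(
    project_name='examples',
    task_name='manual logging only',
    auto_connect_frameworks=False,  # e.g. {'pytorch': False} to disable just one framework
)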
You'll need ES/Mongo to run the ClearML server
I think set_default_upload_uri is for all output models, while set_upload_destination is for a specific model/file
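If it helps, roughly how I'd expect that to look (I'm assuming these are the OutputModel methods and the storage URIs are placeholders, so double-check against the docs):
from clearml import Task, OutputModel

task = Task.init(project_name='examples', task_name='upload destinations')

# default destination for all output models
OutputModel.set_default_upload_uri('s3://my-bucket/models')

# destination for one specific model only
model = OutputModel(task=task, name='my specific model')
model.set_upload_destination('s3://my-bucket/special-models')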
However, now when I go to the Results -> Debug Samples tab, the S3 credentials window pops up every time I refresh the page
RattyLouse61 , what version of ClearML are you running? I think this issue was solved in the 1.3.0 release
I meant writing a new pipeline controller that will incorporate the previous pipelines as steps. What is the error that you're getting? Can you provide a snippet?
VexedCat68 , you can iterate through all 'running' tasks in a project and abort them through the API. The endpoint is tasks.stop
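A rough sketch of that loop with the APIClient (the project ID and status filter are assumptions, adjust to your case):
from clearml.backend_api.session.client import APIClient

client = APIClient()

# fetch all tasks in the project that are currently running
running_tasks = client.tasks.get_all(
    project=['<your-project-id>'],
    status=['in_progress'],
)

# abort each one through the tasks.stop endpoint
for t in running_tasks:
    client.tasks.stop(task=t.id)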
@<1590514584836378624:profile|AmiableSeaturtle81> , I would suggest opening a github feature request then 🙂
You can add torch to the installed packages section manually to get it running but I'm curious why it wasn't logged. How did you create the original experiment?
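If you'd rather do it from code than edit it in the UI, something like this should also work (just an alternative, not a requirement):
from clearml import Task

# request torch explicitly before Task.init so it lands in the installed packages
Task.add_requirements('torch')
task = Task.init(project_name='examples', task_name='with torch requirement')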
Hi @<1523707653782507520:profile|MelancholyElk85> , in Task.init() you have the auto_connect_frameworks parameter.
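For example (the framework key below is only an illustration):
from clearml import Task

# keep automatic logging on for everything except one framework
task = Task.init(
    project_name='examples',
    task_name='selective framework logging',
    auto_connect_frameworks={'matplotlib': False},
)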
Hi @<1523702220866981888:profile|ShallowGoldfish8> , you can use the StorageManager module to upload/download files
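For instance (bucket and file paths below are placeholders):
from clearml import StorageManager

# upload a local file to remote storage
remote_url = StorageManager.upload_file(
    local_file='/tmp/results.csv',
    remote_url='s3://my-bucket/results/results.csv',
)

# later, fetch a cached local copy of the remote object
local_copy = StorageManager.get_local_copy(remote_url='s3://my-bucket/results/results.csv')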
Hi ClumsyElephant70 ,
What about:
# pip cache folder mapped into docker, used for python package caching
docker_pip_cache = ~/.clearml/pip-cache
# apt cache folder mapped into docker, used for ubuntu package caching
docker_apt_cache = ~/.clearml/apt-cache
Hi @<1544853695869489152:profile|NonchalantOx99> , how are you running the pipeline? What are the clearml & server versions?
Do you have a snippet that reproduces this?
@Alex Finkelshtein, if the parameters you're using are like this:
parameters = {
    'float': 2.2,
    'string': 'my string',
}
Then you can update the parameters as mentioned before:
parameters = {
    'float': 2.2,
    'string': 'my string',
}
parameters = task.connect(parameters)
parameters['new_param'] = 'this is new'
parameters['float'] = '9.9'
Please note that parameters['float'] = '9.9' will update that specific parameter. I don't think you can update the parameters en masse...
Hi @<1570220858075516928:profile|SlipperySheep79> , you can use pre & post execute callback functions that run on the controller. Is that what you're looking for?
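Roughly what that could look like on a pipeline step (the step and task names here are placeholder assumptions):
from clearml import PipelineController

def pre_callback(pipeline, node, param_override):
    # runs on the controller right before the step is launched
    print('About to launch step:', node.name)
    return True  # returning False skips the step

def post_callback(pipeline, node):
    # runs on the controller right after the step completes
    print('Finished step:', node.name)

pipe = PipelineController(name='my pipeline', project='examples', version='1.0.0')
pipe.add_step(
    name='step_one',
    base_task_project='examples',
    base_task_name='step one task',
    pre_execute_callback=pre_callback,
    post_execute_callback=post_callback,
)
pipe.start()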
Hi, in Task.init() you can define output_uri=<Fileserver>
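For example (the address below is a placeholder for your own fileserver):
from clearml import Task

task = Task.init(
    project_name='examples',
    task_name='store artifacts on the fileserver',
    output_uri='http://<your-clearml-server>:8081',  # default fileserver port on a self-hosted server
)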
Hi @<1523701083040387072:profile|UnevenDolphin73> , all the scalars, plots, etc. are saved in Elasticsearch
DepressedChimpanzee34 , the only way I currently see is to manually update each parameter
For example:
parameters = {
    'float': 2.2,
    'string': 'my string',
}
parameters = task.connect(parameters)
parameters['new_param'] = 'this is new'
parameters['float'] = '9.9'
Does this help?
You can do it in one API call as follows:
https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all
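For example, hitting that endpoint directly could look roughly like this (server address, credentials and filters are all placeholders, and I'd verify the auth details against the docs):
import requests

# single POST to tasks.get_all on the API server (port 8008 by default on a self-hosted server)
response = requests.post(
    'http://<your-clearml-server>:8008/tasks.get_all',
    json={'project': ['<your-project-id>'], 'status': ['completed']},
    auth=('<access_key>', '<secret_key>'),
)
for t in response.json()['data']['tasks']:
    print(t['id'], t['name'], t['status'])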
@<1523701087100473344:profile|SuccessfulKoala55> , what is the intended behavior?
DepressedChimpanzee34 you can try naming the connected configuration differently. Let me see if there is some other more elegant solution 🙂
Hi @<1561885921379356672:profile|GorgeousPuppy74> , yes it should be possible
Hi FierceRabbit20 , I don't think there is such an option out of the box, but you can simply add it to the machine's startup or create a cron job
I've never worked with JupyterHub and have little experience with notebooks. What does it do in relation to notebooks?