Hi @<1535069219354316800:profile|PerplexedRaccoon19> , I think you can take the existing example of AWS and modify it to use the relevant API/sdk of another provider
Hi @<1724960468822396928:profile|CumbersomeSealion22> , what was the structure that worked previously for you and what is the new structure?
I'm not sure what you mean by leaderboard, but you can add custom metrics to the smart dashboard and sort by that if this is what you're looking for
I am using v.1.3.2
The SDK I assume.
Is this a self hosted version? What is the server version?
PunyBee36 , in the self-hosted option you aren't limited to a number of users
Hi FrustratingShrimp3 , which framework would you like added?
Hi TimelyCrab1 , directing all your outputs to S3 is actually pretty easy. You simply need to configure api.files_server: <S3_BUCKET/SOME_DIR> in the clearml.conf of all machines working on it.
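As a sketch, the clearml.conf entry could look like this (the bucket and path below are placeholders, not real values):

```
api {
    # Hypothetical bucket/path - replace with your own S3 location
    files_server: "s3://my-bucket/clearml-outputs"
}
```

This needs to be set in the clearml.conf of every machine that runs tasks, otherwise some outputs will still go to the default files server.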
Migrating existing data is more difficult, since everything in the system is saved as links. I guess you could change the links in MongoDB, but I would advise against it.
I'll clarify - on the server you have two parts: the clearml folders, where all the mongo/elastic/redis data sits, and the dockers. So, downgrading would mean using previous-version dockers. However, if you don't have a backup of your data, I don't suggest doing this, since the data might become corrupt (mismatching Elastic versions is bad for Elastic).
MelancholyElk85 , I think the upload() function has the parameter you need: output_uri
Hi @<1552101447716311040:profile|SteadySeahorse58> , if the experiment is still in pending mode it means that it wasn't picked up by any worker. Please note that in a pipeline you have the controller that usually runs on the services queue and then you have the steps where they all can run on different queues - depending on what you set
Hi @<1558986867771183104:profile|ShakyKangaroo32> , can you please open a GitHub issue to follow up on this? I think a fix should be issued shortly afterwards
What happens if you delete ~/.clearml ?
It's clearml's cache folder
Hi @<1523702932069945344:profile|CheerfulGorilla72> , I think you need to map out the relevant folders for the docker. You can add docker arguments to the task using Task.set_base_docker
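As a rough sketch (the paths and names below are hypothetical placeholders), mapping a host folder into the task's container could look like this:

```python
# Hypothetical sketch: pass docker arguments that mount a host folder
# into the container the agent will run the task in.
# The host/container paths below are placeholders, not real values.
docker_args = "-v /host/data:/data"

# Uncommenting the following requires a reachable ClearML server:
# from clearml import Task
# task = Task.init(project_name="examples", task_name="docker-mount-demo")
# task.set_base_docker(docker_image="python:3.9", docker_arguments=docker_args)

print(docker_args)
```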
JitteryCoyote63 , projects.get_all_ex is fired when the UI is loaded or pages are navigated to.
Regarding the self hosted version channel, usually these things are discussed and revealed in community talks that are done once in a while. Considering that the community server was recently updated I would give an educated guess of a week or two until ClearML self hosted version is released 🙂
JitteryCoyote63 , I'm afraid not currently - it's only available in docker mode.
What do you need it for if I may ask?
Hi FierceHamster54 , I'm taking a look 🙂
Hi VivaciousBadger56 , can you add the full error here?
SoreDragonfly16 , you can disable this with the auto_connect_frameworks argument in Task.init(). For example: task = Task.init(..., auto_connect_frameworks={'pytorch': False})
You can refer to this documentation for further reading: https://clear.ml/docs/latest/docs/references/sdk/task#taskinit 🙂
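A minimal, hedged sketch of the same idea (the per-framework dict shown is an assumption based on the docs, and the Task.init call is commented out since it needs a server):

```python
# Disable automatic logging for PyTorch only; other frameworks stay on.
auto_connect = {"pytorch": False}  # per-framework override dict

# from clearml import Task
# task = Task.init(
#     project_name="examples",             # hypothetical project name
#     task_name="no-pytorch-autologging",  # hypothetical task name
#     auto_connect_frameworks=auto_connect,
# )
print(auto_connect)
```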
Is there anything special about the parent dataset?
Simply hover over one of the tags, and the small 'x' will come up 🙂
does curl https://<WEBSITE>.<DOMAIN>/v2.14/debug/ping work for you?
And in what section are you setting the environment?
Hi AbruptWorm50 ,
You can use a standalone file; this way the file will be saved to the backend and used every time without needing to clone the repo. What do you think?
Hi UnevenDolphin73 ,
If I look at a specific experiment (say, the Artifacts tab), and then click on another experiment in the experiment list, it used to automatically show the newly selected experiment's Artifacts tab. It still does this, but it now shows a blank page. I have to choose a different tab and switch back. I think they fixed it in the next version, which should be released soon.
(Not sure if by design) When selecting an experiment in a (new) project, it used to automatically swit...
CurvedHedgehog15 , isn't the original experiment you selected to run against the basic benchmark?
PanickyMoth78 , pipeline tasks are usually hidden. If you go to Settings -> Configuration you will have an option to show hidden projects. This way you can find the projects that the tasks reside in + the pipeline steps
Make sure to fetch the logger manually and not construct it yourself 🙂
ApprehensiveSeahorse83 , also try with Task.init(..., output_uri="<GS_BUCKET>")
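As a hedged sketch (the bucket URI below is a placeholder standing in for <GS_BUCKET>, and the Task.init call is commented out because it needs a reachable server):

```python
# Route all task outputs (models, artifacts) to a storage bucket.
output_uri = "gs://my-bucket/clearml"  # hypothetical bucket URI

# from clearml import Task
# task = Task.init(
#     project_name="examples",    # hypothetical project name
#     task_name="bucket-output",  # hypothetical task name
#     output_uri=output_uri,
# )
print(output_uri)
```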