So you can download/view files on the cloud
You should check the status of that container
A workaround would be to set up a local Minio server or upload to S3 directly; that way there shouldn't be a limit
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , I think the API server can currently delete things only from the files server. However, the SDK certainly has the capability to delete remote files
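For example, a minimal sketch using the SDK (the task ID is a placeholder, and your credentials need delete permissions on the remote storage):
```python
from clearml import Task

# Fetch the task whose remote files you want gone
task = Task.get_task(task_id="my_task_id")

# Delete the task together with its artifacts and models,
# including files stored on remote storage (S3, GCS, etc.)
task.delete(delete_artifacts_and_models=True)
```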
Yes, I think that would be the best solution.
Hi @<1531807732334596096:profile|ObliviousClams17> , I think for your specific use case it would be easiest to use the API - fetch a task, clone it as many times as needed and enqueue it into the relevant queues (there's a sketch below the links):
Fetch a task - None
Clone a task - None
Enqueue a task (or many) - [None](https://clear.ml/docs/latest/docs/references/api/ta...
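Putting the three calls above together, a rough sketch with the SDK equivalents (the task ID and queue name are placeholders):
```python
from clearml import Task

# Fetch the template task
template = Task.get_task(task_id="template_task_id")

# Clone it as many times as needed and enqueue each clone
for i in range(5):
    cloned = Task.clone(source_task=template, name=f"clone {i}")
    Task.enqueue(task=cloned, queue_name="default")
```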
What is your use case?
Hi @<1587977852635058176:profile|FloppyTurtle49> , yes, the same would be applicable. Regarding communication: it is one-way communication from the agent to the ClearML server, done directly against the API server - basically what is defined in clearml.conf
Hope this clears things up
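For reference, those endpoints are the ones set in the api section of clearml.conf - roughly like this (values here are the hosted-service defaults, swap in your own):
```
api {
    web_server: https://app.clear.ml
    api_server: https://api.clear.ml
    files_server: https://files.clear.ml
    credentials {
        access_key: "YOUR_ACCESS_KEY"
        secret_key: "YOUR_SECRET_KEY"
    }
}
```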
Hi @<1560798754280312832:profile|AntsyPenguin90> , I think you would need to wrap the C++ code in python for it to work, but conceptually shouldn't be any special issues
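For example, a minimal sketch of that kind of wrapper (my_cpp_binary is a placeholder for your compiled executable):
```python
import subprocess

from clearml import Task

# Create the ClearML task that tracks the run
task = Task.init(project_name="examples", task_name="cpp wrapper")

# Run the compiled C++ binary and capture its output
result = subprocess.run(["./my_cpp_binary", "--some-arg"],
                        capture_output=True, text=True)
print(result.stdout)

# Report whatever you parse out of the output, e.g. a final score
task.get_logger().report_scalar(title="metrics", series="score",
                                value=0.95, iteration=0)
```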
Hi, can you provide an example of how you report them?
SubstantialElk6 , can you please verify that you have all the required packages installed locally? Also, in your ~/clearml.conf, what is the setting of agent.package_manager.system_site_packages ?
MoodyCentipede68 , I'm sorry - I meant inject a preconfigured ~/clearml.conf. Or, as Jake mentioned, just use environment variables 🙂
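For reference, the usual environment variables that override clearml.conf are these (values are placeholders) - e.g. in an env file passed to the container:
```
CLEARML_API_HOST=https://api.clear.ml
CLEARML_WEB_HOST=https://app.clear.ml
CLEARML_FILES_HOST=https://files.clear.ml
CLEARML_API_ACCESS_KEY=YOUR_ACCESS_KEY
CLEARML_API_SECRET_KEY=YOUR_SECRET_KEY
```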
VexedCat68 , what do you mean by trigger? You want some indication that a dataset was published, so you can move to the next step in your pipeline?
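If that's the case, a rough sketch using the SDK's TriggerScheduler (the project name, queue and task ID are placeholders):
```python
from clearml.automation import TriggerScheduler

# Poll the server every few minutes for matching dataset events
scheduler = TriggerScheduler(pooling_frequency_minutes=3)

# When a dataset in "my_project" is published, enqueue the next step
scheduler.add_dataset_trigger(
    name="dataset published trigger",
    trigger_project="my_project",
    trigger_on_publish=True,
    schedule_task_id="next_step_task_id",  # task to clone & launch
    schedule_queue="default",
)

scheduler.start()
```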
I understand. That's strange - column ordering etc. should be stored in cookies per project. Maybe @<1523703436166565888:profile|DeterminedCrab71> might have an idea
Can you please add a stand alone code snippet that reproduces this?
Just to make sure I understand the flow - you run an experiment and create it inside project 'my_example'.
Afterwards you run a pipeline and you specify 'my_example' as the controller's project.
This will turn 'my_example' into a hidden project.
Am I getting it right?
What exactly are you looking to set up?
MinuteGiraffe30 , Hi! 🙂
What if you try to manually create such a folder?
Hi ClumsyElephant70 ,
What about:
```
# pip cache folder mapped into docker, used for python package caching
docker_pip_cache = ~/.clearml/pip-cache

# apt cache folder mapped into docker, used for ubuntu package caching
docker_apt_cache = ~/.clearml/apt-cache
```
I think this is what you're looking for
Hi UnevenDolphin73 ,
I think you need to launch multiple instances to use multiple credentials.
Also, is it an AWS S3 or is it some similar storage solution like Minio?
What version of the server are you running? And what version of the SDK?
Hi, can you give the error that is printed out?
Hi @<1618418423996354560:profile|JealousMole49> , I think you would need to pull all the related data via the API and then register it again through the API.
Are you still having these issues? Did you check if it's maybe a connectivity issue?
Also, can you verify that the clearml-agent process is still running? (check with top / htop)
I think the controller and steps need to be in the same repository
Very similar to a task, a project also has a unique identifier - the ID (although I think project names are also unique)
You can get the project ID either from the UI (if you go to a specific project, the project ID will be in the URL) or from the API as documented in:
https://clear.ml/docs/latest/docs/references/api/projects#post-projectsget_all
or from the SDK as documented here:
https://clear.ml/docs/latest/docs/references/sdk/task#taskget_project_id
Plug that project ID into the filter ...
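For example, a quick sketch of looking up the ID and plugging it into a tasks.get_all filter ('my_example' is a placeholder):
```python
from clearml import Task
from clearml.backend_api.session.client import APIClient

# Look up the project ID by name (see the SDK reference above)
project_id = Task.get_project_id(project_name="my_example")

# Use the ID as a filter when querying tasks through the API
client = APIClient()
tasks = client.tasks.get_all(project=[project_id])
for t in tasks:
    print(t.id, t.name)
```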