Hi @<1555362936292118528:profile|AdventurousElephant3> , if you clone/reset the task, you can change the logging level to 'debug'
Hi @<1597762318140182528:profile|EnchantingPenguin77> , do you have a code snippet that reproduces this? Where is that API call originating from?
Hi @<1533620191232004096:profile|NuttyLobster9> , are you self hosting ClearML?
Hi @<1774245260931633152:profile|GloriousGoldfish63> , what version did you deploy?
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the agent command that you run to your system's startup
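For example, on a Linux machine with systemd you could wrap the agent in a unit like this (a sketch only - the paths, user, and queue name are assumptions, adjust them to your setup):

```
# /etc/systemd/system/clearml-agent.service  (example; adjust paths/user/queue)
[Unit]
Description=ClearML Agent
After=network-online.target

[Service]
User=clearml
ExecStart=/usr/local/bin/clearml-agent daemon --queue default
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Then enable it so it starts on boot: `sudo systemctl enable --now clearml-agent`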
StickyCoyote36 , if I understand correctly: due to the M1 chip limitation you run the script from a different machine, then use the agent to run it on the M1 machine, and you want the repo's requirements.txt to override the "installed packages" when running with the agent, correct?
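If that's the case, one option (assuming a reasonably recent clearml-agent) is telling the agent to ignore the task's recorded "installed packages" and install from the repo's requirements.txt instead, via clearml.conf on the agent machine:

```
# clearml.conf on the agent machine (sketch; assumes a recent clearml-agent)
agent {
    package_manager {
        # ignore the task's "installed packages" and use the repo's requirements.txt
        force_repo_requirements_txt: true
    }
}
```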
Hi, I think this is the default behavior but I think you can probably edit the source code ( output_uri parameter of Task.init would be a good lead).
In what format would you like it saved?
This is because Datasets have a new view now. Just under 'Projects' on the left bar you have a button for Datasets 🙂
And additionally, does the
"When executing a Task (experiment) remotely, this method has no effect."
part mean that if it is executed in a remote worker inside a pipeline, without the dataset downloaded, the method will have no effect?
I think this means that adding the tags, specifically, will have no effect
Does the PipelineController also work on preemptible instances?
Hi VictoriousPenguin97 ,
I found this in regards to requirements:
https://clearml.slack.com/archives/CTK20V944/p1636545386390800?thread_ts=1636545006.390700&cid=CTK20V944
"
If you're not tight on resources, I would say "medium" requirements can be:
- A decent amount of CPU cores (say at least 4)
- Enough RAM to comfortably accommodate ES (at least 16GB)
- Enough disk space for:
  - File server (dedicated volume, say 1TB)
  - Databases (ES/Mongo, dedicated volume), about 500GB (to be on the safe side)
"
Pipeline is a unique type of task, so it should detect it without issue
I would suggest adding printouts throughout the code to better understand when this happens
Hi @<1669152726245707776:profile|ManiacalParrot65> , is this a specific task or the controller?
If you get GPU-hours per project stats it would be really cool if you added this as a pull request
I think you're right. But it looks like an infrastructure issue related to Yolo
Hi @<1864479785686667264:profile|GrittyAnt2> , for that you would need to specify --output-uri in the create command - None
This will also point all previews to the storage of your choice. Note, however, that a NAS is considered part of your local disks; the browser cannot access the local disk, and therefore previews will not work.
For local storage solutions I suggest using something like MinIO
What actions did you take exactly to get to this state?
Hi @<1748153283605696512:profile|GreasyPenguin24> , you certainly can. CLEARML_CONFIG_FILE is the environment variable that allows you to use different configuration files
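For example, you could point a process at an alternate config file like this (the file path is just a hypothetical example; it must be set before the first `import clearml` in the process):

```python
import os

# Point the SDK at an alternate configuration file *before* importing clearml.
# The path below is a hypothetical example - use any valid clearml.conf.
os.environ["CLEARML_CONFIG_FILE"] = os.path.expanduser("~/clearml-staging.conf")

print(os.environ["CLEARML_CONFIG_FILE"])
```

You can also set it in the shell instead: `CLEARML_CONFIG_FILE=~/clearml-staging.conf python train.py`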
Hi @<1539780284646428672:profile|PoisedElephant79> , I think you need to have the gitlab runner able to connect to your VPN. Otherwise, how do you expect it to connect to the server if only people on your VPN can connect to it?
Browser thinks it's the same backend because of the domain
You must call Task.init() to have something reported 🙂
Hi @<1523701842515595264:profile|PleasantOwl46> , I think you can add a PR here - None
Hi DashingKoala39 , some people prefer to use S3 or other storage solutions instead of the integrated fileserver. You can configure it on the clearml.conf level and the task level
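At the clearml.conf level it would look something like this sketch (the bucket/path and credentials are placeholders, not real values):

```
# clearml.conf (SDK side) - upload models/artifacts to S3 by default
sdk {
    development {
        # placeholder bucket/path - replace with your own
        default_output_uri: "s3://my-bucket/clearml"
    }
    aws {
        s3 {
            # credentials for the bucket above
            key: "..."
            secret: "..."
        }
    }
}
```

At the task level, the equivalent is passing `output_uri="s3://my-bucket/clearml"` to `Task.init()`.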
Hi SuperiorCockroach75 , yes you should be able to run it on a local setup as well 🙂
Also, can you verify that you still have the clearml-agent process running? top / htop
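A quick non-interactive alternative to top / htop (assumes `pgrep` is available, which it is on most Linux distributions):

```shell
# List any running clearml-agent processes, or say so if there are none
pgrep -af clearml-agent || echo "no clearml-agent process found"
```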
Hi @<1736919317200506880:profile|NastyStarfish19> , the services queue is for running the pipeline controller itself. I guess you are self hosting the OS?