Hi @<1810121608967229440:profile|NonchalantWhale65> , can you provide a short code snippet that reproduces the problematic behaviour?
Hi @<1523701295830011904:profile|CluelessFlamingo93> , I think this is what you're looking for:
None
Also, make sure to install virtualenv; I see there was a failure in the log about that as well
Hi UnevenDolphin73 ,
I don't think ClearML exposes anything as env variables unless it's done explicitly.
If I'm understanding correctly, you're hoping for a scenario where ClearML would expose some contents of sdk.aws.s3, for example, so you could use them later in boto3, correct?
If that's the case, why not use env vars to begin with?
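For reference, a minimal sketch of that approach, assuming the standard AWS variable names (the key values are placeholders):
```python
import os

import boto3

# standard AWS env var names; both boto3 and ClearML's S3 driver
# can pick credentials up from the environment
os.environ["AWS_ACCESS_KEY_ID"] = "<your-key-id>"
os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret>"

# boto3 resolves credentials from the environment automatically
s3 = boto3.client("s3")
```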
EnormousWorm79 , are you working from different browsers / private windows?
PanickyMoth78, pipeline tasks are usually hidden. If you go to Settings -> Configuration, you will find an option to show hidden projects. This way you can locate the projects the tasks reside in, plus the pipeline steps
A workaround would be to set up a local MinIO server or upload to S3 directly; that way there shouldn't be a limit
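Something like this should route uploads to your own bucket (a minimal sketch; the bucket name and project/task names are placeholders, and MinIO credentials would go in clearml.conf under sdk.aws.s3):
```python
from clearml import Task

# send artifact/model uploads to your own S3 bucket, or a local MinIO
# endpoint such as "s3://127.0.0.1:9000/bucket" (placeholder address)
task = Task.init(
    project_name="examples",
    task_name="large-artifacts",
    output_uri="s3://my-bucket/clearml",  # placeholder bucket
)
```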
From my understanding, they are 🙂
A big part of the way Datasets work is turning the data into a parameter rather than part of the code. This way you can easily reproduce experiments 🙂
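Roughly like this (a minimal sketch; names and paths are placeholders):
```python
from clearml import Dataset

# version the data once...
ds = Dataset.create(dataset_name="my-data", dataset_project="datasets")
ds.add_files("./data")  # placeholder local folder
ds.upload()
ds.finalize()

# ...then any experiment can pull the exact same version by name/ID
local_path = Dataset.get(
    dataset_name="my-data", dataset_project="datasets"
).get_local_copy()
```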
Hi @<1608271575964979200:profile|GiddyRaccoon10> , ClearGPT is a separate enterprise product 🙂
Hi 🙂
A task is the most basic object in the system with regard to experiments. A pipeline is a bunch of tasks that are controlled by another task 🙂
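For illustration, a minimal sketch (project and task names are placeholders):
```python
from clearml.automation import PipelineController

# the controller is itself just another task
pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")

# each step clones and enqueues an existing task
pipe.add_step(
    name="stage_data",
    base_task_project="examples",
    base_task_name="data task",  # placeholder task
)
pipe.add_step(
    name="train",
    parents=["stage_data"],
    base_task_project="examples",
    base_task_name="train task",  # placeholder task
)
pipe.start()
```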
DeliciousSeal67, you need to update the Docker image in the container section - like here:
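If it helps, the same can be set from code on recent SDK versions - a minimal sketch (image and task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="my-experiment")
# equivalent to editing the container section in the UI
task.set_base_docker(docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04")
```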
Can you elaborate on how you did that?
Hi @<1702492411105644544:profile|YummyGrasshopper29> , I suggest doing it via the webUI with developer tools open so you can see what the webUI sends to the backend and then copy that.
wdyt?
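Once you've captured the call in the Network tab, you can replay it with plain requests - a minimal sketch, assuming the documented auth.login flow (host, credentials and payload are placeholders):
```python
import requests

api = "https://api.clear.ml"  # placeholder; use your own api server host

# exchange API credentials for a token (documented auth.login endpoint)
token = requests.post(
    f"{api}/auth.login", auth=("<access_key>", "<secret_key>")
).json()["data"]["token"]

# replay whatever call you captured in dev tools, e.g. tasks.get_all
resp = requests.post(
    f"{api}/tasks.get_all",
    headers={"Authorization": f"Bearer {token}"},
    json={"only_fields": ["id", "name"]},  # placeholder payload
)
print(resp.json())
```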
Can you post a minimal example here? Does this always happen or only sometimes? Also how is the pipeline run? Using autoscaler or local machines?
Hi HugeArcticwolf77 , did you spin up new agent versions? What did you have before and what do you have now? Can you check whether reverting to the previous version fixes it?
Can you try creating a new instance?
I think it depends on your implementation. How are you currently implementing top X checkpoints logic?
Hi @<1750327614469312512:profile|CrabbyParrot75> , why use the StorageManager module and not Datasets to manage your data?
Can you guide me through how you got the credentials and then attempted to validate?
Also, in the Scalars section you can see the machine statistics, which might give you an idea. If the memory usage is high, this might be the issue. If not, we can probably rule out this hypothesis
Hi RoughTiger69 ,
Have you considered cron jobs or using the TaskScheduler?
Another option is running a dedicated agent just for that - I'm guessing you can make it require very little compute power
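With the scheduler it would look roughly like this (a minimal sketch; task ID, queue names and times are placeholders):
```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
# re-launch an existing task every day at 07:30 on a given queue
scheduler.add_task(
    schedule_task_id="<task-id>",  # placeholder
    queue="default",
    hour=7,
    minute=30,
)
# run the scheduler itself as a (low-compute) service
scheduler.start_remotely(queue="services")
```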
ScaryBluewhale66 , Hi 🙂
Regarding your questions:
- I think you can just reset the task and enqueue it
- You can stop it either in the UI or programmatically
- I'm guessing the scheduler would show errors in its log if it failed at something
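Programmatically that would be roughly (a minimal sketch; the task ID and queue name are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="<task-id>")  # placeholder ID
task.mark_stopped()                        # stop it (or use the UI)
task.reset()                               # clear the previous run's state
Task.enqueue(task, queue_name="default")   # send it back to a queue
```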
Hi AdventurousButterfly15 , what version of clearml-agent are you using?
Hello FreshKangaroo33 ,
This is a very interesting idea! Let me restate it to make sure I understand: you want the ability to create pipeline steps in the controller simply by specifying the source control parameters + packages, correct?
I'm not sure if it's available right now, but you can specify packages to be used in a pipeline, and it's possible to implement your use case with the current tools through a workaround. How do you currently ru...
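One possible workaround along these lines - building the step task straight from source control - as a sketch (repo URL, script and packages are placeholders, not necessarily the exact workaround meant above):
```python
from clearml import Task

# create a step task directly from source control + an explicit package list
step = Task.create(
    project_name="examples",
    task_name="pipeline-step",
    repo="https://github.com/your-org/your-repo.git",  # placeholder repo
    branch="main",
    script="train.py",                                 # placeholder script
    packages=["pandas", "scikit-learn"],               # explicit packages
)
```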
Hi SuperiorPanda77, this looks neat!
I could take a look on a windows machine if it helps 🙂
Are you using the PRO or a self hosted server?