Hi @<1720249421582569472:profile|NonchalantSeaanemone34> , can you please provide a full log of a run? Also do you have a full snippet that reproduces this behaviour?
Hi @<1570220858075516928:profile|SlipperySheep79> , you can set various cache limitations in clearml.conf. Is the issue you encountered specifically regarding Datasets? If so, I think this is the section you're looking for - None
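For reference, a minimal sketch of the relevant clearml.conf section (the values are just placeholders; double-check the key names against your own file):
```
sdk {
    storage {
        cache {
            # where downloaded datasets/artifacts are cached locally
            default_base_dir: "~/.clearml/cache"
            # how many cached copies are kept before older ones are evicted
            default_cache_manager_size: 100
        }
    }
}
```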
Can you copy paste the error you got?
Or do you have your own code snippet that reproduces this?
I don't think there is such an option currently but it does make sense. Please open a GitHub feature request for this 🙂
Long story short - You'll have to write a service to upload.
The way datasets work - the SDK/CLI actually does the uploading itself; the REST API simply registers them on the backend
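A minimal sketch of what such a service could do with the SDK (dataset name, project and path below are placeholders):
```python
from clearml import Dataset

# create a new dataset version (names/paths are placeholders)
ds = Dataset.create(dataset_name="my_dataset", dataset_project="my_project")

# register local files, then let the SDK do the actual upload
ds.add_files(path="/data/to/upload")
ds.upload()    # pushes the files to the configured storage
ds.finalize()  # closes the version so it can be consumed downstream
```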
At 1 call per second for 12 hours you'll get to numbers close to that. I think you could try increasing the flush threshold - None
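If I remember the key names correctly, the flush settings live under the worker section of clearml.conf - please verify against the defaults of your version, this is only an assumed sketch:
```
sdk {
    development {
        worker {
            # how often buffered events are flushed to the server (seconds)
            report_period_sec: 2
            # assumed key: how many events are buffered before forcing a flush
            report_event_flush_threshold: 100
        }
    }
}
```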
Hi @<1523701842515595264:profile|PleasantOwl46> , I think that is what's happening. If the server is down, the code continues running as if nothing happened, and ClearML will simply cache all results and flush them once the server is back up
So when you run it standalone it works fine? How are you creating the pipeline?
Hi @<1523702786867335168:profile|AdventurousButterfly15> , do you have a short standalone script that reproduces this?
What version of clearml are you using? Is it a self-hosted server or the community server?
Hi @<1671689442621919232:profile|ItchyDuck87> , did you manage to register directly via the SDK?
@<1587615463670550528:profile|DepravedDolphin12> , how did you create the dataset? Are you doing anything else? Do you have a code snippet that reproduces this behavior, i.e. both for creating the dataset and fetching it?
VexedCat68 , do you mean does it track which version was fetched, or does it track every time a version is fetched?
Hi @<1529271085315395584:profile|AmusedCat74> , what are you trying to do in code? What version of clearml are you using?
You can add it pythonically at the start of your script, but I think docker mode is what you need if you want to pre-install packages in the environment
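For the pythonic route, something like this before Task.init() should do (the package name is just an example):
```python
from clearml import Task

# must be called before Task.init(); "some_package" is only an example name
Task.add_requirements("some_package")

task = Task.init(project_name="examples", task_name="pre-installed packages")
```
In docker mode, if I recall the key correctly, the agent section of clearml.conf also has an extra_docker_shell_script list that runs inside the container before the environment is set up, which is handy for system-level packages.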
MotionlessCoral18 , I think there is a new version out - 1.4. Can you try upgrading to that?
DeliciousBluewhale87 , Hi!
I think you can have models/artifacts automatically copied to a location if the experiment is initialized with output_uri
For example: task = Task.init('examples', 'model test', output_uri=' '). What version of ClearML are you using? I'd suggest upgrading to the latest 🙂
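For instance, pointing output_uri at a bucket (the path is just a placeholder):
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="model test",
    output_uri="s3://my-bucket/models",  # models/artifacts are copied here automatically
)
```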
Hi DangerousDragonfly8 , can you please elaborate on your use case? If you want only a single instance to exist at any time how do you expect to update it?
Regarding controlling the timeout - I think this is more of a pip configuration
By default, the agent will try to install packages according to what was logged in the 'installed packages' section of the task, under the 'execution' tab
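For the pip timeout mentioned above, pip reads it from its own config on the machine running the agent (the value here is arbitrary), e.g.:
```
# ~/.config/pip/pip.conf (or set the PIP_DEFAULT_TIMEOUT environment variable)
[global]
timeout = 120
```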
Hi @<1800699527066292224:profile|SucculentKitten7> , I think you're confusing the publish action with deployment. Publishing a model does not deploy it; it simply changes the state of the model to published so it cannot be changed anymore, and it also publishes the task that created it.
To deploy models you need to either use clearml-serving or the LLM deployment application
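Very roughly, the clearml-serving flow looks like this (names are placeholders and the exact flags may differ between versions, so check the clearml-serving README):
```
# spin up a serving service (the control-plane task)
clearml-serving create --name "serving example"

# register a published model behind an endpoint, e.g. with the sklearn engine
clearml-serving model add --engine sklearn --endpoint "my_model" \
    --name "train sklearn model" --project "serving examples"
```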
I would assume that your analysis is incomplete
Not necessarily, since the data is parked inside the databases. In theory, if done correctly, a new deployment will mount the same volumes for the databases, so the data carries over.
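Concretely, for a self-hosted server the data sits in the host folders mounted into the database containers, so a new deployment that mounts the same paths picks the data up again. The defaults from the install docs look roughly like this (folder names can differ slightly between server versions):
```
/opt/clearml/data/mongo_4      # experiment/project metadata
/opt/clearml/data/elastic_7    # scalars, console logs, events
/opt/clearml/data/redis        # queues and short-lived state
/opt/clearml/data/fileserver   # uploaded artifacts and debug samples
/opt/clearml/config            # server configuration
```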
StaleButterfly40 , I'm trying to get an estimate of what you have, because if the content is too large the preview isn't shown...
UnevenDolphin73 , that's an interesting case. I'll see if I can reproduce it as well. Can you please clarify step 4 a bit? And on step 5 - what is keeping it from spinning down?
ReassuredTiger98 , I played with it myself a bit - it looks like this happens for me when an experiment is running and reporting images, and changing the metric does the trick, i.e. it reproduces the issue. Maybe open a GitHub issue to follow this 🙂?