I think AnxiousSeal95 updates us when there is a new version or release 🙂
Why does the figure change so drastically? And how can I solve it?
What are you referring to specifically? The data plots seem to be identical.
Side note: there seems to be a bug in the plot viewer, as the axes are a bit chaotic.
Do you mean the x/y intersection?
TartSeagull57 , what framework are you on? What version of ClearML are you using?
So does it mean you basically recreated your entire training environment just in production?
Regarding the questions:
I'm not sure I understand. If you don't expose code... What would be executed? I think this is something available only on the Scale/Enterprise level
SubstantialElk6 , I think this is what you're looking for:
https://clear.ml/docs/latest/docs/references/sdk/dataset#get_local_copy
Dataset.get_local_copy(..., part=X)
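Something like this (a rough sketch, the dataset ID is just a placeholder):

```python
from clearml import Dataset

# Placeholder dataset ID - replace with your own
ds = Dataset.get(dataset_id="<your_dataset_id>")

# Fetch only one chunk of the dataset, e.g. part 0 out of 4
local_path = ds.get_local_copy(part=0, num_parts=4)
print(local_path)
```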
Hi DepressedFox45 ,
For the agent you'll need to run clearml-agent init
AbruptWorm50 , what optimization method are you using?
Does it give you any error while deleting the experiments?
Hi RoughTiger69 ,
Have you considered maybe cron jobs or using the task scheduler?
Another option is running a dedicated agent just for that - I'm guessing you can make it require very little compute power
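If you go the scheduler route, something along these lines should work (just a sketch, the task ID and queue names are placeholders):

```python
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()

# Re-launch an existing task every day at 07:30 on the "default" queue
scheduler.add_task(
    schedule_task_id="<task_id_to_clone_and_run>",  # placeholder
    queue="default",
    hour=7,
    minute=30,
    recurring=True,
)

# Run the scheduler itself as a long-lived service, e.g. on the services queue
scheduler.start_remotely(queue="services")
```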
That's an interesting question. I'm pretty sure file deltas aren't saved (Although you do get file sizes so you might see changes there)
Let me check if maybe they are saved somehow or if that information can be extrapolated somehow 🙂
What do you get when you call get_configuration_objects() now?
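For example (just to make sure we're looking at the same thing, the task ID is a placeholder):

```python
from clearml import Task

# Placeholder task ID - use the task you're debugging
task = Task.get_task(task_id="<your_task_id>")
print(task.get_configuration_objects())
```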
Can you post a minimal example here? Does this always happen or only sometimes? Also how is the pipeline run? Using autoscaler or local machines?
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , for a frontend application you basically need to build something that will have access to the serving solution.
Did you download it to the same folder or to some mounted folder?
Hi @<1523701283830108160:profile|UnsightlyBeetle11> , I think you can store text artifacts, so you can save the string there. If it's not too long, you can even fetch it from the preview
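Something like this (a minimal sketch, the artifact name is just an example):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="store text artifact")

# Store a string as an artifact; if it's short you'll also see it in the preview
task.upload_artifact(name="my_text", artifact_object="the string you want to keep")
```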
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI
I think it's possible there was an upgrade in Elastic. I'd suggest going over the release notes to see if this happened with the server
Hi @<1706478691208400896:profile|WearyPelican78> , you can set up a GCP autoscaler for this
Hi @<1722786138415960064:profile|BitterPuppy92> , I believe pre-defining queues via the helm chart is an Enterprise/Scale license feature only and not available in the open source
Hang on,
I just noticed that there's a "project compute time" on the dashboard? Do you know how that is calculated/what that is?
Are you referring to the example in services?
Hi @<1533619725983027200:profile|BattyHedgehong22> , does the package appear in the installed packages section of the experiment?
I think if you provide an absolute path it should work 🙂
That makes sense... If you turn auto_connect_streams to False, it means that auto reporting will be disabled, as per the documentation. If you turn it to True, then logging should resume.
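i.e. something like this (a minimal sketch):

```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="console logging",
    # True (the default) keeps console auto-reporting on;
    # you can also pass a dict like {"stdout": True, "stderr": True, "logging": False}
    auto_connect_streams=True,
)
```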
I don't think there is such an option currently but it does make sense. Please open a GitHub feature request for this 🙂
Hi SpotlessPenguin79 , can you please elaborate on this?
for non-AWS cloud providers?
What exactly are you trying to do?
The DataOps feature will abstract your usage of data
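Roughly, the flow looks like this (a sketch, project/dataset names and paths are placeholders):

```python
from clearml import Dataset

# Create and register a dataset version
ds = Dataset.create(dataset_project="examples", dataset_name="my_dataset")
ds.add_files(path="/path/to/local/data")
ds.upload()
ds.finalize()

# Later, any task can fetch it without caring where the files actually live
data_dir = Dataset.get(
    dataset_project="examples", dataset_name="my_dataset"
).get_local_copy()
```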
Click on step_one and then on Full details