Hi @<1673501397007470592:profile|RelievedDuck3> , there is some discussion of it in this video None
Hi @<1710827340621156352:profile|HungryFrog27> , can you provide a full log of the task?
Hi @<1724235687256920064:profile|LonelyFly9> , what data/information are you looking to get using the user id?
You can also just delete the installed packages section from the webUI and it will force it to use the requirements.txt
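If you want the same effect from code, a minimal sketch (assuming a recent clearml SDK where Task.force_requirements_env_freeze accepts a requirements_file) would be:
```
from clearml import Task

# Assumption: calling this *before* Task.init() tells the agent to install
# from the given requirements file instead of the auto-detected
# "Installed Packages" section.
Task.force_requirements_env_freeze(force=True, requirements_file="requirements.txt")

task = Task.init(project_name="examples", task_name="pinned-requirements")
```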
The functionality is basically the same as the GCP/AWS ones, but since it is only available in the Scale/Enterprise versions, I don't think there is any external documentation for it.
Hi @<1639799308809146368:profile|TritePigeon86> , where in the documentation did you see these parameters: active_duration, job_started and job_ended?
The webUI uses the API for everything, I'd suggest using the webUI as a reference to how to approach this.
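You can also open the browser dev tools and watch the network calls the webUI makes. From Python, a minimal sketch using the SDK's APIClient (the filter values are just placeholders):
```
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Fetch tasks the same way the webUI does under the hood;
# the filters below are hypothetical placeholders.
tasks = client.tasks.get_all(
    status=["completed"],
    order_by=["-last_update"],
    page_size=10,
)
for t in tasks:
    print(t.id, t.name)
```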
Hi @<1720249421582569472:profile|NonchalantSeaanemone34> , happy to hear you also enjoy using ClearML 😄
You are spot on, ClearML provides the full end-to-end solution for your MLOps needs, meaning you don't need to use DVC, MLrun, MLflow, or many others, as all these capabilities are covered in ClearML and more!
Are you currently looking for any specific capability?
Hi, can you add a log of the run? Also, what version of ClearML Agent are you using?
Hi @<1726772411946242048:profile|CynicalBlackbird36> , what you're looking at is the metrics storage; this refers to all the console outputs, scalars, plots, and debug samples.
This is saved in the backend of ClearML. There is no direct way to pull this but you can technically fetch all of this using the API.
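Something like this, as a rough sketch (the task ID is a placeholder, and I'm assuming your clearml version has get_reported_scalars / get_reported_console_output):
```
from clearml import Task

# Placeholder task ID - replace with the task you want to export.
task = Task.get_task(task_id="abc123")

# All reported scalars, keyed by metric title and series.
scalars = task.get_reported_scalars()

# The most recent console output reports.
console = task.get_reported_console_output(number_of_reports=3)

print(scalars.keys())
print(console)
```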
wdyt?
You should check the status of that container
In the HPO application I see the following explanation:
'Maximum iterations per experiment after which it will be stopped. Iterations are based on the experiments' own reporting (for example, if experiments report every epoch, then iterations=epochs)'
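For context, this is where that limit plugs in; a minimal HyperParameterOptimizer sketch (the base task ID, parameter name, and metric names are all hypothetical):
```
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id="abc123",  # hypothetical template task
    hyper_parameters=[
        UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    # The setting quoted above: stop each experiment once it *reports*
    # this many iterations (epochs, if that is what the task reports).
    max_iteration_per_job=100,
    total_max_jobs=20,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```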
Hi @<1739455989154844672:profile|SmarmyHamster62> , are you sure about the version of ClearML? Can you share the entire log of the Triton container?
AppVersion? Can you share a screenshot of where you see it?
Hi @<1736919317200506880:profile|NastyStarfish19> , the services queue is for running the pipeline controller itself. I guess you are self-hosting the open source version?
Hi @<1736919317200506880:profile|NastyStarfish19> , can you provide an example script that reproduces this behaviour? Also full log of the execution would be useful 🙂
Can you create a standalone script that reproduces this? @<1736194481398484992:profile|MoodySeaurchin62>
I suggest you see this example - None
And see how you can implement an if statement into this sample to basically create 'branches' in the pipeline 🙂
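Something along these lines, as a sketch using the PipelineDecorator interface (step names and the threshold are made up):
```
from clearml import PipelineDecorator


@PipelineDecorator.component(return_values=["accuracy"])
def train_model():
    # Hypothetical training step; returns a metric to branch on.
    return 0.92


@PipelineDecorator.component()
def deploy_model():
    print("deploying model")


@PipelineDecorator.component()
def retrain_with_more_data():
    print("retraining with more data")


@PipelineDecorator.pipeline(name="branching-example", project="examples", version="1.0")
def pipeline_logic():
    accuracy = train_model()
    # The if statement runs in the controller, creating two possible branches.
    if accuracy >= 0.9:
        deploy_model()
    else:
        retrain_with_more_data()


if __name__ == "__main__":
    PipelineDecorator.run_locally()  # run everything locally for debugging
    pipeline_logic()
```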
Should work if you fetch the Task object in the pipeline step itself
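i.e. something like this inside the step function (a sketch):
```
from clearml import Task

def my_step():
    # Inside a pipeline step, this returns the step's own Task object.
    task = Task.current_task()
    task.get_logger().report_text("fetched the Task inside the step")
```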
Hi @<1736194481398484992:profile|MoodySeaurchin62> , how are you currently reporting it? Are you reporting iterations?
Hi ShakyJellyfish91!
If I understand correctly, you wish for the agent to take the latest commit in the repo, while the task was run at a previous commit?
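If so, one option (a sketch, assuming Task.set_script in your SDK version accepts branch/commit) is to clear the pinned commit so the agent checks out the branch head:
```
from clearml import Task

task = Task.get_task(task_id="abc123")  # placeholder ID
# An empty commit should make the agent pull the latest commit on the branch.
task.set_script(branch="main", commit="")
```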
Hi @<1722786138415960064:profile|BitterPuppy92> , I believe pre-defining queues via the helm chart is an Enterprise/Scale license feature only and is not available in the open source version.
@<1722786138415960064:profile|BitterPuppy92> , we are more than happy to accept pull requests into our free open source 🙂
Hi @<1523707653782507520:profile|MelancholyElk85> , I assume you're running remotely?
How are the metrics being reported? Directly via the Logger module, or via automatic logging of some framework? Also, how are iterations reported?
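For reference, explicit reporting via the Logger with an iteration number looks roughly like this:
```
from clearml import Task

task = Task.init(project_name="examples", task_name="manual-scalars")
logger = task.get_logger()

for iteration in range(10):
    loss = 1.0 / (iteration + 1)  # dummy value
    # Explicitly report a scalar with the iteration it belongs to.
    logger.report_scalar(title="loss", series="train", value=loss, iteration=iteration)
```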
If you run inside a repository, then yes
Yep, exactly 🙂
@<1719524641879363584:profile|ThankfulClams64> , if you set auto_connect_streams to false, nothing will be reported from your frameworks. Which frameworks are you working with, TensorBoard?
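For context, the flag goes into Task.init; a sketch, assuming a clearml version that supports auto_connect_streams:
```
from clearml import Task

# With auto_connect_streams=False, stdout/stderr and framework console
# output are not captured; you can also pass a dict for finer control,
# e.g. auto_connect_streams={"stdout": True, "stderr": True, "logging": False}
task = Task.init(
    project_name="examples",
    task_name="no-stream-capture",
    auto_connect_streams=False,
)
```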