If I'm not mistaken, Task.get_last_iteration()
https://clear.ml/docs/latest/docs/references/sdk/task#get_last_iteration
reports the last iteration that was reported. However, something has to report that iteration: you either report it manually yourself in the script, OR have something else like TensorFlow/TensorBoard do the reporting, which ClearML should capture.
Does that make sense?
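To illustrate the manual option, here is a minimal sketch (the helper and all names are hypothetical; running it against a real server assumes `pip install clearml` and configured credentials). `Task.get_last_iteration()` only returns something useful once iterations have actually been reported, either like this or by a captured framework such as TensorBoard:

```python
def report_losses(logger, losses):
    """Report each loss value under an explicit iteration number."""
    for i, loss in enumerate(losses):
        logger.report_scalar(title="loss", series="train", value=loss, iteration=i)
    return len(losses) - 1  # the last iteration that was reported

# Hypothetical usage against a live ClearML server:
#   from clearml import Task
#   task = Task.init(project_name="examples", task_name="manual-reporting")
#   report_losses(task.get_logger(), [0.9, 0.5, 0.3])
#   task.get_last_iteration()  # reflects the reported iterations once flushed
```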
Hi @<1749965229388730368:profile|UnevenDeer21> , I think this is what you're looking for
None
Hi PreciousParrot26,
Why are you running from a GitLab runner? Are you interested in specific action triggers?
@<1546303277010784256:profile|LivelyBadger26> , it is Nathan Belmore's thread just above yours in the community channel 🙂
StaleButterfly40, it looks like there might be a good solution for your request. In the previous link I provided, there is a parameter 'continue_last_task' that should work for you 🙂
Hi @<1524560082761682944:profile|MammothParrot39> , did you make sure to finalize the dataset you're trying to access?
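For reference, a hedged sketch of the create → upload → finalize flow (assumes a configured ClearML server; the project/dataset names and data path are placeholders). It requires a live server, so it is shown as a usage fragment:

```python
from clearml import Dataset

# Placeholder names; requires a configured ClearML server
ds = Dataset.create(dataset_project="examples", dataset_name="my-data")
ds.add_files(path="data/")
ds.upload()
ds.finalize()  # until finalized, Dataset.get() cannot fetch this version
```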
However, now when I go in the Results -> Debug Samples tab, the s3 credential window pops up. Every time that I refresh the page
RattyLouse61, what version of ClearML are you running? I think this issue was solved in the 1.3.0 release
Hey GrievingTurkey78,
Please take a look here: https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
I think what you're looking for is this: Task.init(..., continue_last_task=True)
Just search for this parameter for more info 🙂
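A minimal sketch of that call (the project and task names are placeholders; running it assumes a configured ClearML server):

```python
from clearml import Task

# Resume reporting into the most recent matching task instead of creating
# a new one (placeholder names; requires a configured ClearML server)
task = Task.init(
    project_name="examples",
    task_name="my-experiment",
    continue_last_task=True,
)
```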
Hi @<1612982606469533696:profile|ZealousFlamingo93> , how exactly are you running the autoscaler?
Is this what you're running?
None
Hi @<1523701260895653888:profile|QuaintJellyfish58> , can you please provide a standalone snippet that reproduces this?
from src.net import Classifier
ModuleNotFoundError: No module named 'src'
Hi @<1714088832913117184:profile|MammothSnake38> , not sure I understand. Can you add a screenshot of how you currently play audio files?
Also, how did you register the data?
@<1808672054950498304:profile|ElatedRaven55> , what if you spin up the agent manually on that machine and then push the experiment for execution from there?
Hi @<1603198153677344768:profile|ExasperatedSeaurchin40> , I think this is what you're looking for - None
I think the 3rd one, let me know what worked for you
And if you revert, is everything OK? Log-wise, is there anything suspicious?
The chart already passes the --create-queue command-line option to the agent, which means the agent will create the queue(s) it's passed. The open-source chart simply doesn't let you define multiple queues in detail or provide override pod templates for them; however, it does let you tell the agent to monitor multiple queues.
None
Hi TrickySheep9 , can you be a bit more specific?
What do you mean by verify? How are you currently running your HPO?
By applications I mean the applications (HPO, Autoscalers, ...). Regarding the web UI - it sends API calls as you browse. You can open the dev tools (F12) to see the requests going out (filter by XHR in the Network tab)
Hi @<1835851148938973184:profile|BattySwan0> , are you hosting your own server or are you using app.clear.ml?
Hi @<1544853695869489152:profile|NonchalantOx99> , I think this is the environment variable you're looking for - CLEARML_AGENT_FORCE_SYSTEM_SITE_PACKAGES
None
You can also use agent.package_manager.system_site_packages: true in your clearml.conf
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the agent command that you run to the bootup of your system
Hi @<1852158596431745024:profile|HappyRaccoon38> , what did you do? What steps did you take?
CluelessElephant89 , I've added screenshots. Tell me if those help 🙂
Hi @<1841649351697371136:profile|PerplexedDog0> , I think what you're looking for is impersonation of the agent during execution. This can be achieved via the --use-owner-token flag when running the agent - None
Hi StraightParrot3 , as SuccessfulKoala55 suggested, you could maybe use tags for this as well.
Regarding creating views - if you predefine a certain view locally in your browser (with the extra column), I think you can just copy-paste the URL and it should include the custom column for anyone using that URL
You can disable automatic model logging using auto_connect_frameworks in Task.init()
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
This, however, will also disable automatic reporting of scalars. You can also manually force the upload of the final model with
https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#class-outputmodel
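Putting the two together, a hedged sketch (assumes a configured ClearML server; the project/task names and the model filename are placeholders):

```python
from clearml import Task, OutputModel

# Placeholder names; requires a configured ClearML server
task = Task.init(
    project_name="examples",
    task_name="no-auto-models",
    auto_connect_frameworks=False,  # disables automatic framework capture
)

# ... training happens here, producing e.g. "model.pt" ...

# Manually register and upload the final model
output_model = OutputModel(task=task)
output_model.update_weights(weights_filename="model.pt")
```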