Hi @<1710827340621156352:profile|HungryFrog27> , I'd suggest running the agent with --debug
flag for more information. Can you provide a full log of both the HPO task and one of the children?
Regarding controlling the timeout - I think this is more of a pip configuration (pip has its own timeout setting) than something the agent controls
Hi @<1639799308809146368:profile|TritePigeon86> , if I understand you correctly, you're basically looking for a switch in pipelines (per step) to say "even if step failed, continue the pipeline"?
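If that's the case, here's a minimal sketch of what I mean - it builds the step arguments assuming ClearML's PipelineController.add_step() accepts a continue_on_fail flag (double-check the argument name against your installed clearml version):

```python
# Sketch: kwargs for a pipeline step that should NOT abort the whole
# pipeline when it fails. continue_on_fail is assumed from ClearML's
# PipelineController.add_step() API - verify against your version.
def non_fatal_step(name, base_task_name, base_task_project):
    return dict(
        name=name,
        base_task_project=base_task_project,
        base_task_name=base_task_name,
        continue_on_fail=True,  # even if this step fails, continue the pipeline
    )

# e.g. pipe.add_step(**non_fatal_step("train", "flaky training task", "examples"))
step = non_fatal_step("train", "flaky training task", "examples")
```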
Try this key pair on another machine - it could just be invalid
Hi @<1717350332247314432:profile|WittySeal70> , where are the debug samples stored? Have you recently moved the server?
@<1709740168430227456:profile|HomelyBluewhale47> , how did you set the output_uri?
Hi SillyGoat67 ,
Hmmm. What if you run these in separate experiments and each experiment reports its own result? That way you could use the experiment comparison to see the different results grouped together.
Also, you can report multiple series under the same scalar title so they land on the same graph - you can see something like this:
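A minimal sketch of that, assuming logger is the task's Logger (from task.get_logger()) - the function name and the train/val losses are just made up for the example:

```python
# Sketch: two series ("train" / "val") reported under the same scalar
# title ("loss"), so ClearML draws them on one graph in the UI.
def report_losses(logger, iteration, train_loss, val_loss):
    logger.report_scalar(title="loss", series="train",
                         value=train_loss, iteration=iteration)
    logger.report_scalar(title="loss", series="val",
                         value=val_loss, iteration=iteration)
```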
I think it depends on your code and the pipeline setup. You can also cache steps - avoiding the entire need to worry about artifacts.
Do you mean if they are shared between steps or if each step creates a duplicate?
CheerfulGorilla72 , can you point me to where in the script the scalars are reported?
I think this might be happening because you can't report None with Logger.report_scalar(), so the auto-logging assigns it some sort of value - 0. What is your use case? If the value of the scalar is None, then why log it?
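If you do need to handle it, a small sketch of a guard (the helper name is made up; logger is assumed to be a ClearML Logger):

```python
# Sketch: skip reporting when the value is None, instead of letting
# it show up as a misleading 0 on the graph.
def report_if_set(logger, title, series, value, iteration):
    if value is None:
        return False  # nothing logged
    logger.report_scalar(title=title, series=series,
                         value=value, iteration=iteration)
    return True
```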
Hi, SkinnyPanda43 , which version did you upgrade from, and to which version?
Can you give a bit more info about how you want the pipeline built and where you want to insert/extract the task ID? Also, how is the model related - is it the start of the pipeline?
What's the use case?
If you killed all processes directly, there can't be any workers on that machine. It means that these two workers are running somewhere else...
What is your scenario? Can you elaborate?
Hi,
From the looks of it, it always returns a string. What is your use case for this? Does your code do anything conditional on the parameters' types?
Hi! I think there is an example in the repo:
https://github.com/allegroai/clearml/blob/master/examples/reporting/scatter_hist_confusion_mat_reporting.py
🙂
What is your use case though? I think the point of local/remote is that you can debug in local
That's an interesting question. I think it's possible. Let me check 🙂
Are you running your experiment from PyCharm? Are you using the same PyCharm environment for all your experiments, and you want the task to take packages from your 'agent' environment?
I'm not sure if I'm missing something, but why not use that environment in PyCharm then?
Hi JumpyPig73 ,
It appears that only the AWS autoscaler is in the open version; the other autoscalers are only in the advanced tiers (Pro and onwards):
https://clear.ml/pricing/