TimelyPenguin76 , MammothGoat53 , I think you shouldn't call Task.init() more than once inside a script
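A minimal sketch of what I mean - a single Task.init() at the top of the script, with the returned task object reused everywhere (project/task names here are just placeholders):
```python
from clearml import Task

# Initialize once, at the top of the script
task = Task.init(project_name="my_project", task_name="my_experiment")

# ... the rest of the script reuses `task`; no further Task.init() calls
```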
DepravedSheep68 , do you mean when registering your data?
So just rescaling the graph to see 0-1 is what you're looking for?
Before injecting anything into the instances you need to spin them up somehow. That is done by the running application, using the credentials you provide - so the credentials need to be passed to the AWS application one way or another.
Hi UnevenDolphin73 ,
I think you need to launch multiple instances to use multiple credentials.
Hi @<1623491856241266688:profile|TenseCrab59> , you need to set output_uri=True in Task.init()
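Something like this (project/task names are placeholders):
```python
from clearml import Task

# output_uri=True uploads model snapshots/artifacts to the default files server;
# you can also point it at a specific destination, e.g. "s3://bucket/folder"
task = Task.init(
    project_name="my_project",
    task_name="my_experiment",
    output_uri=True,
)
```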
Hi JitteryCoyote63 , you can get around it using the auto_connect_frameworks parameter in Task.init()
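For example, to switch off automatic logging for a single framework (PyTorch here is just an example, and the names are placeholders):
```python
from clearml import Task

# Pass a dict to disable auto-logging per framework while keeping the rest on
task = Task.init(
    project_name="my_project",
    task_name="my_experiment",
    auto_connect_frameworks={"pytorch": False},
)
```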
Hmmm, I think you would need to change some configuration in the docker-compose to use HTTPS
Check the pre_execute_callback and post_execute_callback arguments of the component.
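Roughly like this - a sketch assuming a PipelineController with a function step (pipeline/step names and the step function are placeholders; double check the callback signatures against the docs):
```python
from clearml import PipelineController

def my_step(x: int = 1):
    # placeholder step body
    return x * 2

def pre_cb(pipeline, node, param_override):
    # runs just before the step is launched; returning False skips the step
    print(f"launching step: {node.name}")
    return True

def post_cb(pipeline, node):
    # runs right after the step completes
    print(f"finished step: {node.name}")

pipe = PipelineController(name="my_pipeline", project="my_project", version="1.0.0")
pipe.add_function_step(
    name="step_one",
    function=my_step,
    pre_execute_callback=pre_cb,
    post_execute_callback=post_cb,
)
pipe.start_locally(run_pipeline_steps_locally=True)
```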
By applications I mean the applications (HPO, Autoscalers, ...). Regarding the web UI - it sends API calls as you browse. You can open the dev tools (F12) to see the requests going out (filter by XHR in the Network tab)
I would suggest googling that error
Can you add a snippet please?
AbruptWorm50 , by 'upload' you mean you're trying to run an optimization app?
How did you add the parameters to the pipeline? Did you refer to this example?
This is also used in automated scenarios, and given possible network issues the built-in retry is a good compromise - it basically makes the SDK resilient to network issues. The error you're getting is a failure to connect, which is unrelated to the credentials...
ElatedChimpanzee91 , hi 🙂
I think you can enlarge the graph to see the entire thing, OR try adding \n in the title - maybe that would work
You can add it manually to the requirements
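For example, with Task.add_requirements (the package name/version are placeholders - note it has to be called before Task.init()):
```python
from clearml import Task

# Must run before Task.init() so the requirement is added to the task's packages
Task.add_requirements("some_package", "1.2.3")
task = Task.init(project_name="my_project", task_name="my_experiment")
```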
Hi @<1570220844972511232:profile|ObnoxiousBluewhale25> , I see that there is also no force flag for the SDK. Maybe open a GitHub feature request to add an option to force-delete, or to allow deleting archived published tasks.
Currently, through code, you can use the API to delete those experiments.
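Something along these lines - a sketch using APIClient, assuming the tasks.delete endpoint accepts a force flag (the task ID is a placeholder; check the API reference for your server version):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# force=True is what allows deleting a published task
client.tasks.delete(task="<task_id>", force=True)
```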
Hi @<1523701099620470784:profile|ElegantCoyote26> , what happens if you define the cache size to be -1?
EcstaticMouse10 , this looks like the most relevant one for you 🙂
Hi @<1523701842515595264:profile|PleasantOwl46> , I think that is what's happening. If the server is down, the code continues running as if nothing happened, and ClearML will simply cache all results and flush them once the server is back up
RotundSquirrel78 , can you please look at what comes back in the 'Network' tab of the dev tools (F12)?
Hyperparameters are connected to the experiment so your config will be right 🙂
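For example (names/values are placeholders):
```python
from clearml import Task

task = Task.init(project_name="my_project", task_name="my_experiment")

config = {"lr": 0.001, "batch_size": 32}
# connect() registers the dict as hyperparameters; when a clone is edited in the
# UI, the edited values are injected back into this dict at runtime
config = task.connect(config)
```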
@<1756488209237282816:profile|IdealCamel64> , to address your questions:
- Yes
- Yes, but as @<1576381444509405184:profile|ManiacalLizard2> said, let your users try and I'm sure they'll prefer ClearML 🙂
SarcasticSparrow10 , please note that during the upgrade you do NOT copy /opt/clearml/data/mongo into /opt/clearml/data/mongo_4 - you create the folder exactly as in the instructions: sudo mkdir /opt/clearml/data/mongo_4
This is the reason it is giving out errors - you've got old mongo data in your mongo_4 folder...
Please follow the instructions to the letter - this should work 🙂
BTW, are you using http://app.clear.ml or a self hosted server?
Also, I'm not sure I understand exactly what you're expecting to get and what you're getting
You deleted the model from the directory you ran the code from, but you didn't delete it from the cache folder?