RotundSquirrel78 , you can go to localhost:8080/version.json
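For example (assuming the default webserver port):
    curl http://localhost:8080/version.json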
You can actually publish models - "published" is a model state in ClearML. You can also use tags as an extra layer of filtering.
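As a rough sketch with the SDK (the file/tag names are just placeholders, and the exact calls may differ slightly between versions):
    from clearml import Task, OutputModel

    task = Task.init(project_name='examples', task_name='train')
    output_model = OutputModel(task=task)
    output_model.update_weights(weights_filename='model.pkl')  # register the weights file
    output_model.publish()               # move the model to the 'published' state
    output_model.tags = ['production']   # tags give you an extra layer of filtering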
What if you specify the repo user/pass in clearml.conf?
I think it removes the user/pass so they wouldn't be shown in the logs
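For reference, the relevant section of clearml.conf looks roughly like this (placeholder values):
    agent {
        git_user: "my-git-user"
        git_pass: "my-git-password-or-token"
    }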
Hi @<1523701083040387072:profile|UnevenDolphin73> , you're using a PRO account, right?
Hi GiganticMole91 ,
Can you please elaborate on what are you trying to do exactly?
Doesn't HyperParameterOptimizer change parameters out of the box?
That's an interesting question. I think it's possible. Let me check 🙂
GrittyKangaroo27 , I see no special reason why not, as long as you set the credentials correctly 🙂
Have you tried?
Are you using a self hosted server or the community server?
How do you currently save artifacts?
@<1554638179657584640:profile|GlamorousParrot83> , can you add also the full log?
You can do it by comparing experiments. What is your use case? I think I might be missing something. Can you please elaborate?
You can set up username & password, it's in the documentation 🙂
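If this is about the web login on a self-hosted server, the configuration is roughly like this (placeholder values - check the docs for the exact file and fields):
    auth {
        fixed_users {
            enabled: true
            users: [
                { username: "jane", password: "12345678", name: "Jane Doe" }
            ]
        }
    }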
Hi @<1634001106403069952:profile|DefeatedMole42> , the Pro plan is billed monthly according to usage. You can find more information here - None
Hi @<1523702932069945344:profile|CheerfulGorilla72> , it's possible. I see the web UI uses queues.move_task_to_front
I suggest using the webUI as a reference together with developer tools 🙂
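For example, a quick sketch using the APIClient (the ids are placeholders):
    from clearml.backend_api.session.client import APIClient

    client = APIClient()
    # same call the web UI makes to push a queued task to the top of its queue
    client.queues.move_task_to_front(queue='<queue_id>', task='<task_id>')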
Hi @<1625666182751195136:profile|MysteriousParrot48> , how did you run the original experiment? Can you add the full log and also the agent configuration?
Ah I see. I'm guessing the UI is summing up the runtimes of the experiments in the project.
I think maybe you're right. Let me double check. I might be confusing it with the previous version
Hi @<1523701949617147904:profile|PricklyRaven28> , I assume this is happening on the same instance? What if you put in a ~20 second sleep before or after the init call - does this behaviour reproduce?
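Something like this, just as a quick test:
    import time
    from clearml import Task

    time.sleep(20)  # wait ~20 seconds before creating the task
    task = Task.init(project_name='examples', task_name='sleep test')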
If Elastic isn't crashing then it should be good. Once I get a confirmation I'll update you 🙂
Hi MoodySheep3 ,
Can you please provide screenshots from the experiment showing what the configuration looks like?
Hi @<1736194481398484992:profile|MoodySeaurchin62> , how are you currently reporting it? Are you reporting iterations?
@Alex Finkelshtein, if the parameters you're using are like this:
    parameters = {
        'float': 2.2,
        'string': 'my string',
    }
Then you can update the parameters as mentioned before:
    parameters = {
        'float': 2.2,
        'string': 'my string',
    }
    parameters = task.connect(parameters)
    parameters['new_param'] = 'this is new'
    parameters['float'] = '9.9'
Please note that parameters['float'] = '9.9' will update that specific parameter. I don't think you can update the parameters en masse...
Hi IcyFish32 , I don't think comparing more than 10 experiments is currently supported. I think this ability will be added soon.
Hi @<1691258549901987840:profile|PoisedDove36> , not sure I understand. Can you please elaborate with screenshots maybe?
DeliciousStarfish67 , are you running your ClearML server on the AWS instance?
GiganticTurtle0 Hi!
Currently the usage graph shows the resource usage of the entire machine, and this isn't configurable. Regarding the refresh button - it currently auto-refreshes the graph every few seconds 🙂
I'll see if there is a way to make it report only the agent usage 🙂
GiganticTurtle0 , just monitor the memory usage system-wide
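For example with psutil (just one way to do it):
    import psutil

    # system-wide memory usage, not just the agent process
    mem = psutil.virtual_memory()
    print(f'used: {mem.percent}% of {mem.total / 1e9:.1f} GB')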
Hi 🙂
A task is the most basic object in the system when it comes to experiments. A pipeline is a bunch of tasks that are controlled by another task 🙂
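As a rough sketch (assuming the step tasks already exist in the project):
    from clearml.automation import PipelineController

    # the controller is itself just another task that orchestrates the steps
    pipe = PipelineController(name='pipeline demo', project='examples', version='1.0.0')
    pipe.add_step(name='step_1', base_task_project='examples', base_task_name='step 1 task')
    pipe.add_step(name='step_2', base_task_project='examples', base_task_name='step 2 task',
                  parents=['step_1'])
    pipe.start_locally(run_pipeline_steps_locally=True)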
AlertCrow40 , by the way, ClearML already has an integrated tool for working in a Jupyter notebook.
With a couple of commands it will open a Jupyter notebook for you to work with. Further reading here: https://clear.ml/docs/latest/docs/apps/clearml_session/
🙂
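Something along these lines (the exact flags may vary by version):
    pip install clearml-session
    clearml-session --queue default --docker nvidia/cuda:11.8.0-runtime-ubuntu22.04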