I guess in theory I could write a run_step.py, similarly to how the pipeline in ClearML works… 🤔 And then use Task.create() etc.?
But to be fair, I've also tried with python3.X -m pip install poetry etc. I get the same error.
Either one would be nice to have. I kinda like the instant search option, but could live with an ENTER to search.
I opened this meanwhile - https://github.com/allegroai/clearml-server/issues/138
Generally, it would also be good if the pop-up presented some hints about what went wrong with fetching the experiments. Here, I know the pattern is incomplete and invalid. A less advanced user might not understand what's up.
The only thing I could think of is that the output of pip freeze would be a URL?
The Task.init is called at a later stage of the process, so I think this relates again to the whole setup process we've been discussing both here and in #340... I promise to try ;)
It could also generate a log file with this method; it doesn't have to output it to the CONSOLE tab.
I... did not, ashamed to admit. The documentation says only boolean values.
Yes, exactly. I have not yet had a chance to try this out -- should it work?
I can't seem to manage the first way around. If I select tasks in different projects, I don't get the bottom bar offering to compare between them.
For example, can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Heh, good one @<1523704157695905792:profile|VivaciousBadger56> 🙂
I was just repeating what @<1523701070390366208:profile|CostlyOstrich36> suggested, credits to him
I realized it might work too, but I'm looking for a more definitive answer 🙂 Has no one attempted this? 🤔
I think now there's the following:
- Resource type = Queue (name), which defines the resource + max instances

And I'm looking for:
- Resource type = a "pool" of resources (type + max instances)
- A pool can be shared among queues
That sounds about right, FrothyDog40.
Is there a way to accomplish this right now FrothyDog40? 🤔
Yes, a lot of moving pieces here as we're trying to migrate to AWS and set up the autoscaler and more 🙂
I will! (Once our infra guy comes back from holiday and updates the install; for some reason they set up server 1.1.1???)
Meanwhile wondering where I got a random worker from
That's probably in the newer ClearML server pages then, I'll have to wait still 🙂
Can I query where the worker is running (IP)?
And agent too, I hope..?
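In case it helps, here's the kind of thing I had in mind: querying the server's REST API directly. The `workers.get_all` endpoint name comes from the ClearML server API, but the basic-auth scheme and the exact response fields (`id`, `ip`) are assumptions to verify against your server version:

```python
import base64
import json
import urllib.request


def get_worker_ips(api_server, access_key, secret_key):
    """Map worker id -> reported IP via the workers.get_all endpoint.

    Auth scheme and response field names are assumptions; check them
    against your ClearML server's API docs."""
    req = urllib.request.Request(
        f"{api_server.rstrip('/')}/workers.get_all",
        data=b"{}",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic "
            + base64.b64encode(f"{access_key}:{secret_key}".encode()).decode(),
        },
    )
    with urllib.request.urlopen(req) as resp:
        workers = json.load(resp)["data"]["workers"]
    return {w.get("id"): w.get("ip") for w in workers}
```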
I'd be happy to join a #releases channel just for these!
Just randomly decided to check and saw there's a server 1.4 ready 🙂
On an unrelated note, when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty
Unfortunately not, each task defines and constructs its own dataset. I want the cloned task to save that link 🤔
Aw you deleted your response fast CostlyOstrich36 xD
Indeed it does not appear in ps aux, so I cannot simply kill it (or at least, find it).
I was wondering if it's maybe just a zombie in the server API or similar
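This is roughly how I've been checking locally; the "clearml" pattern is just a guess at what the agent's command line contains, so adjust it to match yours:

```python
import subprocess


# Cross-check the local process table for any agent-like process.
def find_matching_processes(pattern="clearml"):
    ps = subprocess.run(["ps", "aux"], capture_output=True, text=True)
    return [line for line in ps.stdout.splitlines() if pattern in line]
```

If this comes back empty while the worker still shows up in the UI, that would support the "stale registration on the server side" theory.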
Running a self-hosted server indeed. It's part of a code that simply adds or uploads an artifact 🤔
I wouldn't mind going the requests route if I could find the API endpoint from the SDK?
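FWIW, the SDK also bundles a thin client (clearml.backend_api.session.client.APIClient) that wraps the same REST endpoints. If you'd rather go raw, here's a stdlib-only sketch; the `tasks.add_or_update_artifacts` endpoint name, the payload shape, and the auth scheme are best-effort guesses from the server API and need checking against your server version:

```python
import base64
import json
import urllib.request


def register_artifact(api_server, access_key, secret_key, task_id, name, uri):
    """Register an artifact on a task through the REST API directly.

    Endpoint, payload shape, and auth are assumptions to verify against
    your ClearML server's API reference."""
    body = json.dumps({
        "task": task_id,
        "artifacts": [{"key": name, "type": "custom", "uri": uri}],
    }).encode()
    req = urllib.request.Request(
        f"{api_server.rstrip('/')}/tasks.add_or_update_artifacts",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Basic "
            + base64.b64encode(f"{access_key}:{secret_key}".encode()).decode(),
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```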