Hi @<1716263152817016832:profile|TartHare54> , is the module part of some repository?
Hi @<1736919317200506880:profile|NastyStarfish19> , the default behaviour of the agent is to install everything listed in the 'Installed Packages' section of the execution tab. You can also specify packages manually using Task.set_packages
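As a rough sketch of the Task.set_packages route (the project/task names and package pins below are placeholders, not from the thread):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="pinned packages")

# Override the auto-detected "Installed Packages" with an explicit list:
task.set_packages(["clearml", "numpy>=1.24", "torch==2.1.0"])

# ...or point at a requirements file instead:
# task.set_packages("requirements.txt")
```

When the agent later executes this task remotely, it installs the specified packages instead of the auto-logged environment.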
Hi 🙂
How do you provide the package path to the agent? Also, can you attach the agent's log?
VexedCat68 , can you try accessing it as `192.168.15.118:8080/login` first?
How many tasks in the project do you figure this thing is iterating through?
No need, you can set multiple compute resources per single autoscaler instance
Hi @<1545216070686609408:profile|EnthusiasticCow4> , in the PRO plan you are limited to a certain maximum number of parallel application instances. If you kill some running applications, your HPO application will start running
Did you check permissions?
Hi CrabbyKoala94 ,
Are you running a self hosted server?
Hi @<1552101458927685632:profile|FreshGoldfish34> , the Scale & Enterprise versions do indeed have features beyond what is in the self-hosted version.
You can see a more detailed comparison here, especially if you scroll down.
I suggest reading all of them, starting with pipeline from tasks 🙂
Hi @<1547028031053238272:profile|MassiveGoldfish6> , I think you can disable the automatic logging of Lightning artifacts using the auto_connect_frameworks parameter of the component.
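For reference, a hedged sketch of disabling framework auto-logging via Task.init — note that Lightning logging is handled under the `pytorch` hooks in the SDK versions I'm aware of, so the exact dict key is an assumption and worth checking against your SDK version:

```python
from clearml import Task

# Disable automatic model/artifact logging for the PyTorch/Lightning
# framework hooks, while leaving the other integrations enabled:
task = Task.init(
    project_name="examples",
    task_name="no lightning autolog",
    auto_connect_frameworks={"pytorch": False},
)
```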
@<1587615463670550528:profile|DepravedDolphin12> , how did you create the dataset? Are you doing anything else? Do you have a code snippet that reproduces this behavior? i.e. both for creating the dataset and fetching it.
`--status` - Print the worker's schedule (uptime properties, server's runtime properties and listening queues)
Hi @<1664079296102141952:profile|DangerousStarfish38> , can you add a log of the execution?
SmugTurtle78 , I'll take a look at it shortly 🙂
Hi GorgeousMole24 , I think for this your best option would be using the API to extract this information.
The pythonic usage is:
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
```
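By way of illustration, a sketch of querying tasks through the client (the project ID is a placeholder, and the query fields follow the `tasks.get_all` endpoint — verify them against your server's API version):

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()

# Fetch the ten most recently updated tasks in a given project:
tasks = client.tasks.get_all(
    project=["<project-id>"],    # placeholder project ID
    order_by=["-last_update"],
    page=0,
    page_size=10,
)
for t in tasks:
    print(t.id, t.status)
```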
I think you can simply save the creating task ID as a configuration parameter (I'm just thinking out loud)
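One way to sketch that idea (the names below are illustrative, not from the thread): connect the creating task's ID as a configuration object so it is visible and queryable in the UI:

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="child task")

# Store the ID of the task that spawned this one as a configuration
# parameter, so the lineage can be recovered later:
task.connect({"creating_task_id": "<parent-task-id>"}, name="lineage")
```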
@<1523701295830011904:profile|CluelessFlamingo93> , I'm not sure what you mean. Whenever you run pipeline code (pipeline from decorators), if it's from a repository, that repo will be logged. Where are you importing "train" from? What if you import the entire package & point to the specific module?
VexedCat68 , what errors are you getting? What exactly is not working, the webserver or apiserver? Are you trying to access the server from the machine you set it up on or remotely?
Did you download it to the same folder or to some mounted folder?
Hi HurtWoodpecker30
Did you change clearml version? What version are you using?
I meant that maybe you ran it with a newer version of the SDK
Hi @<1702130048917573632:profile|BlushingHedgehong95> , I would suggest the following few tests:
- Run some mock task that uploads an artifact to the files server. Once done, verify you can download the artifact via the web UI - there should be a link to it. Save that link. Then delete the task and mark to delete all artifacts. Test the link again to verify the artifact is gone
- Please repeat the same with a dataset
Hi UnevenDolphin73 , I think this is analyzed in the code
JitteryCoyote63 Does it happen to you also with 1.1.1?