I think so, it was just missing from the official documentation 🙂 Thanks!
How or why is this the issue? I guess something is getting lost in translation :D
On the local machine, we have all the packages needed. The code gets sent for remote execution, and all the local packages are frozen correctly with pip.
The pipeline controller task is then generated and executed remotely, and it has all the relevant packages.
Each component it launches, however, is missing the internal packages available earlier :(
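In case it helps, here's a minimal sketch of how packages could be declared per step, since each component gets its own fresh environment (the function-step API is assumed; the internal package name is hypothetical):

```python
def preprocess(n: int) -> int:
    """A pipeline step; when launched, it runs in its own environment."""
    return n * 2

def build_pipeline():
    # Imported lazily so the sketch stays importable without clearml installed.
    from clearml import PipelineController

    pipe = PipelineController(name="demo-pipeline", project="demo")  # hypothetical names
    pipe.add_function_step(
        name="preprocess",
        function=preprocess,
        function_kwargs=dict(n=21),
        function_return=["result"],
        # Each component's environment is built separately, so internal
        # packages have to be listed explicitly (hypothetical package name):
        packages=["my-internal-package==1.2.3", "pandas>=1.5"],
    )
    return pipe
```

The packages argument overrides the automatic requirements analysis for that one step, which is why packages frozen for the controller don't carry over on their own.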
See @<1523701087100473344:profile|SuccessfulKoala55>
I can only say I've found ClearML to be very helpful, even given the documentation issue.
I think they've been working on upgrading it for a while; hopefully something new comes out soon.
Maybe @<1523701205467926528:profile|AgitatedDove14> has further info 🙂
Dynamic pipelines in a notebook, so I don't have to recreate a pipeline every time a step is changed 🤔
Oh no no, more like:
1. Create a pipeline
2. Add N steps to it
3. Run the pipeline
4. It fails/succeeds, and the user does something with the output
5. The user would like to add/modify some steps based on the results (after closer inspection)

I wonder: at (5), do I have to recreate the pipeline every time? 🤔
This is related to my other thread, so I'll provide an example there -->
Yeah, I was basically trying to avoid clutter in the Pipelines page. But see my other thread for the background, maybe you have some good input there? 🙂
Yeah I will probably end up archiving them for the time being (or deleting if possible?).
Otherwise (regarding the code question), I think it's better if we continue the original thread, as it has a sample code snippet to illustrate what I'm trying to do.
So basically I'm wondering if it's possible to add some kind of small hierarchy in the artifacts, be it sections, groupings, tabs, folders, whatever.
TimelyPenguin76 that would have been nice but I'd like to upload files as artifacts (rather than parameters).
AgitatedDove14 I mean like a grouping in the artifact. If I add e.g. foo/bar as my artifact name, it will be uploaded as foo/bar.
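To make the behavior concrete, here's a small sketch of uploading under a pseudo-group by embedding the group in the artifact name (a helper I made up for illustration; ClearML just keeps the slash verbatim, there is no real folder hierarchy in the UI today):

```python
def upload_grouped(task, group: str, name: str, obj) -> str:
    """Upload an artifact under a pseudo-group by prefixing the name.

    ClearML stores the slash as part of the artifact name, so "foo/bar"
    shows up literally in the artifacts tab rather than as a folder.
    """
    full_name = f"{group}/{name}"
    task.upload_artifact(name=full_name, artifact_object=obj)
    return full_name

# usage (assuming an initialized task):
#   upload_grouped(task, "foo", "bar", {"some": "data"})
```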
The SDK is fine as it is - I'm more looking at the WebUI at this point
The network is configured correctly 🙂 But the newly spun-up instances need to be set to the same VPC/subnet somehow
SuccessfulKoala55 TimelyPenguin76
After looking into it, I think it's because our AMI does not have Docker, and the default instance type suggested in the ClearML autoscaler example is outdated
I'll have some reports tomorrow I hope TimelyPenguin76 SuccessfulKoala55 !
... and any way to define the VPC is missing too 🤔
From our IT dept:
Not really, when you launch the instance, the launch has to already be in the right VPC/Subnet. Configuration tools are irrelevant here.
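For reference, this is roughly what "already in the right VPC/subnet at launch" means in practice: the subnet (and therefore the VPC) is pinned in the launch request itself. A minimal sketch of the kwargs one could pass to boto3's ec2.run_instances (all IDs here are made up):

```python
def ec2_launch_spec(ami_id: str, subnet_id: str, security_group_ids: list,
                    instance_type: str = "t3.large") -> dict:
    """Build a kwargs dict for boto3 ec2.run_instances().

    The SubnetId fixes the VPC/subnet at launch time; the security groups
    must belong to that same VPC. Configuration tools running later on the
    instance cannot move it.
    """
    return {
        "ImageId": ami_id,
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
        "SubnetId": subnet_id,                  # pins the VPC/subnet
        "SecurityGroupIds": security_group_ids, # must be in the same VPC
    }

# hypothetical IDs:
spec = ec2_launch_spec("ami-0123456789abcdef0", "subnet-0abc1234", ["sg-0abc1234"])
# ec2_client.run_instances(**spec)
```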
Yup, latest version of ClearML SDK, and we're deployed on AWS using K8s helm
No it does not show up. The instance spins up and then does nothing.
FWIW, we prefer to set it in the agent's configuration file; then it's all automatic
Well, you could start by setting output_uri to True in Task.init.
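Spelled out, a tiny sketch of that call, split so the kwargs are visible (project/task names are made up; output_uri can also be a storage URI):

```python
def task_init_kwargs(project_name: str, task_name: str, output_uri=True) -> dict:
    """Kwargs for clearml.Task.init().

    output_uri=True uploads models/artifacts to the default files server;
    a URI such as "s3://bucket/path" can be passed instead.
    """
    return dict(project_name=project_name, task_name=task_name,
                output_uri=output_uri)

# usage (hypothetical names):
#   from clearml import Task
#   task = Task.init(**task_init_kwargs("demo", "training"))
```

The agent-side alternative mentioned above would be setting sdk.development.default_output_uri in clearml.conf, which applies it to every task automatically.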
Heh, good @<1523704157695905792:profile|VivaciousBadger56> 🙂
I was just repeating what @<1523701070390366208:profile|CostlyOstrich36> suggested, credits to him
It should store it on the fileserver, perhaps you're missing a configuration option somewhere?
@<1523704157695905792:profile|VivaciousBadger56> It seems like whatever you pickled in the zip file relies on some additional files that are not pickled.
Also, creating from functions allows dynamic pipeline creation without requiring the tasks to pre-exist in ClearML, which is IMO the strongest point to make about it
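As a rough sketch of that point, here is how a pipeline could be assembled from plain functions and run right away, with no pre-registered tasks (names and the chaining syntax are the function-step style; all identifiers here are made up):

```python
def load_data(path: str) -> list:
    """Step 1: an ordinary function -- no pre-existing ClearML task needed."""
    return [1, 2, 3]

def total(values: list) -> int:
    """Step 2: consumes step 1's output."""
    return sum(values)

def build_and_run():
    # Imported lazily so the sketch stays importable without clearml installed.
    from clearml import PipelineController

    pipe = PipelineController(name="dynamic-demo", project="demo")
    pipe.add_function_step(
        name="load",
        function=load_data,
        function_kwargs=dict(path="data.csv"),
        function_return=["values"],
    )
    pipe.add_function_step(
        name="total",
        function=total,
        function_kwargs=dict(values="${load.values}"),  # wire step outputs
        function_return=["result"],
        parents=["load"],
    )
    # Run everything in the current process -- convenient for notebook iteration:
    pipe.start_locally(run_pipeline_steps_locally=True)
```

Because the steps are just functions, editing one and rebuilding the controller in the notebook is cheap, which is what makes the dynamic workflow practical.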
- in the second scenario, I might have not changed the results of the step, but my refactoring changed the speed considerably and this is something I measure.
- in the third scenario, I might have not changed the results of the step and my refactoring just cleaned the code; besides that, nothing substantial changed. Thus I do not want a rerun.

Well, I would say then that in the second scenario it's just rerunning the pipeline, and in the third it's not running it at all 🙂
(I ...
We just redeployed to use the 1.1.4 version as Jake suggested, so the logs are gone 😞