Hi @<1709740168430227456:profile|HomelyBluewhale47> , dynamic env variables are supported. Please see here - None
in combination with None :port/bucket for --storage ?
Yeah, I missed that you defined the storage with --storage
please try adding the port there as well
What host configuration were you using in your last attempts?
@<1709740168430227456:profile|HomelyBluewhale47> , how did you set the output_uri?
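For context, a minimal sketch of how output_uri is usually passed to Task.init (project/task names and the bucket URI are placeholders, not from this thread; note the port can be part of the URI for self-hosted storage):

```python
def init_with_output_uri(uri="s3://my-bucket:9000/models"):
    # Lazy import: the sketch only needs the clearml package at call time
    from clearml import Task

    # output_uri controls where models/artifacts from this task are uploaded
    return Task.init(project_name="Examples", task_name="demo", output_uri=uri)
```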
Hi @<1612982606469533696:profile|ZealousFlamingo93> , how exactly are you running the autoscaler?
Is this what you're running?
None
Also, how did you register the data?
Hi @<1710827348800049152:profile|ScantChicken68> , I'd suggest first reviewing the onboarding videos on YouTube:
None
None
After that, I'd suggest just adding Task.init() to your existing code to see what gets reported. Once you're familiar with the basics, I'd suggest moving on to the orchestration/pipelines features 🙂
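For reference, that first step might look like this (a minimal sketch; the project and task names are placeholders):

```python
def start_tracking():
    # Requires `pip install clearml` and a configured clearml.conf
    from clearml import Task

    # Task.init registers the run with the server and auto-logs supported
    # frameworks (stdout, matplotlib, TensorFlow/PyTorch, etc.)
    return Task.init(project_name="Examples", task_name="my-first-experiment")
```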
EnviousPanda91 , which framework isn't being logged? Can you provide a small code snippet?
Hi @<1637624975324090368:profile|ElatedBat21> , do you have a code snippet that reproduces this? You can also manually log a model to the system using the OutputModel - None
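A rough sketch of manually registering a model with OutputModel (the weights path, framework, and names are illustrative assumptions):

```python
def log_model_manually(weights_path="model.pt"):
    from clearml import OutputModel, Task  # names from the clearml SDK

    task = Task.init(project_name="Examples", task_name="manual-model-logging")
    output_model = OutputModel(task=task, framework="PyTorch")
    # Register an existing weights file with the model registry
    output_model.update_weights(weights_filename=weights_path)
    return output_model
```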
Hi @<1560073997809356800:profile|RotundPigeon65> , I think this is what you're looking for 🙂
None
If you're running on GCP I think using the autoscaler is a far easier and also cost efficient solution. The autoscaler can spin up and down instances on GCP according to your needs.
Hi, how did you deploy?
Regarding viewing the datasets - Can you give an example? I'm not sure I understand how you'd like to view it
Regarding Publish vs Finalize - I think finalize uploads all the files and prepares the dataset for publishing. Once published, it should be accessible to other parts (tasks) of the system
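My rough understanding of that flow as code (a sketch; the dataset name, project, and folder are placeholders):

```python
def build_dataset(folder="data/"):
    from clearml import Dataset

    ds = Dataset.create(dataset_name="my-data", dataset_project="Examples")
    ds.add_files(folder)  # register local files with this dataset version
    ds.upload()           # upload the file contents to storage
    ds.finalize()         # close the version so it can no longer be modified
    return ds
```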
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , the hotfix should be right around the corner 🙂
Hi @<1533619716533260288:profile|SmallPigeon24> , can you please elaborate on your use case?
Hi JitteryCoyote63 , can I assume you can ssh into the machine directly?
No, but I think it would make sense to actually share reports outside of your workspace, similar to experiments. I'd suggest opening a GitHub feature request
I mean in the execution section of the task - under container section
Regarding project move - Do you move it between subprojects within a project and after F5 you see the experiment again?
Did you see any other errors in the server logs? Is the artifact very large by chance?
SwankySeaurchin41 , I don't think pipelines were mentioned in the video. Are you looking for something specific?
This is unrelated to your routers. There are two things at play here: the configuration of WHERE the data will go - output_uri - and the clearml.conf that you need to set up with credentials. From what you've described, it is being set incorrectly - please follow the documentation.
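To illustrate the two pieces, a hedged sketch of the relevant clearml.conf sections (all values are placeholders):

```
# clearml.conf - credentials live under sdk.aws.s3,
# the default destination under sdk.development
sdk {
    aws {
        s3 {
            key: "ACCESS_KEY"
            secret: "SECRET_KEY"
        }
    }
    development {
        # WHERE outputs go by default (can be overridden per task)
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```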
Hi @<1523701523954012160:profile|ShallowCormorant89> , you can specify which clearml.conf to use via the CLEARML_CONFIG_FILE env var - None
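For example (the path is illustrative):

```shell
# Point the ClearML SDK at an alternate config file
export CLEARML_CONFIG_FILE="$HOME/alt-clearml.conf"
# then run your script as usual, e.g.:
#   python train.py
```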
Hi @<1545216070686609408:profile|EnthusiasticCow4> , I suggest you try ClearML-Serving
None
What cloud provider are you using? I think you should open a github issue to request this behavior 🙂