Hi @<1639799308809146368:profile|TritePigeon86>
Sounds awesome, how can we help?
Hi CooperativeFox72,
From the backend guys, long story short: upgrade your machine => more CPU cores, more processes. It is that easy 🙂
I understand that it uses time in seconds when there is no report being logged... but it has already logged three times.
Hmm, could it be that the reporting started 3 min after the Task started?
SuperiorPanda77 I have to admit, I'm not sure what would cause the slowness only on GCP... (if anything, I would expect the network infrastructure there to be faster)
DeterminedToad86 were you running a Jupyter notebook or a Jupyter console?
This sounds like a Docker build issue on macOS M1:
https://pythonspeed.com/articles/docker-build-problems-mac/
what are user properties?
Think of them as parameters you can add post-execution, which you can also add to the Task table (i.e. as custom columns)
how can I add parameters?
task.set_user_properties([{"name": "backbone", "description": "network type", "value": "great"}])
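A fuller sketch in case it helps (the task id is a placeholder):
from clearml import Task

# this also works post-execution, e.g. on a Task fetched by id
task = Task.get_task(task_id="<task-id>")  # placeholder id
task.set_user_properties([{"name": "backbone", "description": "network type", "value": "great"}])
# the new properties can then be shown as custom columns in the Task table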
JitteryCoyote63 try to add the prefix to the parameter name, e.g. instead of "artifact_name" use "Args/artifact_name"
Before this line, call Task.init
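Something like this, though I'm guessing at the surrounding code, so treat the names as placeholders:
from clearml import Task

task = Task.init(project_name="examples", task_name="my task")  # placeholders; call this before the line above
# connected / argparse parameters end up under the "Args/" section,
# so reference them with the section prefix:
task.set_parameters({"Args/artifact_name": "my_artifact"})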
Is there a way to move existing pipelines between projects?
You should be able to. Go to your settings page and turn on "show hidden folders".
Then go to your project; you should see a ".pipeline" sub-project there. Right-click it and move it to another folder.
If possible, I would like to avoid the fileserver altogether and write everything to S3 (without needing every user to change their config)
There is no current way to "globally" change the default files server (I think this is part of the enterprise version, alongside vault etc.).
What you can do is use an OS environment variable to override the conf file: CLEARML_FILES_HOST="..."
PricklyRaven28 wdyt?
preinstalled in the environment (e.g. nvidia docker). These packages may not be available via pip, so the run will fail.
Okay, that's the part I'm missing: how come in the first run the packages existed, but in the cloned Task they are missing? I'm assuming the agents are configured basically the same (i.e. docker mode with the same network access). What did I miss here?
ZanyPig66 what do you mean by "git integration"? What would be the two ways of calling the function, where one works and the other does not?
I would say 4 vCPUs and 512 GB storage, but it really depends on the load you will put on it
SubstantialElk6 in the "Execution" tab, scroll down and you should have an "Installed Packages" section. What do you have there?
Internally we use blob.upload_from_file, which has a default 60-second timeout on the connection (I'm assuming the upload could take longer).
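If you want to check whether a longer timeout helps, a rough standalone sketch using the google-cloud-storage client directly (bucket and paths are placeholders):
from google.cloud import storage

client = storage.Client()
blob = client.bucket("my-bucket").blob("artifacts/model.bin")  # placeholders
with open("model.bin", "rb") as f:
    blob.upload_from_file(f, timeout=300)  # bump the default 60s connection timeout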
LovelyHamster1 what do you mean by "assume the permissions of a specific IAM Role"?
In order to spin up an EC2 instance (AWS autoscaler) you have to have the correct credentials; to pass those credentials you must create a key/secret pair for the autoscaler. There is no direct support for IAM Roles. Make sense?
Hi JitteryCoyote63
Just making sure: the package itself is installed as part of the "Installed packages", and it also installs a command-line utility?
BitingKangaroo95 nice work 🎊
I think that what did it was changing the sshd_config so that it allows port forwarding, agent forwarding, and X11 forwarding.
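For reference, these are the sshd_config directives I believe are involved (double-check against your distro's defaults):
# /etc/ssh/sshd_config
AllowTcpForwarding yes
AllowAgentForwarding yes
X11Forwarding yes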
But just in case, it might be that there was a pre-existing SSH identifier on your machine, hence the error.
Clearing known_hosts under ~/.ssh is also something I would try 🙂
Hi @<1792364603552829440:profile|TestyBeetle31>
Yeah so sorry we finally changed the repository name:
None
Where is this broken link coming from? We will fix it (we are working on it, and some of the services do not auto-forward).
Yes, looks like it. Is it possible?
Sounds odd...
What's the exact project/task name?
And what is the output_uri?
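i.e. where should the artifacts end up? If it is set in code it looks roughly like this (the bucket is a placeholder):
from clearml import Task

task = Task.init(project_name="examples", task_name="my task", output_uri="s3://my-bucket")  # placeholders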
FileNotFoundError: [Errno 2] No such file or directory: '/home/user/.clearml/cache/storage_manager/datasets/.lock.000.ds_38e9acc8d56441999e806815abddee82.clearml'
Let me check this issue; it seems like the locking mechanism should have figured out that there is no lock...
Hi SpicyCrab51 ,
Hmm, how exactly is the Dataset opened?
If the Dataset object is alive for 30h it will keep the dataset alive; why isn't it being closed?
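Just to make sure we mean the same thing by "opened", the usual pattern is roughly (names are placeholders):
from clearml import Dataset

ds = Dataset.get(dataset_project="my project", dataset_name="my dataset")  # placeholders
local_path = ds.get_local_copy()  # read-only cached copy; the cache lock should be released once this returns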
Thanks for the ping ConvolutedChicken69 , I missed it 😞
From what I see in the docs it's only for Jupyter / VS Code; I didn't see anything about PyCharm
PyCharm is basically SSH, which is supported 🙂
(Maybe we should mention it in the docs?)
I keep getting a "failed getting token" error
MiniatureCrocodile39 what's the server you are using?
PompousHawk82 what do you mean by "but the thing is that i can only use master to log everything"?
BitterStarfish58 I would suspect the upload was corrupted (I think this explains the discrepancy between the file size logged and the actual file size uploaded)