AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_DEFAULT_REGION
so when inside the docker, I don't see the git repo and that's why ClearML doesn't see it
Correct ...
I could map the root folder of the repo into the container, but that would mean everything ends up in there
This is the easiest, you can put it on the ENV variable :
None
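A minimal sketch of the env-variable approach, assuming the goal is to make the AWS credentials visible inside the agent's docker container (the exact `clearml.conf` snippet is illustrative):

```shell
# Export the credentials on the machine running the agent
export AWS_ACCESS_KEY_ID="<your-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret>"
export AWS_DEFAULT_REGION="us-east-1"

# Then pass them through into the container, e.g. via clearml.conf:
#   agent.extra_docker_arguments: ["-e", "AWS_ACCESS_KEY_ID", "-e", "AWS_SECRET_ACCESS_KEY", "-e", "AWS_DEFAULT_REGION"]
```

Passing `-e VAR` with no value tells docker to forward the variable from the host environment, so the secrets never appear in the config file itself.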
BoredGoat1
Hmm, that means it should have worked with Trains as well.
Could you run the attached script, see if it works?
I... did not, ashamed to admit.
UnevenDolphin73 🙂 I actually think you are correct. What you are asking, I "think", is for the low-level logging (for example debug messages that are usually not printed to console) to also be logged, is that correct?
ConvolutedChicken69
basically clearml-data needs to store an immutable copy of the delta changes per version; if the files are already uploaded, there is a good chance they could be modified...
So in order to make sure you have a clean immutable copy, it will always upload the data (notice it also packages everything into a single zip file, so it is easy to manage).
And when running get, the files on the parent dataset will be available as links.
BTW: if you call get_mutable_copy() the files will be copied, so you can work on them directly (if you need)
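A short sketch of the two access modes described above, assuming the clearml package is installed; the dataset ID and target folder are placeholders:

```python
def fetch_dataset_copies(dataset_id):
    """Sketch: read-only vs. mutable local copies of a ClearML dataset.

    `dataset_id` is a placeholder for a real dataset ID.
    """
    from clearml import Dataset

    ds = Dataset.get(dataset_id=dataset_id)

    # Cached, read-only copy: files from parent versions appear as links
    read_only_path = ds.get_local_copy()

    # Full copy you are free to modify in place
    mutable_path = ds.get_mutable_copy(target_folder="./my_dataset_copy")
    return read_only_path, mutable_path
```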
By your description it seems to make no difference whether I added the files via sync or add, since I will have to create a new dataset either way.
Sync is designed to take local folder/s and add/remove files from a dataset based on the local changes (it does that automatically based on file existence / content)
The changes (i.e. added files) are uploaded as delta changes relative to the parent version, this means we are not always uploading all files.
Add on the other hand means you...
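A sketch of the two flows with the clearml-data CLI (project and dataset names, and the file path, are placeholders):

```shell
# "sync" flow: mirror a local folder into a new version
# (added/removed files are detected automatically)
clearml-data sync --project MyProject --name MyDataset --folder ./local_data

# "add" flow: create a version explicitly and add specific files yourself
clearml-data create --project MyProject --name MyDataset
clearml-data add --files ./local_data/new_file.csv
clearml-data close
```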
Correct 🙂
Failing when passing the diff to the git command...
WackyRabbit7 I'll make sure it is fixed
Import Error sounds so out of place it should not be a problem :)
The only workaround I can think of is:
series = series + 'IoU>X'
It doesn't look that bad 🙂
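The workaround is plain string concatenation on the series name; a trivial sketch (the base name and threshold are illustrative):

```python
# Build the series label by appending the threshold text to the base name
base_series = "val_miou"
series = base_series + " IoU>0.5"  # -> "val_miou IoU>0.5"
```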
AgitatedTurtle16 from the screenshot, it seems the Task is stuck in the queue, which means there is no agent running to actually run the interactive session.
Basic setup:
A machine running clearml-agent (this is the "remote machine")
A machine running clearml-session (let's call it laptop 🙂)
You need to first start the agent on the "remote machine" (basically call clearml-agent daemon --docker --queue default). Once the agent is running on the remote machine, from your laptop ru...
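The two steps above as shell commands (the queue name comes from the message; the clearml-session invocation is a minimal sketch):

```shell
# On the remote machine: start an agent serving the "default" queue in docker mode
clearml-agent daemon --docker --queue default

# On your laptop: request an interactive session through that queue
clearml-session --queue default
```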
Check the examples on the github page, I think this is what you are looking for 🙂
https://github.com/allegroai/trains-agent#running-the-trains-agent
DeliciousKoala34 any chance you are using PyCharm 2022 ?
I see, so basically pull a fixed set of configuration for everyone from the server.
Currently only the scale/enterprise version supports such a feature 🙂
Hi BroadSeaturtle49
torchvision!=0.13.0,>=0.8.1
is this what you have in the requirements ?
The clearml-agent is parsing the requested version and tries to match it to the version found/supported by the installed cuda
There is the possibility the combination either does not exist, or for some reason the parsing (i.e. clearml-agent's parsing) fails
can you maybe provide the Task's full log?
is it possible to change an existing model's URL?
Edit the DBs ... That's basically the only way 🙂
Well it seems we forgot that one 🙂 I'll quickly make sure it is there.
As a quick solution (no need to upgrade):
task.models["output"]._models.keys()
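The one-liner above in context, as a sketch (assumes clearml is installed; note that `_models` is a private attribute, so this relies on internal API exactly as the suggested workaround does):

```python
def list_output_model_names(task_id):
    """Sketch: list the output model names of a task via the suggested
    workaround. `task_id` is a placeholder for a real task ID."""
    from clearml import Task

    task = Task.get_task(task_id=task_id)
    # _models is private API; this mirrors the quick fix suggested above
    return list(task.models["output"]._models.keys())
```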
Hi SkinnyPanda43
No idea what the ImageId actually is.
That's the AMI image string that the new EC2 instance will be started with, makes sense?
Hi ShortElephant92
No, this is opt-in, so other than checking for updates once in a while, no traffic at all
PompousParrot44 the fundamental difference is that artifacts are uploaded manually (i.e. a user will specifically "ask" to upload an artifact), models are logged automatically and a user might not want them uploaded (imagine debugging sessions, or testing).
By adding the 'upload_uri' argument, you can specify to trains that you want all models to be automatically uploaded (not just logged).
Now here is the nice thing, when running using the trains-agent, you can have:
Always upload the mod...
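A minimal sketch of enabling automatic model upload; in the Python SDK this is the `output_uri` parameter of `Task.init` (project name, task name, and destination are placeholders):

```python
def init_with_model_upload(project, name, destination):
    """Sketch: ask for all models to be uploaded automatically, not just logged.

    Assumes the clearml (or trains) package is installed; `destination` would
    be something like "s3://bucket/models".
    """
    from clearml import Task

    return Task.init(project_name=project, task_name=name, output_uri=destination)
```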
Hi WickedGoat98
This sounds like a great design (obviously you have scale in mind 🙂). Feel free to ask "stupid" questions, based on what you already wrote I doubt they will be
A few questions that come to mind (probably a few others after):
You mentioned FS synchronization, from where? i.e. what is the single source of truth ? K8s (Rancher 2.0 is basically k8s manager) can take care of mounting volumes, so no need to sync, is this a valid solution ?
BTW : (you can drag and drop an i...
The data I'm syncing comes from a data provider which supports only an FTP connection...
Right ... that makes sense :)
No worries WickedGoat98 , feel free to post questions when they arise. BTW: we are now improving the k8s glue, so by the time you get there the integration will be even easier 🙂
RipeGoose2 you can put it before/after the Task.init, the idea is for you to set it before any of the real training starts.
As for not affecting anything,
Try to add the callback and just have it returning None (which means skip over the model log process) let me know if this one works
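A sketch of the callback approach above, assuming clearml is installed; the callback signature shown here is an assumption based on the SDK's pre-callback hook:

```python
def skip_model_logging():
    """Sketch: register a pre-callback so model logging is skipped.

    WeightsFileHandler.add_pre_callback runs the hook before a model file is
    logged; returning None skips that model, as suggested above.
    """
    from clearml.binding.frameworks import WeightsFileHandler

    def _skip(operation_type, model_info):
        # Returning None tells ClearML to skip logging this model
        return None

    WeightsFileHandler.add_pre_callback(_skip)
```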
Hi @<1542316991337992192:profile|AverageMoth57>
is this a follow up of this thread? None