And where are these login/pass env vars used?
Hi @<1717350332247314432:profile|WittySeal70>, to address your questions:
Yes, but then you need to manually inject those environment variables when running the agent
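Something like this, as a minimal sketch (the variable names are just placeholders for whatever login/pass your code actually reads):
```
import os
import subprocess

# Placeholder names; replace with the variables your task actually reads
agent_env = {
    **os.environ,
    "SERVICE_LOGIN": "user",
    "SERVICE_PASSWORD": "secret",
}

# The agent inherits this environment, so the tasks it executes
# can read the variables via os.environ at runtime
subprocess.run(["clearml-agent", "daemon", "--queue", "default"], env=agent_env)
```
Exporting them in the shell before starting the agent works just as well.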
Hi AbruptHedgehog21, it looks like you need to use `parameters.dataset_id` on the `data_creation` step
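If you're building it with a PipelineController, a minimal sketch could look like this (project/task names and the default value are placeholders):
```
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0")

# Expose dataset_id as a pipeline-level parameter
pipe.add_parameter(name="dataset_id", default="<dataset id>")

# Feed it into the data_creation step via a parameter override
pipe.add_step(
    name="data_creation",
    base_task_project="examples",
    base_task_name="data creation",
    parameter_override={"General/dataset_id": "${pipeline.dataset_id}"},
)

pipe.start()
```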
Moving objects between steps is usually done via the artifacts mechanism. How are you building the pipeline, with decorators?
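For reference, with decorators the object returned by one step is stored as an artifact and handed to the next step automatically, roughly like this:
```
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=["data"])
def create_data():
    # The returned object is uploaded as an artifact behind the scenes
    return {"a": 1, "b": 2}

@PipelineDecorator.component(return_values=["result"])
def process_data(data):
    # The artifact is downloaded and deserialized for us here
    return len(data)

@PipelineDecorator.pipeline(name="example", project="examples", version="1.0")
def pipeline_logic():
    data = create_data()
    print(process_data(data))

if __name__ == "__main__":
    PipelineDecorator.run_locally()  # execute everything in the local process for debugging
    pipeline_logic()
```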
Hi @<1523704157695905792:profile|VivaciousBadger56>, can you add a screenshot of what you're talking about?
SubstantialElk6, either that or the one mounted outside 🙂
Hi @<1709740168430227456:profile|HomelyBluewhale47>, dynamic env variables are supported. Please see here - None
I'm not sure I understand. Can you give a specific example of what you have VS what you'd like it to be?
Are you still having these issues? Did you check if it's maybe a connectivity issue?
Hi @<1523701083040387072:profile|UnevenDolphin73>, does it happen with the latest version? Can you add a snippet that reproduces this?
It really depends on how you want to work. The `--docker` flag will make the agent run in docker mode, allowing it to spin up docker containers to run the jobs inside
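For example, the container a given task runs in can be set from code; if I remember the signature correctly, something like:
```
from clearml import Task

task = Task.init(project_name="examples", task_name="docker mode demo")

# When an agent running in docker mode picks this task up,
# it should spin up this image and execute the task inside it
task.set_base_docker(docker_image="python:3.9-bullseye")
```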
Hi ComfortableShark77, so if I understand correctly you'd like the values of the configurations hidden when viewing in the UI?
Hi @<1523702496097210368:profile|ScantChimpanzee51>, your steps look OK, but the error pretty much indicates a folder-permissions issue. Please navigate manually to the /opt/clearml/data folder and run `ls -al` to check the user and permissions of the `elastic_7` folder, then enter `elastic_7` and check the same for its `nodes` subfolder. If the permissions are correct, try restarting the docker containers and check if that helps.
Hi UnevenDolphin73, can you add the full log here? What version of the agent are you using?
No, it's all together. I suggest getting the onboarding recordings from your colleagues and watching them
I see. You need to run the agent with the `--cpu-only` flag in that case
Hi @<1768447000723853312:profile|RipeSeaanemone60>, can you please provide the full log? Is it the pipeline controller that is getting stuck, or some step?
Hi UnevenDolphin73, maybe JuicyFox94 or SuccessfulKoala55 can assist
Hi @<1706116294329241600:profile|MinuteMouse44>, you need to run in docker mode with the `--docker` flag to be able to inject env variables
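As a sketch, the extra arguments for `docker run` (including `-e` env vars) can be set per task; the variable name here is just a placeholder:
```
from clearml import Task

task = Task.init(project_name="examples", task_name="env injection demo")

# The agent passes these arguments to docker run, so MY_VAR
# ends up inside the container the task executes in
task.set_base_docker(
    docker_image="python:3.9-bullseye",
    docker_arguments="-e MY_VAR=my_value",
)
```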
Hi @<1526734383564722176:profile|BoredBat47>, it should be very easy and I've done it multiple times. For the quickest fix you can use `api.files_server` in `clearml.conf`
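If I remember correctly, the same override can also be done with an environment variable instead of editing the conf file, something like:
```
import os

# Assumption: CLEARML_FILES_HOST takes precedence over api.files_server in clearml.conf
os.environ["CLEARML_FILES_HOST"] = "http://my-clearml-server:8081"

from clearml import Task  # import after setting the env var so it is picked up
task = Task.init(project_name="examples", task_name="files server override")
```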
Is it your own server installation or are you using the SaaS?
Hi @<1717350332247314432:profile|WittySeal70> , are you using a self hosted server or the community?
I think this can give you more information:
https://stackoverflow.com/questions/51279711/what-does-1000-mean-in-chgrp-and-chown
This means it assigns ownership to the first regular Linux user on that machine (UID/GID 1000).
TrickySheep9, what is the use case? If I understand correctly, you want to use ClearML's package detection in a script to get the imports, or do you want all the packages in the environment you're running in?
Also, can you copy the contents of your docker-compose file here?
Hi @<1523707653782507520:profile|MelancholyElk85>, in `clearml.conf`, right under the default S3 credentials, there is a section where you can specify credentials per bucket 🙂
I think something like that exists; it appears to be done in the paid version, called hyper-datasets. The documentation is open to all, apparently 🙂
There aren't any specific functions for this, but all of this information sits on the task object. I suggest running `dir(task)` to see where this attribute is stored
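Something along these lines:
```
from clearml import Task

task = Task.get_task(task_id="<your task id>")  # or Task.current_task() from inside a run

# List everything available on the task object
print(dir(task))

# A lot of the raw backend fields also live under task.data
print(task.data)
```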