WackyRabbit7 I don't believe there is currently a 'children' section for a task. You could try managing the children to access them later.
One option is setting add_pipeline_tags=True
this should mark all the child tasks with a tag of the parent task
What do you mean? You control which version of the SDK you install (unless you just go for the latest)
Do you mean if they are shared between steps or if each step creates a duplicate?
UnevenDolphin73 , Hi!
I would avoid using cache_dir
since it's only a cache. I think using S3 or the fileserver with Task.upload_artifact()
is a nice solution
Also what do you mean by 'augment' arguments?
Can you provide a snippet to try and reproduce?
Also a good read are the environment variables - I think those also allow you some advanced capabilities
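For reference, a few of the commonly used ClearML environment variables look something like this (all values here are placeholders - the exact set you need depends on your setup):

```shell
# server endpoints (placeholders - use your own server addresses)
export CLEARML_API_HOST="https://api.my-clearml-server.com"
export CLEARML_FILES_HOST="https://files.my-clearml-server.com"
# credentials generated from the web UI
export CLEARML_API_ACCESS_KEY="<access-key>"
export CLEARML_API_SECRET_KEY="<secret-key>"
# bump SDK logging verbosity
export CLEARML_LOG_LEVEL="DEBUG"
```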
Hi @<1555362936292118528:profile|AdventurousElephant3> , if you clone/reset the task, you can change the logging level to 'debug'
Sounds like a great feature! Maybe open a github feature request to make it happen 🙂
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , I'm afraid this is the only option for the open source version. In the Scale/Enterprise licenses there are SSO/LDAP integrations
Hi @<1759749707573235712:profile|PungentMouse21> , you should be able to access machine logs from the autoscaler, this should give you a place to search
Hi @<1584716355783888896:profile|CornyHedgehog13> , you can only see a list of files inside a dataset/version. I'm afraid you can't really pull individual files since everything is compressed and chunked. You can download individual chunks.
Regarding the second point - there is nothing out of the box but you can get a list of files in all datasets and then compare if some file exists in others.
Does that make sense?
Hi @<1523701083040387072:profile|UnevenDolphin73> , can you list/show which buttons it affects and how?
You can configure it in ~/clearml.conf
at api.files_server
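As a sketch, the relevant section of ~/clearml.conf would look something like this (the URL is a placeholder for your own fileserver):

```
api {
    # point artifact / debug-sample uploads at your own fileserver
    files_server: "http://my-fileserver:8081"
}
```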
Discussion moved to internal channels
Sounds like some issue with queueing the experiment. Can you provide a log of the pipeline?
Hi OutrageousSheep60 , how did you add external links to a dataset? Can you provide a snippet?
Hi ReassuredArcticwolf33 , what are you trying to do and how is it being done via code?
Hi @<1523701260895653888:profile|QuaintJellyfish58> , can you please elaborate? Vault feature is only part of the Scale/Enterprise licenses
@<1523701181375844352:profile|ExasperatedCrocodile76> , I think you need to set agent.package_manager.system_site_packages: True
In clearml.conf
ClearML doesn't assume that you have all the necessary packages installed so it does need to have some sort of basis for the packages to install
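For example, the relevant clearml.conf section would look roughly like this:

```
agent {
    package_manager {
        # let the agent's virtualenv reuse packages already installed
        # in the system / base environment
        system_site_packages: true
    }
}
```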
Hi @<1524922424720625664:profile|TartLeopard58> , can you elaborate on what you mean by code-server?
Hi TrickySheep9 , can you be a bit more specific?
Hi @<1529995795791613952:profile|NervousRabbit2> , if you're running in docker mode you can easily pass it via the docker_args parameter - for example, you can set env variables with the -e docker arg
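If you want this applied to every container the agent spins up, I believe you can also set extra docker arguments in clearml.conf - something like this (MY_ENV_VAR is just a placeholder):

```
agent {
    # extra args appended to every docker run the agent launches
    extra_docker_arguments: ["-e", "MY_ENV_VAR=some_value"]
}
```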
Hi @<1594863230964994048:profile|DangerousBee35> , using the UI issues API calls, agents listening to a queue send API calls, and the applications also send API calls continuously while running
I think this is what you're looking for - None
You can read up on the caching options in your ~/clearml.conf
You can make virtualenv creation a bit faster
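If I remember correctly, the venvs cache section in ~/clearml.conf looks roughly like this (uncommenting the path enables it - values here are the usual defaults, double-check against your own conf):

```
agent {
    # cache entire virtualenvs so identical requirement sets are reused
    # instead of being rebuilt from scratch
    venvs_cache: {
        max_entries: 10
        free_space_threshold_gb: 2.0
        path: ~/.clearml/venvs-cache
    }
}
```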
I think it depends on your code and the pipeline setup. You can also cache steps - avoiding the entire need to worry about artifacts.
UnevenDolphin73 , that's an interesting case. I'll see if I can reproduce it as well. Also can you please clarify step 4 a bit? Also on step 5 - what is "holding" it from spinning down?
Didn't have a chance to try and reproduce it, will try soon 🙂