Hi @<1724960464275771392:profile|DepravedBee82> , I believe this is not currently supported
Hi @<1663354533759160320:profile|ExhilaratedLizard23> , can you please elaborate on how you built the dataset and how you're consuming it?
Hi @<1745616566117994496:profile|FantasticGorilla16> , under the hood the Google API is being used - None
Regarding getting machines faster, I think that really depends on availability on Google's side 🙂
Hi @<1594863230964994048:profile|DangerousBee35> , did you follow the specific instructions to set _allow_omegaconf_edit_ to True as in the docs?
None
The worker by default checks the backend every 5 seconds for new tasks in the queue. While running a task, I think it basically sends whatever API calls a regular local task sends
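As a rough illustration of that polling pattern (plain Python, not the agent's actual code — fetch_next_task and run_task are hypothetical stand-ins for the backend calls):

```python
import time

def poll_queue(fetch_next_task, run_task, poll_interval=5.0, max_polls=None):
    # Illustrative polling loop: ask the backend for a queued task,
    # run it if one exists, otherwise sleep for the poll interval.
    results = []
    polls = 0
    while max_polls is None or polls < max_polls:
        task = fetch_next_task()
        if task is not None:
            results.append(run_task(task))
        else:
            time.sleep(poll_interval)
        polls += 1
    return results

# Example with a fake in-memory queue
queue = ["task-1", "task-2"]
done = poll_queue(
    fetch_next_task=lambda: queue.pop(0) if queue else None,
    run_task=lambda t: f"ran {t}",
    poll_interval=0.01,
    max_polls=3,
)
print(done)  # ['ran task-1', 'ran task-2']
```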
Hi @<1594863230964994048:profile|DangerousBee35> , using the UI issues API calls, agents listening to a queue send API calls, and the applications also send API calls all the time while running
Hi @<1714451225295982592:profile|FreshWoodpecker88> , is it possible that you didn't set permissions on the relevant directories that act as the actual storage for MongoDB?
Hi @<1546303293918023680:profile|MiniatureRobin9> , if you use pipelines from decorators you can certainly use if statements to decide where/how to go
I suggest you see this example - None
And see how you can add an if statement to this example to basically create 'branches' in the pipeline 🙂
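To make the branching idea concrete, here is a minimal plain-Python sketch — the step names are made up, and in an actual pipeline each step would be a @PipelineDecorator.component, with the same if statement living in the controller function:

```python
# Sketch of branching inside a decorator-based pipeline controller.
# In ClearML these would be @PipelineDecorator.component functions;
# here they are plain functions so the branching logic is easy to see.

def train_model(data):
    # hypothetical training step returning a metric
    return {"accuracy": 0.72}

def retrain_with_more_epochs(data):
    # hypothetical fallback step taken on the 'low accuracy' branch
    return {"accuracy": 0.85}

def publish(result):
    return f"published model with accuracy {result['accuracy']}"

def pipeline_logic(data, threshold=0.8):
    result = train_model(data)
    # the if statement creates a 'branch' in the pipeline
    if result["accuracy"] < threshold:
        result = retrain_with_more_epochs(data)
    return publish(result)

print(pipeline_logic(data=None))  # published model with accuracy 0.85
```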
Hi @<1597762318140182528:profile|EnchantingPenguin77> , you can set this in the docker extra arguments section of the task
Hi MoodyCentipede68 ,
What versions of ClearML & Agent are you using?
I'll take a large snippet too 😛
Do you have any idea what's the source of this? TypeError: __init__() got an unexpected keyword argument 'configurations'
Hi @<1578555761724755968:profile|GrievingKoala83> , I don't think so.
Hi SweetHippopotamus84 , what version of ClearML are you using? Also, do you have a small snippet to play with?
RattyLouse61 , I think you can save the yml conda env file as an artifact; this way it would also be accessible by other tasks 🙂
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , why not just run it as python script.py ?
Yes, you can also manually specify packages using the --packages flag 🙂
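For example, with the clearml-task CLI (project/script/package names here are placeholders):

```shell
clearml-task --project examples --name remote-run --script train.py \
  --packages "torch>=2.0" "tqdm"
```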
Maybe those are internal ports for the docker, since I can see the following for one of the dockers in the ports section: 8008/tcp, 8080-8081/tcp, 0.0.0.0:8085->8085/tcp, :::8085->8085/tcp
Then maybe a process inside the container gets killed and the container will hang? Is this possible?
I'm not sure. Usually if Elastic is unresponsive or not working properly, the API server will have issues starting/working and will print out errors
Looks great, I would suggest having at least 150 GB free when you do the upgrade 🙂
Hi @<1639799308809146368:profile|TritePigeon86> , what is the use case for passing multiple callbacks? Why not simply have it in the same function?
Hi @<1707203455203938304:profile|FoolishRobin23> , not sure I understand. Are you setting something in addition to the services agent in the docker compose?
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , it depends on how you run the agent that is listening to your queue. If you set it to run with multiple gpus you'd be able to utilize multiple gpus in your sessions 🙂
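For instance, to have a single agent expose two GPUs to the tasks it runs (queue name is a placeholder):

```shell
clearml-agent daemon --queue my_queue --gpus 0,1 --docker
```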
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , you can set api.files_server in clearml.conf to point to your S3 bucket
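A minimal sketch of what that looks like in clearml.conf (bucket path is a placeholder):

```
api {
    files_server: "s3://my-bucket/clearml"
}
```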
Is on-prem also K8s? The question is: if you run the code unrelated to ClearML on EKS, do you still get the same issue?
Hmmm, regarding your issue, you can use the following env vars to define your endpoint
https://clear.ml/docs/latest/docs/configs/env_vars/#server-connection
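For reference, the server-connection env vars from that page look like this (the hostnames shown are the hosted clear.ml defaults — replace them with your own server's endpoints):

```shell
export CLEARML_API_HOST="https://api.clear.ml"
export CLEARML_WEB_HOST="https://app.clear.ml"
export CLEARML_FILES_HOST="https://files.clear.ml"
```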
What is your use case? Do you want to change the endpoint mid-run?
Hi @<1730033904972206080:profile|FantasticSeaurchin8> , can you provide a sample script that reproduces this behaviour?