Can you please provide standalone code snippets that reproduce this behavior?
Under add_step there is task_overrides , and there you can find this section:
# reset requirements (the agent will use the "requirements.txt" inside the repo)
task_overrides={'script.requirements.pip': ""}
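For context, a minimal sketch of where that override fits in a pipeline step (the project and task names here are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")

pipe.add_step(
    name="train",
    base_task_project="examples",     # placeholder template-task project
    base_task_name="train template",  # placeholder template-task name
    # reset requirements so the agent falls back to the repo's requirements.txt
    task_overrides={"script.requirements.pip": ""},
)
```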
Is there any specific reason you're not running in docker mode? Running in docker mode would simplify things.
Hi @<1627478122452488192:profile|AdorableDeer85> , can you provide a code snippet that reproduces this?
What happens if you remove the run_locally()?
And just making sure - the first pipeline step isn't even pushed into a queue? It remains in 'draft' mode?
A pipeline is a unique type of task, so it should be detected without issue
MagnificentWorm7 , I'm taking a look to see if it's possible 🙂
As a workaround - I think you could split the dataset into different versions and then use Dataset.squash to merge them into a single dataset:
https://clear.ml/docs/latest/docs/references/sdk/dataset#datasetsquash
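Something like this (the dataset name and IDs are placeholders):
```python
from clearml import Dataset

merged = Dataset.squash(
    dataset_name="my-merged-dataset",                  # placeholder name for the result
    dataset_ids=["<dataset_id_1>", "<dataset_id_2>"],  # the versions to merge
)
print(merged.id)
```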
Hi @<1798162812862730240:profile|PreciousCentipede43> , role-based access controls, admins and resource configurations are all part of the Scale/Enterprise licenses. In the open source version, everyone is set as an admin with access to all parts of the system.
Hope this helps 🙂
Hi @<1523701601770934272:profile|GiganticMole91> , when experiments are deleted, their associated scalars are deleted as well.
I'd check the ES container for logs. Additionally, you can always beef up the machine with more RAM to give Elasticsearch more to work with.
Thanks FiercePenguin76
We will update the roadmap and go into details at the next community talk (in a week from now, I think).
Regarding clearml-serving - yes! We are actively working on it internally, but we would love to get some feedback. I think AnxiousSeal95 would appreciate it 🙂
Hopefully we'll have updates soon
Hi JitteryCoyote63 , I don't believe this is possible. Might want to open a GitHub feature request for this.
I'm curious, what is the use case? Why not set some default Python docker image at the agent level, and then, when you need a specific image, set it in the experiment configuration?
Try spinning up a 1.6.0 server to see if it will work there. BTW, what Python version are you using?
Hi CostlyElephant1 , where is the data stored? On the fileserver, an S3 bucket, or some other solution?
@<1681111528419364864:profile|SmoothGoldfish52> , it will be saved to a cache folder. Take a look at what @<1576381444509405184:profile|ManiacalLizard2> wrote. I think tar files might work already. Give it a test
I assume that ec2-13-217-109-164.compute-1.amazonaws.com is the ec2 instance where the API is running?
Are you using the files server or S3 for storage? Can you verify on the storage itself that the artifacts are actually uploaded and are downloadable?
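One way to verify, assuming you grab the artifact URL from the UI (the URL below is a placeholder):
```python
from clearml import StorageManager

# pull the artifact straight from storage to confirm it is downloadable
local_copy = StorageManager.get_local_copy(remote_url="s3://my-bucket/path/to/artifact")
print(local_copy)
```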
Hi RattyLouse61 🙂
Are these two different users using two sets of different credentials?
VexedCat68 Hi 🙂
Please try with pip install clearml==1.1.4rc0
Hi @<1564785037834981376:profile|FrustratingBee69> , maybe open a pull request for the new feature? 🙂
I see, maybe open a GitHub issue for this to follow up
Hi @<1546303293918023680:profile|MiniatureRobin9> , can you please add the full log of the run? Also, do you have some code that reproduces this?
DefiantLobster38 , please try the following - change verify_certificate to False:
https://github.com/allegroai/clearml/blob/aa4e5ea7454e8f15b99bb2c77c4599fac2373c9d/docs/clearml.conf#L16
Tell me if it helps 🙂
Hi ElegantCoyote26 , I don't think so. I'm pretty sure the AWS AMIs are released for the open source server 🙂
It is returned in queues.get_all. I'd suggest navigating to the webUI and checking what the webUI is sending to the server (it's all API calls) and replicating that in code with the APIClient.
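Something along these lines (a sketch, assuming your clearml.conf credentials are already set up):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# replicate what the webUI does: list the queues and inspect the returned fields
for queue in client.queues.get_all():
    print(queue.id, queue.name)
```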
Hi @<1576381444509405184:profile|ManiacalLizard2> , it feels like something related to server resources or networking, and it's having a hard time retrieving the data from ES. What resources have you allocated for the API server / ES?
@<1523701295830011904:profile|CluelessFlamingo93> , just so I understand - you want to upload a string as the artifact?
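If so, a plain string should work as-is, something like this (project and task names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="string artifact")
# a Python string is a valid artifact_object
task.upload_artifact(name="my_string", artifact_object="hello world")
```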
What do you mean by requirements for the docker? You can set the default docker image in clearml.conf , but you can always specify a different docker image at the Task level that will override this.
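For example (a sketch, assuming a recent clearml version where set_base_docker accepts a docker_image argument; the names are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker override")
# overrides the agent's default docker (from clearml.conf) for this task only
task.set_base_docker(docker_image="python:3.9")
```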
Not sure, let me know what works 🙂
SubstantialElk6 , the data goes directly from S3 (for example) to the client. It never passes through the ClearML server
Hi BroadSeaturtle49 , what versions of clearml-agent & clearml are you using? What OS is this?
Hi DefeatedMoth52 , where have you been using the --find-links flag? When you run the experiment, how does the package show up in the ClearML UI?