Hey, maybe AgitatedDove14 or ExasperatedCrab78 can help
Hi @<1535793988726951936:profile|YummyElephant76> , did you use Task.add_requirements?
None
Hi @<1582179661935284224:profile|AbruptJellyfish92> , how do the histograms look when you're not in comparison mode?
Can you provide a self contained snippet that creates such histograms that reproduce this behavior please?
BTW, are you using http://app.clear.ml or a self hosted server?
I think you can set this code wise as well - https://clear.ml/docs/latest/docs/references/sdk/task#taskforce_requirements_env_freeze
I think AnxiousSeal95 updates us when there is a new version or release 🙂
Hi @<1529271098653282304:profile|WorriedRabbit94> , you can sign up with a new email
What is your use case though? I think the point of local/remote is that you can debug locally
DefiantLobster38 , please try the following - change the verify_certificate setting to False
https://github.com/allegroai/clearml/blob/aa4e5ea7454e8f15b99bb2c77c4599fac2373c9d/docs/clearml.conf#L16
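For reference, the relevant clearml.conf fragment would look something like this (a sketch based on the linked default config):

```
api {
    # Skip SSL certificate verification
    # (only use this for trusted / self-signed server setups)
    verify_certificate: false
}
```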
Tell me if it helps 🙂
EnormousWorm79 , Hi 🙂
What do you mean by dependency structure?
@<1539417873305309184:profile|DangerousMole43> , I think for this specific ability you would need to re-write your pipeline code with pipelines from decorators
Hi @<1689084163396734976:profile|DistinctGoldfish85> , can you please add the full configuration of the HPO app you ran? Also, are you self hosted or using the community server?
Hi @<1664079296102141952:profile|DangerousStarfish38> , can you please add the full log of the task/agent? Also please add the configuration and the line you used to run the agent 🙂
Hi @<1541592227107573760:profile|EnchantingHippopotamus83> , to "clean" a task, you need to reset it. Resetting a task will purge all outputs
Hi @<1523701122311655424:profile|VexedElephant56> , can you please elaborate a bit more on how you set up the server? Is it on top of a VPN? Is there a firewall? Is it a simple docker compose or on top of K8s?
Because I think you need to map the pip cache folder into the docker container
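Something along these lines in the agent's clearml.conf should do it (the paths and exact key here are an assumption on my side - double-check against your clearml.conf reference):

```
agent {
    # Mount the host pip cache into the docker containers the agent spins up,
    # so packages are not re-downloaded on every run
    extra_docker_arguments: ["-v", "/home/user/.cache/pip:/root/.cache/pip"]
}
```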
Where is most of the data concentrated?
You can clone it via the UI, enqueue it to a queue that has a worker running against that queue. You should get a perfect 1:1 reproduction
Regarding this one, there is actually a way. If you work on http://app.clear.ml you can share an experiment for other users to see. However, to see the experiment, people getting the link would need to sign up. Making it completely public and open could also be a pretty cool feature request - maybe open one.
Hi @<1523703107031142400:profile|FlatOctopus65> , can you please elaborate on what exactly happens and when? Do you have a snippet to play with ?
Hi @<1523701842515595264:profile|PleasantOwl46> , you can use users.get_all to fetch them - None
Hi @<1523703397830627328:profile|CrookedMonkey33> , not sure I follow. Can you please elaborate more on the specific use case?
Currently you can add plots to the preview section of a dataset
Hi @<1678574809706926080:profile|ChubbyWhale74> , is it possible that you ran the original experiment on a newer Python version and the agent is running with an older one?
Hi @<1679299603003871232:profile|DefeatedOstrich25> , please add the full log of the run
Hi @<1649221402894536704:profile|AdventurousBee56> , sounds like a good way. You can also bake a clearml.conf into the image 🙂
Hi @<1610083503607648256:profile|DiminutiveToad80> , can you please elaborate on what you mean/want to do?
Hi IcyFish32 , I don't think comparing more than 10 experiments is currently supported. I think this ability will be added soon.