Hi @<1664079296102141952:profile|DangerousStarfish38> , looks like an issue with docker on your machine. Are you able to run that container manually?
@<1638349756755349504:profile|MistakenTurtle88> , I see what you mean. Please open a GitHub feature request for this 🙂
When you want to connect your parameters and other objects, please take a look here:
https://clear.ml/docs/latest/docs/references/sdk/task#connect
You can find a usage example in
https://github.com/allegroai/clearml/blob/master/examples/reporting/hyper_parameters.py
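A minimal sketch of the connect flow (project/parameter names here are just illustrative):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hyper-parameters example")

# connect() logs the dict and lets the UI / agent override values at runtime
params = {"batch_size": 32, "learning_rate": 0.001}
params = task.connect(params)  # the returned dict reflects any overrides
print(params["learning_rate"])
```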
Can you access the model in the UI and see the uri there?
Hi @<1539417873305309184:profile|DangerousMole43> , in that case I think you can simply save the file path as a configuration in the first step and then in the next step you can simply access this file path from the previous step. Makes sense?
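Something along these lines, assuming the steps are ClearML tasks (the parameter name, path and task id are placeholders):
```python
from clearml import Task

# --- first step: store the produced file path as a parameter ---
task = Task.current_task()
task.set_parameter("General/data_file_path", "/mnt/shared/output.csv")

# --- next step: read the path back from the previous step's task ---
prev_task = Task.get_task(task_id="<previous step task id>")
file_path = prev_task.get_parameter("General/data_file_path")
```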
As mentioned, this isn't supported in the current version of clearml-serving, but will be added in the next version that should come out soon
@<1739818374189289472:profile|SourSpider22> , this capability is available only in the HyperDatasets feature which is part of the Scale/Enterprise license. I suggest taking a look here - None
Hi UnevenDolphin73 , maybe JuicyFox94 or SuccessfulKoala55 can assist
EnormousWorm79 , are you working from different browsers / private windows?
Hi @<1523701868901961728:profile|ReassuredTiger98> , how are you currently uploading? You can use the max_workers parameter to use multiple threads
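For example, assuming this is a Dataset upload (names and paths are illustrative):
```python
from clearml import Dataset

# max_workers controls how many upload threads are used
dataset = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
dataset.add_files("/path/to/data")
dataset.upload(max_workers=8)  # 8 parallel upload threads
dataset.finalize()
```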
Hi @<1523702031007617024:profile|GrotesqueDog77> , please refer to the documentation to see all the possibilities you have with the SDK - None (Just scroll down from there)
As a side note, this is the SDK, not the API 🙂
ShallowGoldfish8 , I think the best approach would be storing them as separate datasets per day and then having a "grand" dataset that includes all days, with new days being added as you go.
What do you think?
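Roughly like this, assuming the ClearML Dataset API (names and paths are illustrative):
```python
from clearml import Dataset

# One dataset per day
day_ds = Dataset.create(dataset_name="data-2024-01-02", dataset_project="daily")
day_ds.add_files("/data/2024-01-02")
day_ds.upload()
day_ds.finalize()

# "Grand" dataset that lists the daily datasets as parents;
# extend parent_datasets with every new day's dataset id as you go
grand = Dataset.create(
    dataset_name="all-days",
    dataset_project="daily",
    parent_datasets=[day_ds.id],
)
grand.upload()
grand.finalize()
```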
What is the exact python version you're trying to run on?
Hi @<1623491856241266688:profile|TenseCrab59> , can you elaborate on what you mean by spending this compute on other hyperparams? I think you could, in theory, check whether a previous artifact file exists, and then also change the parameters & task name from within the code
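As a sketch of that idea (task and artifact names here are hypothetical):
```python
import os
from clearml import Task

task = Task.current_task()

# Check whether a previous run already produced the artifact
prev = Task.get_task(project_name="examples", task_name="preprocess step")
if prev and "features" in prev.artifacts:
    local_copy = prev.artifacts["features"].get_local_copy()
    if local_copy and os.path.exists(local_copy):
        # Reuse it instead of recomputing, and adjust this run's name/params from code
        task.set_name("train (reused features)")
        task.set_parameter("General/reuse_features", True)
```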
Can you give an example of how you installed it using --find-links
(or which command used it)
And you use the agent to set up the environment for the experiment to run?
Unless you're running in docker mode, in which case I think the task will continue running inside the container. Might need to check it
@<1529271085315395584:profile|AmusedCat74> , I personally like nvcr.io/nvidia/pytorch:23.03-py3
Hi @<1856869640882360320:profile|TriteCoral46> , you can add custom columns in the webUI and filter/arrange according to them. The webUI uses the API in order to get this data from the apiserver. So you can use the webUI in order to generate whatever filtering you want to have in your code and then implement it via the API/SDK depending on what you want to create.
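For instance, a rough sketch of pulling the same kind of filtered list via the SDK (project name and filter values are illustrative):
```python
from clearml import Task

# Fetch completed tasks from a project, newest first, mirroring a webUI filter
tasks = Task.get_tasks(
    project_name="examples",
    task_filter={
        "status": ["completed"],
        "order_by": ["-last_update"],
    },
)
for t in tasks:
    print(t.id, t.name)
```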
Hi @<1523708920831414272:profile|SuperficialDolphin93> , does it run fine if you use a regular worker?
Hi @<1742355077231808512:profile|DisturbedLizard6> , I think this is what you're looking for:
None
Also, is it an AWS S3 or is it some similar storage solution like Minio?
Hi ElegantCoyote26 , I don't think so. I'm pretty sure the AWS AMIs are released for the open source server 🙂
Hi @<1726047624538099712:profile|WorriedSwan6> , versioning is as incremental as you make it when creating new child versions.
What do you mean by restoring old versions? The versioning assumes you will not be deleting parent versions.
Hi @<1523701304709353472:profile|OddShrimp85> , I assume you're running on top of K8s?
Sounds like an issue with your deployment. Did your Devops deploy this? How was it deployed?
Hi @<1548839979558375424:profile|DelightfulFrog42> , you can use tasks.set_requirements to provide specific packages or a requirements.txt:
None
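From the SDK side, a sketch using the Task.add_requirements helper covers the same need (paths/versions are placeholders); note it has to be called before Task.init():
```python
from clearml import Task

# Point at a requirements.txt file, or pin a single package
Task.add_requirements("/path/to/requirements.txt")
Task.add_requirements("tensorflow", "2.4.0")

task = Task.init(project_name="examples", task_name="custom requirements")
```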
Hi @<1540867420321746944:profile|DespicableSeaturtle77> , what didn't work? What showed up in the experiment? What was logged in the installed packages?