Does the other PC have the package locally there somewhere?
Hi @<1742355077231808512:profile|DisturbedLizard6> , I think you need to select the last/max/min options
Hi @<1600661428556009472:profile|HighCoyote66> , I'm not sure I understand, can you please elaborate on this?
> Upload image from device via the UI button
What do you mean by device?
SoreDragonfly16 You can disable this with the auto_connect_frameworks argument of Task.init(), for example: task = Task.init(..., auto_connect_frameworks={'pytorch': False})
You can refer to this documentation for further reading: https://clear.ml/docs/latest/docs/references/sdk/task#taskinit 🙂
CheerfulGorilla72 , can you please share how the following settings are configured in your ~/clearml.conf?
api.web_server
api.api_server
api.files_server
DepressedChimpanzee34 which section are you referring to, can you provide a screenshot of what you mean?
Can you try with the latest version of the server?
You can add basically whatever you want using clearml-serving metrics add ...
You don't need to do any special actions. Simply run your script from within a repository and ClearML will detect the repo + commit + uncommitted changes
Hi CooperativeOtter46 ,
I think the best way would be to use the API (You can use the SDK but I don't think it is so easy to filter times)
Use the https://clear.ml/docs/latest/docs/references/api/tasks#post-tasksget_all API call to get all the tasks within the time frame & filtering you want, then sum up the run time of all the returned experiments 🙂
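As a rough sketch of the summing part: the helper below works on a list of task records that carry started and completed timestamps (the APIClient usage in the comment is an assumption to verify against your server version, and the demo records are fabricated for illustration).

```python
from datetime import datetime, timedelta

def total_runtime_seconds(tasks):
    """Sum the wall-clock run time of a list of task records.

    Each record is expected to carry `started` and `completed`
    datetime fields; tasks that never started or are still running
    are skipped.
    """
    total = 0.0
    for t in tasks:
        started, completed = t.get("started"), t.get("completed")
        if started and completed:
            total += (completed - started).total_seconds()
    return total

# The real records would come from something like (untested sketch):
#   from clearml.backend_api.session.client import APIClient
#   client = APIClient()
#   tasks = client.tasks.get_all(...)  # add your time-frame filters here

# Small self-contained demo with made-up records:
now = datetime(2024, 1, 1, 12, 0, 0)
demo = [
    {"started": now, "completed": now + timedelta(minutes=30)},
    {"started": now, "completed": now + timedelta(hours=1)},
    {"started": None, "completed": None},  # still running / never started
]
print(total_runtime_seconds(demo))  # 1800 + 3600 = 5400.0 seconds
```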
Is your function taking into account iterations? How are iterations moved along? Do you attempt this scalar report on every iteration or only once in the script?
Hello FuzzyMole65 , what do you see in compare currently?
Can you give a small snippet to reproduce these scalars?
Hi @<1742355077231808512:profile|DisturbedLizard6> , you can use the output_uri parameter of Task.init() to specify where to upload models.
Can you please run ls -la /opt/clearml and send the output + your docker compose file
SoreDragonfly16 Hi, how are you saving/loading those files? You can mute both the save and load messages, but not each one separately.
Also, do you see all these files as input models in the UI?
Maybe AnxiousSeal95 might have some input 🙂
JitteryCoyote63 , Hi 🙂
I'm having a bit of trouble understanding. Can you give a concise example of before VS after?
I'm being silly. You're actually directing it to the file itself to where it resides
Also try with !pip3 install clearml
Can you try with a blank worker_id/worker_name in your clearml.conf (basically how it was before)?
You can force kill the agent using kill -9 <process_id>
but clearml-agent daemon stop should work.
Also, can you verify that one of the daemons is the clearml-services daemon? This one should be running from inside a docker on your server machine (I'm guessing you're self hosting - correct?).
It's not a requirement but I guess it really depends on your setup. Do you see any errors in the docker containers? Specifically the API server
How about trying to use register_artifact?
Hi @<1799612372571131904:profile|HealthyChimpanzee89> , this is certainly doable and is one of the main use cases of ClearML 🙂
Hi, it does not log the internal sklearn configuration, that is correct.
I think the problem is finding a standard way to allow you to both log it and change it later from the UI. But I think you can still connect it directly as a configuration and that should work 🙂
Can you also try specifying the branch/commit?
Hi @<1574931891478335488:profile|DizzyButterfly4> , I think if you have a pandas object pd, then the usage would be something like ds.set_metadata(metadata=pd, metadata_name="my pandas object"). I think you would then reference the entire thing using the metadata_name parameter.
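A minimal sketch of what that could look like (the DataFrame contents are made up, and the ds object is assumed to be a ClearML Dataset obtained in your own code, so the SDK calls are left as comments to verify against your version):

```python
import pandas as pd

# Hypothetical metadata table for illustration
meta = pd.DataFrame({"label": ["cat", "dog"], "count": [10, 12]})

# Sketch of the call discussed above (requires a clearml Dataset instance):
#   ds.set_metadata(metadata=meta, metadata_name="my pandas object")
# and later retrieval by the same name:
#   meta_back = ds.get_metadata("my pandas object")

print(meta.shape)
```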
SmugTurtle78 , regarding the CPU-only mode - how are you running it? Are you using the application in the PRO version or are you running through one of the examples?
Can you please provide a snippet of how the debug images are saved? Also, an example URL would be useful :)
After you store the model in the ClearML server, accessing it later becomes almost trivial 🙂
Is it a self deployed server or the Community?