It appears the ValueError is happening because there is no queue called services
Hi @<1755401041563619328:profile|PungentCow70> , currently you can filter only by tags and by project title / dataset name. But I think it would be a cool capability. Maybe add a GitHub feature request for this?
Hi @<1590514572492541952:profile|ColossalPelican54> , I'm not sure what you mean. output_uri=True will upload the model to the file server, making it more easily accessible. Refining the model would require unrelated code. Can you please expand?
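Something along these lines (project/task names are just placeholders):
```python
from clearml import Task

# output_uri=True -> uploaded models/artifacts go to the ClearML file server
# (it can also be a storage URI such as s3://..., gs://... or azure://...)
task = Task.init(
    project_name="examples",
    task_name="training with output_uri",
    output_uri=True,
)
```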
Hi IrritableJellyfish76 , it looks like you need to create the services queue in the system. You can do it directly through the UI by going to Workers & Queues -> Queues -> New Queue
Hi @<1584716373181861888:profile|ResponsiveSquid49> , what optimization method are you using?
Hi NarrowLion8 , you can simply point the files_server entry to an S3 bucket, e.g. files_server: s3://my_test_bucket
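In clearml.conf that would look roughly like this (the bucket name is just an example):
```
api {
    # ... keep the rest of the section as-is ...
    files_server: "s3://my_test_bucket"
}
```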
TartLeopard58 , I think you need to mount apiserver.conf into the API server container - it is an API server configuration file 🙂
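A rough sketch of what that could look like in the server's docker-compose.yml (paths are the usual defaults, adjust to your setup) - placing apiserver.conf in the mounted config directory makes it visible to the API server:
```yaml
services:
  apiserver:
    volumes:
      # host config dir (put apiserver.conf here) mounted into the container
      - /opt/clearml/config:/opt/clearml/config
```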
Hi @<1543766544847212544:profile|SorePelican79> , can you provide a sample of how this looks? The suggested method is the one shown in the examples.
Did anything change in your configuration? Was there no such issue in the previous version? Is the agent version the only change?
ShinyLobster84 , sorry for the delay, had to look into it 🙂
Please try task.get_reported_scalars()
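Roughly like this (the task ID is a placeholder):
```python
from clearml import Task

# fetch the task and pull all scalars reported to it
task = Task.get_task(task_id="<your_task_id>")
scalars = task.get_reported_scalars()
# nested dict, e.g. {title: {series: {"x": [...], "y": [...]}}}
print(scalars)
```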
Hmmmmm, do you have a specific use case in mind? I think pipelines are created only through the SDK, but I might be wrong
Hi @<1523702251011444736:profile|ScaryBluewhale66> , I think the only port you need is the one that is allocated to the apiserver
Hi @<1661904968040321024:profile|SpotlessOwl43> , you can achieve this using the ClearML REST API.
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_mongo44_migration
Looks like what you might need 🙂
I think the issue is that the message isn't informative enough. I would suggest opening a GitHub issue requesting a better message. Regarding confirming - I'm not sure, but this is the default behavior of Optuna. If you run a random or grid search optimization instead, you won't see those messages.
What do you think?
Hi @<1587977852635058176:profile|FloppyTurtle49> , yes, the same would be applicable. Regarding communication: it is one-way communication from the agent to the ClearML server, done directly against the API server - basically what is defined in clearml.conf
Hope this clears things up
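For reference, this is roughly the part of clearml.conf that defines where the agent connects (values are illustrative):
```
api {
    api_server: https://api.clear.ml
    web_server: https://app.clear.ml
    files_server: https://files.clear.ml
    credentials {
        access_key: "YOUR_ACCESS_KEY"
        secret_key: "YOUR_SECRET_KEY"
    }
}
```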
Not one that I know of. Also, it's good practice to implement (think of automation) 🙂
Hi @<1559711623147425792:profile|PlainPelican41> , you can re-run an existing pipeline using different parameters from the UI. Otherwise, you need to create new pipelines with new code 🙂
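A rough sketch of exposing a pipeline parameter with PipelineController so a new run with different values can be launched from the UI (all names below are illustrative):
```python
from clearml.automation.controller import PipelineController


def process(url):
    # placeholder step logic
    print(f"processing {url}")
    return url


pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")

# shows up as a pipeline parameter in the UI and can be overridden on a new run
pipe.add_parameter(name="dataset_url", default="s3://bucket/data.csv")

pipe.add_function_step(
    name="process_step",
    function=process,
    function_kwargs=dict(url="${pipeline.dataset_url}"),
)

pipe.start_locally(run_pipeline_steps_locally=True)
```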
Hi @<1523701523954012160:profile|ShallowCormorant89> , I think you can simply spin down all the containers and copy everything in /opt/clearml/
Hi @<1845635622748819456:profile|PetiteBat98> , metrics/scalars/console logs are not stored on the files server - they are all stored in Elastic/Mongo. The files server is not required at all, and default_output_uri will point all artifacts to your Azure blob
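Roughly like this in clearml.conf (account/container are placeholders):
```
sdk {
    development {
        # all artifact/model uploads will default to this location
        default_output_uri: "azure://<account>.blob.core.windows.net/<container>"
    }
}
```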
Hi HugeArcticwolf77 ,
Can you please open developer tools (F12) and see what is returned when you try to enqueue?
Hi @<1539417873305309184:profile|DangerousMole43> , I'm afraid this is not configurable currently. What is your use case?
Hi @<1719162252994547712:profile|FloppyLeopard12> , not sure I understand what you're trying to do, can you elaborate step by step?
Did you try to edit clearml.conf on the agent side and add the extra index URL there? https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
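Roughly like this (the index URL is a placeholder):
```
agent {
    package_manager {
        # extra pip repositories the agent will use when installing packages
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}
```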
I understand. In that case you could implement some code to check if the same parameters were used before and then 'switch' to different parameters that haven't been checked yet. I think it's a bit 'hacky' so I would suggest waiting for a fix from Optuna
PunyWoodpecker71 , regarding the REST API:
The format would be something like this: base_url/endpoint
Where base_url would be the api_server as configured in your ~/clearml.conf, and the endpoint is any endpoint you choose from the docs 🙂
Username/password are the access_key / secret_key, also configured in ~/clearml.conf (you can get them from the UI)
content-type is application/json, and of course it's a POST 🙂
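Putting it together, a hedged example (server URL, keys and endpoint are placeholders - tasks.get_all is just one endpoint from the docs):
```python
import requests

api_server = "https://api.clear.ml"   # your api_server from ~/clearml.conf
access_key = "YOUR_ACCESS_KEY"        # credentials from ~/clearml.conf / the UI
secret_key = "YOUR_SECRET_KEY"

# POST to <api_server>/<endpoint>, authenticating with the access/secret key
response = requests.post(
    f"{api_server}/tasks.get_all",
    auth=(access_key, secret_key),
    headers={"Content-Type": "application/json"},
    json={"page_size": 5},
)
print(response.json())
```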
Hi @<1523701260895653888:profile|QuaintJellyfish58> , can you please provide a standalone snippet that reproduces this?