@<1523703961872240640:profile|CrookedWalrus33> , pip install clearml==1.5.3rc1
@<1541592204353474560:profile|GhastlySeaurchin98> , I think this is more related to how Optuna works: it is Optuna that aborts the experiment. I think you would need to modify that behavior for it to run the way you want
Hi ThankfulHedgehong21 ,
What versions of ClearML & ClearML-Agent are you using?
Also, can you provide a small code snippet to play with?
I don't think there is any out-of-the-box method for this. You could extract everything from one workspace using the API and then repopulate the other workspace, also using the API.
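If it helps, here's a rough sketch of the extraction side, assuming credentials for the source workspace are already configured and using a placeholder project ID:
```python
from clearml.backend_api.session.client import APIClient

# assumes clearml.conf / env vars hold credentials for the source workspace
client = APIClient()

# "<source_project_id>" is a placeholder; tasks.get_all filters by project ID
tasks = client.tasks.get_all(project=["<source_project_id>"])
for t in tasks:
    print(t.id, t.name)  # inspect here, then re-create in the target workspace
```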
Hi @<1562973083189383168:profile|GrievingDuck15> , I think you'll need to re-register it
To my knowledge, the SDK uses the key/secret pair to create a token and then uses that token to communicate with the server.
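For illustration, that token exchange could look roughly like this (a sketch, assuming the default API server port; host and credentials are placeholders):
```python
import requests

# exchange the key/secret pair for a session token (all values are placeholders)
resp = requests.post(
    "http://localhost:8008/auth.login",
    auth=("<ACCESS_KEY>", "<SECRET_KEY>"),
)
token = resp.json()["data"]["token"]

# subsequent API calls carry the token instead of the key/secret pair
headers = {"Authorization": f"Bearer {token}"}
```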
But your use case would require some customization, I think
OutrageousSheep60 , it looks like it's not a bug. Internally `x` is stored as an int, however `get_user_properties()` casts it back to a string. You could open a GitHub issue with a feature request for this 🙂
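For reference, a minimal reproduction of that behavior (a sketch; project/task names are placeholders, and I'm assuming the returned property dict exposes the value under a 'value' key):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="user properties demo")
task.set_user_properties(x=42)  # stored internally as an int

props = task.get_user_properties()
print(props["x"]["value"], type(props["x"]["value"]))  # '42' <class 'str'>
```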
Hi @<1544853721739956224:profile|QuizzicalFox36> , from the error you're getting it looks like a permissions issue with the credentials. Check if your credentials have read/write/delete permissions
Hi @<1590152201068613632:profile|StaleLeopard22> , you can simply add the extra index URL to the agent's package manager configuration, like so:
agent.package_manager.extra_index_url=["<extra_index_url>", ...]
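In clearml.conf on the agent machine that would look something like this (the index URL is a placeholder):
```
agent {
    package_manager {
        # extra PyPI index URLs the agent will pass to pip
        extra_index_url: ["https://my.private.pypi/simple"]
    }
}
```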
@<1774245260931633152:profile|GloriousGoldfish63> , you can simply use Task.set_name
I think this is what you're looking for
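For example (a sketch; the task ID and the new name are placeholders):
```python
from clearml import Task

task = Task.get_task(task_id="<task_id>")
task.set_name("a more descriptive name")
```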
Hi RoundMosquito25 , how are you building the pipeline? Is the pipeline controller run locally or on the services queue?
Can you provide a code snippet that makes the agent hang?
Hi MagnificentMosquito84 , is this a self hosted server? What version is it? Do you have visibility into the logs?
then yeah, all data sits in /opt/clearml/data
Hi SourLion48 , what if you try inserting the credentials etc individually? Are you using a self hosted server? Is it behind a proxy by chance?
Hi EnviousPanda91 , I'm not quite sure what you want to extract but you can extract everything from the UI using the API. The docs can be found here: https://clear.ml/docs/latest/docs/references/api/events
And for the best reference - You can open developer tools in the UI and see how the requests are handled there 🙂
You can set the docker image you want to run with using Task.set_base_docker
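A minimal sketch, assuming a recent SDK where set_base_docker accepts a docker_image argument (the image name is a placeholder):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker image demo")
# the agent will use this image when executing the task in docker mode
task.set_base_docker(docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04")
```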
Regarding pipelines, did you happen to play with this example?
The idea is that each step in the pipeline, including the pipeline controller itself, is a task in the system. So you have to choose a queue for the steps and a separate one for the controller. The controller defaults to the 'services' queue, but you can control that as well.
The controller simply runs the logic of the pipeline and requires minimal resources; all the heavy computation happens on the nodes/machines running the steps.
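As a rough sketch of that layout (project, task names, and queues are placeholders):
```python
from clearml import PipelineController

pipe = PipelineController(name="example pipeline", project="examples", version="1.0.0")

# each step is a task that gets cloned and enqueued on its own queue
pipe.add_step(
    name="stage_data",
    base_task_project="examples",
    base_task_name="data preprocessing",
    execution_queue="default",
)

# the controller itself runs its logic on the (lightweight) services queue
pipe.start(queue="services")
```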
The highlighted line is exactly that. Instead of `client.tasks.get_all()`, I think it would be along the lines of `client.debug.ping()`
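Something like this (a sketch; I'm assuming the debug service is exposed on APIClient the same way the other services are):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
print(client.debug.ping())
```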
Hi @<1557537273090674688:profile|ThankfulOx54> , HyperDatasets are part of the Scale & Enterprise licenses.
I might be wrong. Did you try 1.9.1?
Please implement in Python the following command: `curl <HOST_ADDRESS>/v2.14/debug/ping`
Hmmm, maybe you could save it as an env var. There isn't a 'default' server per se, since you can deploy it anywhere yourself. As for checking whether it's alive, you can either ping it with curl
or check the docker status of the server 🙂
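For the Python version, something along these lines (a sketch; I'm assuming the debug/ping endpoint does not require auth, and CLEARML_API_HOST is whatever env var you saved the address in):
```python
import os
import requests

# e.g. CLEARML_API_HOST="http://localhost:8008"
host = os.environ["CLEARML_API_HOST"]
resp = requests.get(f"{host}/v2.14/debug/ping", timeout=5)
print("server is up" if resp.ok else f"server returned {resp.status_code}")
```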
Hi @<1802511466914385920:profile|PerfectSeaurchin36> , on your points:
- What was the body of the API call? You got a 400, so it looks like the body was incorrect
- Not sure what the issue is
Not exactly sure I understand what the issue is, can you please elaborate?
`connected_config = task.connect({})`
Looks like you're connecting an empty config.
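For comparison, connecting an actual configuration would look something like this (parameter names are placeholders):
```python
# reusing the `task` object from the snippet above
params = {"learning_rate": 0.001, "batch_size": 32}
connected_config = task.connect(params)  # values now show up in the task's configuration
```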
If you remove any reference of ClearML from the code on that machine, does it still hang?
Hi @<1806135344731525120:profile|GrumpyDog7> , I would personally go for the init script route. What part didn't work for you?