Hi FierceHamster54, I'm afraid this is currently not possible. Maybe open a GitHub issue to track this 🙂
Hi :)
I found a comparison here:
https://clear.ml/blog/stacking-up-against-the-competition/
As far as I am aware, there is an on-prem enterprise solution
Hi CrookedWalrus33, I think this is what you're looking for:
https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
@<1787653555927126016:profile|SoggyDuck67>, can you try setting the binary to 3.11 instead of 3.10?
StickyCoyote36, I think that is the solution. Is there a reason you want to ignore the "installed packages"? After all, those are the packages the task was run with.
Hi! Hmmm, good question. I think it's asynchronous since most of the uploading processes are usually async. Is there a specific use case you're thinking of?
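If you do need the call to block, something like this sketch should work (project/task names are placeholders; I believe upload_artifact accepts a wait_on_upload flag):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="upload demo")

# Default behavior: the artifact upload happens in the background
task.upload_artifact(name="results", artifact_object={"accuracy": 0.9})

# Force the call to block until the upload completes
task.upload_artifact(name="results_sync", artifact_object={"accuracy": 0.9}, wait_on_upload=True)

# Or flush all pending uploads before the process exits
task.flush(wait_for_uploads=True)
```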
No, no, I mean you need to be logged into your GS account in the same browser you're using to view the webserver.
If it works on two computers and one computer is having problems, then I'd suspect some issue with that computer itself. Maybe permissions or network issues.
I think this is due to Optuna itself - it will prune (kill) experiments it doesn't expect to produce good results.
Hi @<1535793988726951936:profile|YummyElephant76>, did you use Task.add_requirements?
None
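For reference, a minimal sketch of how Task.add_requirements is usually called (the package and path below are just placeholders) - note it has to be called before Task.init:
```python
from clearml import Task

# Must be called before Task.init
Task.add_requirements("scikit-learn", "1.3.2")      # single package, version optional
Task.add_requirements("/path/to/requirements.txt")  # or a full requirements file

task = Task.init(project_name="examples", task_name="requirements demo")
```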
Is output_uri defined for both steps? Just making sure.
They need to switch to your workspace, create credentials in your workspace, and then use those instead of their own. Makes sense?
Hi GentleSwallow91,
- When using Jupyter notebooks it's best to call task.close() - it will have the same effect you're interested in.
- If you would like to upload to the server, you need to add the output_uri parameter to your Task.init() call. You can read more here - https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
You can either set it to True or provide a path to a bucket. The simplest usage would be Task.init(..., output_uri=True)
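A quick sketch of both options (project/task names and the bucket path are placeholders):
```python
from clearml import Task

# Upload outputs (models, artifacts) to the ClearML files server
task = Task.init(project_name="examples", task_name="output_uri demo", output_uri=True)

# ...or point the uploads at your own bucket instead
# task = Task.init(project_name="examples", task_name="output_uri demo",
#                  output_uri="s3://my-bucket/clearml")
```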
Hi GloriousPenguin2 , have you tried the method you mentioned in the previous thread?
Like fetching the task of the scheduler and then changing the configuration json?
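Something along these lines, assuming the schedule is stored as a configuration object on the scheduler task (the task ID, object name and field below are placeholders):
```python
import json
from clearml import Task

# Fetch the scheduler task (ID is a placeholder)
scheduler_task = Task.get_task(task_id="<scheduler_task_id>")

# Read the configuration object - use the name shown in the task's CONFIGURATION tab
config_text = scheduler_task.get_configuration_object(name="schedule")
config = json.loads(config_text)

# Apply whatever change you need and push it back
config["some_field"] = "new_value"
scheduler_task.set_configuration_object(name="schedule", config_text=json.dumps(config))
```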
RotundSquirrel78, you can go to localhost:8080/version.json
Not sure. I think it would require the admin vault to implement something like this via env variables.
You can always instruct the users to add it to their code
DilapidatedDucks58, regarding internal workings:
- MongoDB - all experiment objects are saved there
- Elastic - console logs, debug samples, and scalars are saved there
- Redis - some agent-related things, I think
Hi SparklingElephant70,
Can you please provide a screenshot of the error?
Hi ShallowGoldfish8,
I'm not sure I understand the scenario. Can you please elaborate? In the end the model object is there so you can easily fetch the raw data and track it.
Hi @<1835488771542355968:profile|PerplexedShells66>, you can set that up directly with set_repo - None
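Assuming that's Task.set_repo, a rough sketch would be (repository URL and branch are placeholders):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote repo demo")

# Attach the repository / branch the agent should clone when it runs the task
task.set_repo(repo="https://github.com/my-org/my-repo.git", branch="main")
```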
Hi @<1695969549783928832:profile|ObedientTurkey46>, this capability is only covered in the Hyperdatasets feature. There you can both chunk and query specific metadata.
None
You need to authenticate your communication with the ClearML server initially somehow, no? Otherwise basically anyone can create credentials on your server...
After you have authentication you can create credentials via the terminal.
You can create up to 10 sets of credentials per user, so creating new credentials every time you want to run a job is the wrong approach.
Therefore, you should create the credentials once and then use them as environment variables as you...
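Normally you'd export these in your shell profile or CI secret store, but just to illustrate the idea in Python (hosts and keys below are placeholders):
```python
import os

# Placeholders - in practice these come from your shell / CI secrets, never hard-coded
os.environ["CLEARML_API_HOST"] = "http://localhost:8008"
os.environ["CLEARML_WEB_HOST"] = "http://localhost:8080"
os.environ["CLEARML_FILES_HOST"] = "http://localhost:8081"
os.environ["CLEARML_API_ACCESS_KEY"] = "<access_key>"
os.environ["CLEARML_API_SECRET_KEY"] = "<secret_key>"

from clearml import Task  # import after the environment is set

task = Task.init(project_name="examples", task_name="env credentials demo")
```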
It's totally possible, I think you need to do research on it. There are probably a few ways to do it too. I see CLEARML_API_ACCESS_KEY & CLEARML_API_SECRET_KEY in the docker compose - None
You should do some more digging around. One option is to see how you can generate a key/secret pair and inject them via your script into MongoDB, where the credentials are stored. Another way is to see how the UI ...
I think you would need to contact the sales department for this 🙂
None
Hi @<1623491856241266688:profile|TenseCrab59>, can you elaborate on what you mean by spending this compute on other hyperparameters? I think that, in theory, you could check whether a previous artifact file already exists, and then also change the parameters & task name from within the code.
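For what it's worth, a rough sketch of that idea (all project/task/artifact/parameter names are placeholders, not an official pattern):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="hpo step")

# Look for an earlier task that may already hold the intermediate artifact
previous = Task.get_task(project_name="examples", task_name="previous hpo step")
if previous and "intermediate_result" in previous.artifacts:
    local_path = previous.artifacts["intermediate_result"].get_local_copy()
    # Reuse the cached result and spend the compute elsewhere -
    # e.g. rename the task and tweak its parameters from code
    task.set_name("hpo step (resumed)")
    task.set_parameter("General/learning_rate", 0.001)
```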
Hi @<1797438038670839808:profile|PanickyDolphin50>, can you please elaborate? What is this accelerate functionality?