
BeefyHippopotamus73 are you saying that on a remote machine you cannot set AWS_PROFILE ? or is it that the clearml.conf is missing ? (not sure I follow how / who spins up the remote machine)
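For reference, a minimal sketch of one way to pass the AWS profile through on a machine that has ~/.aws/credentials but no aws section in clearml.conf; the profile name and bucket below are placeholders, and it assumes the standard boto3 credential chain is used under the hood:
```
import os
from clearml import StorageManager

# Assumption: the remote machine has ~/.aws/credentials with a named profile.
# Setting AWS_PROFILE before any S3 access lets boto3 (used by ClearML for S3)
# pick up those credentials even without an aws.s3 section in clearml.conf.
os.environ["AWS_PROFILE"] = "my-remote-profile"  # hypothetical profile name

# Any S3 URL handled by ClearML will now authenticate via that profile
local_copy = StorageManager.get_local_copy(
    remote_url="s3://my-bucket/some/artifact.zip"  # placeholder bucket / key
)
print(local_copy)
```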
Okay that makes sense. best_diabetes_detection is different from your example curl -X POST " None ", notice best_mage_diabetes_detection ?
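For reference, something like this is what the call should look like with the exact endpoint name; just a sketch, the host, port, /serve/<endpoint> route and payload shape are assumptions based on a typical clearml-serving setup, not taken from this thread:
```
import requests

# Assumption: the clearml-serving inference service is reachable on port 8080
# and exposes endpoints under /serve/<endpoint_name>. Note the name must match
# exactly: "best_mage_diabetes_detection", not "best_diabetes_detection".
url = "http://serving-host:8080/serve/best_mage_diabetes_detection"  # placeholder host

# Hypothetical feature payload; the real schema depends on your preprocessing code
payload = {"x": [[0.02, 0.05, 0.06, 0.02, -0.04, -0.02, -0.04, -0.002, 0.01, -0.01]]}

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
print(response.json())
```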
there is no agent listening to the "k8s_scheduler"
There should not be one; this queue is purely "virtual", so users understand the k8s cluster is spinning up their pod (sometimes it takes time, imagine EKS etc.), it is just for visibility
unfortunately I can't get info from the cluster
You should be able to see the pod in the cluster, no?!
What does the Task Info panel say, can you share a screenshot ?
Hi SteadySeagull18
However, it seems to be entirely hanging here in the "Running" state.
Did you set up an agent to listen to the "services" queue ?
Someone needs to run the pipeline logic itself; it is sometimes part of the clearml-server deployment, but not a must
UnevenDolphin73 if you have the time to help fix / make it work it will be greatly appreciated 🙂
Hmm GreasyLeopard35 can you specify the range you are passing to the HPO, as well as the type of optimization class ? (grid/random/optuna etc.)
Are you running it in venv mode or docker mode?
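To make the question concrete, this is roughly the kind of range / optimizer class definition I mean; a sketch only, assuming an Optuna based optimizer and a base task with General/lr and General/batch_size hyperparameters (all ids, names and values are illustrative):
```
from clearml.automation import (
    DiscreteParameterRange,
    HyperParameterOptimizer,
    UniformParameterRange,
)
from clearml.automation.optuna import OptimizerOptuna

# Hypothetical base task id and parameter names, for illustration only
optimizer = HyperParameterOptimizer(
    base_task_id="<base_task_id>",
    hyper_parameters=[
        # Continuous range: lr sampled between 1e-4 and 1e-1
        UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
        # Discrete set: batch size chosen from a fixed list
        DiscreteParameterRange("General/batch_size", values=[16, 32, 64]),
    ],
    objective_metric_title="validation",
    objective_metric_series="accuracy",
    objective_metric_sign="max",
    optimizer_class=OptimizerOptuna,
    execution_queue="default",
    max_number_of_concurrent_tasks=2,
    total_max_jobs=20,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```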
I think the limit is a few GB, I'm not sure, I'll have to check
And yes the oldest experiments will be deleted first (with the exception of published experiments, they will be deleted last)
Do you think such a feature exists in ClearML?
Currently this is "fixed" to iterations (which is actually just an integer monotonic value) or the timestamp.
But I cannot see any reason why we could not allow users to control the x-axis title and be able to set it in code; I'm assuming this is what you have in mind?
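Right now the x-axis is simply whatever you pass as the iteration argument when reporting; a minimal sketch with an explicit Logger call (the project / title / series names are just examples):
```
from clearml import Logger, Task

task = Task.init(project_name="examples", task_name="custom x-axis")  # example names

for step in range(100):
    loss = 1.0 / (step + 1)
    # "iteration" is the integer monotonic value used as the x-axis; you can pass
    # any monotonically increasing integer (e.g. samples seen, epoch number)
    Logger.current_logger().report_scalar(
        title="train", series="loss", value=loss, iteration=step
    )
```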
I get the same "white" image in both TB & ClearML 😞
Hi @<1523706645840924672:profile|VirtuousFish83>
Hmm so generally I think the answer is no... I mean you can download all scalars and re-report them with a different title/series, but I think you will not be able to delete a specific set, and the only way would be to reset the entire Task.
I'm curious what's the scenario here? is it like a typo you want to fix?
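A rough sketch of the download-and-re-report workaround, assuming the reported scalars come back as {title: {series: {"x": [...], "y": [...]}}} (the task id and names are placeholders):
```
from clearml import Logger, Task

source = Task.get_task(task_id="<source_task_id>")  # placeholder id
scalars = source.get_reported_scalars()  # {title: {series: {"x": [...], "y": [...]}}}

target = Task.init(project_name="examples", task_name="re-reported scalars")
logger = Logger.current_logger()

for title, series_dict in scalars.items():
    for series, points in series_dict.items():
        for x, y in zip(points["x"], points["y"]):
            # Re-report each point under a corrected title on the new task
            logger.report_scalar(
                title="fixed_" + title, series=series, value=y, iteration=int(x)
            )
```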
Hi WackyRabbit7
Yes, we definitely need to work on wording there ...
"Dynamic" means you register a pandas object that you are constantly logging into while training, think for example the image files you are feeding into the network. Then Trains will make sure it is constantly updated & uploaded so you have a way to later verify/compare different runs and detect dataset contemplation etc.
"Static" is just, this is my object/file upload and store it as an artifact for me ...
Make sense ?
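A minimal sketch of the two flavours, assuming a pandas DataFrame for the dynamic case (project / artifact names are illustrative):
```
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="artifacts demo")  # example names

# "Dynamic": register a DataFrame that keeps changing during training;
# it is periodically snapshotted and uploaded so runs can later be compared
samples_seen = pd.DataFrame(columns=["image_file", "label"])
task.register_artifact(name="training samples", artifact=samples_seen)
# ... during training, keep adding rows to samples_seen in place ...

# "Static": upload an object/file once and store it as an artifact
task.upload_artifact(name="config snapshot", artifact_object={"lr": 0.01, "epochs": 10})
```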
I guess only if autoscaling is used (one worker one machine)?
yes, basically depending on how you set autoscaling / k8s integration 🙂
link with "localhost" in it Oo
Hmm I think this is the main issue, for some reason the dataset default upload destination is "localhost", what do you have configured in your clearml.conf under files server?
Hi DeliciousKoala34
I am using PyCharm and I have set up the clear-ml plugin, but it still doesn't work.
Did you provide the key/secret to the plugin? I think this is a must for it to actually work
Check out the trains-agent repo https://github.com/allegroai/trains-agent
It is fairly straightforward.
but who exactly executes agent in this case?
with both execute / build commands, you execute it on your machine, for debugging purposes. Make sense ?
MoodyCentipede68 is diagram 2 a batch processing workflow?
Oh I see, this seems like a Triton configuration issue, usually dim -1 means flexible. I can also mention that serving 1.1 should be released later this week with better multiple input support for Triton. Does that make sense?
The function sends a delete request with a raise_on_errors=False flag.
Are you saying we should expose raise_on_errors on the _delete_artifacts() function itself?
If so, sure, seems logical to me, any chance you want to PR it? (please just make sure the default value is still False so we keep backwards compatibility)
wdyt?
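Purely as a hypothetical sketch of what the change could look like (the helper name _send_delete_request is made up; only the raise_on_errors argument and its False default are the point here):
```
# Hypothetical illustration only, not the actual implementation
def _delete_artifacts(self, artifact_names, raise_on_errors=False):
    # ... existing deletion logic, now forwarding the flag to the delete request
    return self._send_delete_request(artifact_names, raise_on_errors=raise_on_errors)
```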
GrievingTurkey78 short answer no 😞
Long answer, the files are stored as differential sets (think changesets from the previous version(s)). The collection of files is then compressed and stored as a single zip. The zip itself can be stored on Google, but on their object storage (GCS), not on GDrive. Notice that the default storage for clearml-data is the clearml-server; that said, you can always mix and match (even between versions).
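For the mix-and-match part, a sketch of pointing a dataset version at Google Cloud Storage instead of the default clearml-server files server (the bucket and names are placeholders, and it assumes GCS credentials are already configured in clearml.conf):
```
from clearml import Dataset

# Create a new dataset version (names are placeholders)
ds = Dataset.create(dataset_name="my_dataset", dataset_project="datasets")
ds.add_files(path="./data/new_files")

# The compressed zip(s) for this version go to GCS instead of the clearml-server
ds.upload(output_url="gs://my-bucket/clearml-datasets")
ds.finalize()
```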
BTW: CloudyHamster42 I think this issue was discussed on GitHub, and the final "verdict" was we should have an option to split/combine graphs on the UI side (i.e. similar to the "smoothing" or wall-time axis etc.)
Hi CluelessElephant89
I'm thinking that different users might want to comment on results of an experiment and stuff. I'm sure these things can be done externally on a GitHub thread attached to the experiment
Interesting! Like a "comment section" on top of a Task ?
Or should it be a project ?
Basically I have this intuition that Task granularity might be too small (I would want to talk about multiple experiments, not a single one?) and a project might be too generic ?
wdyt?
btw: The addr...
SuperiorDucks36 from code ? or UI?
(You can always clone an experiment and change the entire thing, the question is how will you get the data to fill in the experiment, i.e. repo / arguments / configuration etc)
There is a discussion here, I would love to hear another angle.
https://github.com/allegroai/trains/issues/230
This is odd, it says 1.0.0, but then it was updated two weeks ago ...
1.0.1 is only for the clearml python client, no need for a server upgrade (or agent)
This is also set on the command line: --cpu-only, or maybe without any --gpus flag at all
is there a way that I can pull all scalars at once?
I guess you mean from multiple Tasks ? (if so then the answer is no, this is on a per Task basis)
Or, can I get the experiments list and pull the data?
Yes, you can use Task.get_tasks to get a list of task objects, then iterate over them. Would that work for you?
https://clear.ml/docs/latest/docs/references/sdk/task/#taskget_tasks
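Putting the two together, a small sketch that lists a project's tasks and pulls each one's scalars (the project name is a placeholder, and the scalar layout is assumed to be {title: {series: {"x": [...], "y": [...]}}}):
```
from clearml import Task

# Fetch all task objects in a project (placeholder project name)
tasks = Task.get_tasks(project_name="my_project")

for task in tasks:
    scalars = task.get_reported_scalars()
    for title, series_dict in scalars.items():
        for series, points in series_dict.items():
            print(task.name, title, series, len(points["y"]), "points")
```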