fatal: could not read Username for '': terminal prompts disabled
This is the main issue: the agent needs git credentials to clone the repo code containing the pipeline logic (this is the exact same behaviour as pipeline v1 execute_remotely(), which is now the default). Could it be that before, you executed the pipeline logic locally?
WackyRabbit7 could the local/remote pipeline logic apply in your case as well ?
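If that is indeed the issue, one common way around it (a sketch only, assuming a self-hosted agent and that HTTPS cloning with a token is acceptable; the values are placeholders) is to give the agent git credentials in its clearml.conf:
```
# ~/clearml.conf on the agent machine (values are placeholders)
agent {
    # used by the agent when cloning the repository over HTTPS
    git_user: "my-git-user"
    git_pass: "my-git-token"
}
```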
We're lucky that they let the developers see their code...
LOL 😄
and it is also set in the /clearml-agent/.ssh/config and it still can't clone it. So it must be some security issue internally.
Wait, are you using docker mode or venv mode ? In both cases your SSH credentials should be at the default ~/.ssh
FYI: if you need to query stuff you can always go directly to the REST API:
https://github.com/allegroai/clearml/blob/master/clearml/backend_api/services/v2_9/projects.py
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html
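For example, a minimal sketch using the Python APIClient wrapper (the project/task filtering below is just illustrative):
```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
# list all projects visible to these credentials
projects = client.projects.get_all()
# fetch the tasks belonging to the first project
tasks = client.tasks.get_all(project=[projects[0].id])
for t in tasks:
    print(t.id, t.name, t.status)
```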
You’ll just need the user to name them as part of loading them in the code (in case they are loading multiple datasets/models).
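For example (a sketch only; the alias argument exists in recent clearml versions, and all names below are placeholders):
```python
from clearml import Dataset

# give each dataset an explicit name (alias) when loading it, so the task
# records which input is which when several datasets are used together
train_dir = Dataset.get(dataset_name="train-data", alias="train").get_local_copy()
eval_dir = Dataset.get(dataset_name="eval-data", alias="eval").get_local_copy()
```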
Exactly! (and yes UI visualization is coming 🙂 )
You can definitely configure the watchdog to set the timeout to 15 min. It should not have any effect on running processes; they basically send an alive ping every 30 sec.
WickedGoat98 Nice!!!
BTW: The fix should solve both (i.e. no need to manually cast), I'll make sure the fix is on GitHub so you'll be able to verify 🙂
but maybe hyperparam aborts in those cases?
from the hyperparam perspective it will be trying to optimize the global minimum, basically "ignoring" the last value reported. Does that make sense ?
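As a sketch of how that looks on the optimizer side (argument values are placeholders; the "min_global" sign variant is my assumption of how to make the optimizer track the best reported value rather than the last one, so check it against your clearml version):
```python
from clearml.automation import HyperParameterOptimizer, UniformParameterRange

optimizer = HyperParameterOptimizer(
    base_task_id="<template-task-id>",  # placeholder
    hyper_parameters=[UniformParameterRange("General/lr", 1e-4, 1e-1)],
    objective_metric_title="validation",
    objective_metric_series="loss",
    # "min_global" (as opposed to "min") optimizes the global minimum of the
    # reported scalar, i.e. a worse value reported at the very end is ignored
    objective_metric_sign="min_global",
    max_number_of_concurrent_tasks=2,
)
```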
Could it be it checks the root target folder and you do not have permissions there, only on subfolders?
Are you suggesting just taking the read_and_process_file function out of the read_dataset method,
Yes 🙂
As for the second option, you mean create the task in the init method of the NetCDFReader class?
correct
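Something along these lines (a sketch only, reusing the names from the question above; everything else is illustrative):
```python
from clearml import Task

def read_and_process_file(path):
    # stand-alone helper, pulled out of read_dataset so it can be reused
    ...

class NetCDFReader:
    def __init__(self, project_name="examples", task_name="netcdf-reader"):
        # create (or attach to) the ClearML task as soon as the reader is built
        self.task = Task.init(project_name=project_name, task_name=task_name)

    def read_dataset(self, paths):
        return [read_and_process_file(p) for p in paths]
```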
It would be a great idea to make the Task picklable,
Adding that to the next version's to-do list 😉
Thanks DilapidatedDucks58 ! We ❤ suggestions for improvements 🙂
Did you try to print the page using the browser? (I think they can all store it as a PDF these days.) Yes I agree, it would 🙂 we have some thoughts on creating plugins for the system, I think this could be a good use-case. Wait a week or two ;)
Thanks @<1523701713440083968:profile|PanickyMoth78> for pinging, let me check if I can find something in the commit log, I think there was a fix there...
Thanks FrothyShark37
I just verified, this would work as well. I suspect what was missing is the plt.show() call, this is the actual call that triggers clearml
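For reference, a minimal sketch of what triggers the automatic capture (project and plot names are placeholders):
```python
from clearml import Task
import matplotlib.pyplot as plt

task = Task.init(project_name="examples", task_name="matplotlib capture")

plt.plot([1, 2, 3], [4, 5, 6])
plt.title("my plot")
plt.show()  # clearml hooks this call and reports the figure to the task
```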
poetry stores git related data in ... you get an internal package we have with its version, but no git reference, i.e. internal_module==1.2.3 instead of internal_module @ ...
H4dr1en This seems like a bug with poetry (and I think I have run into this one), worth reporting it, no?
No, should be fine... Let me see if I can get a windows box 🙂
No idea, I just remember it is relatively old 😞
Any insight will help, if you can provide the log of the Task that did get stuck, that would be a good start
I want to be able to compare scalars of more than 10 experiments, otherwise there is no strong need yet
Makes sense. In the next version, not the one that will be released next week, but the one after with reports (shhh, don't tell anyone 🙂), they tell me this is solved 🎊
NastyFox63 ask SuccessfulKoala55 tomorrow, I think there is a way to change the default settings even with the current version.
(I.e. increase the default 100 entries limit)
My use case is: when I have a merge request for a model modification, I need to provide several pieces of information for our Quality Management System; one is to show that the experiment is a success and the model has some improvement over the previous iteration.
Sounds likes good approach 🙂
Obviously I don't want the reviewer to see all ...
Maybe publish the experiment and move it to a dedicated folder ? Then even if they see all other experiments, they are under "development" p...
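A sketch of how that could be scripted (method and endpoint names should be checked against your clearml version; the project name and task id are placeholders):
```python
from clearml import Task
from clearml.backend_api.session.client import APIClient

task = Task.get_task(task_id="<experiment-id>")         # the reviewed experiment
task.move_to_project(new_project_name="QMS/approved")   # dedicated "reviewed" folder
APIClient().tasks.publish(task=task.id)                  # publishing locks the experiment
```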
This will allow them to experiment outside of clearml and only switch to it when they are in an OK state. This will also help not to pollute clearml spaces with half-baked ideas
What's the value of running outside of an experiment management context ? Don't you want to log it?
There is no real penalty here, no?!
If you passed the correct path it should work (if it fails it would have failed right at the beginning).
BTW: I think it is clearml-agent --config-file <file here> daemon ...
AstonishingSeaturtle47 yes it does. But I have to ask, how come you have submodules where one has credentials for the master repo but not for the sub ones? Also, it sounds like a good solution would be for the trains-agent to try to pull the submodules and, if it cannot, just print a warning and continue. What do you think?
Hi @<1544853695869489152:profile|NonchalantOx99>
I would assume the clearml-server configuration / access key is misconfigured in your copy of example.env
JitteryCoyote63 this is standard ssh known-host (server key) removal
https://superuser.com/a/30089
specifically you can try: ssh-keygen -R 10.105.1.77