Ohh sorry, you will also need to fix the `_patched_task_function`. The parameter order is important, as the partial call relies on it.
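To illustrate why the order matters: `functools.partial` binds arguments strictly left-to-right, so swapping parameters in the patched function would bind the wrong values. A minimal sketch (the signature and names here are hypothetical, not the actual clearml internals):

```python
from functools import partial

# Hypothetical signature for illustration only; the real
# _patched_task_function lives inside clearml's pipeline code.
def _patched_task_function(task, config, *args, **kwargs):
    return (task, config, args, kwargs)

# partial() fills parameters left-to-right, so if the first two
# parameters were swapped in the definition, "task-123" would land
# in the config slot and vice versa.
bound = partial(_patched_task_function, "task-123", {"lr": 0.1})
result = bound("extra-positional")
print(result)
```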
My bad, no need for that 🙂
If it cannot find the Task ID I'm guessing it is trying to connect to the demo server and not your server (i.e. configuration is missing)
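For reference, a minimal `api` section of `clearml.conf` pointing the SDK at your own server instead of the demo one (all hosts and keys below are placeholders):

```
api {
    web_server: http://<your-server>:8080
    api_server: http://<your-server>:8008
    files_server: http://<your-server>:8081
    credentials {
        "access_key" = "<access-key>"
        "secret_key" = "<secret-key>"
    }
}
```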
ShinyLobster84
`fatal: could not read Username for '': terminal prompts disabled`
This is the main issue: it needs git credentials to clone the repo code containing the pipeline logic (this is the exact same behaviour as pipeline v1 `execute_remotely()`, which is now the default). Could it be that before, you executed the pipeline logic locally?
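One way to give the agent git credentials is the `agent` section of `clearml.conf` (values are placeholders; an SSH key available to the agent works as well):

```
agent {
    # used for HTTPS cloning; for SSH cloning the agent
    # relies on the keys available under ~/.ssh instead
    git_user: "<git-username>"
    git_pass: "<password-or-personal-access-token>"
}
```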
WackyRabbit7 could the local/remote pipeline logic apply in your case as well?
We're lucky that they let the developers see their code...
LOL 😄
and it is also set in the `/clearml-agent/.ssh/config` and it still can't clone it. So it must be some internal security issue.
Wait, are you using docker mode or venv mode? In both cases your SSH credentials should be at the default `~/.ssh`
FYI: if you need to query stuff you can always look directly in the RestAPI:
https://github.com/allegroai/clearml/blob/master/clearml/backend_api/services/v2_9/projects.py
https://allegro.ai/clearml/docs/rst/references/clearml_api_ref/index.html
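If you prefer raw HTTP over the Python bindings, the same services are exposed as REST endpoints; a hedged sketch (placeholders throughout, and the auth flow is worth double-checking against your server version):

```shell
# 1) exchange your access/secret key for a token
curl -s -u "<access-key>:<secret-key>" http://<api-server>:8008/auth.login

# 2) call a service endpoint with that token, e.g. list projects
curl -s -H "Authorization: Bearer <token>" \
     -H "Content-Type: application/json" \
     -d '{}' http://<api-server>:8008/projects.get_all
```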
You'll just need the user to name them as part of loading them in the code (in case they are loading multiple datasets/models).
Exactly! (and yes, UI visualization is coming 🙂)
You can definitely configure the watchdog to set the timeout to 15 min; it should not have any effect on running processes, as they basically send an alive ping every 30 sec.
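If you host the server yourself, the watchdog threshold can be tuned in the server-side configuration. The key names below are my assumption of the tasks-service override; double-check them against your clearml-server version. 15 min would be:

```
non_responsive_tasks_watchdog {
    # mark a task as aborted if no alive ping was received for this long
    threshold_sec: 900
    watch_interval_sec: 900
}
```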
WickedGoat98 Nice!!!
BTW: The fix should solve both (i.e. no need to manually cast). I'll make sure the fix is on GitHub so you'll be able to verify 🙂
but maybe hyperparam aborts in those cases?
From the hyperparam perspective it will be trying to optimize the global minimum, basically "ignoring" the last value reported. Does that make sense?
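A toy sketch of that behaviour (not the actual optimizer code, just the ranking idea): a trial is ranked by the best value seen so far, so a worse final report before an abort does not change its ranking.

```python
# Toy illustration: the optimizer ranks a trial by its best (minimum)
# reported objective, effectively ignoring a worse last report.
reported_values = [0.9, 0.5, 0.3, 0.7]  # last report is worse than the best

best_seen = min(reported_values)   # what the optimizer ranks by
last_seen = reported_values[-1]    # what gets reported right before an abort

print(best_seen, last_seen)
```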
Could it be it checks the root target folder and you do not have permissions there, only on subfolders?
Are you suggesting just taking the `read_and_process_file` function out of the `read_dataset` method?
Yes 🙂
As for the second option, you mean create the task in the `init` method of the NetCDFReader class?
correct
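A hypothetical sketch of the refactor discussed above; the names (`read_and_process_file`, `NetCDFReader`) come from the conversation, the bodies are stand-ins for the real NetCDF logic:

```python
# Standalone function: easy to pickle and to schedule as its own step.
def read_and_process_file(path):
    # stand-in for the real NetCDF parsing/processing logic
    return {"path": path, "processed": True}

class NetCDFReader:
    def __init__(self, paths):
        self.paths = paths

    def read_dataset(self):
        # the method now only delegates to the free function
        return [read_and_process_file(p) for p in self.paths]

records = NetCDFReader(["a.nc", "b.nc"]).read_dataset()
print(records)
```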
It would be a great idea to make the Task picklable.
Adding that to the next version's to-do list 🙂
Thanks DilapidatedDucks58! We ❤ suggestions for improvements 🙂
Did you try to print the page using the browser? (I think they can all store it as a PDF these days.) Yes, I agree, it would 🙂. We have some thoughts on creating plugins for the system; I think this could be a good use-case. Wait a week or two ;)
Thanks @<1523701713440083968:profile|PanickyMoth78> for pinging, let me check if I can find something in the commit log, I think there was a fix there...
Thanks FrothyShark37
I just verified: this would work as well. I suspect what was missing is the `plt.show` call; this is the actual call that triggers clearml.
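For context, a minimal sketch of the matplotlib side (clearml bits left out so it runs standalone; in a real script `Task.init()` would run first and its matplotlib binding intercepts `plt.show()` to upload the figure):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

# Without the plt.show() call the figure is never handed over,
# which matches the behaviour described above.
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0.9, 0.5, 0.3], label="loss")
ax.legend()
plt.show()  # the call clearml hooks to capture the plot
```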
`poetry` stores git related data in ... you get an internal package we have with its version, but no git reference, i.e. `internal_module==1.2.3` instead of `internal_module @ <git-url>`.
H4dr1en This seems like a bug with poetry (and I think I have run into this one), worth reporting it, no?
No, should be fine... Let me see if I can get a Windows box 🙂
I want to be able to compare scalars of more than 10 experiments, otherwise there is no strong need yet
Makes sense. In the next version, not the one that will be released next week but the one after, with reports (shhh, don't tell anyone 🙂), they tell me this is solved 🙂
NastyFox63 ask SuccessfulKoala55 tomorrow, I think there is a way to change the default settings even with the current version.
(I.e. increase the default 100 entries limit)
My use case: when I have a merge request for a model modification, I need to provide several pieces of information for our Quality Management System. One is to show that the experiment is a success and the model has some improvement over the previous iteration.
Sounds like a good approach 🙂
Obviously I don't want the reviewer to see all ...
Maybe publish the experiment and move it to a dedicated folder? Then even if they see all other experiments, they are under "development" p...
This will allow them to experiment outside of clearml and only switch to it when they are in an OK state. This will also help not to pollute clearml spaces with half-baked ideas.
What's the value of running outside of an experiment management context? Don't you want to log it?
There is no real penalty here, no?!
If you passed the correct path it should work (if it fails it would have failed right at the beginning).
BTW: I think it is `clearml-agent --config-file <file here> daemon ...`
AstonishingSeaturtle47 yes it does. But I have to ask: how come you have submodules for which one has credentials for the master repo but not the sub ones? Also, it sounds like a good solution would be for the trains-agent to try to pull the submodules and, if it cannot, just print a warning and continue. What do you think?
Hi @<1544853695869489152:profile|NonchalantOx99>
I would assume the clearml-server configuration / access key is misconfigured in your copy of `example.env`