StorageManager
Oh, it has no remove 🙂 StorageHelper.delete is the only way
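For reference, a minimal sketch of deleting a remote object through StorageHelper (the URI is made up, and this assumes delete() takes the same URI passed to get()):

from clearml.storage.helper import StorageHelper

# illustrative URI; point it at the object you want to remove
uri = 's3://my-bucket/models/old_model.pkl'
helper = StorageHelper.get(uri)
helper.delete(uri)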
Hi MelancholyElk85
I think you are right, OutputModel is missing a remove method.
Maybe we should have a class method on Model, something like:

@classmethod
def remove(cls, model: Union[str, Model], delete_weights_file: bool, force: bool):
    # actually remove model and weights file
    ...

wdyt?
Hi GrittyKangaroo27
Is it possible to import user-defined modules when wrapping tasks/steps with functions and decorators?
Sure, any package (local included) can be imported, and it will be automatically listed in the "installed packages" section of the pipeline component Task.
(This of course assumes that on a remote machine you could do "pip install <package>")
Make sense?
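For illustration, a minimal sketch of a decorated pipeline step importing a package inside the component (project/step names are made up):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['n_rows'])
def count_rows(csv_path: str):
    # imports inside the component are analyzed and listed
    # under the component Task's "installed packages"
    import pandas as pd
    return len(pd.read_csv(csv_path))

@PipelineDecorator.pipeline(name='demo pipeline', project='examples', version='0.1')
def run(csv_path: str = 'data.csv'):
    print(count_rows(csv_path))

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # debug locally; remove to run on agents
    run()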
GreasyPenguin14 what's the clearml version you are using, and which OS & Python?
Notice this happens on the "connect_configuration" that seems to be called after the Task was closed, could that be the case?
DilapidatedDucks58 by default if you continue the execution, it will automatically continue reporting from the last iteration. I think this is what you are seeing
Hmm I suspect the 'set_initial_iteration' does not change/store the state on the Task, so when it is launched, the value is not overwritten. Could you maybe open a GitHub issue on it?
Lol, :)
I think the issue is that you do not need to manually set the initial iteration, it's supposed to get it, as it is stored on the Task itself
🙂 DilapidatedDucks58 how exactly are you "relaunching/continuing" the execution? And what exactly are you setting?
I think we should open a GitHub Issue and get some more feedback; maybe we should just add support on the backend side?
Thank you DilapidatedDucks58 for the ping!
totally slipped my mind 🙂
sorry that I keep bothering you, I love ClearML and try to promote it whenever I can, but this thing is a real pain in the ass
No worries I totally feel you.
As a quick hack in the actual code of the Task itself, is it reasonable to have:

task = Task.init(...)
task.set_initial_iteration(0)
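Putting the continuation pattern together, a minimal sketch (assuming the run is resumed with continue_last_task; project/task names are made up):

from clearml import Task

# resume the previously executed task instead of creating a new one
task = Task.init(project_name='examples', task_name='training',
                 continue_last_task=True)
# force reporting to restart from iteration 0 instead of the stored last iteration
task.set_initial_iteration(0)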
I call Task.init after I import tensorflow (and thus tensorboard?)
That should have worked...
Can you manually add a TB report before calling the opennmt function?
(I want to verify the Task.init is indeed catching the TB calls; my theory is that somewhere inside opennmt we lose the TB)
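For example, a minimal sanity check (project/task names and log dir are made up); if the scalar shows up under the Task's Scalars tab, the TB binding is active:

from clearml import Task
import tensorflow as tf

task = Task.init(project_name='examples', task_name='tb-binding-check')

# emit a single TensorBoard scalar before calling opennmt;
# ClearML's TB hook should pick it up automatically
writer = tf.summary.create_file_writer('./tb_logs')
with writer.as_default():
    tf.summary.scalar('sanity/check', 1.0, step=0)
writer.flush()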
Have a grid view (e.g. 3 plots per line instead of just one)
1. Yes, the plots are resizable: move the cursor to the separating line and drag 🙂
2. Check the group-by section; they can be split per series (like in TB)
I see, that means xarray is not an actual package but a folder added to the python path.
This explains why Task.add_requirements fails, as it is supposed to add python packages to the equivalent of "requirements.txt" ...
Is the folder part of the git repository? How would you pass it to the remote machine the clearml-agent is running on?
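For reference, a minimal sketch of the intended add_requirements usage with an installable package (project/task names are made up):

from clearml import Task

# must be called before Task.init; adds a pip-installable package to the
# Task's "installed packages" (the equivalent of requirements.txt)
Task.add_requirements('xarray')
task = Task.init(project_name='examples', task_name='requirements-demo')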
Hi GiganticTurtle0
The problem is that the packages that I define in 'required_packages' are not in the scripts corresponding
What do you mean by that? Is "Xarray" a wheel package? Is it installable from a git repo (example: pip install git+http://github.com/user/xarray/axrray.git)?
GiganticTurtle0, let me add some background. The idea is that at some point you had your code running on your machine (when developing it, for example).
When you actually executed the code in development, you called 'task.init' (to track the development process, for example). This Task.init call did the analysis of the code and python package dependencies and stored it on the Task. Then when you clone the Task, it already lists all the python packages your code directly imports (see "In...
I "think" you are referring to the venvs cash, correct?
If so, then you have to set it in the clearml.conf running on the host (agent) machine, make sense ?
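For reference, a sketch of the relevant section in clearml.conf on the agent machine (values are illustrative; check your clearml.conf template for the exact keys):

agent {
    venvs_cache: {
        # maximum number of cached venvs
        max_entries: 10
        # minimum free space to allow for cache operation
        free_space_threshold_gb: 2.0
        # setting the path enables the venvs cache
        path: ~/.clearml/venvs-cache
    }
}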
Hmm what do you mean? Isn't it under installed packages?
'config.pbtxt' could not be inferred. please provide specific config.pbtxt definition.
This basically means there is no configuration on how to serve the model, i.e. the size/type of the input and output layers.
You can either store the configuration on the creating Task, like is done here:
https://github.com/allegroai/clearml-serving/blob/b5f5d72046f878bd09505606ca1147d93a5df069/examples/keras/keras_mnist.py#L51
Or you can provide it as a standalone file when registering the mo...
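For context, config.pbtxt is the Triton model configuration; an illustrative sketch for an MNIST-style model (model name, layer names, dims and types are all made up):

name: "keras_mnist"
platform: "tensorflow_savedmodel"
input [
  {
    name: "dense_input"
    data_type: TYPE_FP32
    dims: [-1, 784]
  }
]
output [
  {
    name: "activation_2"
    data_type: TYPE_FP32
    dims: [-1, 10]
  }
]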
Hi ShallowArcticwolf27
Does the clearml-task cli command currently support remote repositories that are intended to be used with ssh?
It does 🙂
but the git@ prefix used for gitlab's ssh seems to default to looking for the repository locally
git@ is always the prefix for SSH repositories (it does not actually mean it is used; it's what git will return when asked for the origin of the repository). The agent knows (if SSH credentials ...
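As an aside, the agent can be forced to use SSH for all git URLs via clearml.conf on the agent machine; a minimal sketch (this key exists in the standard clearml.conf template):

agent {
    # rewrite https git urls to ssh (git@) urls when cloning
    force_git_ssh_protocol: true
}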
Hi SkinnyPanda43
Are you trying to access the same Task or an external one ?
No worries, I would love for us to come up with a nice solution 🙂
I can't think of any hack that will satisfy your IT other than an actual vault...
wdyt?
Long story short, this is done internally when you call the Task.init (I think, there is a chance it is called before)
One way of controlling it would be to have something like:

Task.init(auto_connect_frameworks={'hydra': {'log_before_resolve': True}})
That said, I think it will be simpler to store both (in different section of course)
Maybe "Configuration Object: OmegaConf" and "Configuration Object: OmegaConfDefinition" ?
(fyi: once we have a solid idea here, please open a github issue on the feature request, I'll try to see if we can push it fwd for the next RC 🙂 )
But they are all running inside the same pod, correct?
from your jupyterlab can you do: !curl
Alternatively I understand I can also run the agent using...
No, you should not. If you are running the agent inside a container, it cannot work in docker mode and spin up its own containers.
Bottom line: use clearml-agent daemon
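For illustration, launching the agent on the host in docker mode could look like this (the queue name is made up):

clearml-agent daemon --queue default --docker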