
Hi ExcitedFish86
Of course, this is what it was designed for. Notice in the UI, under Execution, you can edit this section (Setup Shell Script). You can also set it via task.set_base_docker
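For example, a minimal sketch of the programmatic route (image name and setup commands are placeholders; the docker_setup_bash_script argument is available in recent clearml SDK versions, so check yours):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="docker setup")

# Set the base docker image plus a setup shell script that the agent runs
# inside the container before the task itself starts.
task.set_base_docker(
    docker_image="nvidia/cuda:11.8.0-runtime-ubuntu22.04",
    docker_setup_bash_script=["apt-get update", "apt-get install -y git"],
)
```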
Hi LivelyLion31
Yes, the reason we designed Trains with an automagic integration is exactly that reason, so users do not need to learn another package and that with almost no effort you get most of the benefits.
Regarding the TB files, from experience most users will use the TB files shortly after they executed the experiment, usually for debugging and in-depth capabilities (like the network debugger, profiler, etc.); metric view is something that is much easier to do on a centralized server (like on...
Need - in my CI, the URL used is https but I need the SSH URL to be used. I see that we can pass repo to Task.create but not Task.init
Are you cloning an existing Task, or creating a new one ?
but when I run the same task again it does not map the keys...
SparklingElephant70 what do you mean by "map the keys" ?
Did you mean --detached ?
Oops, yes, sorry, you are correct, it should be --detached
Hi SmugLizard25, I was able to test and it seems that the style is being ignored by the FE.
I passed it to the FE guys to make sure it is fixed in the next version.
Notice this is just for tables, anything else works as expected (i.e. styling any other type of plot)
Hi GracefulDog98
As UnevenDolphin73 pointed out, you might be looking for https://clear.ml/docs/latest/docs/references/sdk/task#execute_remotely
This will stop the current local process and enqueue the task on the "default" queue for the agent to execute.
Is this what you are looking for?
The idea is you can run your code once in "development" mode, so you know everything is working, then from the UI (or programmatically) you can clone the experiment, edit the configuration (or anythin...
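A minimal sketch of that flow (project/task names are placeholders; it assumes a configured ClearML server and an agent listening on the "default" queue):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")

# Stops the local process here and enqueues this task on the "default"
# queue; an agent picks it up and runs it remotely from this point.
task.execute_remotely(queue_name="default", exit_process=True)

# Everything below runs only on the agent
print("running remotely")
```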
Thanks FiercePenguin76 , I can totally understand your point on running proper tests, and reluctance to break other things.
I suggest adding a comment with the temp fix that solved the problem for you, and we will make sure the team takes it from there. wdyt?
What is the proper way to change a clearml.conf ?
inside a container you can mount an external clearml.conf, or override everything with OS environment
https://clear.ml/docs/latest/docs/configs/env_vars#server-connection
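For example (a sketch; the mount path, image name, and credential values are placeholders, and the environment variable names are the ones documented in the link above):

```shell
# Option 1: mount an external clearml.conf into the container
docker run -v /path/on/host/clearml.conf:/root/clearml.conf my-image

# Option 2: override the server connection with environment variables
export CLEARML_API_HOST=https://api.clear.ml
export CLEARML_WEB_HOST=https://app.clear.ml
export CLEARML_FILES_HOST=https://files.clear.ml
export CLEARML_API_ACCESS_KEY=<access key>
export CLEARML_API_SECRET_KEY=<secret key>
```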
Hi CheerfulGorilla72 ,
Sure there are:
https://github.com/allegroai/clearml/tree/master/examples/frameworks/pytorch-lightning
In terms of creating dynamic pipelines and cyclic graphs, the decorator approach seems the most powerful to me.
Yes, that is correct; the decorator approach is the most powerful one, I agree.
mean? Is it not possible that I call code that is somewhere else on my local computer and/or in my code base? That makes things a bit complicated if my current repository is not somehow available to the agent.
I guess you can ignore this argument for the sake of simple discussion. If you need access to extra files/functions, just make sure you point the repo argument to their repo, and the agent will make sure your code is running from the repo root, with all the repo files under i...
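A sketch of what pointing the repo argument at an external repository might look like (repo URL, branch, and the helper module are placeholders invented for the example; the repo/repo_branch parameters exist on components in recent clearml versions):

```python
from clearml.automation.controller import PipelineDecorator

# `repo` points the component at an external repository; the agent clones it
# and runs the function from that repo's root, so its modules are importable.
@PipelineDecorator.component(
    repo="https://github.com/user/helper-repo.git",
    repo_branch="main",
)
def preprocess(data_path: str):
    from helpers import load  # resolved from the cloned repo's root
    return load(data_path)
```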
BTW: Basically just call Task.init(...)
the rest is magic
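i.e. something like this (project/task names are placeholders):

```python
from clearml import Task

# A single call captures the git repo, uncommitted changes, installed
# packages, console output, and framework-level metrics automatically.
task = Task.init(project_name="examples", task_name="my experiment")
```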
Okay, let me check if we can reproduce; definitely not the way it is supposed to work.
Hi NastyFox63, yes, I think the problem was found (actually on the backend side).
It will be solved in the upcoming release (due after this weekend).
Hi @<1624941407783358464:profile|GrievingTiger47>
I think you should try to contact the sales guys here: None
I'm hoping we are ready to release
Feel free to open an issue on GitHub making sure this is not forgotten
You can check the example here; just make sure you add the callback and you are good to go:
https://github.com/allegroai/trains/blob/master/examples/frameworks/keras/keras_tensorboard.py#L107
Do StorageManager.upload and upload_artifact use the same methods?
Yes they both use StorageManager.upload
Is the only difference the task being async?
Two differences:
1. Upload being async.
2. Registering the artifact on the experiment. StorageManager will only upload, whereas upload_artifact will make sure the file is registered as an artifact on the experiment, together with all of the artifact's properties.
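A sketch of the two calls side by side (file paths, bucket URL, and project/task names are placeholders):

```python
from clearml import Task, StorageManager

# Plain upload: copies the file to storage, nothing is registered anywhere
StorageManager.upload_file(
    local_file="model.pkl",
    remote_url="s3://my-bucket/models/model.pkl",
)

# Artifact upload: also registers the file (with its properties) on the task
task = Task.init(project_name="examples", task_name="artifacts")
task.upload_artifact(name="model", artifact_object="model.pkl")
```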
BTW: I suspect this is the main issue:
https://github.com/python-poetry/poetry/issues/2179
Scenario 1 & 2 are essentially the same from a caching perspective (the fact that B != B' means they have different caching hashes, but in both cases they are cached).
Scenario 3 is basically removing the cache flag from those components.
Not sure if I'm missing something.
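The caching semantics above can be sketched in plain Python (an illustration of hash-keyed caching only, not ClearML's actual implementation; run_component and its arguments are made up for the example):

```python
import hashlib
import json

_cache = {}

def run_component(name, code_version, inputs, cache=True):
    # The cache key covers the component identity, its code version, and its
    # inputs, so B != B' (different code_version) yields two separate entries.
    key = hashlib.sha256(
        json.dumps([name, code_version, inputs], sort_keys=True).encode()
    ).hexdigest()
    if cache and key in _cache:
        return _cache[key]                 # scenarios 1 & 2: cache hit, reuse
    result = {"output": inputs["x"] * 2}   # stand-in for the real work
    if cache:
        _cache[key] = result
    return result

# Scenario 3 amounts to calling with cache=False: the component always reruns.
```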
Back to the @<1523701083040387072:profile|UnevenDolphin73>
From decorators - when the pipeline logic is very straightforward ...
Actually I would disagree; the decorators should be used when the pipeline logic is not a D...
@<1523704157695905792:profile|VivaciousBadger56>
Is the idea here the following? You want to use inversion-of-control such that I provide a function f to a component that takes the above dict as an input. Then I can do whatever I like inside the function f and return a different dict as output. If the output dict of f changes, the component is rerun; otherwise, the old output of the component is used?
Yes, exactly! This way you...
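A plain-Python sketch of that inversion-of-control idea (illustrative only; make_cached_component, work, and f are names invented for the example):

```python
import hashlib
import json

def make_cached_component(work):
    # Rerun `work` only when the dict produced by the user-supplied `f` changes.
    state = {"key": None, "result": None}

    def component(f, input_dict):
        config = f(input_dict)  # user code transforms the input dict
        key = hashlib.sha256(
            json.dumps(config, sort_keys=True).encode()
        ).hexdigest()
        if key != state["key"]:            # output of f changed -> rerun
            state["key"] = key
            state["result"] = work(config)
        return state["result"]             # unchanged -> reuse the old output

    return component

comp = make_cached_component(lambda cfg: {"total": sum(cfg.values())})
first = comp(lambda d: {**d, "lr": 1}, {"a": 1})   # runs the component
second = comp(lambda d: {**d, "lr": 1}, {"a": 1})  # same f output -> cached
```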
- Components anyway need to be available when you define the pipeline controller/decorator, i.e. same codebase
No, you can specify a different code base, see here:
None
- The component code still needs to be self-composed (or, function component can also be quite complex)
Well, it can address the additional repo (it will be automatically added to the PYTHONPATH), and you c...
Do you happen to know if there are any plans for an implementation with the logger variable, so that if needed it would be possible to write to different tables?
CheerfulGorilla72, what do you mean by "an implementation with the logger variable"? pytorch-lightning defaults to the TB logger, which clearml will automatically catch and log into the clearml-server; you can always add additional logs with the clearml interface, Logger.current_logger().report_???
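For example, a sketch of explicit reporting alongside the auto-logged TB metrics (titles, series names, and values are placeholders; it assumes a Task has already been initialized so Logger.current_logger() returns the task's logger):

```python
from clearml import Logger

logger = Logger.current_logger()

# Explicitly report a scalar in addition to whatever is caught from TB
logger.report_scalar(title="loss", series="train", value=0.05, iteration=10)
```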
What am I mis...