
They indeed do auto-rotate when you limit the size of the logs
I found that the filter actually has to be an iterable:
```python
Task.get_tasks(project_name="my-project", task_name="my-task", task_filter=dict(type=["training"]))
```
AgitatedDove14 How can I filter out tasks archived? I don't see this option
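For reference, a minimal sketch combining both points; excluding archived tasks via the "-archived" system tag is an assumption about ClearML's filter semantics, not something confirmed in this thread:
```python
from clearml import Task

# task_filter values must be iterables; "-archived" (leading minus) is
# assumed here to exclude tasks carrying the "archived" system tag.
tasks = Task.get_tasks(
    project_name="my-project",
    task_name="my-task",
    task_filter=dict(type=["training"], system_tags=["-archived"]),
)
print([t.id for t in tasks])
```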
Will `from clearml import Task` raise an error if no clearml.conf exists? Or only when features that actually require a server (such as `Task.init`) are called?
Thanks SuccessfulKoala55! So CLEARML_NO_DEFAULT_SERVER=1 by default, right?
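A small sketch of the behaviour being confirmed here, under the assumption that the import itself never reads clearml.conf and that CLEARML_NO_DEFAULT_SERVER=1 makes Task.init fail instead of falling back to the demo server:
```python
import os

# Assumption: must be set before clearml reads its configuration.
os.environ["CLEARML_NO_DEFAULT_SERVER"] = "1"

from clearml import Task  # the import alone does not require clearml.conf

try:
    task = Task.init(project_name="my-project", task_name="my-task")
except Exception as exc:
    # Only server-dependent calls such as Task.init should fail without a config.
    print(f"Task.init failed without a configured server: {exc}")
```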
super, thanks SuccessfulKoala55!
Both ^^, I already adapted the code for GCP and was planning to adapt it for Azure now
It indeed has the old commit, so they match, no problem actually
The task I cloned from is not the one I thought
The task is created using Task.clone(), yes
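For context, a hypothetical reconstruction of how such a clone is created (project and task names are placeholders, not the actual tasks from this thread):
```python
from trains import Task  # this thread predates the rename to clearml

# Fetch the template task A and clone it into a new draft task B
task_a = Task.get_task(project_name="my-project", task_name="task-a")
task_b = Task.clone(source_task=task_a, name="task-b")
```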
AgitatedDove14 awesome! by "include it all" do you mean a wizard for Azure and GCP?
Oh, I wasn't aware of that new implementation, was it introduced silently? I don't remember reading it in the release notes! To answer your question: no, for GCP I used the old version, but for Azure I will use this one, maybe send a PR if the code is clean
for some reason when cloning task A, trains sets an old commit in task B. I tried to recreate task A to force a new task id and new commit id, but still the same issue
AgitatedDove14 Up! I would like to know whether I should wait for the next release of trains or whether I can already start implementing Azure support
In the execution tab I see the old commit; in the logs, I see an empty branch and the old commit
Yes, not sure it is connected either, actually. To make it work, I had to disable both venv caching and set use_system_packages to off, so that it reinstalls the full env. I remember that we already discussed this problem, but I don't remember the outcome; I was never able to make it update the private dependencies based on the version. But this is most likely a problem with pip not being clever enough to parse the tag as a semantic version and check whether the installed package ma...
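As a sketch, the two settings mentioned above roughly correspond to the following agent configuration; the key names follow clearml-agent's clearml.conf and may differ between versions:
```
agent {
    # venv caching: leaving venvs_cache.path unset/commented disables the cache
    # venvs_cache {
    #     path: ~/.clearml/venvs-cache
    # }

    package_manager {
        # "use system packages off": build a clean venv rather than reusing
        # the system site-packages
        system_site_packages: false
    }
}
```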
I call task._update_requirements(my_reqs) regardless of whether I am on the local machine or in the clearml agent, so the "installed packages" section is always updated to the list my_reqs that I pass to the function, in this case ["."]
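A minimal sketch of that call; note that _update_requirements is a private API (leading underscore), so this relies on internal behaviour:
```python
from clearml import Task

task = Task.init(project_name="my-project", task_name="my-task")

# Replaces the "installed packages" section with exactly this list.
# ["."] makes the agent install the repository itself (pip install .)
# instead of a frozen requirements list.
task._update_requirements(["."])
```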
yes, in setup.py I have:
```python
...,
install_requires=[
    "my-private-dep @ git+",
    ...
],
...
```
ok, so there is no way to cache it and detect when the ref changes?
Yes, I guess that's fine then - Thanks!
btw I monkey patched ignite's function global_step_from_engine to print the iteration, and passed the modified function to ClearMLLogger.attach_output_handler(…, global_step_transform=patched_global_step_from_engine(engine)). It prints the correct iteration number when calling `ClearMLLogger.OutputHandler.__call__`.
```python
def __call__(self, engine: Engine, logger: ClearMLLogger, event_name: Union[str, Events]) -> None:
    if not isinstance(logger, ClearMLLogger):
        ...
```
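A minimal sketch of what that patch might look like, assuming ignite's global_step_from_engine returns a callable taking (engine, event_name); patched_global_step_from_engine is the hypothetical helper mentioned above:
```python
from ignite.engine import Engine, Events
from ignite.handlers import global_step_from_engine  # import location varies across ignite versions


def patched_global_step_from_engine(engine: Engine):
    # Wrap ignite's helper so every lookup also prints the step it returns
    wrapped = global_step_from_engine(engine)

    def wrapper(_engine: Engine, event_name: Events) -> int:
        step = wrapped(_engine, event_name)
        print(f"global step: {step}")
        return step

    return wrapper
```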
Although task.data.last_iteration is correct when resuming, there is still this doubling effect when logging metrics after resuming
Trying your code now… should take a couple of mins
Here is the minimal reproducible example.
Run test_task_a.py: it will register a dummy artifact, create a new task, set a parameter in that task, and enqueue it. test_task_b will try to retrieve the parameter from the parent task and fail.
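A hypothetical reconstruction of the first script based on the description above (project and queue names are placeholders, not the original code):
```python
# test_task_a.py -- sketch reconstructed from the description in this thread
from trains import Task

task_a = Task.init(project_name="debug", task_name="test_task_a")
task_a.upload_artifact("dummy", artifact_object={"foo": "bar"})

# Create a second task, set a parameter on it, and enqueue it for an agent
task_b = Task.create(project_name="debug", task_name="test_task_b")
task_b.set_parameter("General/my_param", "42")
Task.enqueue(task_b, queue_name="default")
```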
Yes, in the Task being executed in the agents, I have:
```python
from trains import Task

task = Task.init(...)
task.get_logger().report_text(str(task.get_parameters()))
```