Indeed. I'll open an issue, sure!
One more UI question TimelyPenguin76, if I may -- it seems one cannot simply report single integers. The `report_scalar` feature creates a plot of a single data point (or single iteration).
For example if I want to report a scalar "final MAE" for easier comparison, it's kinda impossible 😄
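For reference, a minimal sketch of what I'm doing now (the project/task names and the value are made up):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="mae-report")
logger = task.get_logger()

# The only way I found: this renders as a one-point plot rather than
# a plain "final MAE = 0.042" number.
logger.report_scalar(title="final MAE", series="mae", value=0.042, iteration=0)
```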
We're not using the docker setup though. The CLI run by the autoscaler is `python -m clearml_agent --config-file /root/clearml.conf daemon --queue aws_small`, so no docker.
Using the `PipelineController` with `add_function_step`
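Roughly like this (a minimal sketch; the step body, names, project and queue are all illustrative):
```python
from clearml.automation import PipelineController

def step_one(data_path: str) -> str:
    # placeholder step body
    return data_path

pipe = PipelineController(name="my-pipeline", project="examples", version="1.0.0")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_kwargs={"data_path": "s3://bucket/data.csv"},
    function_return=["data_path"],
)
pipe.start(queue="default")
```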
I guess? 🤔 I mean the same filter option one has for e.g. tags in the table view. In the "all experiments" project I think it would make sense for one to be able to select the projects of interest, or even filter for textual matches.
Sorry I meant the cards indeed :)
For example, I can't interact with these two tasks from this view (got here from searching in the dashboard view; they're in different projects):
Also (sorry for all of these!) - it could be nice to have a direct "task comparison" link somewhere in the UI that would open an empty comparison, to which the user could then add tasks manually using the "add experiments" button :)
Unfortunately I can't take a photo of not being able to compare tasks by navigating around the WebUI...
Does that clarify the issue CostlyOstrich36 ?
I can't seem to manage the first way around. If I select tasks in different projects, I don't get the bottom bar offering to compare between them
Does that make sense SmugDolphin23 ?
Because setting env vars and ensuring they exist on the remote machine during execution etc. is more complicated 😄
There are always ways around it; I was just wondering what the expected flow is 😄
After setting `sdk.development.default_output_uri` in the configs, my code kinda looks like:
```python
task = Task.init(project_name=..., task_name=..., tags=...)
logger = task.get_logger()
# report with logger freely
```
Anything else you'd recommend paying attention to when setting up the clearml-agent helm chart?
Well the individual tasks do not seem to have the expected environment.
Yes, thanks AgitatedDove14 ! It's just that the configuration object passed onwards was a bit confusing.
Is there a planned documentation overhaul? 🤔
There's no decorator, just e.g.:
```python
from typing import Any, Optional

def helper(foo: Optional[Any] = None):
    return foo

def step_one(...):
    # stuff
```
Then the type hints are not removed from helper and the code immediately crashes when run.
I'll have some reports tomorrow I hope TimelyPenguin76 SuccessfulKoala55!
There's code that strips the type hints from the component function; I just think it should be applied to the helper functions too :)
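Something in this spirit is what I mean by "stripping" (a hypothetical sketch using `ast`, not ClearML's actual implementation):
```python
import ast
import inspect
import textwrap

def strip_type_hints(func):
    """Return the function's source with annotations removed (illustration only)."""
    source = textwrap.dedent(inspect.getsource(func))
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, ast.arg):
            node.annotation = None  # drop parameter annotations
        elif isinstance(node, ast.FunctionDef):
            node.returns = None  # drop return annotation
    return ast.unparse(tree)  # requires Python 3.9+
```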
It's okay 🙂 I was originally hoping to delete my "initializer" task, but I'll just archive it in case someone is interested in the worker data etc. Setting the queue is quite nice.
I think this should get my team excited enough 😄
Will try!
Curious - is there a temporary changelog for 1.2.0? 😄 Always fun to poke at the upcoming features
EDIT: Wait, should the clearml RC be installed outside the venv for the agent as well?
Hey @<1523701435869433856:profile|SmugDolphin23>, thanks for the reply! I'm aware of the caching; that's not the issue I'm trying to resolve 🙂
Ah I see, if the pipeline controller begins in a Task it does not add the tags to it…
Sure, for example when reporting HTML files:
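Something like this (a minimal sketch; the names and file path are made up):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="html-report")

# Upload an HTML file so it appears under the task's debug samples
task.get_logger().report_media(
    title="report",
    series="summary",
    iteration=0,
    local_path="report.html",
)
```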

I tried that; unfortunately it does not help 😞
Indeed, with `~` the `.root` call ends up with an empty string, so it goes through a slightly different flow
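Quick illustration of what I mean:
```python
from pathlib import Path

# "~" is not expanded by pathlib, so the path stays relative
print(Path("~/data").root)  # ''
print(Path("/data").root)   # '/'
```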
Aw you deleted your response fast CostlyOstrich36 xD
Indeed it does not appear in `ps aux`, so I cannot simply kill it (or at least, find it).
I was wondering if it's maybe just a zombie in the server API or similar