Hi @<1523706645840924672:profile|VirtuousFish83>
Hmm, so generally I think the answer is no... You can download all the scalars and re-report them with a different title/series, but you will not be able to delete a specific set; the only way would be to reset the entire Task.
I'm curious, what's the scenario here? Is it a typo you want to fix?
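For the download-and-re-report option, a rough sketch (assuming the dict returned by get_reported_scalars() is laid out as {title: {series: {"x": [...], "y": [...]}}}; the task ids and names are placeholders):
```
from clearml import Task

# Pull all scalars from the source Task (placeholder id)
source = Task.get_task(task_id="<source_task_id>")
scalars = source.get_reported_scalars()

# Re-report them on a fresh Task under a corrected title/series
target = Task.init(project_name="examples", task_name="re-reported scalars")
logger = target.get_logger()
for title, series_dict in scalars.items():
    for series, points in series_dict.items():
        for iteration, value in zip(points["x"], points["y"]):
            logger.report_scalar(title="fixed " + title, series=series,
                                 value=value, iteration=int(iteration))
```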
Thanks @<1634001106403069952:profile|DefeatedMole42>
A follow-up: (1) how are you spinning up the agent? (2) could it be that the docker image "ultralytics/yolov5" does not have Bash as its entry point?
you can force that with
@PipelineDecorator.component(return_values=['int'], cache=False,
                             task_type='training',
                             docker="ultralytics/yolov5",
                             docker_args="--entrypoint /bin/bash",
                             pa...
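For reference, a hypothetical complete version of that call could look like this; the trailing arguments (e.g. the packages list) and the function body are assumptions, not the original snippet:
```
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['int'], cache=False,
                             task_type='training',
                             docker="ultralytics/yolov5",
                             docker_args="--entrypoint /bin/bash",
                             packages=["torch", "opencv-python"])  # placeholder package list
def train_step(dataset_id: str) -> int:
    # runs inside the ultralytics/yolov5 container, launched through bash
    return 0
```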
Hi ReassuredOwl55
How would I find Tasks that have the same code with different inputs/parameters?
Assuming you have the git repo
you can do:
Task.query_tasks(..., task_filter={'_all_': dict(fields=['script.repository'], pattern='github.com/user/repo')})
wdyt?
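A fuller sketch of using it to compare runs (the repo pattern and the parameter handling are just examples):
```
from clearml import Task

# Find all Tasks whose repository matches the pattern
task_ids = Task.query_tasks(
    task_filter={'_all_': dict(fields=['script.repository'],
                               pattern='github.com/user/repo')}
)

# Pull each Task's hyperparameters to compare the different inputs
for tid in task_ids:
    t = Task.get_task(task_id=tid)
    print(t.name, t.get_parameters())
```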
Local changes are applied before installing requirements, right?
correct
SolidSealion72 I'm able to reproduce, hurrah!
(and a fix is already being tested, I will keep you guys updated)
However, when we try to access the webapi from remote through the VPN we fail. The VPN logs don't show any blockage. Any ideas?
Maybe the VPN firewall blocks HTTP connections? Or it might be the same as BrightRabbit75's case, which would quite logically never show up anywhere in the logs
that is odd..
So if you have 3 agents, how many concurrent experiments are they running? (actually running, not just registered as running)
And you have the exact same folder structure / content, and server A/B give a different set of experiments ?
(is serverB empty, meaning no experiments at all?)
Hi @<1569858449813016576:profile|JumpyRaven4>
task.add_requirements()
This is the problem: if you look closely, this is a class method, meant to help Task.init better capture python packages; it does not change the Task's requirements.
To do that, use task.set_packages
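Roughly (package names/versions are placeholders):
```
from clearml import Task

# add_requirements is a class method: call it before Task.init so the
# automatic package detection also records this extra requirement
Task.add_requirements("scikit-learn")

task = Task.init(project_name="examples", task_name="requirements demo")

# set_packages replaces the stored requirements on the Task itself
task.set_packages(["torch==2.1.0", "numpy"])
```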
Hi SoggyFrog26
Yes, it is stored at ~/.clearml_data.json
Notice you can always change it by passing --id dataset_id
and the clearml server version ?
TrickyRaccoon92 I'm not sure I follow, the TB plots do show? And you want to add an additional plotly plot?
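If an extra plotly plot is what you're after, something like this should do it (project/series names are placeholders):
```
import plotly.graph_objects as go
from clearml import Task

task = Task.init(project_name="examples", task_name="plotly demo")

# report a plotly figure in addition to whatever TensorBoard already logs
fig = go.Figure(data=go.Scatter(y=[1, 3, 2, 4]))
task.get_logger().report_plotly(title="extra plot", series="demo",
                                iteration=0, figure=fig)
```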
Hi SubstantialElk6
We will be running some GUI applications so is it possible to forward the GUI to the clearml-session?
If you can directly access the machine running the agent, yes you could. If not, a reverse proxy is in the works 😉
We have a rather locked down environment so I would need a clear view of the network view and the ports associated.
Basically all connections are outgoing only, with the exception of the clearml-server (listening on ports 8008, 8080, 8081)
Hi SubstantialElk6
I can't see that it was removed, could you send the full log?
MysteriousBee56 not a different port, just not with "localhost" but with your machine's IP
Change add_missing_installed_packages to False here, and see if you end up with the git diff:
https://github.com/allegroai/clearml/blob/1f82b0c4010799be6157f5c845c7f6ac48e71c0c/clearml/backend_interface/task/populate.py#L158
still it is a chatgpt interface correct ?
Actually, no. And we will change the wording on the website so it is more intuitive to understand.
The idea is you actually train your own model (not chatgpt/openai) and use that model internally, which means everything is done inside your organisation, from data through training and ending with deployment. Does that make sense ?
Decorators are good 🙂
Something along the lines of
```
@PipelineDecorator.pipeline(...)
def pipeline(skip_a=False):
    if not skip_a:
        a = step_a()
    else:
        # somehow get a previous A?
        # let's call it "cached A"
        a = "replace with real"
    step_b(a)
    ...
```
Is this the gist?
If it is, this looks like, "how can I control whether A is cached or not", is that correct?
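If that is indeed the question, one way to express it is to let the pipeline cache step A instead of skipping it manually; a rough sketch (names/values are placeholders):
```
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['a'], cache=True)
def step_a():
    # with cache=True, rerunning the pipeline with unchanged code/inputs
    # reuses the previous output instead of executing the step again
    return 42

@PipelineDecorator.component(return_values=['b'])
def step_b(a):
    return a + 1

@PipelineDecorator.pipeline(name='demo pipeline', project='examples', version='0.1')
def pipeline():
    a = step_a()
    step_b(a)
```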
hey, that worked! what library is being used that reads that configuration?
It's passed to boto3, but the python interface and the aws cli use different configuration, I guess, because otherwise it should have worked...
I failed to update the "STARTED AT" and the "COMPLETED AT" attributes in the "INFO" tab.
I'm not sure this can actually be overridden...
Then I will have to rerun the pipeline code, then manually get the id and update the task.
Makes total sense to me!
Failed auto-generating package requirements: _PyErr_SetObject: exception SystemExit() is not a BaseException subclass
Not sure why you are getting this one?!
ValueError: No projects found when searching for
MyProject/.pipelines/PipelineName
hmm, what are you getting with:
task = Task.get_task(pipeline_uid_here)
print(task.get_project_name())
Hi @<1523701066867150848:profile|JitteryCoyote63>
I found a memory leak in Logger.report_matplotlib_figure
Are you sure this is the Logger's fault and not a Matplotlib leak? I'm trying to think how we could create such a mem leak
wdyt?
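One quick thing worth testing on your side (an assumption to rule out, not a confirmed fix): explicitly close each figure after reporting it, so Matplotlib itself isn't holding the references:
```
import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib leak check")
logger = task.get_logger()

for i in range(1000):
    fig, ax = plt.subplots()
    ax.plot(range(10))
    logger.report_matplotlib_figure(title="debug", series="leak",
                                    iteration=i, figure=fig)
    plt.close(fig)  # release Matplotlib's reference so the figure can be garbage collected
```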
LazyTurkey38 configuration pushed to github :)
DeliciousBluewhale87 out of curiosity , what do you mean by "deployment functionality" ? is it model serving ?
Thanks for answering, Yes, this is exactly what I wanted
Hmm, should be possible. How slow is the update that we want to save time on?
When you clone the Task, it might be before it is done syncing git / packages.
Also, since you are using 0.16 you have to have a section name (Args or General etc.)
How will Task B use the parameters? (argparse / connect dict?)
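For example, with the connect-dict option (project/task names and keys are placeholders; the dict goes under the "General" section by default):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="task b")

# when the Task is cloned and launched by an agent, values edited in the UI
# overwrite these entries at runtime
params = {"learning_rate": 0.01, "batch_size": 32}
task.connect(params)
```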