
HealthyStarfish45
No, it should work 🙂
Hi SpicyOtter88
plt.plot([0, 1], [0, 1], 'r--', label='')
it cannot have a legend without a label, so it gives it an "anonymous" label. I think it should just get "unlabeled 0", wdyt?
GloriousPenguin2 could you open a GitHub issue on it? Just making sure this will actually get fixed 🙂
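In the meantime, a minimal sketch of the workaround: pass an explicit label so the legend entry is not anonymous (the series name "unlabeled 0" below is just illustrative):
import matplotlib.pyplot as plt

# Giving the line an explicit label avoids the anonymous legend entry
plt.plot([0, 1], [0, 1], 'r--', label='unlabeled 0')
plt.legend()
plt.show()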
so you have a repo with poetry that some users update and some do not?
All working on the same branch?
Awesome! Thank you so much!
1.0.2 will be out in an hour
We abuse the object description here to store the desired file path.
LOL, yep that would work. I'm assuming you have some infrastructure library that does this hack for you, but it's a really cool way around it 🙂
And last but not least, for dictionaries for example, it would be really cool if one could do:
Hmm, what you will end up with now is the following behaviour: my_other_config['bar'] will hold a copy of my_config; if you clone the Task and change "my_config" it will hav...
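A minimal sketch of the scenario being described, assuming plain dicts connected via Task.connect (the names and values are just illustrative):
from clearml import Task

task = Task.init(project_name='examples', task_name='nested config')

my_config = {'foo': 1}
my_other_config = {'bar': my_config}  # 'bar' ends up holding a copy of my_config

task.connect(my_config, name='my_config')
task.connect(my_other_config, name='my_other_config')
# Cloning the Task and editing "my_config" will not update the copied values under my_other_config['bar']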
JitteryCoyote63
Picks a new experiment on top of the long-running one
This is very very strange. Is the long running experiment being logged (i.e. do you still see console output in the UI)?
Hi JitteryCoyote63
So that I could simply do
task._update_requirements(".[train]")
but when I do this, the clearml agent (latest version) does not try to grab the matching CUDA version, it only takes the CPU version. Is it a known bug?
The easiest way to go about it is to add:
Task.add_requirements("torch", "==1.11.0")
task = Task.init(...)
Then it will auto detect your custom package, and will always add the torch version. The main issue with relying on the package...
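A self-contained sketch of that pattern (the "1.11.0" pin is just the version from this thread, and the project/task names are placeholders):
from clearml import Task

# Must be called before Task.init() so the pinned requirement is recorded for the agent
Task.add_requirements("torch", "==1.11.0")
task = Task.init(project_name="examples", task_name="pin torch version")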
BattyLion34 the closest I can think of is the monitoring class, which can easily be extended.
Datasets are a type of Task, so we can monitor a project and trigger an action when we see a change in number of Tasks/Datasets that are completed.
Monitoring class:
https://github.com/allegroai/clearml/blob/master/clearml/automation/monitor.py
Monitoring example:
https://github.com/allegroai/clearml/blob/master/examples/services/monitoring/slack_alerts.py
I think a dataset monitoring example wil...
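To illustrate the idea, a rough sketch of extending that monitoring class to react to newly completed dataset Tasks; the method and argument names follow my reading of the slack_alerts example linked above, so please verify them against the source:
from clearml.automation.monitor import Monitor

class DatasetMonitor(Monitor):
    def process_task(self, task):
        # Called for every newly completed Task matching the monitored projects;
        # trigger whatever action you need when a new dataset version appears.
        print("New completed dataset task:", task.id, task.name)

monitor = DatasetMonitor()
monitor.set_projects(project_names_re=["my_dataset_project"])  # assumed project name
monitor.monitor(pool_period=60.0)  # poll once a minute (blocking loop)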
Hi, I was expecting to see the container rather than the actual physical machine.
It is the container, it should tunnel directly into it (or that's how it should be).
SSH port 10022
RoughTiger69 how did you end up with a Task with just "origin" in the repo field ?
I am symlinking the .clearml directory to a NAS server and this is perhaps part of the problem.
Yep, that sounds about right. It uses the POSIX file system for its internal lock mechanisms (multi-process locks), and my guess is that the NAS for some reason does not support it...
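If you want to confirm that this is the issue, a quick sketch of checking whether the NAS-mounted path supports this kind of file locking (the path below is a placeholder for wherever your symlinked .clearml lives):
import fcntl

# Try to take a non-blocking exclusive lock on a file that lives on the NAS mount;
# an OSError here suggests the filesystem does not support this kind of locking.
with open('/path/to/nas/.clearml/lock_test', 'w') as f:
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    fcntl.flock(f, fcntl.LOCK_UN)
print("flock supported on this mount")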
I'm not sure I'm the right person to answer that, but yes my understanding is that this is a Scale/Enterprise tier feature, at least for the time being.
What's your clearml version (python and server)?
It seems that once the job has completed once, it doesn't accept any new reports...
completed can be forced, published cannot ...
What's the error you are getting ?
Hi @<1536518770577641472:profile|HighElk97>
Is there a way to change the smoothing algorithm?
Just like with TB, this is front-end, not really something you can control ...
That said, you can report a smoothed value (i.e. via Python) as an additional series, wdyt?
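For instance, a minimal sketch of reporting an exponentially smoothed copy of a metric as a second series (the 0.9 factor, metric names, and raw_values iterable are just illustrative):
from clearml import Task

task = Task.init(project_name="examples", task_name="smoothed series")
logger = task.get_logger()

smoothed = None
for step, raw in enumerate(raw_values):  # raw_values: your per-step metric values
    smoothed = raw if smoothed is None else 0.9 * smoothed + 0.1 * raw
    logger.report_scalar(title="loss", series="raw", value=raw, iteration=step)
    logger.report_scalar(title="loss", series="smoothed", value=smoothed, iteration=step)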
Hi @<1598487094601191424:profile|MysteriousCow84>
You should put it in the dedicated section:
None
Can I log new lines to an old dataframe plot? Any other suggestions?
Hi ChubbyLouse32
you mean to an already reported table? Or an artifact? Or a dataset?
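If it is a reported table, one option (a sketch, assuming a pandas DataFrame reported with Logger.report_table; the values are placeholders) is to append the new rows and re-report the table at a new iteration:
import pandas as pd
from clearml import Task

task = Task.init(project_name="examples", task_name="table updates")
logger = task.get_logger()

df = pd.DataFrame({"epoch": [0, 1], "accuracy": [0.71, 0.78]})
logger.report_table(title="results", series="accuracy", iteration=0, table_plot=df)

# Later: append new rows and report the table again under a new iteration
df = pd.concat([df, pd.DataFrame({"epoch": [2], "accuracy": [0.83]})], ignore_index=True)
logger.report_table(title="results", series="accuracy", iteration=1, table_plot=df)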
The image is
allegroai/clearml:1.0.2-108
Yep, that makes sense, seems like a backwards compatibility issue
…every user in the server has the same credentials, and they don't need to know them... makes sense?
Makes sense, single credentials for everyone, without the need to distribute them.
Is that correct?
Hi GiddyPeacock64
If you already have K8s setup, and are already using ClearML.
In your kubeflow YAML:
trains-agent execute --id <task_id> --full-monitoring
This will install everything your Task needs inside the docker container. Just make sure that you pass the env variables configuring ClearML, see here:
https://github.com/allegroai/clearml-server/blob/6434f1028e6e7fd2479b22fe553f7bca3f8a716f/docker/docker-compose.yml#L127
GrittyStarfish67
I do not wish for data duplication. Any idea how to do this with the clearml-data CLI/GUI/Python?
At least in theory, creating a new version with parents from multiple Datasets should just work out of the box.
wdyt?
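A minimal sketch of that idea, assuming the clearml Dataset SDK (the dataset IDs, names, and paths below are placeholders):
from clearml import Dataset

# New version whose parents are the two existing datasets, so their files are referenced rather than duplicated
child = Dataset.create(
    dataset_name="merged_dataset",
    dataset_project="examples",
    parent_datasets=["<dataset_id_a>", "<dataset_id_b>"],
)
child.add_files("/path/to/new_files")  # only the delta is added
child.upload()
child.finalize()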
LOL @<1545216070686609408:profile|EnthusiasticCow4>
I assume this is a hidden folder?
For example, datasets are hidden folders that can be viewed if you go to the settings page and turn on "show hidden folders".
from what I gather there is a lightly documented concept
Yes... 🙂 the reason for it is that one could actually do:
@PipelineDecorator.pipeline(...)
def pipeline(i):
    ...

if __name__ == '__main__':
    pipeline(0)
    pipeline(1)
    pipeline(2)

Basically rerunning the pipeline 3 times.
This support was added as some users found a use case for it, but I think this would be a rare one
Hi GrotesqueOctopus42
creates a graph of the neural network and would be nice to have it on the experiment logs as well
I think the main issue is displaying it later in the UI, thoughts?
BTW: is this useful for you outside of very local TF debugging?