
AbruptHedgehog21 could it be the console log itself is huge?
I could take a look and figure that out.
This will greatly accelerate integration 😉
Btw I sometimes get a gzip error when I am accessing artifacts via the '.get()' call.
Hmm, this is odd. Is this a download issue? If this is reproducible, maybe we should investigate further...
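Just to make sure we are looking at the same call, something along these lines (a minimal sketch; the task id and artifact name are placeholders):
from clearml import Task

# fetch the task that produced the artifact (placeholder id)
task = Task.get_task(task_id="<task-id>")

# download and deserialize the artifact object
obj = task.artifacts["my_artifact"].get()

# or just download the raw file and inspect it manually
local_path = task.artifacts["my_artifact"].get_local_copy()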
https://hub.docker.com/layers/nvidia/cuda/10.1-cudnn7-runtime-ubuntu18.04/images/sha256-963696628c9a0d27e9e5c11c5a588698ea22eeaf138cc9bff5368c189ff79968?context=explore
the docker image is missing cudnn, which is a must for TF to work 🙂
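If it helps, pointing the task at that cudnn runtime image would look roughly like this (a minimal sketch, assuming the agent runs in docker mode):
from clearml import Task

task = Task.init(project_name="examples", task_name="tf training")
# use the cudnn-enabled runtime image linked above for remote execution
task.set_base_docker("nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04")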
Hi JealousParrot68
You mean by artifact names ?
Hmm, this is odd. When you press on the parent dataset in the UI, go to full-details, then the INFO tab. Can you copy everything here?
GrievingTurkey78 sure, the AWS autoscaler can do that:
https://github.com/allegroai/clearml/blob/master/examples/services/aws-autoscaler/aws_autoscaler.py
Hi GrievingTurkey78
I think the main issue is the lack of support for jsonargparse, is that correct?
(vanilla PyTorch Lightning uses argparse, which seems to work out of the box)
Did you run clearml-init
after the pip install ?
My pleasure, and apologies 🙂
Hi FancyWhale93, in your clearml.conf configure the default output URI; you can specify the file server as the default, or any object storage:
https://github.com/allegroai/clearml-agent/blob/9054ea37c2ef9152f8eca18ee4173893784c5f95/docs/clearml.conf#L409
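You can also set it per task from code, something like this (a minimal sketch; the bucket name is a placeholder):
from clearml import Task

# upload models/artifacts to object storage instead of the default file server
task = Task.init(
    project_name="examples",
    task_name="training",
    output_uri="s3://my-bucket/clearml",
)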
JitteryCoyote63 are you suggesting it happens ?
(obviously it should not 🙂 )
Hi @<1661904968040321024:profile|SpotlessOwl43>
My problem is that when the AWS virtual machine is killed, my Pipelines and Scheduling stop working because of the killed ClearML agent,
are you using the ClearML AWS autoscaler to spin that machine ? or are you spinning it manually ?
LittleShrimp86 did you try to run the pipeline from the UI on remote machines (i.e. with the agents)? Did that work?
In the installed packages section it includes
pywin32 == 303
even though that is not in my requirements.txt.
So for some reason it is being detected (meaning your code base actually imports it in code)
But you can just remove it, either by manually editing the cloned Task (right click, reset, then you can edit the section), or via code: call Task.ignore_requirements("pywin32") before task = Task.init(...)
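i.e. roughly (a minimal sketch):
from clearml import Task

# drop pywin32 from the auto-detected requirements before the task is created
Task.ignore_requirements("pywin32")
task = Task.init(project_name="examples", task_name="my task")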
Scenarios 1 & 2 are essentially the same from a caching perspective (the fact that B != B' means they have different caching hashes, but in both cases they are cached).
Scenario 3 is basically removing the cache flag from those components.
Not sure if I'm missing something.
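For reference, toggling the cache flag on a decorator-based component looks roughly like this (a minimal sketch; names are placeholders):
from clearml import PipelineDecorator

# cache=True reuses a previous execution when the component code and inputs hash the same;
# dropping it / setting cache=False (scenario 3) forces the step to run every time
@PipelineDecorator.component(cache=True)
def step_b(data):
    return data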
Back to the point from @<1523701083040387072:profile|UnevenDolphin73> :
From decorators - when the pipeline logic is very straightforward ...
Actually I would disagree; the decorators should be used when the pipeline logic is not a D...
SmugSnake6 what's the clearml version you are using ?
LittleShrimp86 can you post the full log of the pipeline? (something is odd here)
Yey! okay let me make sure we add this feature to the Task.init arguments so one can control it from code 🙂
I know there is an aux cfg with key-value pairs, but how can I use it in the Python code?
This is actually for helping to configure Triton services, you cannot (I think) easily access it from the code
Hi RoundMole15
What exactly triggers the "automagic" logging of the model and weights?
A framework save call, for example torch.save or joblib.dump
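e.g. something like this (a minimal sketch; model and file name are placeholders), where the torch.save call is what gets picked up automatically:
import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="model logging")

model = torch.nn.Linear(10, 2)
# this save call is intercepted and registered as an output model of the task
torch.save(model.state_dict(), "model.pt")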
I've pulled my simple test project out of jupyter lab and the same problem still exists,
What is "the same problem" ?
task.update_task({'script': {'version_num': 'my_new_commit_id'}})
This will update to a specific commit id; you can pass an empty string '' to make the agent pull the latest from the branch.
But the artifacts and my dataset of my old experiments still use the old address for the download (is there a way to change that)?
MotionlessCoral18 the old artifacts are stored with direct links, hence the issue. As SweetBadger76 noted, you might be able to replace the links directly inside the backend databases.
Hi @<1661542579272945664:profile|SaltySpider22>
question 1: are parallel writes to a dataset with the same version possible?
When you say parallel, what do you mean? From multiple machines?
What's the recommended way to append to the dataset in a future version?
Once a dataset is finalized, the only way to add files is to create another version that inherits from the previous one (i.e. the finalized version becomes the parent of the new version).
If you are worried about multip...
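For the append-in-a-new-version part, roughly along these lines (a minimal sketch; dataset names and the folder path are placeholders):
from clearml import Dataset

# the finalized dataset becomes the parent of the new version
parent = Dataset.get(dataset_project="my_project", dataset_name="my_dataset")

child = Dataset.create(
    dataset_project="my_project",
    dataset_name="my_dataset",
    parent_datasets=[parent.id],
)
child.add_files("new_data/")  # only new/changed files are actually stored
child.upload()
child.finalize()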
Different question: how can I pass the PYTHONPATH env variable to a task run by an agent (so Python can find classes inside my subdirectories)?
Hi HelpfulHare30
By default the working directory will be added to the python path. This means if I have, under execution:
Working Dir: "."
Script: "src/script.py"
The root git repo will be added to the python path.
BTW: with the next RC you will be able to add a flag to the agent to always add the git repo.
And can you see your Prometheus in your Grafana?