RC should be out later today (I hope), this will already be there, I'll ping here when it is out
Hi GiganticTurtle0
ClearML will only list the directly imported packages (not their requirements), meaning in your case it will only list "tf_funcs" (which you imported).
But I do not think there is a package named "tf_funcs", right?
I'm a full-stack developer at Core. I'd like to extend the TRAINS frontend and backend APIs to suit my needs: on-prem data storage integration and lots of other customization (a CRON job scheduler, dataset augmentation, a custom annotation tool, etc.).
That is awesome! Feel free to post a specific question here, and I'll try to direct to the right place 🙂
Can you guide me to one such tutorial that's teaching how to customize the backend/front end with an example?
You mean l...
Hi EnviousPanda91
You mean like collect plots, then generate a pdf?
It also seems that PipelineDecorator.upload_artifact is not compatible with caching, sadly
Both use the exact same mechanism for uploading artifacts (including caching for downloaded artifacts). Caching of pipeline components works at the component level: same code/task plus same arguments equals a cache hit.
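To illustrate the component-cache rule above, here is a toy sketch (not ClearML's actual implementation, just the idea): the cache key combines the function's code with its arguments, so an identical call is a hit.

```python
import hashlib
import json

_cache = {}

def run_cached(func, **kwargs):
    # Toy component cache: the key combines the function's bytecode with its
    # arguments, so "same code + same arguments" means a cache hit.
    code = func.__code__.co_code.hex()
    key = hashlib.sha256((code + json.dumps(kwargs, sort_keys=True)).encode()).hexdigest()
    if key not in _cache:
        _cache[key] = func(**kwargs)  # cache miss: actually run the component
    return _cache[key]
```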
What exactly are you getting? How is it that "PipelineDecorator.upload_artifact" uploads to a different storage? Is that reproducible?
Hi JitteryCoyote63
The NVIDIA_VISIBLE_DEVICES is set automatically for the process the trains-agent spins, so from your code, it is transparent, you can only "see" GPU 0.
(Obviously, when not using docker you can forcefully change the OS environment at runtime, but you should avoid that ;))
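To see what the agent set from inside a task, something like this works (a sketch; the parsing of the special values follows NVIDIA's "all"/"none" convention):

```python
import os

def visible_gpus():
    # The agent sets NVIDIA_VISIBLE_DEVICES before spawning the task process,
    # so frameworks inside the task only "see" the listed GPUs.
    value = os.environ.get("NVIDIA_VISIBLE_DEVICES", "all")
    if value in ("", "none"):
        return []
    if value == "all":
        return ["all"]
    return value.split(",")
```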
Hi RoundMosquito25
How did you spin the agent (what's the cmd line? is it in docker mode or venv mode?)
From the console it seems that the pip installation inside the container (based on the log, this is what I assume) is stuck?!
My question is if there is an easy way to track gradients, similar to wandb.watch
@<1523705099182936064:profile|GrievingDeer61> not at the moment, but should be fairly easy to add.
Usually torch examples just use TB as a default logging, which would go directly to clearml , but this is a great idea to add
Could probably go straight to the next version 🙂
wdyt?
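In the meantime a DIY version is small. A hedged sketch, assuming a torch-style model whose named_parameters() yields (name, param) pairs with the gradient already flattened to floats (with torch you'd use param.grad.flatten().tolist()), and a ClearML Logger (report_scalar is its real reporting call); call it after loss.backward():

```python
import math

def log_gradient_norms(model, logger, iteration):
    # Report the L2 norm of each parameter's gradient as its own scalar
    # series, roughly what wandb.watch does for gradients.
    for name, param in model.named_parameters():
        grad = getattr(param, "grad", None)
        if grad is None:
            continue  # parameter has no gradient yet (e.g. frozen layer)
        norm = math.sqrt(sum(float(g) ** 2 for g in grad))
        logger.report_scalar(title="gradients", series=name,
                             value=norm, iteration=iteration)
```

With ClearML the logger would be Logger.current_logger() from the running task.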
Hi @<1572395184505753600:profile|GleamingSeagull15>
Try adjusting:
None
to 30 sec
It will reduce the number of log reports (i.e. API calls)
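If the setting in question is the reporting period, in clearml.conf it would look something like this (an assumption: sdk.development.worker.report_period_sec is the relevant knob):

```
sdk {
  development {
    worker {
      # flush metric/console reports every 30 sec instead of the default
      report_period_sec: 30
    }
  }
}
```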
p.s. you should remove this line 🙂
extra_index_url: ["git@github.com:salimmj/xxxx"]
Of course, I used "localhost"
Do not use "localhost"; use your IP. Then it will be registered with a URL that points to the IP, and it will work
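For example, in clearml.conf the endpoints would point at the machine's IP (the ports shown are the ClearML server defaults; the IP itself is illustrative):

```
api {
  web_server: http://192.168.1.42:8080
  api_server: http://192.168.1.42:8008
  files_server: http://192.168.1.42:8081
}
```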
Requested version: 2.28, Used version 1.0" for some reason
This is fine; it means there is no change in that API
Hi VexedCat68
One of my steps just finds the latest model to use. I want the task to output the id, and the next step to use it. How would I go about doing this?
When you say "I want the task to output the id" do you mean to pass it to the next step:
Something like this one:
https://github.com/allegroai/clearml/blob/c226a748066daa3c62eddc6e378fa6f5bae879a1/clearml/automation/controller.py#L224
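The data flow itself is just a return value feeding the next step's argument. A plain-Python sketch (in a real pipeline both functions would be decorated pipeline components, and the id lookup here is a placeholder):

```python
def find_latest_model():
    # Placeholder: a real step would query the backend for the newest model id.
    return "aabb11"

def use_model(model_id):
    # The next step receives the id as an ordinary argument.
    return "loading model " + model_id

model_id = find_latest_model()
print(use_model(model_id))
```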
You can however pass a specific Task ID and it will reuse it: reuse_last_task_id="aabb11". Would that help?
Hmm, I'm sorry, it might be "continue_last_task". Can you try:
Task.init(..., continue_last_task="aabb11")
TrickySheep9 Yes, let's do that!
How do you PR a change ?
HurtWoodpecker30 could it be you hit a limit of some sort ?
Since you are running in venv mode, adding the OS environment variable before launching the clearml-agent will make sure it propagates to the process itself.
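For example (the variable name and value are purely illustrative; the agent launch line is shown commented so the snippet stands alone):

```shell
# Export the variable in the same shell, then start the agent; in venv mode
# the spawned task process inherits it.
export MY_VARIABLE=value
# clearml-agent daemon --queue default
```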
ReassuredTiger98 make sense ?
main clearml repo?
Yep that sounds right 🙂 thank you!
Hi RoundMosquito25
What do you mean by "local commits" ?
As a hack you can try DEFAULT_VERSION
(it's just a flag and should basically do Store)
EDIT: sorry that won't work 😞
Notice you should be able to override them in the UI (under the Args section)
Hi @<1785479228557365248:profile|BewilderedDove91>
It's all about the databases under the hood, so 8 GB is really a must
so all models are part of the same experiment and have the experiment name in their name.
Oh, that explains it. (1) You can use the model filename to control the model name in ClearML; (2) you can disable the autologging and manually upload the model, which lets you control the model name
wdyt?
However, when 'extra' is a positional argument, it is transformed to 'str'
Hmm... okay let me check something
Yes, the one you create manually is not really of the same "type" as the one you create online; this is why you do not see it there 😞
Hi ClumsyElephant70
Any idea how to get the credentials in there?
How about mapping it into the docker with -v? You can set it here:
https://github.com/allegroai/clearml-agent/blob/0e7546f248d7b72f762f981f8d9033c1a60acd28/docs/clearml.conf#L137
extra_docker_arguments: ["-v", "/host/folder/cred.json:/gcs/cred.json"]
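In clearml.conf that line sits under the agent section; something like this (the in-container path, and pointing GOOGLE_APPLICATION_CREDENTIALS at it afterwards, are assumptions about how the credentials file will be consumed):

```
agent {
  # mount the host credentials file into every task container
  extra_docker_arguments: ["-v", "/host/folder/cred.json:/gcs/cred.json"]
}
```

Then inside the container your code would read it from /gcs/cred.json (e.g. via GOOGLE_APPLICATION_CREDENTIALS for GCS clients).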