THANK YOU! It's because of user feedback that this feature was made so you can thank yourself and the community while you're at it 😄
I think you should call dataset.finalize()
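To illustrate why finalize() matters, here is a minimal plain-Python sketch of the idea (illustrative only, not the actual ClearML SDK): a dataset accepts changes until it is finalized, after which it is read-only and safe for consumers.

```python
class SketchDataset:
    """Toy finalize-gated dataset (illustrative, not the ClearML Dataset class)."""

    def __init__(self):
        self.files = set()
        self.finalized = False

    def add_files(self, path):
        if self.finalized:
            raise RuntimeError("Dataset is finalized; create a child dataset instead")
        self.files.add(path)

    def finalize(self):
        # Closes the dataset for changes so consumers see an immutable version
        self.finalized = True


ds = SketchDataset()
ds.add_files("data/train.csv")
ds.finalize()
print(ds.finalized)  # True
```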
The new welcome screen to pipelines and our fancy new icon on the left sidebar 😄
Does that answer your question?
Hi TenseOstrich47
You can also check this video out on our youtube channel:
https://youtu.be/gPBuqYx_c6k
It's still branded as trains (our old brand) but it applies to clearml just the same!
In the installed packages I got:
- 'torch==1.14.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torch-1.14.0.dev20221205%2Bcu117-cp38-cp38-linux_x86_64.whl '
- torchtriton==2.0.0+0d7e753227
- 'torchvision==0.15.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torchvision-0.15.0.dev20221205%2Bcpu-cp38-cp38-linux_x86_64.whl '
Oh!!! Sorry 🙂
So...basically it's none of them.
All of these are hosted tiers. The self-hosted option is our open-source ClearML Server, which you can find at https://github.com/allegroai/clearml-server
The readme there explains how to install it and covers some of the options available to you.
Looking at our pricing page, I can see how it's not trivial to get from there to the github page...I'll try to improve that! 😄
Yeah I totally get what you're saying. Basically you want the same code to run locally or remotely, and something external would control whether it runs locally or enqueued to a worker. Am I right?
Oki doke 🙂 I'll see what the great powers of beyond (AKA, R&D folks) will have to say about that!
If you return False (or 0, not 100% sure 🙂) from a pre_execute_callback, the step just won't run.
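The mechanism can be sketched in plain Python (this mimics the idea only; the real ClearML callback signature may differ):

```python
def run_step(step_fn, pre_execute_callback=None):
    """Run a pipeline step unless its pre-execute callback vetoes it."""
    if pre_execute_callback is not None:
        # A falsy return value (False, 0, ...) skips the step entirely
        if not pre_execute_callback():
            return None
    return step_fn()


result = run_step(lambda: "step ran", pre_execute_callback=lambda: False)
print(result)  # None -> the step was skipped
```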
Makes sense?
Hi SquareFish25 , We also have a few webinars discussing these topics (more theoretical and what can be achieved using pipelines), check https://youtu.be/_5Re2GpcRp8 and https://youtu.be/yGg-exQHUfE out!
I'll check with R&D if this is the plan or we have something else we planned to introduce and update you
Once defined, the new dataset will have the content of all its parents. Then you can add / modify / remove files from it and commit a new dataset.
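As a rough sketch of those parent/child semantics (plain Python dicts standing in for file listings, illustrative only, not the SDK):

```python
def create_child(parents):
    """A new dataset starts with the union of its parents' files."""
    files = {}
    for parent in parents:
        files.update(parent)  # later parents override earlier ones on conflict
    return files


parent_a = {"a.txt": "v1", "shared.txt": "v1"}
parent_b = {"b.txt": "v1", "shared.txt": "v2"}

child = create_child([parent_a, parent_b])
child["c.txt"] = "v1"   # add a file
child["a.txt"] = "v2"   # modify a file
del child["b.txt"]      # remove a file
# committing would then freeze this state as a new dataset version
```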
PompousBaldeagle18 Unfortunately no. We thought this to be a promising avenue but have decided, for various reasons, to move on and do other things 😞
Try this, I tested it and it works:
docker = pipe._parse_step_ref("${pipeline.url}")
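For the curious, resolving a ${section.key} reference can be sketched roughly like this (a simple regex-based resolver for illustration, not ClearML's actual implementation):

```python
import re


def parse_step_ref(value, context):
    """Replace ${section.key} placeholders with values from a context dict."""

    def _resolve(match):
        section, key = match.group(1).split(".", 1)
        return str(context[section][key])

    return re.sub(r"\$\{([^}]+)\}", _resolve, value)


context = {"pipeline": {"url": "https://example.com/artifact"}}
print(parse_step_ref("${pipeline.url}", context))  # https://example.com/artifact
```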
It's hack-ish but it should work. I'll try and get a fix in one of the upcoming SDK releases that supports parsing references for parameters other than kwargs
KindGiraffe71 We're working on a new docs version, it'll be there as well!
Hi OutrageousSheep60 , The plan is to release this week / early next week a version that solves this.
Seems like it. Is that an issue?
report_scalar() with a constant iteration is a hack that you can use in the meantime 🙂
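The idea behind the hack, sketched with a toy logger (not the ClearML Logger API): reporting a scalar at a fixed iteration keeps overwriting the same point, so the plot effectively shows a single current value.

```python
scalars = {}


def report_scalar(title, series, value, iteration):
    """Store a scalar sample keyed by (title, series, iteration)."""
    scalars[(title, series, iteration)] = value


# A constant iteration means every report overwrites the same point
for accuracy in (0.71, 0.78, 0.83):
    report_scalar("metrics", "accuracy", accuracy, iteration=0)

print(scalars[("metrics", "accuracy", 0)])  # 0.83
```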
Yeah, that makes lots of sense!
Are you talking about consecutive pipeline steps? Or parallel?
ReassuredTiger98 Nice digging and Ouch...that isn't fun. Let me see how quickly I can get eyes on this 🙂
JitteryCoyote63 Welcome to the wonderful world of coding where some stuff doesn't work and you don't know why and some stuff works and you don't know why 😂
Hi Doron, as a matter of fact yup 🙂 The next version would include a similar feature. Plan is to have it released middle of December so stay tuned 😄
JitteryCoyote63 you should've talked about a million dollars because we just discussed this today as it's also based on pytorch-ignite!
Hmm, can you give a small code snippet of the save code? Are you using a wandb specific code? If so it makes sense we don't save it as we only intercept torch.save() and not wandb function calls
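"Intercept" here means wrapping the save function itself; a minimal sketch of the pattern (plain Python, not the actual ClearML hook):

```python
saved_paths = []


def original_save(obj, path):
    """Stand-in for a framework's save function (e.g. torch.save)."""
    # would serialize obj to disk here
    return None


def patched_save(obj, path):
    saved_paths.append(path)          # the hook records the checkpoint path
    return original_save(obj, path)   # then defers to the real save


# The library monkey-patches the framework's save function; anything that
# bypasses it (e.g. a wandb-specific save helper) is never seen by the hook.
save = patched_save
save({"weights": [1, 2, 3]}, "model.pt")
print(saved_paths)  # ['model.pt']
```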
Hi MelancholyElk85 , documentation is a soft spot for us, we're trying to be better but we aren't always 🙂 https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py is an example; if you check the pipeline folder you'll see others (like pipeline_from_function.py).