Do you mean when calling "PipelineDecorator.debug_pipeline()" ?
ReassuredTiger98 Nice digging and Ouch...that isn't fun. Let me see how quickly I can get eyes on this 🙂
JitteryCoyote63 ReassuredTiger98
Could you please try with the latest agent 1.5.2rc0 and let us know if it solved the issue?
Yeah I guess that's the culprit. I'm not sure clearml and wandb were planned to work together and we are probably interfering with each other. Can you try removing the wandb model save callback and try again with output_uri=True?
Also, I'd be happy to learn of your use-case that uses both clearml and wandb. Is it for eval purposes or anything else?
Hi Tim, Yes we know there are a few broken links in the docs.
We've been hard at work building a new documentation site, which should bring a bit more order and explain ClearML a bit better! Expect it very soon!
Hey GrotesqueDog77
A few things: first, you can call _logger.flush(), which should solve the issue you're seeing (we're working on adding auto-flushing when tasks end 🙂)
Second, I ran this code and it works for me without a sleep, does it also work for you?
` from clearml import PipelineController

def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController
    data = {'Name': ['Tom', 'nick', 'krish', 'jack'],
            'Age': [20, 21, 19, 18]}
    _logger...
Let me know; if this still doesn't work, I'll try to reproduce your issue 🙂
The ClearML team appreciates bitching anywhere you feel like it (especially the memes section).
In the absence of a dedicated UI/UX channel, I suggest just writing here. I can promise you the people whose responsibility it is to fix/improve the UI are roaming here and will see the request 😄
You can also open GitHub issues; it helps us prioritise features according to how many comments/upvotes they receive.
MiniatureCrocodile39 Thanks for reporting...I guess 6 eyes are better than 4? 😄 I'll get it fixed 🙂
Hi Moki, Great idea! We'll add it to our plans and update here once it's done 😄
Hi DefeatedMoth52 , so the reason why we don't support --find-links is that it is not in the requirements.txt standard (Or so I'm told 😄 )
What can be done is just putting the specific links to the wheel (something like https://data.dgl.ai/wheels/dgl-0.1.2-cp35-cp35m-macosx_10_6_x86_64.whl ) in the requirements.txt, and this should work. Makes sense?
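For example (using the dgl wheel URL from the message above; swap in whatever wheel matches your Python version and OS), the requirements.txt would look like:

```text
# instead of a --find-links index, point directly at the wheel file
https://data.dgl.ai/wheels/dgl-0.1.2-cp35-cp35m-macosx_10_6_x86_64.whl
```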
Hi FierceHamster54 can you try another instance type? I just tried with n1 and it works. We are looking to see if it's instance type related
PyTorch wheels are always a bit of a problem; AFAIK this error means there isn't a wheel matching the CUDA version specified/installed on the machine. You can try pinning PyTorch to exact versions, which usually solves the issue.
ExcitedFish86 You came to ClearML because it's free, you stayed because of the magic 🎊 🎉
ReassuredTiger98 , PyTorch installations are a sore point 🙂 Can you maybe try to specify a specific build and see if it works?
Yeah! I think maybe we don't parse the build number..let me try 🙂
ReassuredTiger98 I think it works for me 🙂
I added this to the requirements (you can put the extra-index-url in clearml.conf), and I've enabled the torch nightly flag:
--extra-index-url https://download.pytorch.org/whl/nightly/cu117
clearml
torch == 1.14.0.dev20221205+cu117
torchvision == 0.15.0.dev20221205+cpu
In the installed packages I got:
- 'torch==1.14.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torch-1.14.0.dev20221205%2Bcu117-cp38-cp38-linux_x86_64.whl '
- torchtriton==2.0.0+0d7e753227
- 'torchvision==0.15.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torchvision-0.15.0.dev20221205%2Bcpu-cp38-cp38-linux_x86_64.whl '
Why not add the extra_index_url to the installed packages part of the script? Worked for me 😄
Sorry, not of the script, of the Task. I just added --extra-index-url to the "Installed Packages" section, and it worked.
` pipe = PipelineController(
    project='examples',
    name='Pipeline demo',
    version='1.1',
    add_pipeline_tags=False,
)
# set the default execution queue to be used (per step we can override the execution)
pipe.set_default_execution_queue('default')
# add pipeline components
pipe.add_parameter(
    name='url',
    description='url to pickle file',
    default=' '
)
pipe.add_function_step(
    name='step_one',
    function=step_one,
    function_kwargs=dict(pickle_data_url='${pi...
Did you try with function_kwargs?
Try this, I tested it and it works: docker=pipe._parse_step_ref("${pipeline.url}")
It's hack-ish but it should work. I'll try and get a fix in one of the upcoming SDK releases that supports parsing references for parameters other than kwargs
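Put together, the workaround would look roughly like this. This is only a sketch, not a definitive implementation: it needs a live ClearML setup to actually run, and step_one plus the 'url' pipeline parameter are assumed from the pipeline example earlier in the thread:

```python
# hack-ish: manually resolve the "${pipeline.url}" reference, since only
# function_kwargs supports automatic reference parsing
docker_image = pipe._parse_step_ref("${pipeline.url}")

pipe.add_function_step(
    name='step_one',
    function=step_one,
    function_kwargs=dict(pickle_data_url='${pipeline.url}'),  # references work here
    docker=docker_image,  # ...but not here, hence the manual _parse_step_ref call above
)
```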
Hey GrotesqueDog77 , so it seems like references only work on "function_kwargs" and not on other function step parameters.
I'm trying to figure out if there's some workaround we can offer 🙂
AHHHHHHHHHHHH! That makes more sense now 😄 😄
Checking 🙂