Could you please try with the latest agent 1.5.2rc0 and let us know if it solved the issue?
Hi DefeatedMoth52 , so the reason we don't support --find-links is that it's not part of the requirements.txt standard (or so I'm told 😄 )
What can be done is just putting the specific links to the wheel (something like https://data.dgl.ai/wheels/dgl-0.1.2-cp35-cp35m-macosx_10_6_x86_64.whl ) in the requirements.txt, and this should work. Makes sense?
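For example, a requirements.txt along these lines (using the wheel URL from the message above; pick the wheel that matches your platform and Python version):

```text
# requirements.txt -- reference the wheel by direct URL instead of --find-links
https://data.dgl.ai/wheels/dgl-0.1.2-cp35-cp35m-macosx_10_6_x86_64.whl
```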
I'll check with R&D if this is the plan or we have something else we planned to introduce and update you
If the spot instance is taken from you then yes, it will be (unless there's some drive persistence)
You get all the features that are available for the hosted version such as experiment management, orchestration (with ClearML agent), data management (with ClearML Data), model serving (with ClearML serving) and more 🙂
Why not add the extra_index_url to the installed packages part of the script? Worked for me 😄
EnviousStarfish54 VivaciousPenguin66 So for the random seed, we have a way to save it, so this should be possible and reproducible.
As for execution progress I totally agree. We do have our pipelining solution but I see it's very common to use us only for experiment tracking and use other tools for pipelining as well.
Not trying to convert anyone, but may I ask why you chose another tool over the built-in pipelining feature in ClearML? Anything missing? Or did you just build the in...
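As a plain-Python illustration of the random-seed point above (no ClearML involved): once the seed is saved, re-running with the same seed reproduces the same draws.

```python
import random

def run_experiment(seed):
    # Re-seeding with the same saved value reproduces the same "random" draws.
    random.seed(seed)
    return [random.randint(0, 100) for _ in range(5)]

first = run_experiment(42)
second = run_experiment(42)
assert first == second  # same saved seed -> reproducible run
```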
DilapidatedDucks58 , We have a hunch we know what's wrong (we think we treat loading data like loading a model, and then we register each file / pickle of files as a model, which takes time). How are you loading the data? Is monai built into pytorch, or are you downloading it and loading it manually? If you can share the loading code, that might be helpful 🙂
This gets me the artifact that I return in step1
I think this is what you wanted
Now in step2, I add a pre_execute_callback
You can use pre/post step callbacks.
If you're using method decorators like https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py , calling the steps is just like calling functions (The pipeline code translates them to tasks). Then the pipeline is a logic you write on your own and then you can add whatever logic needed. Makes sense?
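To make the "steps are just function calls" point concrete, here is a toy stand-in for the decorator mechanics (this is NOT the real ClearML API — see the linked pipeline_from_decorator.py for that; the real decorator turns each call into a task):

```python
executed_steps = []

def component(func):
    # Toy stand-in: record the call; in ClearML this would launch a task.
    def wrapper(*args, **kwargs):
        executed_steps.append(func.__name__)
        return func(*args, **kwargs)
    return wrapper

@component
def load_data():
    return [1, 2, 3]

@component
def process(data):
    return [x * 2 for x in data]

def pipeline():
    # The pipeline body is plain Python logic: calls, branches, loops.
    data = load_data()
    return process(data)

result = pipeline()
print(executed_steps)  # ['load_data', 'process']
print(result)          # [2, 4, 6]
```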
pipe.add_step(
    name='stage_process',
    parents=['stage_data', ],
    base_task_project='examples',
    base_task_name='pipeline step 2 process dataset',
    parameter_override={
        'General/dataset_url': '${stage_data.artifacts.dataset.url}',
        'General/test_size': 0.25,
    },
    pre_execute_callback=pre_execute_callback_example,
    post_execute_callback=post_execute_callback_example,
)
in the pre_execute_callback, you can actually access any task in the pipeline. You can either directly access a node (task) in the pipe like the example above, or you can use the parent like this:

    pipe._nodes[a_node.parents[0]].job.task.artifacts
So I'm looking at the example in the github, this is step1:

    def step_one(pickle_data_url):
        # make sure we have scikit-learn for this step, we need it to use to unpickle the object
        import sklearn  # noqa
        import pickle
        import pandas as pd
        from clearml import StorageManager
        pickle_data_url = \
            pickle_data_url or \
            '...'
        local_iris_pkl = StorageManager.get_local_copy(remote_url=pickle_data_url)
        with open(local_iris_pkl, 'rb') as f:
            iris ...
And in the pre_execute_callback, I can access this:

    a_pipeline._nodes[a_node.parents[0]].job.task.artifacts['data_frame']
pipe._nodes['stage_data'].job.task.artifacts
If you return False (or 0, not 100% sure 🙂 ) from a pre_execute_callback, the step just won't run.
Makes sense?
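The callback shape discussed above can be sketched as a plain function (the signature matches the one in the ClearML pipeline examples; the stub objects below only stand in for the real pipeline/node objects ClearML passes in):

```python
from types import SimpleNamespace

def pre_execute_callback_example(a_pipeline, a_node, current_param_override):
    # Reach the step's first parent through the pipeline's node map and
    # inspect its task artifacts.
    parent = a_pipeline._nodes[a_node.parents[0]]
    print('parent artifacts:', list(parent.job.task.artifacts.keys()))
    # Returning False would skip this step; True (or None) lets it run.
    return True

# Stub objects standing in for what ClearML passes (illustration only):
stub_pipeline = SimpleNamespace(_nodes={
    'stage_data': SimpleNamespace(
        job=SimpleNamespace(task=SimpleNamespace(artifacts={'dataset': '...'})))
})
stub_node = SimpleNamespace(parents=['stage_data'])
print(pre_execute_callback_example(stub_pipeline, stub_node, {}))  # True
```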
Hi OutrageousSheep60 , we have good news and great news for you! (JK, it's all great 😄 ). In the coming week or two we'll release the ability to also add links to clearml-data, so you can bring your s3 (or any other cloud) and local files as links (instead of uploading to the server). 🎉
OutrageousSheep60 took a bit longer but SDK 1.4.0 is out 😄 please check the links feature in clearml-data 🙂
Sorry, not of the script, of the Task. I just added --extra-index-url to the "Installed Packages" section, and it worked.
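In other words, the "Installed Packages" section of the Task is in requirements.txt format, so it can carry pip options directly. Something like this (the index URL and package names below are illustrative, not real):

```text
--extra-index-url https://my.private.index/simple
clearml
some-private-package==1.0.0
```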
We're in the brainstorming phase of what are the best approaches to integrate, we might pick your brain later on 😄
If you can open a git issue to help tracking and improve visibility, that'll be very awesome!
And yes, we are going to revisit our assumptions for the model object and add more stuff to it. Our goal is for it to carry just enough info to give you actionable information (i.e., how accurate is it? How fast? How much power does it use? How big is it? and other information), but not be as comprehensive as a task. Something like a lightweight task 🙂 This is one thing we are considering though.
ExcitedFish86 You came to ClearML because it's free, you stayed because of the magic 🎊 🎉
We post server and SDK updates here. For RCs we're still not amazing 🙂
As we always say, you came because it's free, you stayed because features are being released before git issues are even opened 😉
Thanks for contributing back with ideas and inputs! 😄