VexedCat68 you mean the artifact in the previous step is called "merged_dataset_id"? Is it an artifact or a parameter? And what issues are you having with accessing it?
JitteryCoyote63 you should've talked about a million dollars because we just discussed this today as it's also based on pytorch-ignite!
So I'm looking at the example in the github, this is step1:

def step_one(pickle_data_url):
    # make sure we have scikit-learn for this step, we need it to use to unpickle the object
    import sklearn  # noqa
    import pickle
    import pandas as pd
    from clearml import StorageManager
    pickle_data_url = \
        pickle_data_url or \
        '...'
    local_iris_pkl = StorageManager.get_local_copy(remote_url=pickle_data_url)
    with open(local_iris_pkl, 'rb') as f:
        iris ...
pipe._nodes['stage_data'].job.task.artifacts
And in the pre_execute_callback, I can access this:
a_pipeline._nodes[a_node.parents[0]].job.task.artifacts['data_frame']
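For anyone reading along, here is a rough sketch of how that access could sit inside a full callback. The 'data_frame' artifact name comes from the snippet above; the function body beyond that is my own untested illustration, not something from the example repo:

```python
# Sketch of a pre_execute_callback that reads an artifact produced by the
# parent step. Assumes the parent registered an artifact named 'data_frame'
# (as in the snippet above); not verified against a live server.
def pre_execute_callback(a_pipeline, a_node, current_param_override):
    # a_node.parents[0] is the name of the first parent step
    parent_node = a_pipeline._nodes[a_node.parents[0]]
    # download and deserialize the parent's artifact
    data_frame = parent_node.job.task.artifacts['data_frame'].get()
    print('parent artifact shape:', getattr(data_frame, 'shape', None))
    # returning False would skip executing this node
    return True
```

Returning True lets the node run as usual; the callback fires before the node's job is launched, which is why the parent's artifacts are already available.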
We update for server and SDK here. For RCs we're still not amazing 🙂
Hmm, I'm not 100% sure I follow. You have multiple models doing predictions. Is there a single data source that feeds all of them and they run in parallel, or is one model's output another's input and they run serially?
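To make the distinction concrete, here's a toy illustration with trivial functions standing in for the real models (the names and operations are made up purely to show the two topologies):

```python
# Toy stand-ins for the two models being discussed.
def model_a(x):
    return x + 1

def model_b(x):
    return x * 2

def run_parallel(data):
    # one data source fans out to both models independently
    return {'a': model_a(data), 'b': model_b(data)}

def run_serial(data):
    # one model's output becomes the next model's input
    return model_b(model_a(data))

print(run_parallel(3))  # {'a': 4, 'b': 6}
print(run_serial(3))    # 8
```

The answer changes how you'd structure a pipeline: the parallel case is two independent steps with a shared parent, while the serial case is a chain of dependent steps.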
Hi TenseOstrich47 Yup 🙂 You can check our scheduler module:
https://github.com/allegroai/clearml/tree/master/examples/scheduler
It supports time-events as well as triggers to external events
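As a rough sketch of what using it looks like (the task ID and queue names below are placeholders, and I haven't run this exact snippet, so double-check it against the examples in the repo):

```python
# Untested sketch based on the scheduler examples; the task ID and queue
# names are placeholders you would replace with your own.
def build_scheduler():
    from clearml.automation import TaskScheduler

    scheduler = TaskScheduler()
    # re-launch an existing task every day at 07:30 into the 'default' queue
    scheduler.add_task(
        schedule_task_id='<your-task-id>',
        queue='default',
        minute=30,
        hour=7,
    )
    # run the scheduler itself as a long-lived service
    scheduler.start_remotely(queue='services')
    return scheduler
```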
That's how I see the scalar comparison, no idea which is the "good" and which is the "bad"
Thanks! 😄 As I've mentioned above, these features were chosen based on user feedback, so keep it up, and thanks again!
pytorch wheels are always a bit of a problem, and AFAIK this error means there isn't a wheel matching the CUDA version specified or installed on the machine. You can try pinning pytorch to exact versions, which usually solves the issue.
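For example, pinning exact CUDA-matched wheels in requirements.txt usually looks like this (the version numbers here are only examples; use the ones matching your installed CUDA):

```
# requirements.txt -- versions below are examples, match them to your CUDA
--extra-index-url https://download.pytorch.org/whl/cu117
torch==1.13.1+cu117
torchvision==0.14.1+cu117
```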
Hi MysteriousSeahorse54, how are you saving the models? torch.save()? If you're not specifying output_uri=True, it makes sense that you can't download them, as they are local files 🙂
And when you put output_uri=True, does no model appear in the UI at all?
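For context, a minimal sketch of what enabling the upload looks like (project and task names are made up, and I haven't run this exact snippet):

```python
# Untested sketch; project and task names are placeholders.
def init_task_with_upload():
    from clearml import Task

    # output_uri=True uploads torch.save() checkpoints to the ClearML
    # files server, so they show up as downloadable models in the UI.
    # output_uri can also point at explicit storage, e.g. 's3://my-bucket/models'
    return Task.init(
        project_name='examples',
        task_name='torch checkpoints',
        output_uri=True,
    )
```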
As for experiment management, I'd say (and this community can be my witness 🙂 ) that managing your own experiments isn't a great idea. First, you have to maintain the infra (whatever it is, a tool you wrote yourself or an Excel sheet), which isn't fun and consumes time. From what I've heard, it usually takes at least 50% more time than you initially think. And since there are so many tools out there that do it for free, the only reason I can imagine for doing it on your own would be if y...
Hi, in addition to natanM's question, does it fail on the trigger or when running the script? If running with a worker, please share the worker logs as well!
Sorry about the inconvenience... We are updating our website and the docs are the victims... should be resolved soon.
Hi TenseOstrich47
You can also check this video out on our youtube channel:
https://youtu.be/gPBuqYx_c6k
It's still branded as trains (our old brand) but it applies to clearml just the same!
Hey There SlimyRat21
We did a small integration of Trains with a Doom agent that uses reinforcement learning.
https://github.com/erezalg/ViZDoom
What we did is basically change the structure of how parameters are caught (so we can modify them from the UI), then log things like loss, location on the map, frame buffers at certain times, and end-of-episode information that might be helpful for us.
You can see how it looks on the demoapp (as long as it lasts 🙂 )
Let me know if...
We'll check this. I assume we either don't catch the error somehow, or the process doesn't indicate that it died failing.
TrickySheep9 Tough question 😄 We are working on a major change to pipelines. We are now documenting pre/post step callbacks (so people can write custom code that interacts with the pipeline, independent of the script's code).
We're working on adding the ability to run small code snippets directly on the pipeline controller task (so you don't have to wait for an agent to setup).
AND we are working on a new UI soon 🙂
A tiny spoiler is that we'll soon improve our visibility and...
JitteryCoyote63 I'm not sure we can get to it fast enough, unfortunately 😞 (It only means we have cooler stuff that we're working on 😄 )
As we always say, you came because it's free, you stayed because features are being released before git issues are even opened 😉
Thanks for contributing back with ideas and inputs! 😄
"stopped" is the client's name for "aborted"
Try this, I tested it and it works:
docker = pipe._parse_step_ref("${pipeline.url}")
It's hack-ish but it should work. I'll try and get a fix in one of the upcoming SDK releases that supports parsing references for parameters other than kwargs
DilapidatedDucks58 Do you see in the project card the overview tab? On top you are prompted to select a metric snapshot. Do you see it?
It's a known fact that documentation always trail features by 3-6 months 😄 We're working on new docs, should be released this week 🙂
Hi Jax, I'm working on a few more examples of how to use clearml-data. should be released in a few weeks (with some other documentation updates). These however don't include the use case you're talking about. Would you care to elaborate more on that? Are you looking to store the code that created the data, in the execution part of the task that saves the data itself?