Just to make sure, if you change the title to "mean top four accuracy" it should work OK
It's a known fact that documentation always trails features by 3-6 months 😄 We're working on new docs, should be released this week 🙂
This is what I'm seeing; the > is the title - series relation. I'm not 100% clear why the iteration is a problem, could you elaborate?
what's the value of ARGS.model? is it "4"?
BattySeahorse19 replacing k8s is a very tall order 🙂 What ClearML means when it says orchestration is taking care of the environment for running experiments. This is achieved by using ClearML Agent which, once installed, can fetch tasks from execution queues (which allows you to build management logic on top, such as fairness, load distribution and so on). Once a task is fetched it takes care of everything it needs, from cloning the repository to installing dependencies to pulling specific dockers. ...
And yes, we are going to revisit our assumptions for the model object, adding more stuff to it. Our goal is for it to have just enough info so you can have actionable information (i.e., how accurate is it? How fast? How much power does it draw? How big is it? and other information), but not as comprehensive as a task. Something like a lightweight task 🙂 This is one thing we are considering though.
If you're using method decorators like https://github.com/allegroai/clearml/blob/master/examples/pipeline/pipeline_from_decorator.py , calling the steps is just like calling functions (the pipeline code translates them to tasks). The pipeline itself is logic you write on your own, so you can add whatever logic is needed. Makes sense?
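To illustrate the idea (this is a toy sketch in plain Python, not clearml's actual implementation): a decorator can register each call as a pipeline step while the caller still just calls functions.

```python
# Toy sketch of a "decorator pipeline": each decorated call is recorded
# as a step, but the caller writes ordinary Python.
registered_steps = []

def pipeline_step(func):
    def wrapper(*args, **kwargs):
        registered_steps.append(func.__name__)  # record the step
        return func(*args, **kwargs)            # then run it like a normal call
    return wrapper

@pipeline_step
def prepare(x):
    return x * 2

@pipeline_step
def train(x):
    return x + 1

result = train(prepare(10))  # plain function calls drive the pipeline
print(result)                # 21
print(registered_steps)      # ['prepare', 'train']
```

In the real decorator example linked above, the "recording" also takes care of turning each step into a task that can run remotely.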
Thanks MotionlessMonkey27 , we're looking into that! Thanks for the info
Hi TenseOstrich47 Yup 🙂 You can check our scheduler module:
https://github.com/allegroai/clearml/tree/master/examples/scheduler
It supports time-events as well as triggers to external events
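The two modes (time events and external triggers) can be sketched like this — a minimal pure-Python mock, with assumed names, not the clearml scheduler API:

```python
# Minimal sketch of a scheduler supporting time-based jobs and
# externally-fired triggers (names are illustrative only).
import time

class MiniScheduler:
    def __init__(self):
        self.time_jobs = []  # each entry: [next_run_ts, interval_sec, fn]
        self.triggers = {}   # event_name -> list of handlers

    def every(self, interval_sec, fn):
        self.time_jobs.append([time.time() + interval_sec, interval_sec, fn])

    def on_event(self, name, fn):
        self.triggers.setdefault(name, []).append(fn)

    def fire(self, name):
        # an external event (e.g. "new data arrived") fires its handlers
        return [fn() for fn in self.triggers.get(name, [])]

    def tick(self, now=None):
        # run any time-based jobs that are due, then reschedule them
        now = time.time() if now is None else now
        ran = []
        for job in self.time_jobs:
            if now >= job[0]:
                ran.append(job[2]())
                job[0] = now + job[1]
        return ran

sched = MiniScheduler()
sched.every(0, lambda: "nightly-cleanup")
sched.on_event("new_data", lambda: "retrain")
print(sched.tick())            # ['nightly-cleanup']
print(sched.fire("new_data"))  # ['retrain']
```

The real module in the link above does the same conceptually, except the "handlers" are clearml tasks pushed into execution queues.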
Yeah, with pleasure 🙂
Yeah! I think maybe we don't parse the build number..let me try 🙂
Yeah I guess that's the culprit. I'm not sure clearml and wandb were planned to work together and we are probably interfering with each other. Can you try removing the wandb model save callback and try again with output_uri=True?
Also, I'd be happy to learn of your use-case that uses both clearml and wandb. Is it for eval purposes or anything else?
Hey GrotesqueDog77
A few things, first you can call _logger.flush() which should solve the issue you're seeing (We are working to add auto-flushing when tasks end 🙂 )
Second, I ran this code and it works for me without a sleep, does it also work for you?
` from clearml import PipelineController

def process_data(inputs):
    import pandas as pd
    from clearml import PipelineController
    data = {'Name': ['Tom', 'nick', 'krish', 'jack'],
            'Age': [20, 21, 19, 18]}
    _logger...
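The reason flush() helps can be shown with a toy buffered logger (plain Python, not clearml's Logger): reports sit in a buffer until something flushes them out.

```python
# Toy buffered logger showing why an explicit flush() matters:
# reports accumulate in a buffer and only go out on flush.
class BufferedLogger:
    def __init__(self):
        self._buffer = []
        self.sent = []

    def report_text(self, msg):
        self._buffer.append(msg)  # buffered, not sent yet

    def flush(self):
        self.sent.extend(self._buffer)  # push everything out
        self._buffer.clear()

log = BufferedLogger()
log.report_text("step done")
print(log.sent)  # [] - nothing sent yet
log.flush()
print(log.sent)  # ['step done']
```

If a task ends before the buffer is flushed, whatever is still buffered is lost — which is exactly what auto-flushing on task end is meant to prevent.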
Hi Tim, Yes we know there are a few broken links in the docs.
We've been hard at work building a new documentation site which should bring a bit more order and aim to explain ClearML a bit better! Expect it very soon!
Hey There Jamie! I'm Erez from the ClearML team and I'd be happy to touch on some points that you mentioned.
First and foremost, I agree with the first answer that was given to you on Reddit. There's no "right" tool. Most tools are right for the right people, and if a tool is too much of a burden, then maybe it isn't right!
Second, I have to say the use of SVN is a "bit" of a hassle. The MLOps space HEAVILY leans towards git. We interface with git and so does every other tool I know of. That ...
AHHHHHHHHHHHH! That makes more sense now 😄 😄
Checking 🙂
OutrageousSheep60 took a bit longer but SDK 1.4.0 is out 😄 please check the links feature in clearml-data 🙂
Hi OutrageousSheep60 , we have good news and great news for you! (JK, it's all great 😄 ). In the coming week or two we'll release the ability to also add links to clearml-data, so you can bring your s3 (or any other cloud) and local files as links (instead of uploading to the server). 🎉
Hi, in addition to natanM's question, does it fail when triggered or when running the script directly? If it's running with a worker, please share the worker logs as well!
In the pre_execute_callback, you can actually access any task in the pipeline. You can either directly access a node (task) in the pipe like the example above, or you can use the parent like this: `pipe._nodes[a_node.parents[0]].job.task.artifacts`
Now in step2, I add a pre_execute_callback
This gets me the artifact that I return in step1
I think this is what you wanted
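The traversal itself is simple to picture with a mock — plain Python objects standing in for clearml pipeline nodes (the real callback receives actual node objects, and `artifacts` lives under `node.job.task`):

```python
# Toy mock of walking from a node to its parent's artifacts, mirroring
# pipe._nodes[a_node.parents[0]].job.task.artifacts from the snippet above.
class Node:
    def __init__(self, name, parents, artifacts):
        self.name = name
        self.parents = parents
        self.artifacts = artifacts  # stands in for node.job.task.artifacts

pipe_nodes = {"step1": Node("step1", parents=[], artifacts={"data": [1, 2, 3]})}
pipe_nodes["step2"] = Node("step2", parents=["step1"], artifacts={})

def pre_execute_callback(nodes, node):
    # look up the parent node and read the artifact step1 returned
    parent = nodes[node.parents[0]]
    return parent.artifacts["data"]

print(pre_execute_callback(pipe_nodes, pipe_nodes["step2"]))  # [1, 2, 3]
```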
Hey There SlimyRat21
We did a small integration of Trains with a Doom agent that uses reinforcement learning.
https://github.com/erezalg/ViZDoom
What we did is basically change the structure of how parameters are caught a bit (so we can modify them from the UI), then logged stuff like loss, location on the map, frame buffers at certain times and information about end of episode that might be helpful for us.
You can see how it looks on the demoapp (as long as it lasts 🙂 )
Let me know if...
ImmensePenguin78 we also have a new example for this!
https://github.com/allegroai/clearml/blob/master/examples/reporting/artifacts_retrieval.py
To add to Natan's answer, you can run anything on the services docker, depending on the HW. We don't recommend training with it as the server's machine might get overloaded. What you can do is simple stuff like cleanup or any other routines 🙂