It would, but it would do the trick: you can update the iteration 0 value and you'll be able to see it in the table. Or do you mean you ONLY want to see it in the table and not in the scalars tab?
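For context, a minimal sketch of what "updating the iteration 0 value" could look like via the SDK; the project, task and metric names are placeholders, not from the original thread:
```python
from clearml import Task

# Hypothetical example: re-report the scalar at iteration 0 so the value shows up in the table.
# Project, task, title and series names below are placeholders.
task = Task.init(project_name="examples", task_name="report at iteration 0")
task.get_logger().report_scalar(title="metrics", series="accuracy", value=0.93, iteration=0)
```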
TrickySheep9 Tough question 😄 We are working on a major change to pipelines. We are now documenting pre/post step callbacks (so people can write custom code that interacts with the pipeline, independent of the script's code; a sketch follows below).
We're working on adding the ability to run small code snippets directly on the pipeline controller task (so you don't have to wait for an agent to setup).
AND we are working on a new UI soon 🙂
A tiny spoiler is that we'll soon improve our visibility and...
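As a rough illustration of the pre/post step callbacks mentioned above, here's a minimal sketch assuming the pre_execute_callback / post_execute_callback arguments of PipelineController.add_step; project, task and step names are placeholders:
```python
from clearml import PipelineController

# Hedged sketch: the callback hooks are assumed to be the ones referred to above;
# all project/task/step names are placeholders.
def before_step(pipeline, node, parameters):
    # Runs on the pipeline controller right before the step is launched.
    # Returning False would skip the step.
    print(f"About to launch '{node.name}' with parameters {parameters}")
    return True

def after_step(pipeline, node):
    # Runs on the controller once the step's task has completed.
    print(f"Step '{node.name}' finished")

pipe = PipelineController(name="callback example", project="examples", version="1.0")
pipe.add_step(
    name="train",
    base_task_project="examples",
    base_task_name="training task",
    pre_execute_callback=before_step,
    post_execute_callback=after_step,
)
pipe.start()
```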
AFAIK, max spin-up time is the maximum lifetime of the agent (busy or idle), and max idle time is the maximum time the agent is allowed to stay idle.
Hey there Jamie! I'm Erez from the ClearML team and I'd be happy to touch on some points that you mentioned.
First and foremost, I agree with the first answer that was given to you on Reddit. There's no "right" tool; most tools are right for the right people, and if a tool is too much of a burden, then maybe it isn't right!
Second, I have to say the use of SVN is a "bit" of a hassle. The MLOps space HEAVILY leans towards git. We interface with git and so does every other tool I know of. That ...
BattySeahorse19 replacing k8s is a very tall order 🙂 What ClearML means when it says orchestration is taking care of the environment for running experiments. This is achieved by using the ClearML Agent which, once installed, can fetch tasks from execution queues (which allows you to build management on top, such as fairness, load distribution and so on). Once a task is fetched, the agent takes care of everything the task needs, from cloning the repository to installing dependencies to pulling specific docker images. ...
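To make the queue flow concrete, here's a hedged sketch of enqueueing a task for an agent to pick up (an agent serving the queue, e.g. started with `clearml-agent daemon --queue default --docker`, does the rest); the project, task and queue names are placeholders:
```python
from clearml import Task

# Placeholder names: an agent listening on the "default" queue will clone the repo,
# install the dependencies and run the task inside the requested docker image.
template = Task.get_task(project_name="examples", task_name="training task")
cloned = Task.clone(source_task=template, name="training task (remote run)")
Task.enqueue(cloned, queue_name="default")
```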
ReassuredTiger98 I think it works for me 🙂
I added this to the requirements (you can put the extra-index-url in clearml.conf; a conf sketch follows the list below), and I've enabled the torch nightly flag:
--extra-index-url https://download.pytorch.org/whl/nightly/cu117
clearml
torch == 1.14.0.dev20221205+cu117
torchvision == 0.15.0.dev20221205+cpu
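For reference, a hedged sketch of what those clearml.conf settings might look like (section and key names assumed, not taken from the original thread):
```
agent {
  package_manager {
    # mirrors the --extra-index-url line from the requirements above
    extra_index_url: ["https://download.pytorch.org/whl/nightly/cu117"]
    # the "torch nightly flag" mentioned above
    torch_nightly: true
  }
}
```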
Hi Doron, as a matter of fact yup 🙂 The next version will include a similar feature. The plan is to have it released mid-December, so stay tuned 😄
@<1590514584836378624:profile|AmiableSeaturtle81> Cool to see the community building such things! 🙂 If this works out for you, we'll be happy if you share your process!
A question both to you and @<1541954607595393024:profile|BattyCrocodile47> , what compels you to use a different orchestrator? Anything missing from the ClearML orchestration layer?
Yeah! I think maybe we don't parse the build number..let me try 🙂
The upload method (which has an SDK counterpart) allows you to specify where to upload the dataset to.
Once defined, the new dataset will have the content of all its parents. Then you can add / modify / remove files from it and commit a new dataset.
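A minimal sketch of that flow using the Dataset SDK; the project, dataset names and the output URL are placeholders:
```python
from clearml import Dataset

# Placeholder project/dataset names and storage URL.
parent = Dataset.get(dataset_project="examples", dataset_name="base dataset")
child = Dataset.create(
    dataset_project="examples",
    dataset_name="base dataset v2",
    parent_datasets=[parent],                       # starts with all of the parent's content
)
child.add_files(path="new_samples/")                # add or modify files
child.remove_files(dataset_path="old_file.csv")     # remove a file inherited from the parent
child.upload(output_url="s3://my-bucket/datasets")  # choose where the data is uploaded
child.finalize()                                    # "commit" the new dataset version
```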
Happy our intention was still clear
@<1541954607595393024:profile|BattyCrocodile47> Thanks a lot for the explanation! These inputs help us a lot in building our tools and, eventually, in building users' trust in them 🙂 Let us know which orchestrator you ended up with and how it's going!
Hi SillySealion58 ,
I'm Erez from ClearML! I'm revisiting the way we manage models on tasks and stumbled upon this unanswered question and wanted to help!
The code below works and creates 2 models. Note that we capture the models when you call torch.save() and we save the filename. The filename is also the "name" which you can use to modify models later on. If this is still relevant, I'd be happy if you could tell me whether it worked!
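The original snippet isn't preserved in this thread; a minimal reconstruction of the idea, with placeholder project, task and file names:
```python
import torch
import torch.nn as nn
from clearml import Task

# Placeholder project/task names.
task = Task.init(project_name="examples", task_name="two output models")

model_a = nn.Linear(8, 1)
model_b = nn.Linear(8, 2)

# ClearML hooks torch.save(), so each call registers an output model on the task.
# The file name becomes the model's name, which you can later use to find / modify it.
torch.save(model_a.state_dict(), "model_a.pt")
torch.save(model_b.state_dict(), "model_b.pt")
```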
When you create a new dataset you can define one or more parents. By default, a dataset has no parents.
Are you using the OSS version or the hosted one (app.clear.ml)? The ClearML enterprise offering has a built-in annotator. Please note that this was meant more for correcting annotations during the development process rather than mass annotating lots of images.
Hi Jax, I'm working on a few more examples of how to use clearml-data; they should be released in a few weeks (with some other documentation updates). These, however, don't include the use case you're talking about. Would you care to elaborate more on that? Are you looking to store the code that created the data in the execution part of the task that saves the data itself?
Did you try with function_kwargs?
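If this refers to pipeline function steps, a hedged sketch of passing arguments via function_kwargs could look like this; the step function and values are made up:
```python
from clearml import PipelineController

def train(dataset_id, epochs):
    # Placeholder step body.
    print(f"training on {dataset_id} for {epochs} epochs")

pipe = PipelineController(name="function_kwargs example", project="examples", version="1.0")
pipe.add_function_step(
    name="train_step",
    function=train,
    function_kwargs={"dataset_id": "abc123", "epochs": 10},  # placeholder arguments
)
pipe.start_locally(run_pipeline_steps_locally=True)
```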
In the installed packages I got:
- 'torch==1.14.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torch-1.14.0.dev20221205%2Bcu117-cp38-cp38-linux_x86_64.whl '
- torchtriton==2.0.0+0d7e753227
- 'torchvision==0.15.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torchvision-0.15.0.dev20221205%2Bcpu-cp38-cp38-linux_x86_64.whl '
get_local_copy() will download the file to your cache and return its path.
Hi GrittyHawk31 , maybe I'm missing something, but what stops you from using Dataset.get() in the preprocessing script? Is there a limitation on it?
Hi EnviousStarfish54, if you don't want to send info to the server, I suggest setting an environment variable; this way, as long as the machine has this env var set, it won't send anything to the server.
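One such switch is ClearML's offline mode; a hedged sketch assuming the CLEARML_OFFLINE_MODE environment variable (set before Task.init() runs), with placeholder project and task names:
```python
import os

# Assumed variable name; with offline mode on, the task records locally
# instead of sending anything to the server.
os.environ["CLEARML_OFFLINE_MODE"] = "1"

from clearml import Task

task = Task.init(project_name="examples", task_name="offline run")
```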
And as for clearml-data, I would love to have more examples but I'm not 100% sure what to focus on, as using clearml-data is a bit... simple? In my completely biased eyes. I assume you're looking for workflow examples, and would love to get some inspiration 🙂
LOL Love this Thread and sorry I didn't answer earlier!
VivaciousPenguin66 EnviousStarfish54 I totally agree with you. We do have answers to "how do you do X or Y" but we don't have workflows really.
What would be a logical place to start? Would something like "training a YOLOv3 person detector on the COCO dataset and how to enable continuous training (let's say adding the PASCAL dataset afterwards)" be something interesting?
The only problem is the friction between atomic and big picture. In...