I think the best model name is person_detector_lr0.001_batchsz32_accuracy0.63.pkl 😄
Yeah, it might be the cause... I had a script with an OOM issue and it crashed regularly 🙂
Let me circle this back to the UI folks and see if I can get some sort of date attached to this 🙂
report_scalar() with a constant iteration is a hack you can use in the meantime 🙂
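For example, a minimal sketch (the title \ series names here are just placeholders):
from clearml import Task

task = Task.init(project_name='examples', task_name='constant iteration hack')
logger = task.get_logger()

# reporting with a fixed iteration (0) pins the value to a single point,
# so each call overwrites it in place instead of extending the curve
logger.report_scalar(title='summary', series='final_accuracy', value=0.63, iteration=0)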
Hey there SlimyRat21!
We did a small integration of Trains with a Doom agent that uses reinforcement learning.
https://github.com/erezalg/ViZDoom
What we did is basically change the structure of how parameters are caught a bit (so we can modify them from the UI), then logged things like loss, location on the map, frame buffers at certain times, and end-of-episode information that might be helpful for us.
You can see how it looks on the demo app (as long as it lasts 🙂 )
Let me know if...
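In case it's useful, here's roughly what the parameter-capture part boils down to (the parameter names are made up for illustration):
from clearml import Task

task = Task.init(project_name='examples', task_name='doom agent')

# connect a plain dict of hyperparameters to the task; once connected,
# the values show up in the UI and edited values flow back into the dict
params = {'learning_rate': 0.001, 'episodes': 1000}
params = task.connect(params)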
ImmensePenguin78 we also have a new example for this!
https://github.com/allegroai/clearml/blob/master/examples/reporting/artifacts_retrieval.py
Hey, AFAIK, SDK version 1.1.0 disabled the demo server by default (still accessible by setting an envvar).
https://github.com/allegroai/clearml/releases/tag/1.1.0
Is this still an issue even in this version?
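If I remember correctly, the variable is CLEARML_NO_DEFAULT_SERVER (please double-check the release notes, this is from memory):
# opt back into the demo server after upgrading to 1.1.0
export CLEARML_NO_DEFAULT_SERVER=0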
Hi Tim, yes, we know there are a few broken links in the docs.
We've been hard at work building a new documentation site, which should bring a bit more order and explain ClearML a bit better! Expect it very soon!
ReassuredTiger98 Nice digging, and ouch... that isn't fun. Let me see how quickly I can get eyes on this 🙂
That's how I see the scalar comparison; no idea which is the "good" one and which is the "bad" one.
That's true 🙂 Our SDK is Python-based, and your code needs to be Python for us to integrate with it
It's a known fact that documentation always trails features by 3-6 months 😄 We're working on new docs, which should be released this week 🙂
Hi Jax, I'm working on a few more examples of how to use clearml-data. They should be released in a few weeks (with some other documentation updates). These, however, don't include the use case you're talking about. Would you care to elaborate on that? Are you looking to store the code that created the data in the execution part of the task that saves the data itself?
You can use:
from clearml import Task

task = Task.get_task(task_id='ID')
task.artifacts['name'].get_local_copy()
LOL, love this thread, and sorry I didn't answer earlier!
VivaciousPenguin66 EnviousStarfish54 I totally agree with you. We do have answers to "how do you do X or Y", but we don't really have workflows.
What would be a logical place to start? Would something like "training a YOLOv3 person detector on the COCO dataset and how you enable continuous training (let's say adding the PASCAL dataset afterwards)" be something interesting?
The only problem is the friction between atomic and big picture. In...
The upload method (which has an SDK counterpart) allows you to specify where to upload the dataset to.
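A minimal sketch of the SDK side, assuming an S3 destination (the bucket path and names are placeholders):
from clearml import Dataset

dataset = Dataset.create(dataset_name='my-dataset', dataset_project='examples')
dataset.add_files('data/')
# output_url controls where the dataset content is actually stored
dataset.upload(output_url='s3://my-bucket/datasets')
dataset.finalize()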
Can you check again? It works for me. If you're still not able to reach it, can you send an image of the error you're getting?
ZanyPig66, the 2 agents can run from the same Ubuntu account and use the same clearml.conf. If you want each to have its own configuration file, just add --config-file PATH_TO_CONF_FILE and it will use that config file instead. Makes sense?
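Something like this (the queue names and config path are just examples):
clearml-agent daemon --queue default
clearml-agent daemon --queue second_queue --config-file ~/clearml_agent2.conf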
Hey there Jamie! I'm Erez from the ClearML team and I'd be happy to touch on some points that you mentioned.
First and foremost, I agree with the first answer that was given to you on reddit. There's no "right" tool. Most tools are right for the right people, and if a tool is too much of a burden, then maybe it isn't right!
Second, I have to say the use of SVN is a "bit" of a hassle. The MLOps space HEAVILY leans towards git. We interface with git, and so does every other tool I know of. That ...
ReassuredTiger98 I think it works for me 🙂
I added this to the requirements (you can put the extra-index-url in the clearml.conf), and I've enabled the torch nightly flag:
--extra-index-url https://download.pytorch.org/whl/nightly/cu117
clearml
torch == 1.14.0.dev20221205+cu117
torchvision == 0.15.0.dev20221205+cpu
You can also open GitHub issues; it helps us prioritise features according to how many comments \ upvotes they receive.
Hmm, I actually think there isn't a way. Once you have more projects in the system, the project will be pushed down and you won't see it on the front page. Is there any specific reason why you want it removed?
BTW, for new questions I suggest just asking in the clearml-community. I'm really happy to help, but I almost missed this message 😄
In the installed packages I got:
- 'torch==1.14.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torch-1.14.0.dev20221205%2Bcu117-cp38-cp38-linux_x86_64.whl '
- torchtriton==2.0.0+0d7e753227
- 'torchvision==0.15.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torchvision-0.15.0.dev20221205%2Bcpu-cp38-cp38-linux_x86_64.whl '
We are 😄 We have 3 talks at the upcoming GTC
Yes, if you go to "all projects" you'll see all the experiments, and you can choose and compare them there.
You can always "hack" the URL of the compare page (just compare 2 experiments, then add another experiment ID). If the experiments run for a long time and the "all projects" option doesn't work, it's hackish but it works 😄
EnviousStarfish54 VivaciousPenguin66 Another question, if we're in a sharing mood 😉 Do you think a video \ audio session with one of our experts, where you present a problem you're having (let's say the large size of artifacts) and they try to help you, maybe even with some example code \ code skeleton, would be of interest? Would you spend some time in such a monthly session?
When you create a new dataset you can define one or more parents. By default, a dataset has no parents.
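For example, a minimal sketch (the names and parent ID are placeholders):
from clearml import Dataset

# a child dataset that starts from the content of an existing one
child = Dataset.create(
    dataset_name='my-dataset-v2',
    dataset_project='examples',
    parent_datasets=['PARENT_DATASET_ID'],  # omit this argument for no parents
)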