 
Hi Moki, Great idea! We'll add it to our plans and update here once it's done 😄
Using report_scalar() with a constant iteration is a hack you can use in the meantime 🙂
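If it helps, a minimal sketch of that workaround (project and metric names are made up):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="single-value-metrics")

# Reporting with a constant iteration (0 here) keeps the scalar as a
# single point instead of a growing time series
task.get_logger().report_scalar(
    title="summary", series="accuracy", value=0.91, iteration=0
)
```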
PompousBaldeagle18 Unfortunately no. We thought this was a promising avenue but decided, for various reasons, to move on and do other things 😞
Hey GrotesqueDog77, so it seems like references only work on "function_kwargs" and not on other function step parameters.
I'm trying to figure out if there's some workaround we can offer  🙂
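For reference, this is the form that does work today, roughly along the lines of the pipeline-from-functions example (step and argument names here are made up):
```python
from clearml import PipelineController


def step_one():
    return 42


def step_two(value):
    print(value)


pipe = PipelineController(name="demo-pipeline", project="examples", version="1.0")
pipe.add_function_step(
    name="step_one",
    function=step_one,
    function_return=["value"],
)
pipe.add_function_step(
    name="step_two",
    function=step_two,
    # references like this one are currently resolved in function_kwargs only
    function_kwargs={"value": "${step_one.value}"},
)
pipe.start_locally(run_pipeline_steps_locally=True)
```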
Hi ZanyPig66 ,
I assume you're using torch.save() to save your model? A good place to start is David's suggestion of specifying output_uri=True in the Task.init() call.
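Something like this minimal sketch (project/task names are made up):
```python
import torch
from clearml import Task

# output_uri=True uploads anything saved with torch.save() to the default
# file server; you can also pass a storage URI (e.g. "s3://...") instead
task = Task.init(
    project_name="examples",
    task_name="train-model",
    output_uri=True,
)

model = torch.nn.Linear(10, 2)
torch.save(model.state_dict(), "model.pt")  # picked up as an output model
```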
That's right. Once you call clearml-data close, the completed dataset version is immutable. This is a very important feature when traceability matters: once an experiment uses a dataset version, we want to make sure it can't change without leaving a trace!
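If you do need to change something afterwards, the usual pattern is a child version; roughly (names are made up):
```python
from clearml import Dataset

# SDK equivalent of clearml-data create/add/upload/close
ds = Dataset.create(dataset_name="my-dataset", dataset_project="examples")
ds.add_files(path="./data")
ds.upload()
ds.finalize()  # same as `clearml-data close`: this version is now immutable

# Further changes go into a new child version on top of the closed one
child = Dataset.create(
    dataset_name="my-dataset",
    dataset_project="examples",
    parent_datasets=[ds.id],
)
```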
Yeah! I think maybe we don't parse the build number... let me try 🙂
Why not add the extra_index_url to the task's Installed Packages section? Worked for me 😄
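That is, something along these lines at the top of Installed Packages (the index URL and version are just an example):
```
--extra-index-url https://download.pytorch.org/whl/cu117
torch==1.13.1+cu117
```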
As for git: I'm no git expert, but running your own git server is doable. I can't tell you what that means for your organization, though, as everyone has their own limitations and rules. And as I said, you can use SVN, but its integration with ClearML won't be as good as with git.
EnviousStarfish54 VivaciousPenguin66 Another question, if we're in a sharing mood 😉 Would a video/audio session with one of our experts be of interest, where you present a problem you're having (say, large artifact sizes) and they try to help you, maybe even with some example code or a code skeleton? Would you spend some time in such a monthly session?
Hi Jevgeni! September is always a slow month in Israel as it's holiday season  🙂  So progress is slower than usual and we didn't have an update!
Next week we'll hold the next community talk and publish a new version of the roadmap; a separate message will follow.
As for your question, yes, our effort was diverted into other avenues and not a lot of public progress has been made.
That said, what is your plan for integrating the tools? Automatically promote models to be served from within ClearML?
Yeah, it might be the cause...I had a script with OOM and it crashed regularly 🙂
Just making sure: you're running the server locally, and the Jupyter script also runs locally, right?
ReassuredTiger98 , PyTorch installations are a sore point 🙂 Can you maybe try to specify a specific build and see if it works?
Hey there SlimyRat21
We did a small integration of Trains with a Doom agent that uses reinforcement learning.
https://github.com/erezalg/ViZDoom
What we did is basically change a bit the structure of how parameters are caught (so we can modify them from the UI), then logged things like loss, location on the map, frame buffers at certain times, and end-of-episode information that might be helpful for us.
You can see how it looks on the demo app (as long as it lasts 🙂 )
Let me know if...
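If you want a feel for the mechanics, here's a minimal sketch of the parameter/logging side (hyperparameter names are made up, and it uses the current clearml package rather than the older trains one):
```python
from clearml import Task

task = Task.init(project_name="ViZDoom", task_name="doom-rl-agent")

# Connecting the hyperparameters dict exposes the values in the UI,
# so they can be modified there before a (re)run
params = {"learning_rate": 1e-4, "discount": 0.99, "frame_skip": 4}
params = task.connect(params)

# During training, report scalars such as the per-episode loss
task.get_logger().report_scalar(
    title="loss", series="train", value=0.42, iteration=10
)
```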
And as for clearml-data, I would love to have more examples, but I'm not 100% sure what to focus on, as using clearml-data is a bit... simple? In my completely biased eyes. I assume you're looking for workflow examples, and would love to get some inspiration 🙂
I think that's a hydra issue 🙂 I was able to reproduce this locally. I'll see what can be done
I think the best model name is person_detector_lr0.001_batchsz32_accuracy0.63.pkl 😄
You can also open GitHub issues; it helps us prioritise features according to how many comments/upvotes they receive.
It's a known fact that documentation always trails features by 3-6 months 😄 We're working on new docs; they should be released this week 🙂
PyTorch wheels are always a bit of a problem, and AFAIK this error says there isn't a version matching the CUDA version specified/installed on the machine. You can try updating the PyTorch requirement to exact versions, which usually solves the issue.
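To check which build you actually ended up with, a quick sanity check like this usually helps:
```python
import torch

print(torch.__version__)          # e.g. 1.13.1+cu117
print(torch.version.cuda)         # CUDA version the wheel was built against
print(torch.cuda.is_available())  # whether the local driver/runtime matches
```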
Hmm... My thoughts drift towards the ending of each scalar series, which ATM is the beginning of the Task ID (which probably doesn't tell you much). What if we replace that with the tags? BTW, in your use case, is there one tag that differs, or multiple?
We post updates for the server and SDK here. For RCs we're still not amazing 🙂
The upload method (which has an SDK counterpart) allows you to specify where to upload the dataset to.
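On the SDK side that's the output_url argument; a rough sketch (names and bucket path are made up):
```python
from clearml import Dataset

ds = Dataset.create(dataset_name="images", dataset_project="examples")
ds.add_files(path="./images")
ds.upload(output_url="s3://my-bucket/datasets")  # upload destination
ds.finalize()
```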
GrotesqueDog77 checking 🙂
In the installed packages I got:
- torch==1.14.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torch-1.14.0.dev20221205%2Bcu117-cp38-cp38-linux_x86_64.whl
- torchtriton==2.0.0+0d7e753227
- torchvision==0.15.0.dev20221205 # https://download.pytorch.org/whl/nightly/cu117/torchvision-0.15.0.dev20221205%2Bcpu-cp38-cp38-linux_x86_64.whl