Just making sure: you're running the server locally and running the script in Jupyter also locally, right?
GiganticTurtle0 So 🙂 had a short chat with one of our R&D guys. ATM, what you're looking for isn't there. What you can do is use OutputModel().update_weights_package(folder_here)
and a folder will be saved with EVERYTHING in it. Now I don't think it would work for you (I assume you want to download the model all the time, but artifacts only sometimes, and don't want to download everything every time) but it's a hack.
Another option is to use the model design field to save links to a...
And yes, we are going to revisit our assumptions for the model object, adding more stuff to it. Our goal is for it to have just enough info to be actionable (i.e., how accurate is it? How fast? How much power does it use? How big is it? And other information), but not as comprehensive as a task. Something like a lightweight task 🙂 This is one thing we are considering though.
I'll check with R&D if this is the plan or we have something else we planned to introduce and update you
Hi SubstantialElk6 For monitoring and production labeling, what we found is that there's no "one size fits all", so we tried designing ClearML to be easily integrable. In the enterprise solution we do have a labeling solution, but it's meant more for R&D label fixes than for production labeling. We have customers that integrated 3rd party annotation services with ClearML.
As for DAG workflows, I saw someone who integrated ClearML with Luigi but couldn't find the post anywhere! 😄
Hey, AFAIK, SDK version 1.1.0 disabled the demo server by default (still accessible by setting an envvar).
https://github.com/allegroai/clearml/releases/tag/1.1.0
Is this still an issue even in this version?
Oki doke 🙂 I'll see what the great powers of beyond (AKA, R&D folks) will have to say about that!
Hmm, I actually think there isn't a way. Once you have more projects in the system, the project will be pushed down and you won't see it on the front page. Is there any specific reason why you want it removed?
Just randomly check if there's a new version...every day 😉
Well...I'll make sure we do something about it 🙂
And some real pipeline (As real as our tests get 😄 )
We post updates for the server and SDK here. For RCs we're still not amazing 🙂
Hmm... My thoughts drift towards the ending of each scalar series, which ATM is the beginning of the Task ID (which probably doesn't tell you much). What if we replace it with the tags? BTW, in your use case, do you have one differing tag? Multiple?
ExcitedFish86 You came to ClearML because it's free, you stayed because of the magic 🎊 🎉
Not sure I follow your suggestion 🙂
This is how my code compare looks, it's ok because I see the tags:
That's how I see the scalar comparison, no idea which is the "good" and which is the "bad"
Yeah, that makes lots of sense!
Let me circle this back to the UI folks and see if I can get some sort of date attached to this 🙂
If you can open a git issue to help tracking and improve visibility, that'll be very awesome!
Hi RotundSquirrel78, that indeed doesn't look right... What server version are you using? From the "t" icon it seems it's still a Trains server?
The ClearML team appreciates bitching anywhere you feel like it (especially the memes section).
In the absence of some UI \ UX channel I suggest just writing here. I can promise you the people whose responsibility it is to fix \ improve the UI are roaming here and will see the request 😄
You can also open GitHub issues; it helps us prioritise features according to how many comments \ upvotes they receive.
KindGiraffe71 We're working on a new docs version, it'll be there as well!
A new version should be available in a week and a half or so 😄
JitteryParrot8 in the new SDK we'll have dataset.add_description(), which will do the same as what KindChimpanzee37 provided but with a nicer interface 😄
I think you should call dataset.finalize()
stopped is the client's name for aborted
instead of system_tags use: