That's good enough for me, I forgot about the all projects option
AgitatedDove14 , TimelyPenguin76 , a small blast from the past
Unfortunately it seems like this is not working for the backslash escape character
https://demoapp.demo.clear.ml/projects/7eaa1749475d4ad4bd21a5456fd2e157/experiments/3efe981238e543c8b6ad682dd13c72bc/output/hyper-params/hyper-param/General
https://colab.research.google.com/drive/1w5lQGxsblnLGlhJEDH_b0aIiUvjGeLjy?usp=sharing
if preferred I can open a GitHub issue about this
thanks AgitatedDove14 ! this is what I was looking for
is there a fundamental reason why this is only enabled in --docker mode?
I was hoping for something that I can scale
imagine the beautiful PR showing such a feature 👀
so a different behavior between a string and a string in a tuple is by design? I find it confusing, I guess this is the YAML convention?
https://colab.research.google.com/drive/1w5lQGxsblnLGlhJEDH_b0aIiUvjGeLjy?usp=sharing
AgitatedDove14 , by the way, can you take a look at https://clearml.slack.com/archives/CTK20V944/p1625558368001600
maybe you'll have other ideas? at the moment it seems like a dead end
for sure.. and more than the eye-candy aspect, it can actually produce super useful visualizations
AgitatedDove14 thanks, at peak usage we have 6-8 GB of free RAM
AlertBlackbird30 I saw you asked about wanted features to add to the roadmap.. this is my top one 🙂
AgitatedDove14 , mostly out of curiosity, what is the motivation behind introducing this as an environment variable knob rather than a flag with some default in Task.init?
and also in terms of outcome, the scalars follow the correct epoch count, but the debug samples and monitored performance metric show a different count
AgitatedDove14 in terms of explicit reporting I'm using the current_epoch which is correct when I check it in debug mode
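to make it concrete, the explicit reporting I mean is roughly this (just a sketch; the metric names and the hard-coded current_epoch are placeholders for what the trainer actually provides):
` from clearml import Task

task = Task.init(project_name="examples", task_name="explicit epoch reporting")

current_epoch = 5  # placeholder; in practice this comes from the trainer's current_epoch

# report the scalar against the real epoch count instead of ClearML's global iteration
task.get_logger().report_scalar(title="val", series="accuracy", value=0.87, iteration=current_epoch) `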
Hi AgitatedDove14 , so it looks something like this:
` Task.init(...)
trainer.fit(model)  # clearml logging starts from 0 and logs all summaries correctly according to the real count
# fit stopping triggered at epoch=n
# something
trainer.fit(model)  # clearml logging starts from n+n (that's how it seems) for non-explicit scalar summaries (debug samples, scalar resource monitoring, and also the global iteration count)
# fit stopping triggered
... `I am at the moment diverging from this implementation to s...
AgitatedDove14 no, it has an offset of the value it started with, so for example if you stopped at n, then when you are running epoch n+1 you get 2*n+1 reported
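the direction I'm diverging towards looks roughly like this (only a sketch, and I'm assuming Task.set_initial_iteration actually resets the stored offset, I haven't fully verified it yet):
` from clearml import Task

task = Task.init(project_name="examples", task_name="continued training")

# first fit: auto-logged summaries go from 0..n as expected
# trainer.fit(model)

# reset the stored iteration offset so the second fit doesn't continue from 2*n
task.set_initial_iteration(0)

# trainer.fit(model) `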
AgitatedDove14 should be, I'll try to create a small example later today or tomorrow
hi TimelyPenguin76 thanks, for some reason it didn't show up in my search or maybe I missed it..
I was wondering specifically about the following case:
let's say I'm cloning the task you created above, now I am editing some of the hyperparameters in the UI and enqueueing it.
would the config be "automatically" synced? I assume not, and if not, what would be a recommended way to sync it?
I especially wondered if there is a "smart" sync (with parsing) that can take advantage of the type hinting in...
Hi AgitatedDove14 , if you don't mind having a look too, I think it's probably just a small misunderstanding
according to the above I was expecting the config to be auto-magically updated with the new YAML config I edited in the UI, however it seems like an additional step is required.. probably connect_dict? or am I missing something
TimelyPenguin76 thanks for the answer, so for example (to make sure I understand), with the example you gave above, when I print the config I'll see the new edited parameters?
What about the second part of the question, would it be parsed according to the type hinting?
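just to spell out what I'm checking (a minimal sketch, the parameter names and values are made up):
` from clearml import Task

task = Task.init(project_name="examples", task_name="config sync check")

# when the cloned task runs via an agent, connect() should hand back the UI-edited values
params = {"lr": 0.001, "batch_size": 32}
params = task.connect(params)
print(params)  # expecting the edited hyperparameters (and ideally parsed types) here `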
AgitatedDove14 sounds great, I'm going to give it a go
worked like a charm
FrothyDog40 :man-facepalming: yeah, that seems to do the job, however it still feels like a bug that the out-of-view experiments are not deselected when clicking the top box (select all / deselect all)
less critical though.. thanks!
this one is with the brave browser but I get the same with chrome
OS: Fedora
Browser: Brave, but also on Chrome
reproduce like in the example I gave above.. drag the right corner across more than a single column
AgitatedDove14 , well.. having the demo server as the default lowers the effort threshold for trying ClearML, getting convinced it can deliver what it promises, and maybe testing some simple custom use cases. I don't know what the behind-the-scenes considerations are in terms of the cost of keeping the demo server running, but even a leaner version where experiment records are deleted after a week or a few days sounds useful to me
SuccessfulKoala55
` from clearml import Task
import plotly.express as px

# placeholder project/task names, just to make the snippet self-contained
task = Task.init(project_name="examples", task_name="plotly report test")

df = px.data.gapminder()
fig = px.scatter(df, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country",
                 size="pop", color="continent", hover_name="country",
                 log_x=True, size_max=55, range_x=[100, 100000], range_y=[25, 90])
task.get_logger().report_plotly(title="TEST", series="sepal", iteration=0, figure=fig) `Thanks