It only happens in the clearml environment, works fine locally.
Hi BoredHedgehog47
what do you mean by "in the clearml environment"?
It should also work with host IP and two docker compose files.
I'm not sure where to push for a unified docker compose?
BTW, if the plots are too complicated to convert to interactive Plotly graphs, they will be rendered as images and the server will show those. This is usually the case with seaborn plots.
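For reference, a minimal sketch of reporting a seaborn figure explicitly (project/task/plot names here are just illustrative):

import matplotlib.pyplot as plt
import seaborn as sns
from clearml import Task

task = Task.init(project_name="examples", task_name="seaborn report")  # illustrative names
sns.histplot([1, 2, 2, 3, 3, 3])  # draws on the current matplotlib figure
# ClearML first tries to convert the figure to an interactive Plotly chart;
# if that fails (common with seaborn), it is uploaded and shown as an image.
task.get_logger().report_matplotlib_figure(
    title="distribution", series="histogram", iteration=0, figure=plt.gcf()
)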
The way ClearML thinks about it is the execution graph would be something like:
script_1 -> script_2 -> script_3 ->
Where each script would have in/out, so that you can trace the usage.
Trying to combine the two into a single "execution" graph might not represent the orchestration process.
That said, visualizing them could be done.
I mean, in theory there is no reason why we couldn't add those "datasets" as another type of building block, for visualization purposes only
(Of course this would o...
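To make the idea concrete, here is a rough sketch of that script_1 -> script_2 -> script_3 graph as a pipeline, assuming each script already exists as a ClearML Task (all project/task names are hypothetical):

from clearml import PipelineController

pipe = PipelineController(name="script-chain", project="examples", version="1.0.0")
pipe.add_step(name="script_1", base_task_project="examples", base_task_name="script_1")
# parents= is what gives you the in/out tracing between steps
pipe.add_step(name="script_2", base_task_project="examples", base_task_name="script_2",
              parents=["script_1"])
pipe.add_step(name="script_3", base_task_project="examples", base_task_name="script_3",
              parents=["script_2"])
pipe.start()  # by default the controller itself runs on the services queue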
UnevenDolphin73 if you have the time to help fix / make it work, it will be greatly appreciated 🙂
JitteryCoyote63 I think that with 0.17.2 we stopped mounting the venv build to the host machine, which means it is all stored inside the docker container.
FlutteringWorm14 any insight on the Task that it fails to delete? Or a way to reproduce it?
Hi Guys,
I hear you guys, and I know this is planned, but it probably got bumped down in priority.
I know the main issue is the "Execution Tab" comparison, the rest is not an issue.
Maybe a quick hack to only compare the first 10 in the Execution tab, and remove the limit on the others? (The main issue with the Execution tab is the git-diff / installed-packages comparison, which is quite taxing on the FE)
Thoughts ?
Hi @<1739818374189289472:profile|SourSpider22>
could you send the entire console log? maybe there is a hint somewhere there?
(basically what happens after that is the agent is supposed to be running from inside the container, but maybe it cannot access the clearml-server for some reason)
With offline mode, you can later import the execution if you need to (including artifacts etc.); you just need the zip file it creates when you are done.
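Roughly, the offline round-trip looks like this (the zip path is whatever gets printed when the task closes, shown here as a placeholder):

from clearml import Task

Task.set_offline(offline_mode=True)
task = Task.init(project_name="examples", task_name="offline run")  # illustrative names
# ... training code, artifact uploads, etc. ...
task.close()  # prints the location of the offline session zip file

# Later, on a machine that can reach the server (placeholder path):
Task.import_offline_session(session_folder_zip="/path/to/offline_session.zip")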
Sorry, typo: client.task should be client.tasks.
This is odd, how are you spinning up clearml-serving?
You can also do it synchronously:
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
Hi SmallGiraffe94
I think it now has to be a semantic version (like Python packages, for example).
This is so that the auto version increment can bump to the next one automatically.
Maybe adding the date as a tag would make sense? what do you think?
Or maybe in the description field
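Something like this, as a hedged sketch (project/name are placeholders, and add_tags is assumed to be available on recent clearml versions):

from datetime import date
from clearml import Dataset

ds = Dataset.create(
    dataset_project="examples",   # placeholder
    dataset_name="my_dataset",    # placeholder
    dataset_version="1.2.0",      # semantic version, so auto-increment can bump it
)
ds.add_tags([date.today().isoformat()])  # e.g. "2024-05-01" as a searchable tag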
Hmm, you mean how long it takes for the server to time out on a registered worker? I'm not sure this is easily configurable.
SuperiorDucks36 you mean to manually set an experiment (and the dummy Task is just a way to have an entry to configure), do I understand you correctly?
Following up on that, we are thinking of doing it all for you with a CLI that will basically create a task from code / a repo you already have on your machine. What do you think?
How do you currently report images, with the Logger, TensorBoard, or Matplotlib?
AbruptHedgehog21 looking at the error, seems like you are out of storage 🙂
Is it possible to do something so that changing the server address is supported, and the images are pulled up from the new server?
The link itself (the full link) is stored inside the server. Can I assume the access is IP-based, not host-based (i.e. DNS)?
Right, if this is the case, then just use 'title/name 001'
it should be enough (I think this is how TB separates title/series, aka metric/variant)
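In code this would be something like (metric names are just illustrative):

from clearml import Task

task = Task.init(project_name="examples", task_name="scalars")  # illustrative names
logger = task.get_logger()
for i in range(10):
    # title/series maps to TB's metric/variant separation
    logger.report_scalar(title="loss", series="name 001", value=1.0 / (i + 1), iteration=i)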
and you have clearml v0.17.2 installed at the "system" packages level, and 0.17.5rc6 installed inside the pyenv venv?
And maybe adding idle time spent without a job to the API is not that bad an idea 🙂
yes, adding that to the feature list 🙂
What if I write the last active state in an instance tag? This could be a solution…
I love this hack, yes this should just work.
BTW: if your lambda is a for loop that is constantly checking, there is no need to actually store the "last idle timestamp check" as a tag, no?
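Something along these lines, where get_idle_workers() and shutdown_instance() are hypothetical helpers; the point is the timestamps can just live in memory inside the loop:

import time

IDLE_LIMIT_SEC = 15 * 60
idle_since = {}  # worker_id -> first time it was seen idle

while True:
    currently_idle = set(get_idle_workers())  # hypothetical: workers with no job
    for worker_id in currently_idle:
        idle_since.setdefault(worker_id, time.time())
        if time.time() - idle_since[worker_id] > IDLE_LIMIT_SEC:
            shutdown_instance(worker_id)  # hypothetical shutdown call
            idle_since.pop(worker_id)
    for worker_id in list(idle_since):
        if worker_id not in currently_idle:
            idle_since.pop(worker_id)  # picked up a job, no longer idle
    time.sleep(60)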
Well, it seems we forgot that one 🙂 I'll quickly make sure it is there.
As a quick solution (no need to upgrade): task.models["output"]._models.keys()
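For example (the task ID is a placeholder, and _models is a private member, hence "quick solution"):

from clearml import Task

task = Task.get_task(task_id="aabbccdd1122")  # placeholder task ID
print(task.models["output"]._models.keys())  # private API, workaround only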
but this gives me an idea, I will try to check if the notebook is considered trusted; perhaps it isn't, and that causes the issues?
This is exactly what I was thinking (communication with the Jupyter service is done over HTTP to localhost; sometimes AV/firewall software will block it, false-positive detection I assume)
I'm not sure how frequently it updates, though
if so, are there any docs/examples about this?
Good point, passing it to docs 🙂
https://github.com/allegroai/clearml/blob/51af6e833ddc5a8ba1efaaf75980f58616b25e85/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py#L123
I mean it is mentioned, but we should highlight it better
What is the difference to file_history_size?
file_history_size is the number of unique files per title/series combination (i.e. how many images to keep in the history when the iteration is constantly increasing).
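For context, a hedged sketch of where this kicks in: each (title, series) pair keeps only the last file_history_size files as the iteration grows (project/task names are illustrative; the setting itself lives in clearml.conf under sdk.metrics.file_history_size):

import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="debug samples")  # illustrative names
logger = task.get_logger()
for i in range(100):
    img = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
    # only the last file_history_size images are kept for title="samples", series="random"
    logger.report_image(title="samples", series="random", iteration=i, image=img)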