poetry
stores git-related data in ... you get an internal package we have with its version, but no git reference, i.e.
internal_module==1.2.3
instead of
internal_module @ git+https://...
H4dr1en this seems like a bug with poetry (and I think I have run into this one), worth reporting it, no?
Hi ImpressionableRaven99
In the UI there is a download button when you hover over the graph.
Are you asking if there is a programmatic interface?
What is the use case for all experiments?
to add an init script or to expand its capacity,
SolidGoose91 I seem to see it in the wizard here, what am I missing?
Hi DepressedChimpanzee34
How do I reproduce the issue?
What are we expecting to get there?
Is that a Colab issue or a hyper-parameter encoding issue?
so I didn't have much time to upgrade all the packages because I have some issues with that, but it is on my todo list
No worries 🙂
Quick question, if you run https://github.com/allegroai/trains/blob/master/examples/frameworks/keras/legacy/keras_tensorboard.py
Do you see models in the artifacts tab?
MassiveHippopotamus56
the "iteration" entry is actually the "max reported iteration over all graphs" per graph there is different max iteration. Make sense ?
But in credentials creation it still shows 8008. Are there any other places in docker-compose.yml where the port should be changed from 8008 to 8011?
I think there is a way to "tell" it what to put there, not sure:
https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server_config#configuration-files
SkinnyPanda43 issue verified, this seems to be related to Python 3.9 and subprocesses.
Let me check what we can do
DistressedGoat23 check this example:
https://github.com/allegroai/clearml/blob/master/examples/optimization/hyper-parameter-optimization/hyper_parameter_optimizer.py
aSearchStrategy = RandomSearch
It will collect everything on the main Task
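For reference, a minimal sketch of how that example wires things up (the template task ID, parameter names and metric names below are placeholders):
```
from clearml import Task
from clearml.automation import (
    DiscreteParameterRange, HyperParameterOptimizer, RandomSearch, UniformParameterRange,
)

# the optimizer runs as its own (controller) Task; all sub-experiment results are collected on it
task = Task.init(project_name='HPO example', task_name='optimizer', task_type=Task.TaskTypes.optimizer)

aSearchStrategy = RandomSearch  # the example falls back to this when Optuna/BOHB are not installed

optimizer = HyperParameterOptimizer(
    base_task_id='<template-task-id>',  # placeholder: the experiment to clone and optimize
    hyper_parameters=[
        UniformParameterRange('General/learning_rate', min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange('General/batch_size', values=[32, 64, 128]),
    ],
    objective_metric_title='validation',  # placeholder scalar title/series to maximize
    objective_metric_series='accuracy',
    objective_metric_sign='max',
    optimizer_class=aSearchStrategy,
    execution_queue='default',  # queue the sub-experiments are pushed to
)
optimizer.start()
optimizer.wait()  # blocks until the optimization completes
optimizer.stop()
```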
This is a crucial point for using ClearML HPO, since comparing dozens of experiments in the UI and searching for the best is just not manageable.
You can of course do that (notice you can actually order them by scalars they report, and even do ...
UnevenDolphin73 sounds great, any chance you can open a git issue on the clearml-agent repo for this feature request?
for future reference this is indeed a PEP-610 related bug, f...
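(For context, PEP 610 installs record their origin in a direct_url.json file next to the package metadata; for a git install it looks roughly like the sketch below, with org, revision and hash as placeholders. This is the record that lets the internal_module @ git+... form be reconstructed instead of a plain version pin.)
```
{
  "url": "https://github.com/<org>/internal_module.git",
  "vcs_info": {
    "vcs": "git",
    "requested_revision": "1.2.3",
    "commit_id": "<commit-hash>"
  }
}
```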
👍
can we also set the
poetry
version used? ...
Actually the agent assumes poetry is preinstalled (so whatever you already have on the docker) ...
That said, maybe we should install a specific version (after installing pip, we could do that if poetry is selected)
wdyt?
(some packages that are not inside the cache seem to be missing, and then everything fails)
How did that happen?
I gather there's a distinction between the two, with app.clear.ml being the public cloud-based SaaS version
My apologies SmallDeer34, this is all some legacy domain stuff
actually " http://app.pro.clear.ml ," is not used any longer (although up), and will be removed in the future
SaaS free/pro is the same domain ( http://app.clear.ml ), same accounts, the only difference is whether you added a credit card, other than that it is the same domain and access.
does that make sense ?
2021-07-11 19:17:32,822 - clearml.Task - INFO - Waiting to finish uploads
I'm assuming very large uncommitted changes 🙂
I'm with you on this one 🙂 It's better to make a company-wide decision on these things and not allow too much flexibility (just two options to choose from should be enough, I think)
data is going to S3 as well as EBS. Why so? It should only go to S3
This sounds odd; if this is mounted then it goes to S3 (the link will point to the files server, but it will be stored on the mounted drive, i.e. S3)
wdyt?
ClearML automatically gets these reported metrics from TB; since you mentioned you see the scalars, I assume huggingface reports to TB. Could you verify? Is there a quick code sample to reproduce?
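(For reference, the auto-logging being described looks roughly like this; a minimal sketch with arbitrary project/task names:)
```
from clearml import Task
from torch.utils.tensorboard import SummaryWriter

# Task.init patches the TensorBoard writer, so anything reported
# through TB also shows up under the task's scalars automatically
task = Task.init(project_name='examples', task_name='tb-scalars')

writer = SummaryWriter()
for step in range(10):
    writer.add_scalar('train/loss', 1.0 / (step + 1), step)
writer.close()
```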
How can I make it such that any update to the upstream database
What do you mean "upstream database"?
Because it lives behind a VPN and GitHub workers don't have access to it
makes sense
If this is the case, I have to admit that combining offline-mode and remote execution makes sense, no?
AgitatedTurtle16 from the screenshot, it seems the Task is stuck in the queue, which means there is no agent running to actually run the interactive session.
Basic setup:
A machine running clearml-agent (this is the "remote machine")
A machine running clearml-session (let's call it the "laptop" 🙂)
You need to first start the agent on the "remote machine" (basically call clearml-agent daemon --docker --queue default ). Once the agent is running on the remote machine, from your laptop ru...
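A rough sketch of the two commands, assuming the default queue:
```
# on the "remote machine": start an agent in docker mode, serving the default queue
clearml-agent daemon --docker --queue default

# on the laptop: launch an interactive session executed by that agent
clearml-session --queue default
```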
if I want to run the experiment the first time without creating the
template
?
You mean without manually executing it once?
does it handle 2FA if my repo lives in GitHub and my account needs 2FA to sign in?
It does not 😞
ColossalDeer61 FYI all is fixed now 🙂
The problem is of course filling in all the configuration details, so that they are viewable.
Other than that, check out:
https://allegro.ai/docs/task.html#trains.task.Task.export_task
https://allegro.ai/docs/task.html#trains.task.Task.import_task
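A minimal sketch of cloning a task via export/import (the task ID is a placeholder, and this assumes import_task returns the newly created Task):
```
from trains import Task  # the docs above are for the trains package (later renamed clearml)

# export the full task definition as a plain dictionary
source = Task.get_task(task_id='<source-task-id>')
task_data = source.export_task()

# ... fill in / tweak the configuration fields here so they are viewable ...

# create a fresh task from the exported definition
new_task = Task.import_task(task_data)
print(new_task.id)
```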
Sounds good?
Could you send the "installed packages" section of the Task that was created in the notebook ?
UnsightlySeagull42 the assumption is that the agent has a read-only all-access user.
At the moment there is no way to configure a different user/pass per repository in the clearml.conf
You can however:
1. Embed the user/pass in the repository link (not very secure)
2. Use an ssh key and have it under .ssh on the host machine
3. Use .git-credentials and configure them (with per-project user/pass); see the sketch below
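For reference, a rough sketch of the single global read-only user in clearml.conf, plus option 3 (all values are placeholders):
```
# clearml.conf on the agent machine -- one global user/pass for cloning over https
agent {
    git_user: "readonly-bot"
    git_pass: "<personal-access-token>"
}

# option 3: git's own credential store, per project/host
# ~/.git-credentials
https://readonly-bot:<personal-access-token>@github.com
# ~/.gitconfig
[credential]
    helper = store
```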
When you log in with user/pass in the UI the same "process" happens and you get back a token to work with; this is the same as secret/key.
Since in both cases you provide credentials and get back an access token, it should work.
(This is of course only if you are setting user/pass manually and disabling pass_hashed, as you have.)