I just set
agent.enable_git_ask_pass: true
in the config of the clearml agent (v1.5.1) and the task is still stuck asking for a username when trying to fetch the private dependency.
Hmm, that should not happen. Could you delete the cache and retry, maybe?
Hi @<1523701066867150848:profile|JitteryCoyote63>
Hi, how does
agent.enable_git_ask_pass
work?
basically it pushes the password through stdin to git when git asks for it (it is a git feature)
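For reference, a minimal sketch of how git's askpass mechanism works in general (illustrative only, not the agent's actual helper; the env var names here are made up):

#!/usr/bin/env python3
# git runs the program pointed to by the GIT_ASKPASS env var with the
# prompt text as an argument, and reads the secret the helper prints
import os
import sys

prompt = sys.argv[1] if len(sys.argv) > 1 else ""
if "Username" in prompt:
    print(os.environ.get("GIT_USER", ""))  # hypothetical env var
else:
    print(os.environ.get("GIT_PASS", ""))  # hypothetical env var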
is there a built in programmatic way to adjust
development.default_output_uri
?
How about setting it per task, in your Task.init(output_uri='...') call?
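For example (standard Task.init call; the project/task names and bucket path are just placeholders):

from clearml import Task

# output_uri overrides development.default_output_uri for this task
task = Task.init(
    project_name="examples",
    task_name="my experiment",
    output_uri="s3://my-bucket/models",  # or any shared/local path
)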
Wait @<1523701066867150848:profile|JitteryCoyote63>
If you reset the Task you would have lost the artifacts anyhow, how is that different?
Hi @<1541229812243238912:profile|PoisedMoth54>
We should probably add a better interface (please feel free to open a github issue on the interface) until then
dataset._task.connect_configuration(configuration="path/to/file", name="my config")
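For example (assuming an existing dataset; the id and file path are placeholders):

from clearml import Dataset

dataset = Dataset.get(dataset_id="DATASET_ID_HERE")
# _task is the Task object backing the dataset; this is the internal
# workaround mentioned above, until a public interface exists
dataset._task.connect_configuration(configuration="path/to/file", name="my config")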
Hi @<1561885921379356672:profile|GorgeousPuppy74>
- Could you copy the 3 messages here into your original message, it helps keeping things tidy and nice (press on the 3 dot menu and select edit)
- what do you mean by "currently it's not executing in queue-01"? You changed it, so it should be pushed to queue-02, no? Also notice that you can run the entire pipeline as sub-processes for debugging,
just call pipe.start_locally(run_pipeline_steps_locally=True) (see the sketch below)
You also need an agent on the ser...
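Roughly like this (a minimal sketch; the pipeline name and step are placeholders):

from clearml import PipelineController

def step_one():
    print("step one running")

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")
pipe.add_function_step(name="step_one", function=step_one)

# run the controller and all steps as local sub-processes (no queues/agents needed)
pipe.start_locally(run_pipeline_steps_locally=True)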
Probably less secure though :)
AbruptHedgehog21 the bucket and the full link are registered on the model object itself, you can see them in the ui, under the models tab. The only thing you actually need to pass inside is the credentials. Make sense?
PompousParrot44 with pleasure. If during your search for a solution you come across something that solves it, and might integrate to the agent, do not hesitate to suggest it :)
Hi @<1610808279263350784:profile|FriendlyShrimp96>
Is there a way to get a list of variants given a metric, or even just a full list of metrics and variants for a given task id?
Try this
from clearml.backend_api.session.client import APIClient

c = APIClient()
# list the metric/variant names reported for the given task;
# event_type filters which kind of reported events to look at
metrics = c.events.get_task_metrics(tasks=["TASK_ID_HERE"], event_type="training_debug_image")
print(metrics)
I think API ...
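If it helps, the SDK also has a higher-level call that returns all reported scalars with their variants (return shape here is from memory, so double-check it; the task id is a placeholder):

from clearml import Task

task = Task.get_task(task_id="TASK_ID_HERE")
# returns a dict roughly shaped {metric_title: {variant_name: {"x": [...], "y": [...]}}}
scalars = task.get_reported_scalars()
for metric, variants in scalars.items():
    print(metric, "->", list(variants.keys()))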
Is it across the board for any Task?
What would you expect to happen if you clone a Task that used the requirements.txt? Would you ignore the full "pip freeze" and use the requirements.txt again, or is this the time we want to use the "installed packages"?
Hi QuizzicalDove0
I guess the reason is that the idea is that integration is literally 2 lines, and it will take less time to execute the code on a system with a working env (we assume there is one) than to configure all the git, python packages, arguments etc...
All that said, you can create an experiment from code, using Task.import_task https://allegro.ai/docs/task.html#trains.task.Task.import_task
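A rough sketch of that flow (the export_task / import_task signatures are from memory, so double-check them against the docs; the task id is a placeholder):

from clearml import Task

# export an existing task to a plain dict...
exported = Task.get_task(task_id="SOURCE_TASK_ID").export_task()
# ...optionally tweak the dict, then import it back as a new task
new_task = Task.import_task(exported)
print(new_task.id)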
ReassuredTiger98 oh wow I did not realize you actually call importlib to import your libraries (any reason not to call import
?)
And yes, I think we will miss it, as the package analysis is actually static text analysis of the code
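A common workaround is to declare the dynamically imported packages explicitly, before Task.init (the package name here is a placeholder):

from clearml import Task

# tell the (static) package analysis about packages it cannot see;
# must be called before Task.init()
Task.add_requirements("some_dynamic_package")
task = Task.init(project_name="examples", task_name="dynamic imports")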
MelancholyElk85 notice there is the pipeline controller queue (i.e. which agent will run the logic of the pipeline), and the default queue for the pipeline steps (i.e. the actual steps of the pipeline).
The default queue for the pipeline logic itself is services. You can change it (pipeline.start(..., queue='another_q')).
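i.e. something like this (a minimal sketch; queue names are placeholders):

from clearml import PipelineController

pipe = PipelineController(name="my pipeline", project="examples", version="1.0.0")

# default queue for the pipeline *steps* (the actual work)
pipe.set_default_execution_queue("default")

# queue for the pipeline *logic* itself; defaults to 'services'
pipe.start(queue="another_q")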
Make sense ?
So it's seemingly not the image, but maybe something to do with how Studio runs it as a kernel.
Yeah, I think that for some reason it fails detecting this is actually a jupyter notebook (not really sure why). Thank you for double checking on the container!!
TrickySheep9 Yes, let's do that!
How do you PR a change ?
Hmmm could you attach the entire log?
Remove any info that you feel is too sensitive :)
ClumsyElephant70 yes there is 🙂
clearml-agent build --id <task id> --target <folder>
(I might have a typo there, but you can basically check the full help: clearml-agent build --help )
SoreDragonfly16
btw: The difference between the two graphs is the ratio of the graph display, that's it 🙂
Hi @<1556450111259676672:profile|PlainSeaurchin97>
While testing the migration, we found that all of our models had their MODEL URL set to the IP of the old server.
Yes all the artifacts/models/debug-samples are stored "as is" , this means that if you configured your original setup with IP, it is kind of stuck there, this is why it is always preferred to use host-name ...
you apparently also need to rename all model URLs
Yes 😞
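If scripting the rename helps, something along these lines should work with the APIClient (the models.edit call with a uri field is my assumption from the REST API, so please test on a copy first; the URLs are placeholders):

from clearml.backend_api.session.client import APIClient

OLD = "http://10.0.0.1:8081"
NEW = "http://clearml-files.my-host:8081"

c = APIClient()
for model in c.models.get_all():
    if model.uri and model.uri.startswith(OLD):
        # rewrite the stored model URL in place
        c.models.edit(model=model.id, uri=model.uri.replace(OLD, NEW, 1))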
EnviousStarfish54
plt.show will capture the figure, so that if you call it multiple times, it will add a running number to the figure itself (because the figure might change, and you might want the history)
if you call plt.imshow, it's the equivalent of debug image, hence it will be shown in the debug-samples tab, as an image.
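i.e. with a task running, both calls are picked up automatically:

import matplotlib.pyplot as plt
from clearml import Task

task = Task.init(project_name="examples", task_name="matplotlib demo")

# captured as a plot; repeated plt.show() calls get a running number
plt.plot([1, 2, 3], [4, 5, 6])
plt.show()

# captured as a debug image, shown under the debug-samples tab
plt.imshow([[0, 1], [1, 0]])
plt.show()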
Make sense ?
MysteriousBee56 and please this one: "when you run the trains-agent
with --foreground , before it starts the docker it prints the full command line"
Are you seeing the entire jupyter notebook in the "uncommitted changes" section?
It's in my local conda environment though.
Meaning, is this a wheel installed manually in conda, or is it a folder inside the conda environment?
DeliciousBluewhale87 fyi, the new version of the pipeline (hopefully pushed towards the end of this week), will allow you to more easily write steps as functions (not only as Tasks, as the current implementation)
Also check the new Trigger and Scheduler, both intended to trigger these pipelines:
https://github.com/allegroai/clearml/blob/fe3c481c37e70881c44d67c1cf9bbce00a84747e/clearml/automation/scheduler.py#L457
https://github.com/allegroai/clearml/blob/fe3c481c37e70881c44d67c1cf9bbce00a8...
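For example, a minimal scheduler sketch (the task id, queue, and schedule arguments are placeholders; see the linked source for the full argument list):

from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
# clone-and-enqueue the given task every day at 07:30
scheduler.add_task(schedule_task_id="TASK_ID_HERE", queue="default", hour=7, minute=30)
# run the scheduler itself as a service on the 'services' queue
scheduler.start_remotely(queue="services")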
AstonishingSeaturtle47 I think there's a workaround for the GitHub multiple repo issue. See https://gist.github.com/gubatron/d96594d982c5043be6d4
It has to be alive so all the "child nodes" can report to it...