If you have an idea on where to start looking for a quick win, I'm open to suggestions 🙂
your account has 2FA enabled and you must use a personal access token instead of a password.
I'm assuming you have created the personal access token and used it, not the password.
Any chance you can share the Log?
(feel free to DM it so it will not end up public)
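For what it's worth, if it's the clearml-agent doing the cloning, the token goes into its clearml.conf (values below are placeholders, and that it's the agent cloning is my assumption):
agent {
    # use the personal access token instead of the account password
    git_user: "my-git-username"
    git_pass: "<personal-access-token>"
}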
Hi PanickyAnt52
hi, is there a way to get back the pipeline object when given a pipeline id?
Yes, basically a pipeline is a specific type of Task; anything you stored on it can be accessed via the Task object, i.e. pipeline_task = Task.get_task(pipeline_id)
I'm curious, how would you use it?
BTW: since a pipeline is also a Task, you can have a pipeline launch a step that is a pipeline of its own
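A minimal sketch of that (assuming pipeline_id already holds the ID string):
from clearml import Task

# a pipeline is stored as a regular Task, so the usual Task API applies
pipeline_task = Task.get_task(task_id=pipeline_id)
print(pipeline_task.status)
# anything stored on it (artifacts, parameters, etc.) is accessible as usual
print(pipeline_task.artifacts)
print(pipeline_task.get_parameters())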
For visibility, after close inspection of API calls it turns out there was no work against the saas server, hence no data
Hi ScaryBluewhale66
TaskScheduler I created. The status is still "running". Any idea?
The TaskScheduler needs to actually run in order to trigger the jobs (think cron daemon)
Usually it will be executed on the clearml-agent services queue/machine.
Make sense?
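For reference, a rough sketch of that pattern (the task ID and queue names are placeholders):
from clearml.automation import TaskScheduler

scheduler = TaskScheduler()
# re-launch an existing Task every day at 07:00 on the 'default' queue
scheduler.add_task(schedule_task_id='<task-id>', queue='default', hour=7, minute=0)
# the scheduler itself runs on the services queue, where it stays up like a cron daemon
scheduler.start_remotely(queue='services')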
Hi ExcitedFish86
Of course, this is what it was designed for. Notice in the UI under Execution you can edit this section (Setup Shell Script). You can also set it via task.set_base_docker
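For example (the image and setup commands here are purely illustrative):
from clearml import Task

task = Task.init(project_name='examples', task_name='docker example')
# equivalent to editing the base docker image / setup shell script in the UI Execution section
task.set_base_docker(
    docker_image='nvidia/cuda:11.8.0-runtime-ubuntu22.04',
    docker_setup_bash_script=['apt-get update', 'apt-get install -y graphviz'],
)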
It should be under script.diff:
'script': {'binary': '', 'repository': '', 'tag': '', 'branch': '', 'version_num': '', 'entry_point': '', 'working_dir': '', 'requirements': {'pip': ''}, 'diff': ''}
For some reason this is empty in your case, are you seeing it in the UI?
If you are querying the current task (i.e. running) it might not be there yet.
You can call this internal function that returns only after the repo detection is done: task._wait_for_repo_detection()
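Something along these lines (just a sketch using that internal call):
from clearml import Task

task = Task.init(project_name='examples', task_name='repo info')
# repo detection runs in the background; this blocks until it has finished
task._wait_for_repo_detection()
# now the script section (including the uncommitted diff) should be populated
print(task.export_task().get('script', {}).get('diff'))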
How did you add the args? Is it argparser? If so, the help is automatically picked up so you can see it in the UI. BTW, the ability to provide a list of options is a really cool feature to have, I'll make sure to pass it on to product 🙂
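To illustrate the argparse flow:
import argparse
from clearml import Task

parser = argparse.ArgumentParser()
# the help string is picked up automatically and shown next to the argument in the UI
parser.add_argument('--batch-size', type=int, default=32, help='training batch size')
task = Task.init(project_name='examples', task_name='argparse example')
args = parser.parse_args()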
Sadly, I think we need to add another option like task_init_kwargs to the component decorator.
What do you think would make sense?
GreasyPenguin14 could you test with 0.17.5rc4?
Also what's the PyCharm / OS?
I see, so basically fix old links that are now not accessible? If this is the case, you might need to manually change the documents in the mongodb running in the backend
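If it comes to that, a very rough sketch of the idea with pymongo (the database / collection / field names here are my assumptions about the backend schema, so verify against your own server and back up the DB first):
from pymongo import MongoClient

client = MongoClient('mongodb://localhost:27017')
db = client['backend']  # assumed database name, verify on your server
# hypothetical query: find tasks still pointing at the old, inaccessible address
for doc in db['task'].find({'execution.artifacts.uri': {'$regex': '^http://old-server:8081'}}):
    print(doc['_id'])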
Hi SoreDragonfly16
Sadly no, the idea is to create full visibility for all users in the system (basically saying: share everything with your colleagues).
That said, I know the enterprise version has permission / security features, I'm sure it covers this scenario as well.
Hi ColossalDeer61 ,
My question is about existing monitors in the trains-server (preferably the web UI)
So the idea is you run the code once, it creates a Task in the system and verifies the Slack credentials are working. Then you can enqueue it in the "services" queue, and voila, you have a monitoring service running that you can control from the UI and that creates alerts to Slack. Unfortunately there is no built-in way to achieve that in the UI, but it should not take more than a few minutes...
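A sketch of the run-once-then-enqueue pattern (project / queue names are the usual defaults, adjust as needed):
from clearml import Task

# run this once locally: it registers the Task in the system
task = Task.init(project_name='Monitoring', task_name='Slack alerts', task_type=Task.TaskTypes.monitor)
# ... verify the Slack credentials here ...
# then hand the Task over to an agent listening on the services queue, where it keeps running
task.execute_remotely(queue_name='services', exit_process=True)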
Actually you cannot breakpoint at "atexit" calls (or at least it doesn't work with my gdb)
But I would add a few prints here:
https://github.com/allegroai/clearml/blob/aa4e5ea7454e8f15b99bb2c77c4599fac2373c9d/clearml/task.py#L3166
Hi @<1547028116780617728:profile|TimelyRabbit96>
You are absolutely correct, we need to allow overriding the configuration
The code you want to change is here:
You can try:
channel = self._ext_grpc.aio.insecure_channel(triton_server_address, options=[('grpc.max_send_message_length', 512 * 1024 * 1024), ('grpc.max_receive_message_length', 512 * 1024 * 1024)])
Hi @<1630377234361487360:profile|RoughSeaturtle43>
code from gitlab repo with ssl cert.
what do you mean by ssl cert? is it SSH or an app-token?
Hi FantasticPig28
or does every individual user have to configure their own minio credentials?
You can configure the client's files_server entry in the clearml.conf (or use an OS environment variable), e.g. files_server: "s3://<minio-host>:<port>/<bucket>"
https://github.com/allegroai/clearml/blob/12fa7c92aaf8770d770c8ed05094e924b9099c16/docs/clearml.conf#L10
Notice to make sure you also provide credentials here:
https://github.com/allegroai/clearml/blob/12fa7c92aaf8770d770c8ed05094e924b9099c16/docs/clearml.conf#L97
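Putting the two together, a sketch of the relevant clearml.conf pieces (host / bucket / keys are placeholders):
api {
    # send artifacts and debug samples to your minio instead of the default file server
    files_server: "s3://my-minio-host:9000/clearml-bucket"
}
sdk {
    aws {
        s3 {
            credentials: [
                {
                    host: "my-minio-host:9000"
                    key: "minio-access-key"
                    secret: "minio-secret-key"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}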
Hi GrievingTurkey78
the artifacts are downloaded to the cache folder (and by default the last 100 accessed artifacts are maintained there).
node executes the task all the info will be erased or does this have to be done explicitly?
Are you referring to the trains-agent running a docker?
By default the cache is persistent between executions (i.e. saving time on multiple downloads between experiments)
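The cache can be tuned in clearml.conf; as far as I remember the relevant defaults look like this:
sdk {
    storage {
        cache {
            # where downloaded artifacts are kept between runs
            default_base_dir: "~/.clearml/cache"
            # how many artifact copies to keep before evicting the least recently used
            default_cache_manager_size: 100
        }
    }
}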
Actually, no. This is to spin the clearml-server on GCP, not the agent
Question - why is this the expected behavior?
It is 🙂 I mean the original python version is stored, but pip does not support replacing the python version. It is doable with conda, but then you have to use conda for everything...
But I do not know how it can help me :(
In your code itself, after the Task.init call, add: task.set_initial_iteration(0)
See reply here:
https://github.com/allegroai/clearml/issues/496#issuecomment-980037382
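i.e. something like:
from clearml import Task

task = Task.init(project_name='examples', task_name='continue training')
# reset the iteration offset so reported scalars start from 0 again
task.set_initial_iteration(0)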
Oh yes, you probably have a sorting or filter applied there :)
Hi FrothyShark37
Can you verify with the latest version?
pip install -U clearml
We worked around the issue by downloading the file with a request and unzipping only when needed.
We have located the issue, it seems the file-server is changing the header when sending back the file (basically saying CSV with gzip compression, which in turn will cause any http download client to automatically unzip the content). Working on a hot fix for it 🙂
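In case it helps anyone else, a sketch of that workaround (the URL is a placeholder):
import gzip
import requests

url = 'http://files.server:8081/path/to/artifact.csv'  # placeholder
# stream + raw keeps the bytes exactly as sent, so requests does not
# transparently gunzip based on the (incorrect) Content-Encoding header
resp = requests.get(url, stream=True)
raw_bytes = resp.raw.read()
# unzip ourselves, and only when the payload really is gzip
data = gzip.decompress(raw_bytes) if raw_bytes[:2] == b'\x1f\x8b' else raw_bytes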
When I do the port forward on my own using ssh -L it also seems to fail for jupyterlab and vscode, too, which I find odd
The only external port exposed is the SSH one, 10022; the client then forwards it locally (so you, the user, can always use the same connection, i.e. "ssh root@localhost -p 8022")
If you need to expose an additional port, while the clearml-session is running, open another terminal and do: ssh root@localhost -p 8022 -L 10123:localhost:6666
This should po...
Hi SteadySeagull18
What does the intended workflow for making a "pipeline from tasks" look like?
The idea is if you have existing Tasks in the system and you want to launch them one after the other with control over inputs (or outputs of them) you can do that, without writing any custom code.
Currently, I have a script which does some Task.create's,
Notice that your script should do Task.init - not Task.create, as Task.create is designed to create additional ...
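For context, the no-custom-code flavor looks roughly like this (project / task names are placeholders):
from clearml import PipelineController

pipe = PipelineController(name='my pipeline', project='examples', version='1.0')
# reuse existing Tasks in the system as pipeline steps
pipe.add_step(name='stage_data', base_task_project='examples', base_task_name='data prep')
pipe.add_step(name='stage_train', parents=['stage_data'],
              base_task_project='examples', base_task_name='train model')
pipe.start(queue='services')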
Hmm makes sense. Then I would call export_task once (kind of the easiest way to get the entire Task object description pre-filled for you); with that, you can create as many as needed by calling import_task.
Would that help?
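i.e. a sketch of that flow (the task ID is a placeholder):
from clearml import Task

# export once: a dict describing the entire Task, pre-filled
template = Task.get_task(task_id='<task-id>').export_task()
# tweak whatever is needed, then create as many copies as you like
for _ in range(3):
    new_task = Task.import_task(task_data=template)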