
PompousParrot44 now that I think about it, you might be able to limit the CPU affinity, would that help?
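A minimal sketch of what that could look like on Linux, assuming the agent is launched from the shell (taskset and the queue name are just examples):
`
# hypothetical: pin the agent (and everything it spawns) to cores 0-3
taskset -c 0-3 clearml-agent daemon --queue default
`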
We're lucky that they let the developers see their code...
LOL 😄
and it is also set in the /clearml-agent/.ssh/config and it still can't clone it. So it must be some security issue internally.
Wait, are you using docker mode or venv mode? In both cases your SSH credentials should be at the default ~/.ssh
PleasantGiraffe85 can you send examples of the different git repo links (one internal one public) ?
My main issue with this approach is that it breaks the workflow into an "a-sync" set of tasks:
This is kind of the way you depicted it, meaning there is an initial dataset, an "offline process" (i.e. external labeling), then an ingest process.
I was wondering if the "waiting" operator can actually be a part of the pipeline.
This way it would be clearer what workflow we are executing.
Hmm, so pipeline is "aborted", then the trigger relaunches the pipeline, and the pipeli...
RoughTiger69 I think this could work, a pseudo example:
`
from time import sleep

from clearml import Task
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(...)
def the_last_step_before_external_stuff():
    print("doing some stuff")

@PipelineDecorator.pipeline(...)
def logic():
    the_last_step_before_external_stuff()
    # check_if_data_was_ingested_to_the_system is a placeholder for your own check
    if not check_if_data_was_ingested_to_the_system():
        print("aborting ourselves")
        Task.current_task().abort()
        # we will not get here, the agent will make sure we are stopped
        sleep(60)
        # better safe than sorry
        exit(0)
`
wdyt? (the...
Hi @<1797800418953138176:profile|ScrawnyCrocodile51>
The upload function returns Only after the files were uploaded:
None
Is this what you mean?
If I add the bucket to that ....
Oh no .... you should also set SSL off for the connection, but I think this is only in the clearml.conf:
https://github.com/allegroai/clearml/blob/fd2d6c6f5d46cad3e406e88eeb4d805455b5b3d8/docs/clearml.conf#L101
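A sketch of the relevant clearml.conf section (host/key/secret are placeholders):
`
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # hypothetical S3-compatible endpoint
                    host: "my-minio:9000"
                    key: "access_key"
                    secret: "secret_key"
                    multipart: false
                    # SSL off for this connection
                    secure: false
                }
            ]
        }
    }
}
`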
Is it CLEARML_CONFIG_FILE? (I had to dig this from the GH code)
Yes it is!
https://clear.ml/docs/latest/docs/faq#clearml-configuration
(I will make sure we add it to https://clear.ml/docs/latest/docs/configs/env_vars#server-connection as well 🙂)
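For example (the conf path is hypothetical):
`
export CLEARML_CONFIG_FILE=/path/to/custom_clearml.conf
python train.py
`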
We created an account, setup our data pipeline, and now we can't get back in. Nothing is in the project. Can someone from support reach out to help?
Hi @<1545216077846286336:profile|DistraughtSquirrel81>
You mean in the SaaS? (app.clear.ml) or is it a local installation?
If this is the SaaS, could it be the data is on a different workspace ? (you can switch workspace and refresh the page)
What would be the best way to get all the models trained using a certain Task, I know we can use query_models to filter models based on Project and Task, but is it the best way?
On the Task object itself you have all the models:
Task.get_task(task_id='aabb').models['output']
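A short sketch (the task id is a placeholder):
`
from clearml import Task

task = Task.get_task(task_id='aabb')  # placeholder id
# task.models holds the 'input' and 'output' model lists
for model in task.models['output']:
    print(model.id, model.name, model.url)
`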
Let me check something
This means it will Always authenticate with SSH:
force_git_ssh_protocol
...
But it seems you need mixed behavior ?
Are you using github as git provider ?
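For reference, the force_git_ssh_protocol setting lives in the agent section of clearml.conf, something like:
`
agent {
    # always convert https git links to ssh
    force_git_ssh_protocol: true
}
`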
can we use a currently setup virtualenv by any chance?
You mean, if the clearml-agent needs to set up a new venv each time? Are you running in docker mode?
(by default it is caching the venv so the second time it is using a precached full venv, installing nothing)
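The venv cache is controlled from the agent section of clearml.conf; a sketch (the values here are examples, defaults may differ):
`
agent {
    venvs_cache: {
        # maximum number of cached venvs
        max_entries: 10
        # set a path to enable the cache
        path: ~/.clearml/venvs-cache
    }
}
`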
So you could change it down the road if infra/hosting changes.
Internally this is doable, and the Enterprise edition supports it; at the end this is stored in DBs 🙂
Also in this case, I'm uploading the data to the public file server URL, but my k8 pod can't reach that for security reasons.
Yes, this is solvable as well (again sorry for pointing it, but only in the enterprise version), where you can specify per client or globally:
` path_substitution = [
# Replace regis...
Thanks! Let me check something
Hi ScatteredClams84
Is there any parameter that adjusts the "number of files that can be stored in the cache"? I am using clearml python version 1.0.3 to upload artifacts and get the artifacts back from a task.
Yes you are correct, the default value is 100 entries.
You can configure it in the clearml.conf, just add:
sdk.storage.cache.default_cache_manager_size = 1000
or from code:
` from clearml.storage.cache import CacheManager
CacheManager.get_cache_manager(cache_file_...
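If it helps, a complete version of that snippet might look like this (assuming the cache_file_limit argument):
`
from clearml.storage.cache import CacheManager

# bump the cache to 1000 entries (default is 100)
CacheManager.get_cache_manager(cache_file_limit=1000)
`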
let me check
is there a way to increase the size of the text input for fields or a better way to handle lists?
No 😞
Maybe an easier way is to use connect_configuration instead? It will take an entire dict and store it as text (the format is HOCON, which is YAML/JSON compatible, which means it is hard to break when editing)
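A quick sketch (project/task/config names are placeholders):
`
from clearml import Task

task = Task.init(project_name='examples', task_name='config demo')
config = {
    'fields': ['a', 'b', 'c'],  # the list you want editable in the UI
    'threshold': 0.5,
}
# stored on the task as editable text (HOCON format)
config = task.connect_configuration(config, name='my config')
`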
yep, that's the reason it is failing, how did you train the model itself ?
Good question 😄
from clearml import Task
Task.init('examples', 'test')
Thanks, yes you are correct the color is derived from the series name, so I guess the issue is the name+Id is not kept in full screen
SubstantialElk6
The CA is picked up automatically by urllib; check the OS environment variables you need to configure:
https://stackoverflow.com/questions/27835619/urllib-and-ssl-certificate-verify-failed-error
SSL_CERT_FILE REQUESTS_CA_BUNDLE
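For example (the bundle path is hypothetical):
`
# point both urllib and requests at the internal CA bundle
export SSL_CERT_FILE=/etc/ssl/certs/my-ca-bundle.crt
export REQUESTS_CA_BUNDLE=/etc/ssl/certs/my-ca-bundle.crt
`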
@<1671689437261598720:profile|FranticWhale40> I might have found something, let me see if I can reproduce it
from your jupyterlab can you do:
!curl
Hmm you either need to run with SUDO or make sure the running user has docker run permissions
I mean what is the actual link?
File:// is a path to a file.
If your machine cannot access that path you get an error.
For example:
file:///home/user/file.bin
translates to /home/user/file.bin
If you do not have the file /home/user/file.bin on your machine you get an error.
GrievingTurkey78 make sense ?
Note that by default trains / clearml will not upload your weights file anywhere , only if you set "output_uri" to a specific location it will do that .
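For example (the bucket is hypothetical):
`
from clearml import Task

# any model stored by the framework will also be uploaded to output_uri
task = Task.init(project_name='examples', task_name='train',
                 output_uri='s3://my-bucket/models/')
`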
@<1671689437261598720:profile|FranticWhale40> this one: None
AttributeError: 'NoneType' object has no attribute 'base_url'
can you print the model object?
(I think the error is a bit cryptic, but generally it might be that the model is missing an actual URL link?)
print(model.id, model.name, model.url)