Okay, I think this might be a bit of an overkill, but I'll entertain the idea 🙂
Try passing the user as key, and password as secret?
But it should work out of the box ...
Yes it should ....
The user and personal access token are used as-is and propagate down to the submodules, since those are simply additional git repositories.
Can you manually run this successfully?
`git clone --recursive https://user:token@github.com/company/repo_with_submodules`
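If it helps, here is a quick way to script the same check; the user, token and repo below are placeholders, not real values:
```python
# Placeholders only - substitute your own user / personal access token / repository.
import subprocess

user, token = "git-user", "personal-access-token"
url = f"https://{user}:{token}@github.com/company/repo_with_submodules"

# --recursive reuses the same embedded credentials for every submodule,
# since each submodule is just another git repository
subprocess.run(["git", "clone", "--recursive", url], check=True)
```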
StickyBlackbird93 the agent is supposed to resolve the correct version of PyTorch based on the CUDA version in the container. It sounds like for some reason it fails? Can you provide the log of the Task that failed? Are you running the agent in docker mode, or inside a docker container?
These are both specific cases of the glue, and yes both need to be fixed.
(1) I think is actually a feature, nonetheless we should support it.
FriendlySquid61 could you verify specifically on (2)
Hi GleamingSeagull15
Try adjusting:
None
to 30 sec
It will reduce the number of log reports (i.e. API calls)
(Also, can you share the clearml.conf, without the actual credentials 🙂)
2 and 3 - I want to manage access control over the RestAPI
Long story short, put a load-balancer in front of the entire thing (see the k8s setup), and have the load-balancer verify JWT token as authentication (this is usually the easiest)
1- Exactly, custom code
Yes, we need to add a custom example there (somehow forgotten)
Could you open an Issue for that?
in the meantime:
```
# Preprocess class Must be named "Preprocess"
# No need to inherit or to implement all methods
class P...
```
Thanks MagnificentSeaurchin79 ! This code snippet is exactly what I needed, let me check if I can reproduce it.
BattyLion34 Okay, I'll try to see if we can solve the multi-instance issue on Windows (because obviously it should be automatic)
So this is Optuna 🙂 the idea is that it will test which parameters have potential (with early stopping), then launch a subset of the selected parameters.
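For context, a rough sketch of driving Optuna through ClearML's HyperParameterOptimizer; the base task id, parameter names and ranges below are placeholders, not taken from your setup:
```python
from clearml import Task
from clearml.automation import (
    HyperParameterOptimizer, UniformParameterRange, DiscreteParameterRange
)
from clearml.automation.optuna import OptimizerOptuna

# controller task that owns the optimization
task = Task.init(project_name="examples", task_name="hpo-controller",
                 task_type=Task.TaskTypes.optimizer)

optimizer = HyperParameterOptimizer(
    base_task_id="<template task id>",  # the task cloned for every trial
    hyper_parameters=[
        UniformParameterRange("General/lr", min_value=1e-4, max_value=1e-1),
        DiscreteParameterRange("General/batch_size", values=[32, 64, 128]),
    ],
    objective_metric_title="validation",
    objective_metric_series="loss",
    objective_metric_sign="min",
    optimizer_class=OptimizerOptuna,      # Optuna prunes weak trials early
    max_number_of_concurrent_tasks=2,
    total_max_jobs=20,
)
optimizer.start()
optimizer.wait()
optimizer.stop()
```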
I guess only if autoscaling is used (one worker one machine)?
Yes, basically depending on how you set up autoscaling / the k8s integration 🙂
BTW: you will be losing the comments 🙂
trains-agent RC (which they tell me will be out tomorrow) will have a switch to do that, just so it is easier 🙂
QuaintJellyfish58 Notice it tries to access AWS, not your minio.
"This seems like a bug?!" Can you quickly verify with the previous version?
Also notice you have to provide the minio section in the clearml.conf so it knows how to access the endpoint:
https://github.com/allegroai/clearml/blob/bd53d44d71fb85435f6ce8677fbfe74bf81f7b18/docs/clearml.conf#L113
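As a hedged illustration (host and bucket below are placeholders), once that section is filled in you can point uploads at the minio endpoint using the host:port form of the s3 URI:
```python
from clearml import Task

# non-AWS endpoint => use the "s3://host:port/bucket" form;
# the matching key/secret live in the aws.s3 section of clearml.conf
task = Task.init(
    project_name="examples",
    task_name="minio-upload",
    output_uri="s3://my-minio-host:9000/my-bucket",
)
```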
I guess it won't due to the nature of services?
Correct, the k8s glue works differently; that said, I would actually use the helm chart to spin up a pod with the agent in services mode and venv mode.
This workflow however is the only way I have found to easily fix my previous "Module not found" errors
Hmm okay, makes sense.
Did you try to set these?
or even hack the sys.path with something like:
```
import sys, os
sys.path.insert(0, os.path.abspath(os.path.dirname(__file__) + "/../"))
```
Hi IrritableGiraffe81
PipelineDecorator.debug_pipeline() runs everything as regular python functions, but "PipelineDecorator.run_locally()" is actually simulating all the steps on the same local machine (so that it is easier to debug the "real" pipeline running on multiple machines)
What I think is happening is that the casting of the arguments passed to the component fails.
Basically the type hints are currently ignored (we are working on using them for casting in the next version)
but righ...
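For reference, a minimal sketch (project, step and argument names are made up) of where debug_pipeline() fits, with a defensive cast inside the component while the type hints are still ignored:
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=["doubled"], cache=False)
def double(value: int):
    # type hints are not enforced yet, so cast defensively inside the step
    doubled = int(value) * 2
    return doubled

@PipelineDecorator.pipeline(name="toy-pipeline", project="examples", version="0.1")
def pipeline(start: int = 3):
    print(double(start))

if __name__ == "__main__":
    # run all steps as plain python functions in this process for debugging
    PipelineDecorator.debug_pipeline()
    pipeline(start=3)
```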
Hi CluelessElephant89
Hi guys, if I spot an issue with the documentation, where should I post it?
The best way from our perspective is to PR the fix 🙂 this is why we put it on GitHub
What I'd really want is the same behaviour in the console (one smooth progress bar) and one line per epoch in the logs; high hopes, right?
I think they send some "odd" character instead of CR, otherwise I cannot explain the difference.
Can you point to a toy example demonstrating the same issue ?
Also I just tried the pytorch-lightning RichProgressBar (not yet released) instead of the default (which is unfortunately based on tqdm) and it works great.
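For anyone wanting to try it, a tiny example (assuming a pytorch-lightning version that already ships RichProgressBar, plus the rich package installed):
```python
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import RichProgressBar

# replaces the default tqdm-based progress bar
trainer = Trainer(callbacks=[RichProgressBar()])
```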
Yey!
OddShrimp85 are you trying to shut down the one running on your machine?
Also, how do pipelines compare here?
Pipelines are a type of Task, so like Tasks you can clone and enqueue them, or set them as the target of the trigger.
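For example (project, task and queue names below are placeholders), cloning and enqueuing a pipeline controller looks exactly like any other Task:
```python
from clearml import Task

# fetch the existing pipeline controller task (hypothetical names)
template = Task.get_task(project_name="examples", task_name="my-pipeline")

# clone it and push the clone to a queue an agent is listening on
cloned = Task.clone(source_task=template, name="my-pipeline clone")
Task.enqueue(cloned, queue_name="services")
```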
the most flexible solution would be to have some way of triggering the execution of a script in the parent task environment,
This is the exact idea of the TriggerScheduler None
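A rough sketch of the idea; the argument names below are from memory and may differ slightly between clearml versions, so treat them as assumptions:
```python
from clearml.automation import TriggerScheduler

def on_parent_done(task_id):
    # called with the id of the task that fired the trigger;
    # launch / enqueue whatever should run in the parent's wake here
    print("parent task finished:", task_id)

trigger = TriggerScheduler(pooling_frequency_minutes=3)
trigger.add_task_trigger(
    name="run-on-parent-completion",      # assumption: trigger name argument
    trigger_project="examples",           # watch tasks in this project
    trigger_on_status=["completed"],      # fire when a task completes
    schedule_function=on_parent_done,     # callback executed on trigger
)
# run the scheduler itself as a service (or trigger.start() to run it locally)
trigger.start_remotely(queue="services")
```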
What am I missing here?
If this is the case, and assuming you were able to use clearml to upload them, this means that adding
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
to your env file should just work
https://github.com/allegroai/clearml-serving/blob/main/docker/example.env
Make sense?
Then, in the bash console, after some time, I see some messages being logged from clearml
JitteryCoyote63 Hmm that is strange, let me check something
Ohh I see, could you copy-paste what you put there (instead of the secret and key, *** will do 🙂)
This is very odd, can you also put here the file names? maybe an odd character is causing it?
Can you also test it with the latest clearml version (1.8.0) ?
With pleasure, I'll make sure we officially release RC1 soon :)
If you need to change the values:
```
config_obj.set(...)
```
You might want to edit the object on a copy, not the original 🙂
SubstantialElk6 could you add a GitHub issue to set the direct url for the vscode as a parameter to the clearml-session?
We already have --vscode-version we could either extend it to include a direct url, or add a new argument.
wdyt ?
Hi CheerfulGorilla72
the "installed packages" section is used as "requirements.txt for the agent.
Are you saying the autodetection fails to detect all packages? You can specify in "manual execution" (i.e. not when the agent is running the code) to just take the requirements.txt locally:
```
Task.force_requirements_env_freeze(requirements_file="./requirements.txt")
# notice the above call should be executed Before Task.init
task = Task.init(...)
```
3. If you clear all the "installed packages" se...