Hi RobustGoldfish9 Kudos on the mount, and my apologies for forgetting to mention it.
You are absolutely right, I'll make sure we have it in the documentation; there is no way to know about that obscure env variable 🙂
Sure, this is basically a REST query 🙂
` from clearml.backend_api.session.client import APIClient

client = APIClient()
# name accepts a regular-expression pattern, project expects a list of project IDs
models = client.models.get_all(name='regexp', tags=['demo'], project=['project_id'])
print(models) `
Hi SubstantialElk6
If you are using boto to access anything that is not AWS S3 you have to add both address and port, and make sure you configure the "secure" flag.
See the example in clearml.conf:
https://github.com/allegroai/clearml-agent/blob/176b4a4cdec9c4303a946a82e22a579ae22c3355/docs/clearml.conf#L247
` aws {
    s3 {
        credentials: [
            {
                host: "my-minio-host:9000"
                key: "12345678"
                secret: "12345678"
                ... `
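For reference, a minimal sketch of pointing a task's output at that minio host once the section above is configured (the bucket name "artifacts" and the project/task names here are just placeholders):
` from clearml import Task

# non-AWS endpoint: use "s3://<host>:<port>/<bucket>" so the credentials above are matched by host
task = Task.init(
    project_name="examples",
    task_name="minio upload test",
    output_uri="s3://my-minio-host:9000/artifacts",
) `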
Notice this is only when:
- Using Conda as the package manager in the agent
- The requested Python version is already installed (multiple Python versions installed on the same machine/container are supported)
LudicrousParrot69 we are working on adding nested projects, which should help with the humongous mess the HPO can create. This is a more generic solution to the nesting issue (since nesting inside a table is probably not the best UX solution 🙂).
Hi LazyTurkey38
Documentation for applications is currently being worked on. Generally speaking, this is a way to package features available in ClearML behind a UI interface. At first these are going to be applications built by the ClearML team, later this will be expanded so the community can contribute to them, and finally users will be able to add their own applications (i.e. package Tasks with a UI wizard and dashboard) in their hosted solutions. wdyt?
AverageBee39 I cannot reproduce it 😞 (at least on the latest from Github)
I'm assuming the pipeline is created with target_project , anything else I need to add?
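In case it helps, this is roughly what I mean by creating the pipeline with target_project, a minimal sketch where the project names are placeholders:
` from clearml import PipelineController

pipe = PipelineController(
    name="my pipeline",
    project="pipelines",
    version="1.0.0",
    target_project="pipelines/steps",  # step Tasks should end up under this project
)
# ... add_step / add_function_step calls, then pipe.start() `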
Hi @<1687653458951278592:profile|StrangeStork48>
secrets manager per se,
Quick question, are you running the trains-server over http or https ?
"General" is the parameter section name (like Args)
For example, ServerA stores files at /opt/clearml but ServerB stores at /some_path/clearml
As long as you adjust your docker-compose yaml file, it should be just fine
or point to the self-signed certificate:
export REQUESTS_CA_BUNDLE=/path/to/your/certificate.pem
-- I've been running my script from VSCode for the first time,
In the initial Task (the one created when running inside VSCode) do you have all the packages listed in the "Installed Packages" section ?
@<1523715429694967808:profile|ThickCrow29> this is odd... how did you create the pipeline? can you provide code sample?
That is awesome!
If you feel like writing a bit about the use-case and how you solved it, I think AnxiousSeal95 will be more than happy to publish something like that 🙂
hmm I assume the reason is the cookie / storage changed?
ConvolutedSealion94 if you do:
` cd ~/work/repo/code/
git status `
what are you getting ?
Hi VivaciousBadger56
Basically you can think of MLRun as an "amazon lambda service without amazon". It is designed to run a "function" at scale on multiple nodes.
ClearML on the other hand is an MLOps platform. It does the experiment tracking, it orchestrates Tasks (think jobs), it does data management, and lastly we recently released the serving. These are two different use cases.
Am I making sense here?
In that case, when you create the Tasks for the steps, do not specify any packages/requirements; then the agent will just use the "requirements.txt" from the repository.
If you need to, you can also specify them when you create the Task itself, see https://github.com/allegroai/clearml/blob/912f6f5ba2328b26de042de03f02de5802df360f/clearml/task.py#L608
https://github.com/allegroai/clearml/blob/912f6f5ba2328b26de042de03f02de5802df360f/clearml/task.py#L609
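Something along these lines (repo URL, script path and package versions below are placeholders; leaving packages out entirely falls back to the repository's requirements.txt, as mentioned above):
` from clearml import Task

step_task = Task.create(
    project_name="examples",
    task_name="pipeline step",
    repo="https://github.com/your-org/your-repo.git",
    script="train.py",
    packages=["torch>=1.7", "tqdm"],  # omit this argument to use the repo's requirements.txt
) `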
Hi ScantChimpanzee51
Is it possible to run multiple agent on EC2 machines started by the Autoscaler?
I think that by default you cannot,
having the Autoscaler start 1x p3.8xlarge (4 GPU) on AWS might be better than 4x p3.2xlarge (1 GPU) in terms of availability, but then we’d need one Agent per GPU.
I think that this multi-GPU setup is only available in the enterprise tier.
That said, the AWS pricing is linear, it costs the same having 2 instances with 1 GPU as 1 instanc...
So maybe the path is related to the fact I have venv caching on?
hmmm could be...
Can you quickly disable the caching and try ?
Hi RoundMole15
What exactly triggers the "automagic" logging of the model and weights?
A framework save call, for example torch.save or joblib.dump
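For example, a minimal sketch (model and file name are placeholders): once Task.init() is called, a later torch.save triggers the output model to be logged automatically:
` import torch
from clearml import Task

task = Task.init(project_name="examples", task_name="auto model logging")
model = torch.nn.Linear(4, 2)
torch.save(model.state_dict(), "model.pt")  # this save call is what gets picked up automagically `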
I've pulled my simple test project out of jupyter lab and the same problem still exists,
What is "the same problem" ?
Hi @<1547028074090991616:profile|ShaggySwan64>
That sounds awesome ! may I suggest a PR to None ?
Is this like a local minio?
What do you have under the sdk/aws/s3 section ?
If this is GitHub/GitLab/Bitbucket what I'm thinking is just a link opening an iframe / tab with the exact entry point script / commit.
What do you think?
This is an odd error, could it be conda is not installed in the container (or in the Path) ?
Are you trying with the latest RC?
single task in the DAG is an entire ClearML pipeline.
just making sure details are not lost, "entire ClearML pipeline": the pipeline logic is process A running on machine AA.
Every step of that pipeline can be (1) a subprocess, but that means the exact same environment is used for everything, or (2) the DEFAULT behavior, where each step B is running on a different machine BB.
The non-ClearML steps would orchestrate putting messages into a queue, doing retry logic, and tr...
Hi @<1610083503607648256:profile|DiminutiveToad80>
This sounds like the wrong container ? I think we need some more context here
The Overview panel would be extremely well suited for the task of selecting a number of projects for comparing them.
Could you elaborate ?
Another useful feature would be to allow adding information (e.g. metrics or metadata) to the tooltip.
You mean are we still talking about the "Overview" Tab?
I see... We could definitely add an argument to control it. I'll update here once there is an RC