Hi @<1635813046947418112:profile|FriendlyHedgehong10> , can you please elaborate on the exact steps you took? When you view the model in the UI - can you see the tags you added during the upload?
Hi GrittyHawk31, can you elaborate on what you mean by metadata? Regarding models, you can achieve this by passing the output destination in `Task.init(output_uri="<S3_BUCKET>")`
I am using v1.3.2
The SDK I assume.
Is this a self hosted version? What is the server version?
This is part of the log - I'll need the entire thing 🙂
` ERROR: Could not find a version that satisfies the requirement ipython==7.33.0 (from -r /tmp/cached-reqssiv6gjvc.txt (line 4)) (from versions: 0.10, 0.10.1, 0.10.2, 0.11, 0.12, 0.12.1, 0.13, 0.13.1, 0.13.2, 1.0.0, 1.1.0, 1.2.0, 1.2.1, 2.0.0, 2.1.0, 2.2.0, 2.3.0, 2.3.1, 2.4.0, 2.4.1, 3.0.0, 3.1.0, 3.2.0, 3.2.1, 3.2.2, 3.2.3, 4.0.0b1, 4.0.0, 4.0.1, 4.0.2, 4.0.3, 4.1.0rc1, 4.1.0rc2, 4.1.0, 4.1.1, 4.1.2, 4.2.0, 4.2.1, 5.0.0b1, 5.0.0b2, 5...
@<1590514584836378624:profile|AmiableSeaturtle81> , please see the section regarding MinIO in the documentation - None
What's the version of your ClearML-Agent?
Are all agents running on the same machine or is it spread out?
Hi GiganticMole91 , what version of ClearML server are you using?
Also, can you take a look inside the elastic container to see if there are any errors there?
I don't think you can currently assign CPU cores to agents. In CPU mode they just use whatever resources the machine has.
Cool, thanks for the info! I'll try to play with it as well 🙂
Hi RipeAnt6!
Yes, you simply need to configure the following two fields in your ~/clearml.conf:
api.files_server: <PATH_TO_NAS>
sdk.development.default_output_uri: <PATH_TO_NAS>
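A sketch of how those two entries look inside `~/clearml.conf` (HOCON syntax); the NAS path below is hypothetical, so substitute your own mount point or URL:

```
# Sketch of the relevant ~/clearml.conf sections, assuming the NAS is
# mounted at a hypothetical local path.
api {
    # where the file server stores uploaded files
    files_server: "file:///mnt/nas/clearml"
}
sdk {
    development {
        # default upload destination for models/artifacts
        default_output_uri: "file:///mnt/nas/clearml"
    }
}
```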
The latest clearml version on GitHub appears to be around 1.16.3
Hi @<1811208768843681792:profile|BraveGrasshopper38> , from my understanding this is a feature to be added in the imminent release of clearml-serving.
As mentioned, this isn't supported in the current version of clearml-serving; it will be added in the next version, which should come out soon
Hi SuperiorCockroach75 , yes you should be able to run it on a local setup as well 🙂
ExasperatedCrocodile76 , did you run the original experiment on a Linux machine with pip, while the remote machine is Linux with the conda package manager?
Hi @<1546303293918023680:profile|MiniatureRobin9> , do you have some standalone script that reproduces this behaviour for you? Are you both running the same pipeline? How are you starting the pipeline?
We all do eventually 😛
Hi ShakyJellyfish91!
If I understand correctly, you want the agent to take the latest commit in the repo, even though the task was originally run from a previous commit?
Hi @<1607184400250834944:profile|MortifiedChimpanzee9> , yes 🙂
This is exactly how the autoscalers work. Scale from 0 to as many as needed and then back to 0
I'm not sure. Maybe AgitatedDove14 might have an idea
Hi @<1547028031053238272:profile|MassiveGoldfish6> , I think this is what you're looking for - None
Hi @<1529633468214939648:profile|CostlyElephant1> , I think this is what you're looking for: `CLEARML_AGENT_SKIP_PIP_VENV_INSTALL` and `CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL`
None
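A quick sketch of how these are typically set in the environment that launches the agent; the interpreter path is hypothetical, and the exact semantics may vary by clearml-agent version, so double-check the docs for yours:

```shell
# Sketch: set before starting clearml-agent, assuming the environment
# (e.g. a docker image) already contains all required packages.

# Skip venv creation + pip install and reuse an existing interpreter
# (value is a path to the python binary to use; path is hypothetical):
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3

# Or skip the whole Python environment setup and use the system environment:
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
```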
Hi @<1660817806016385024:profile|FantasticMole87> , you'll either have to re-run it or change something in the DB. I suggest the first option.
Hi @<1607184400250834944:profile|MortifiedChimpanzee9> , to use a specific requirements.txt you can use `Task.add_requirements`
None
Hi RattyLouse61 ,
Do you have an example of the parameters you're trying to connect?
Hi SuperiorPanda77, how are the tasks running? Locally or via an agent? What does the log show?