@<1734020208089108480:profile|WickedHare16> , I suggest going through the documentation and seeing how each part works with the others
Hi @<1535069219354316800:profile|PerplexedRaccoon19> , not sure I understand what you mean, can you please elaborate on what you mean by doing the evaluations within ClearML?
Hi @<1533257411639382016:profile|RobustRat47> , can you please add a screenshot of what you're getting?
I think this should include both fixes
CheerfulGorilla72 , Hi 🙂
Can you please give some examples? Like what setting produced what link in your system and how you ran it.
@<1576381444509405184:profile|ManiacalLizard2> , the rules for caching steps are as follows: first, you need to enable it. Then, assuming there is no change in input from the previous run AND there is no code change, the output from the previous pipeline run is reused. Code from imports shouldn't change, since requirements are logged from previous runs and used in subsequent runs
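A rough sketch of that caching decision in Python (illustrative only — the function and hash names here are hypothetical, not ClearML internals):

```python
# Illustrative sketch of pipeline-step caching logic (NOT ClearML's actual code).
def should_reuse_cached_output(cache_enabled: bool,
                               prev_input_hash: str, curr_input_hash: str,
                               prev_code_hash: str, curr_code_hash: str) -> bool:
    """Reuse the previous step's output only when caching is enabled,
    the inputs are unchanged, AND the code is unchanged."""
    if not cache_enabled:
        return False
    return prev_input_hash == curr_input_hash and prev_code_hash == curr_code_hash

print(should_reuse_cached_output(True, "abc", "abc", "v1", "v1"))  # True: cache hit
print(should_reuse_cached_output(True, "abc", "xyz", "v1", "v1"))  # False: inputs changed
```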
CheerfulGorilla72 , can you please share how the following settings are configured in your ~/clearml.conf ?
api.web_server
api.api_server
api.files_server
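For reference, that section of ~/clearml.conf usually looks like this (the hostnames below are the default local-server placeholders — yours will differ):

```
api {
    web_server: http://localhost:8080
    api_server: http://localhost:8008
    files_server: http://localhost:8081
}
```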
Hi @<1750327622178443264:profile|CleanOwl48> , I think you also need to connect the model object to the task as an InputModel -
this way you will be able to use the input model.
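A minimal sketch of what that could look like (the `model_id`, project and task names are placeholders — adjust to your setup, and note this assumes a running ClearML setup):

```python
def attach_input_model(model_id: str):
    """Sketch: connect an existing model to the current task as an input model,
    so it appears under the task's input models and can be fetched later."""
    from clearml import Task, InputModel  # requires the clearml package

    task = Task.current_task()            # assumes a task was already initialized
    model = InputModel(model_id=model_id) # model_id is a placeholder
    task.connect(model)                   # registers it as an input model
    return model
```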
Hi @<1734020208089108480:profile|WickedHare16> , what issues are you facing?
Is the entire pipeline running on the autoscaler?
Hi VexedCat68 , what is the error that you're receiving? Do you have a print of it by chance? Also what are you trying to do with 'File 2'?
containing the correct on-premises s3 settings
Do you mean like an example for minio?
Regarding controlling the timeout - I think this is more of a pip configuration
It broke holding Shift to select multiple experiments, btw
Oh! Can you please open an issue on GitHub for tracking?
Hi @<1683648242530652160:profile|ApprehensiveSeaturtle9> , monitoring is already built into it. However, you don't have to actively use it; you can simply work with the endpoint.
I think these env variables might be relevant to you:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
https://clear.ml/docs/latest/docs/clearml_agent/clearml_agent_env_var
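For example, roughly like this (the interpreter path is a placeholder — check the docs page above for the exact semantics of each variable):

```shell
# Reuse an existing Python environment instead of building a fresh one:
# CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1 skips environment setup entirely;
# CLEARML_AGENT_SKIP_PIP_VENV_INSTALL points the agent at an existing python binary.
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3
```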
Do you mean you don't have a files server running? You can technically circumvent this by overriding the api.files_server
in clearml.conf
and set it to your default storage
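Something along these lines (the bucket path below is just a placeholder):

```
api {
    # Point uploads at your own storage instead of a fileserver instance
    files_server: s3://my-bucket/clearml
}
```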
Strange, maybe @<1523703436166565888:profile|DeterminedCrab71> might have an idea
Hi @<1576381444509405184:profile|ManiacalLizard2> , I would suggest playing with the Task object in python. You can do dir(<TASK_OBJECT>)
in python to see all of its parameters/attributes.
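The same inspection trick works on any Python object — here's the idea on a stand-in class (not the real Task object):

```python
class FakeTask:
    """Stand-in object just to demonstrate the dir() inspection trick."""
    def __init__(self):
        self.name = "demo"

    def get_parameters(self):
        return {}

task = FakeTask()
# List public attributes/methods, filtering out the dunder noise:
public = [a for a in dir(task) if not a.startswith("_")]
print(public)  # ['get_parameters', 'name']
```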
Hi @<1523702932069945344:profile|CheerfulGorilla72> , making sure I understand - You basically want to select an input model via the UI?
GreasyPenguin14 Hi!
If I understand you correctly, you would have to change the URLs of the models yourself, since they would still point to the now-downed instances.
You can also use the following setting: sdk.development.default_output_uri: "SOME_URL"
in your ~/clearml.conf to have the models sent anywhere you want from the get-go 🙂
Is that helpful?
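For example, as a clearml.conf snippet (the bucket URL is a placeholder):

```
sdk {
    development {
        # All models/artifacts will be uploaded here by default
        default_output_uri: "s3://my-bucket/clearml-models"
    }
}
```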
Hi @<1695969549783928832:profile|ObedientTurkey46> , this capability is only covered in the Hyperdatasets feature. There you can both chunk and query specific metadata.
Hi @<1523701949617147904:profile|PricklyRaven28> , note that steps in a pipeline are special tasks with a hidden system tag, so I think you might want to enable that in your search
Hi @<1675675716852649984:profile|LackadaisicalLizard46> , I think that's a really neat idea, maybe open a GitHub feature request?
Hmmmm this looks like what you're looking for:
https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#stop-1
Tell me if this helps 🙂
VexedCat68 , do you mean does it track which version was fetched, or does it track every time a version is fetched?
PanickyMoth78 , let me check on that 🙂