do you want a fully reproducible example or just 2 scripts to illustrate?
ok, so if it goes over the whole repository, then my question transforms into: how to make sure it will traverse only the current package? I have separate packages for serving and training in a single repo. I don't want serving requirements to be installed.
I found this in the conf:

# Default auto generated requirements optimize for smaller requirements
# If True, analyze the entire repository regardless of the entry point.
# If False, first analyze the entry point script, if it does not contain other to local files,
# do not analyze the entire repository.
force_analyze_entire_repo: false
clearml.utilities.pigar.main.GenerateReqs.extract_reqs
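For context, that flag sits (as far as I can tell) in clearml.conf under the sdk.development section, alongside the other requirement-detection options; set explicitly it would look roughly like this (check your own conf for the exact nesting):

```
sdk {
  development {
    # only analyze the entry point script, unless it imports local files
    force_analyze_entire_repo: false
  }
}
```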
You have two options
I think both can work but too much of a hassle. I think I'll skip extracting the common code and keep it duplicated for now
I don't see these lines when requirement deduction from imports happens.
exactly what I'm talking about
first analyze the entry point script, if it does not contain other to local files
but we run everything in docker containers. Will it still help?
I am importing a module which is in the same folder as the main one (i.e. in the same package)
example here: https://github.com/martjushev/clearml_requirements_demo
One workaround that I see is to export commonly used code not to a local module, but rather to a separate in-house library.
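Sketching that workaround: if the shared code becomes an installable in-house package, each task can pin it like any other dependency instead of triggering whole-repo analysis. A minimal pyproject.toml for such a package (all names here are hypothetical):

```toml
[build-system]
requires = ["setuptools>=61"]
build-backend = "setuptools.build_meta"

[project]
name = "mycompany-common"  # hypothetical in-house package name
version = "0.1.0"
```

Each task's environment would then install it from an internal index like any normal requirement, e.g. `mycompany-common==0.1.0`.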
"To have the Full pip freeze as 'installed packages'" - that's exactly what I'm trying to prevent. Locally my virtualenv has all the dependencies for all the clearml tasks, which is fine because I don't need to download and install them every time I launch a task. But remotely I want to keep the bare minimum needed for the concrete task. Which clearml successfully does, as long as I don't import any local modules.
"supply the local requirements.txt" - this means I have to create a separate requirements.txt for each of my 10+ modules with different clearml tasks
SmugDolphin23 sorry, I don't get how this will help with my problem
I guess you can easily reproduce it by cloning any task which has an input model - logs, hyperparams etc. are being reset, but the InputModel stays.
clearml==1.5.0
WebApp: 1.5.0-192 Server: 1.5.0-192 API: 2.18
in cloned tasks, the correct model is being applied, but the original one stays registered as input model
I think we can live without mass deleting for a while
SuccessfulKoala55 any ideas or should we restart?
need to check with infra engineers
restart of clearml-server helped, as expected. Now we see all experiments (except for those that were written into task__trash during the "dark times")
we certainly modified some deployment conf, but lets wait for answers tomorrow
weāll see, thanks for your help!
yeah, I think I'll go with schedule_function
right now, but your proposed idea would make it even clearer.
haven't tested it within decorator pipelines, but try
Logger.current_logger()
I want to have 2 instances of scheduler - 1 starts reporting jobs for staging, another one for prod
not sure I fully get it. Where will the connection between task and scheduler appear?