do you want a fully reproducible example or just 2 scripts to illustrate?
example here: https://github.com/martjushev/clearml_requirements_demo
ok, so if it goes over the whole repository, then my question transforms into: how do I make sure it will traverse only the current package? I have separate packages for serving and training in a single repo. I don't want the serving requirements to be installed.
One workaround that I see is to export commonly used code not to a local module, but rather to a separate in-house library.
if you call Task.init in your entire repo (serve/train) you end up with an "installed packages" section that contains all the required packages for both use cases?
yes, and I thought that it was looking at which libraries are installed in the virtualenv, but you explained that it instead does a static analysis over the whole repo.
as I understand this: even though force=false, my script is importing another module from the same project and thus triggering analyze_entire_repo
clearml.utilities.pigar.main.GenerateReqs.extract_reqs
"supply the local requirements.txt": this means I have to create a separate requirements.txt for each of my 10+ modules with different clearml tasks
I found this in the conf:
```
# Default auto generated requirements optimize for smaller requirements
# If True, analyze the entire repository regardless of the entry point.
# If False, first analyze the entry point script, if it does not contain other to local files,
# do not analyze the entire repository.
force_analyze_entire_repo: false
```
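To make the "analyze the entry point first" behavior concrete, here is a simplified, hypothetical sketch of the idea (not ClearML's actual pigar-based implementation): parse the entry script's imports and check whether any of them resolve to files inside the repo. If none do, there is no need to walk the whole repository.

```python
import ast
from pathlib import Path


def imports_local_modules(script_path: str, repo_root: str) -> bool:
    """Return True if the script imports a module that lives in the repo itself.

    Simplified illustration of entry-point-first analysis: only when the
    entry point pulls in local files would the whole repo need scanning.
    """
    repo = Path(repo_root)
    tree = ast.parse(Path(script_path).read_text())
    for node in ast.walk(tree):
        names = []
        if isinstance(node, ast.Import):
            names = [alias.name for alias in node.names]
        elif isinstance(node, ast.ImportFrom) and node.module:
            names = [node.module]
        for name in names:
            top = name.split(".")[0]
            # a "local" module is a <top>.py file or <top>/ package in the repo
            if (repo / f"{top}.py").exists() or (repo / top / "__init__.py").exists():
                return True
    return False
```

With this logic, an entry point that only imports third-party packages would keep the "installed packages" section small, while one `import my_local_helpers` line would trigger the repo-wide scan.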
You have two options
I think both can work, but it's too much of a hassle. I'll skip extracting the common code and keep it duplicated for now
first analyze the entry point script, if it does not contain other to local files
this is where the "magic" happens
if you import a local package from a different local folder, and that folder is Not in the same repo
need to check with infra engineers
yes, but note that I'm not talking about a VS Code instance set up by clearml-session, but about a local one. I'll do another test to determine whether VS Code from clearml-session suffers from the same problem
or somehow, we can centralize the storage of S3 credentials (i.e. on clearml-server) so that clients can access s3 through the server
like replace a model in staging Seldon with this model from clearml; push this model to prod Seldon, but in shadow mode
we are just entering the research phase for a centralized serving solution. The main reasons against clearml-serving triton are: 1) no support for Kafka, 2) no support for shadow deployments (both of these are supported by Seldon, which is currently the best-looking option for us)
SmugDolphin23 sorry, I don't get how this will help with my problem
also, I don't see an edit button near input models
all subsequent invocations are done by cloning this task in UI and changing the model task_id
no, I'm providing the id of the task which generated the model as a "hyperparam"
But the second problem hints that we need to change Dict[datetime, str]
-> Dict[str, datetime]
or do some custom processing before serialization
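The serialization issue above can be shown with a minimal, self-contained sketch (hypothetical helper names, not the actual library code): `json.dumps` rejects `datetime` objects as dict keys, so a `Dict[datetime, str]` needs custom key processing, whereas the flipped `Dict[str, datetime]` only needs its values converted.

```python
import json
from datetime import datetime


def serialize_runs(runs: dict) -> str:
    """Serialize a Dict[datetime, str].

    json.dumps raises TypeError for datetime keys, so we apply the
    "custom processing before serialization": convert keys to ISO strings.
    """
    return json.dumps({ts.isoformat(): name for ts, name in runs.items()})


def serialize_flipped(runs: dict) -> str:
    """Serialize the flipped Dict[str, datetime]: only values need converting."""
    return json.dumps({name: ts.isoformat() for name, ts in runs.items()})
```

Note that `json.dumps({datetime(2022, 4, 19): "a"})` raises `TypeError: keys must be str, int, float, bool or None`, which is why the key direction matters here.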
if I provide a PR, I don't see any CI processes in place that will verify the correctness of my code.
I'll make it more visible though
I think I'll skip the PR: there is a related problem that makes the fix (and especially its testing) much more difficult: https://github.com/allegroai/clearml/issues/648#issuecomment-1102595620
Did a small update: added a workaround and renamed the issue to describe a more client-facing condition (limit_execution_time is present) instead of an implementation detail condition (timeout_jobs are present)
The only thing I found is that I need to run flake8, but it fails even without any changes, i.e. it was not enforced before (see my msg in )
Also added an implementation thought to the issue