Also, what do you have in the "Configuration" section of the serving inference Task?
Profile page, top left corner.
Hi MagnificentSeaurchin79
This means tensorflow was not directly imported in the repository (which is odd; it might point to the auto package analysis failing to find the package, if this is the case please let me know)
Regardless, if you need to make sure a package is listed in the requirements, either import it or use `Task.add_requirements('tensorflow')`,
or `Task.add_requirements('tensorflow', '2.3.1')` to pin a specific version.
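For example, a minimal sketch (project and task names are placeholders):
```python
from clearml import Task

# add_requirements must be called *before* Task.init so the package
# is registered as part of the task's requirements
Task.add_requirements('tensorflow', '2.3.1')

task = Task.init(project_name='examples', task_name='requirements demo')
```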
Hi DullCamel78
Hi everyone! Has anyone tried running aws_autoscaler.py without docker?
Well, generally, since this is a remote machine the easiest way to control the environment is with containers, hence the default use case. In theory you can change it to use venv, but then of course you're somewhat limited with the different drivers/CUDA/Python environments.
performance under docker is 10% lower than on bare metal
add to your extra docker args:
` extra_docker_arguments: ["...
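For reference, a sketch of the matching clearml.conf section (the `--ipc=host` flag is only an illustrative example of a docker argument you might pass):
```
agent {
    # extra arguments appended to the agent's docker run command
    extra_docker_arguments: ["--ipc=host"]
}
```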
Hi PompousBeetle71, what exactly is the scenario / problem we are trying to solve?
JitteryCoyote63 this is the standard way to remove a server from the SSH known hosts
https://superuser.com/a/30089
Specifically, you can try: `ssh-keygen -R 10.105.1.77`
Hi RobustGoldfish9, kudos on the mount, and my apologies for forgetting to mention it.
You are absolutely right, I'll make sure we have it in the documentation; there is no way to know about that obscure env variable 🙂
Hi SubstantialElk6
I think you are absolutely correct, it seems the glue pops all the arguments, when in fact it should probably process them and convert the --env/-e
What do you think?
Also, I assume if these are the default arguments they should actually be part of the k8s apply.yaml template, no?
`load_model` will get a link to a previously registered URL (i.e. it searches for a model pointing to that specific URL; if it finds one, it returns the Model object)
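If this refers to the ClearML model registry lookup, a minimal sketch (the weights URL is a placeholder):
```python
from clearml import InputModel

# import_model looks up the registry by URL: if a model pointing to this
# exact URL was already registered, the existing Model object is returned
model = InputModel.import_model(weights_url='s3://my-bucket/models/model.h5')
local_weights = model.get_weights()  # fetch a local copy of the weights file
```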
SmallDeer34 I have to admit this reference is relatively old, maybe we should update it to point to http://clearml.ml (would that make sense?)
SmarmySeaurchin8 it could be a switch; the problem is that when you have automatic stopping flows, they will abort a task, which is legitimate (i.e. it should not be considered failed)
How come you have aborted tasks in the pipeline? If you want to abort the pipeline, you need to first abort the pipeline Task, then the tasks themselves.
Please let me know what you find 🤞
PompousParrot44 the fundamental difference is that artifacts are uploaded manually (i.e. a user will specifically "ask" to upload an artifact), while models are logged automatically and a user might not want them uploaded (imagine debugging sessions, or testing).
By adding the output_uri argument, you can tell trains that you want all models to be automatically uploaded (not just logged).
Now here is the nice thing, when running using the trains-agent, you can have:
Always upload the mod...
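A minimal sketch of enabling automatic model upload (the bucket path, project, and task names are placeholders):
```python
from clearml import Task

# with output_uri set, models logged by the frameworks are also
# uploaded to the destination, not just registered
task = Task.init(
    project_name='examples',
    task_name='auto model upload',
    output_uri='s3://my-bucket/models',
)
```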
Hi ExasperatedCrocodile76
This is quite the hack, but doable 🙂
```python
import importlib.util  # plain "import importlib" does not reliably expose importlib.util

# register the file as a configuration object on the Task (editable when running remotely)
file_path = task.connect_configuration(name='augmentations', configuration='augmentations.py')

# dynamically load the (possibly overridden) configuration file as a Python module
module_name = 'augmentations'
spec = importlib.util.spec_from_file_location(module_name, file_path)
module = importlib.util.module_from_spec(spec)
spec.loader.exec_module(module)
```
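Anything defined at the top level of augmentations.py is then available on the module object, e.g. (get_transforms is a hypothetical function name):
```python
transforms = module.get_transforms()
```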
https://stackoverflow.com/a/54956419
FriendlySquid61 could you help?
SubstantialElk6 (2) yes, definitely will be fixed
Regarding (1), what do you mean by "via the code"? Do you mean like a Task docker cmd?
Hi GiganticTurtle0
The problem is that the packages that I define in 'required_packages' are not in the scripts corresponding
What do you mean by that? Is "Xarray" a wheel package? Is it installable from a git repo (example: pip install git+ http://github.com/user/xarray/axrray.git )?
none of my pipeline tasks are reporting these graphs, regardless of runtime. I guess this line would also fix that?
Same issue; that said, good point, maybe with pipelines we should somehow make that the default?
Hmm, could it be this is in the "helper functions"?
BTW, it looks like a lot of users really like the idea of running pipeline steps as subprocesses (which frankly I really cannot understand, as a Python Process is such an amazing tool to do just that).
Anyhow, we will have PipelineDecorator.debug_pipeline(), which will run the pipeline steps as functions, and PipelineDecorator.execute_locally(), which will run the pipeline steps as subprocesses.
wdyt?
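For illustration, a minimal sketch of the function-call mode (the project, pipeline name, and step logic are placeholders):
```python
from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['data'])
def step_one():
    return list(range(10))

@PipelineDecorator.pipeline(name='demo pipeline', project='examples', version='0.1')
def pipeline_logic():
    print(step_one())

if __name__ == '__main__':
    # run every step as a plain function call in the current process (easy to debug);
    # debug_pipeline() must be called before invoking the pipeline function
    PipelineDecorator.debug_pipeline()
    pipeline_logic()
```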
Looks great, let me see if I can understand what's missing, because it should have worked ...
sorry that I keep bothering you, I love ClearML and try to promote it whenever I can, but this thing is a real pain in the ass
No worries I totally feel you.
As a quick hack in the actual code of the Task itself, is it reasonable to have:
```python
task = Task.init(....)
task.set_initial_iteration(0)
```
This is odd, how are you spinning up clearml-serving?
You can also do it synchronously:
```python
predict_a = self.send_request(endpoint="/test_model_sklearn_a/", version=None, data=data)
predict_b = self.send_request(endpoint="/test_model_sklearn_b/", version=None, data=data)
```
Is there still an issue? Could it be the browser cannot access the file server directly?