@<1671689437261598720:profile|FranticWhale40> this one: None
So actually, while we're at it, we also need to return a string from the model, which would be where the results are uploaded to (S3).
Is this being returned from your Triton Model? or the pre/post processing code?
Calling the script without the `PipelineDecorator.run_locally()` (i.e. running the pipeline remotely) still gives the `ModuleNotFoundError: No module named`
Do you have the needed module listed on the pipeline controller Task? (press on the details link, then go to the Execution tab / "Installed Packages")
SweetGiraffe8 no need to import it, any report to TB is automatically logged by ClearML 🙂
I get what you're saying. Only problem is, in the case of AutoLogging I don't have the model id for the model being saved.
Task.models['output'] should return all the model objects the autologging created
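For example, a quick sketch of iterating over them (assuming the task already ran with auto-logging enabled; the task id is a placeholder):
```python
from clearml import Task

task = Task.get_task(task_id="<your_task_id>")  # placeholder task id
# 'output' holds the model objects the auto-logging created
for model in task.models["output"]:
    print(model.name, model.id)  # the id can then be used downstream
```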
StraightDog31 how did you get these ?
It seems like it is coming from matplotlib, no?
Yes exactly like a Task (pipeline is a type of task)
```
from clearml import Task

cloned_pipeline = Task.clone(source_task=pipeline_uid_here)
Task.enqueue(cloned_pipeline, queue_name="<your_queue>")
```
That is odd, can you send the full Task log? (Maybe some oddity with conda/pip ?!)
Hi JitteryCoyote63
I think this is the default python `str()` casting.
But you can specify the preview text when you call `upload_artifact`:
https://clear.ml/docs/latest/docs/references/sdk/task#upload_artifact
see the `preview` argument
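Something like this (the artifact name and preview string here are just for illustration):
```python
from clearml import Task

task = Task.init(project_name="examples", task_name="artifact preview")
# preview controls what the UI shows instead of the default str() casting
task.upload_artifact(name="config", artifact_object={"a": 1, "b": 2}, preview="a=1, b=2")
```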
BTW: how are you using them? should we have a direct interface to those ?
@<1545216077846286336:profile|DistraughtSquirrel81> shoot an email to "support@clear.ml" and provide all the information you can on the "lost account" (i.e. the one you had the data on), this means email account that created it (or your colleagues emails), and any other information that might help to locate it.
Yes, there was a bug where it was always cached, just upgrade clearml: `pip install git+`
Then as you suggested, I would just use sys.path; it is probably the easiest and actually very safe (because the subfolders are always next to the "main" source code)
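Something along these lines (the "subfolder" name and helper module are just placeholders):
```python
import os
import sys

# add a subfolder sitting next to this script to the import path
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "subfolder"))

import my_helper_module  # hypothetical module living in ./subfolder
```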
DeliciousBluewhale87 basically any solution that is compliant with the S3 protocol will work. An example:
`output_uri="s3://<host>:PORT/bucket/folder"`
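For instance, passing it at task creation (host/bucket here are placeholders):
```python
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="s3 compatible output",
    # any S3-compatible endpoint should work, e.g. MinIO
    output_uri="s3://my-storage-host:9000/bucket/folder",
)
```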
Are you sure Nexus supports this protocol ?
I "think" nexus sits on top of a storage solution (like am object storage), meaning we can use the same storage solution Nexus is using.
Just to clarify, we do not support the artifactory protocol Nexus provides for storing models/artifacts. But we do support it as a source for python packages used by the a...
so that one app I am using inside the Task can use the python packages installed by the agent and I can control the packages using clearml easily
That's the missing part for me. You have all the requirements on the Task (that you can fully control), the agent is setting up a brand new venv for each Task inside a container (the venv is cached, and you can also make the agent just use the default python without installing anything). The part where I'm lost is why would you need the path to t...
I think the main risk is ClearML upgrades to MongoDB vX.Y, and mongo changed the API (which they did because of amazon), and now the API call (aka the mongo driver) stops working.
Long story short, I would not recommend it 🙂
I can see the shape is `[136, 64, 80, 80]`. Is that correct?
Yes that's correct. In case of the name, just try `input__0`
Notice you also need to convert the model to TorchScript
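A minimal sketch of the conversion (the network and input shape are stand-ins, adjust to your model):
```python
import torch
import torch.nn as nn

# stand-in for your trained network; replace with your actual model
model = nn.Sequential(nn.Conv2d(64, 64, kernel_size=3, padding=1))
model.eval()

example_input = torch.rand(1, 64, 80, 80)  # hypothetical input shape
traced = torch.jit.trace(model, example_input)
traced.save("model.pt")  # Triton's PyTorch backend loads TorchScript files
```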
In that case I suggest you turn on the venv cache, it will accelerate the conda environment building because it will cache the entire conda env.
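If I remember the clearml.conf layout correctly, on the agent's machine it looks roughly like this (setting the path is what turns the cache on):
```
agent {
    venvs_cache: {
        path: ~/.clearml/venvs-cache
        max_entries: 10
        free_space_threshold_gb: 2.0
    }
}
```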
Hmm that makes sense. BTW the PYTHONPATH set by the agent would be the working dir listed under the Task, but if you set `agent.force_git_root_python_path` the agent would also add the root git repo to the python path
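In clearml.conf that would be something like this (a sketch, assuming the setting name above):
```
agent {
    # also add the git repository root to PYTHONPATH
    force_git_root_python_path: true
}
```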
CourageousKoala93 when you call `Task.close()` it will mark the task as completed, there is no need to do that manually. The idea with `mark_completed()` is that you can forcefully change the state if needed, or externally stop the task and mark it completed. Make sense?
DrabOwl94 how many 1M files did you end up having ?
ERROR: Error checking for conflicts. ... AttributeError: _DistInfoDistribution__dep_map
Hi AbruptHedgehog21
How can I add S3 credentials for an S3 bucket in example.env for clearml-serving-triton? I need to add the bucket name, keys, and endpoint
Basically boto (s3) environment variables would just work:
https://clear.ml/docs/latest/docs/clearml_serving/clearml_serving#advanced-setup---s3gsazure-access-optional
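Something along these lines in the example.env (values are placeholders; the endpoint variable is honored by recent boto3 versions, so treat that one as an assumption):
```
AWS_ACCESS_KEY_ID=<your-access-key>
AWS_SECRET_ACCESS_KEY=<your-secret-key>
AWS_DEFAULT_REGION=us-east-1
AWS_ENDPOINT_URL=http://my-s3-host:9000
```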
Hi ObedientDolphin41
I keep bumping against the `ModuleNotFoundError: No module named` exception.
Import the package inside the component function (the one you decorated), it will make sure it lists it in the requirements section automatically.
You can also set it manually by passing it as the "packages" argument on the decorator function:
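A minimal sketch (the package list and step body are illustrative):
```python
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(packages=["pandas>=1.3"])
def preprocess(csv_path):
    # importing inside the component also lets clearml auto-detect it
    import pandas as pd
    return pd.read_csv(csv_path)
```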
Just making sure I understand, you are trying to upload your models with clearml to the Yandex-compatible s3 storage?
should I update nodejs in centos image ?
I think so, it might have been forgotten
Hi @<1545216070686609408:profile|EnthusiasticCow4>
Now that it's running how does one add a new task or remove an existing task from the scheduler?
Did you notice the scheduler stores its own configuration as a config object on the Task?
Notice that you can abort/reset the scheduler, change its configuration in the UI and relaunch it (i.e. enqueue it). It will use the configuration from the UI (backend) and not the original code that created it. Does that make sense?