Hmm, there was a commit there that was "fixing" some stuff; apparently it also broke it
Let me see if we can push an RC (I know a few things are already waiting to be released)
Hi @<1523702932069945344:profile|CheerfulGorilla72>
the agent always inherits from the docker's system-installed environment
If you have a custom venv inside the docker that is not activated by default, you can point the agent to it with:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
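A minimal sketch of pointing the agent at a venv already baked into the container (the /opt/myenv path here is a hypothetical example):

```shell
# Hypothetical path to a venv that exists inside the docker image.
# When this variable points at a python binary, the agent skips
# creating its own venv and uses that interpreter directly.
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/opt/myenv/bin/python
```

After that, start the agent as usual and it will run tasks with that interpreter.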
Would I be able to add customized columns like I am able to in
task.connect
? Same question applies for parallel coordinates and all kinds of comparisons
No to both 😞
And If I create myself a Pro account
Then you have the UI and implementation of both AWS & GCP autoscalers, am I missing something?
In Windows setting
system_site_packages
to
true
allowed all stages in the pipeline to start - but it doesn't work in Linux.
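For reference, a sketch of the matching clearml.conf section (section names follow the default config file):

```
agent {
  package_manager {
    # let the created venv see the system-installed packages
    system_site_packages: true
  }
}
```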
Notice that it will inherit from the system packages, not from the venv the agent is installed in
I've deleted the tfrecords from the master branch and committed the removal, and set the tfrecords folder to be ignored in .gitignore. Trying to find which changes are considered to be uncommitted.
you can run git diff it is essentially...
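For example, to see what git considers uncommitted (generic git usage, not ClearML-specific):

```shell
# Unstaged changes in tracked files
git diff
# Staged + unstaged changes relative to the last commit
git diff HEAD
# Untracked files; anything newly ignored will no longer appear here
git status --short
```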
How so? they are in one place? the creation of the venv is transparent, and the packages that are there are everything you have in the docker, plus the ability to override them from the UI.
What am I missing here ?
GiganticTurtle0 the fix was not applied in 1.1.2 (which was a hotfix after the pyjwt interface changed and broke compatibility)
The type hint fix is on the latest RC:
pip install clearml==1.1.3rc0
I just verified with your example
apologies for the confusion, we will release 1.1.3 soon (we just need to make sure all tests pass with a few PRs that were merged)
Hi AstonishingRabbit13
now I’m training yolov5 and i want to save all the info (model and metrics ) with clearml to my bucket..
The easiest thing (assuming you are running YOLOv5 with python train.py ) is to add the following env variable:
CLEARML_DEFAULT_OUTPUT_URI=" " python train.py
Notice that you need to pass your GS credentials here:
https://github.com/allegroai/clearml/blob/d45ec5d3e2caf1af477b37fcb36a81595fb9759f/docs/clearml.conf#L113
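A sketch of the matching clearml.conf section for the GS credentials (the path is a placeholder; see the linked config file for the full set of options):

```
sdk {
  google.storage {
    # path to your GCP service-account credentials file
    credentials_json: "/path/to/credentials.json"
  }
}
```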
PompousBeetle71 if this is argparse and the type is defined, the trains-agent will pass the equivalent in the same type, which for str amounts to '' . Make sense?
, but are you suggesting sending the requests to Triton frame-by-frame?
yes! The Triton backend will do the auto-batching, and in an enterprise deployment the gRPC load balancer will split it across multiple GPU nodes 🙂
ComfortableShark77 are you saying you need "transformers" in the serving container?
CLEARML_EXTRA_PYTHON_PACKAGES: "transformers==x.y"
https://github.com/allegroai/clearml-serving/blob/6005e238cac6f7fa7406d7276a5662791ccc6c55/docker/docker-compose.yml#L97
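A sketch of how that looks in the serving docker-compose (service name per the linked file; leaving the package unpinned here, pin a version in practice):

```yaml
services:
  clearml-serving-inference:
    environment:
      # extra pip packages installed into the serving container at startup
      CLEARML_EXTRA_PYTHON_PACKAGES: "transformers"
```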
TightElk12 I think this message belongs to a diff thread ;)
It takes 20mins to build the venv environment needed by the clearml-agent
You are Joking?! 😭
it does apt-get install python3-pip and pip install clearml-agent - how is that 20 min?
If I call explicitly
task.get_logger().report_scalar("test", str(parse_args.local_rank), 1., 0)
, this will log as expected one value per process, so reporting works
JitteryCoyote63 and do prints get logged as well (from all processes) ?
Apparently it ignores it and replaces everything...
ContemplativePuppy11
yes, nice move. my question was to make sure that the steps are not run in parallel because each one builds upon the previous one
if they are "calling" one another (or passing data) then the pipeline logic will deduce they cannot run in parallel 🙂 basically it is automatic
so my takeaway is that if the funcs are class methods the decorators won't break, right?
In theory, but the idea of the decorator is that it tracks the return value so it "knows" how t...
Hi @<1526371965655322624:profile|NuttyCamel41>
How are you creating the model? specifically what do you have in "config.pbtxt"
specifically any python code should be in the pre/post processing code (actually not running on the GPU instance)
I have a lot of parameters, about 40. It is inconvenient to overwrite them all from the window that is on the screen.
Not sure I follow, so what are you suggesting?
Can you see that the environment is actually being passed ?
FriendlySquid61 could you help?
Hmm, I think you should use --template-yaml
The --template-yaml allows you to use a full k8s YAML template (the overrides option is just overrides, which does not include most of the configuration options). We should probably deprecate it
But from the log it seems that:
you are not running as root in the docker? Python 3.8 is installed (and not Python 3.6 as before)
Hi BitterStarfish58
What's the clearml version you are using ?
dataset upload both work fine
Artifacts / Datasets are uploaded correctly ?
Can you test if it works if you change " http://files.community.clear.ml " to " http://files.clear.ml " ?
Click on the "k8s_schedule" queue, then on the right hand side, you should see your Task, click on it, it will open the Task page. There click on the "Info" Tab, there look for "STATUS MESSAGE" and "STATUS REASON". What do you have there?
SubstantialElk6 this is odd, how are they passed ? what's the exact setup ?
Hmm yes this is exactly what should not happen 🙂
Let me check it
MoodyCentipede68 is diagram 2 a batch processing workflow?