It seems that the agent uses the remote repository's lock file. We've removed and renamed the file locally (caught under local changes), but it still installs from the remote lock file 🤔
@CheerfulKoala77 you may also need to define a subnet or security groups.
Personally I do not see the point in Docker over EC2 instances for CPU instances (virtualization on top of virtualization).
Finally, just to make sure, you only ever need one autoscaler. You can monitor multiple queues with multiple instance types with one autoscaler.
No it doesn't, the agent has its own clearml.conf file.
I'm not too familiar with clearml on docker, but I do remember there are config options to pass some environment variables to docker.
You can then set your environment variables in any way you'd like before the container starts.
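If I remember correctly, this goes through the extra Docker arguments in the agent's clearml.conf, something like the following (the variable name is just an example):

```
agent {
    # passed verbatim to `docker run`; "-e" injects environment variables into the container
    extra_docker_arguments: ["-e", "MY_VAR=value"]
}
```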
Heh, my bad, the term "user" is very much ingrained in our internal way of working. You can think of it as basically any technically-inclined person in your team or company.
Indeed the options in the WebUI are too limited for our use case, so we've developed "apps" that take a YAML configuration file and build a matching pipeline.
With that, our users do not need to code directly, and we can offer much finer control over the pipeline.
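As a rough sketch of what such an "app" does (the YAML keys and function name here are made up for illustration; the real thing is more involved), we read the YAML and feed it into a PipelineController:

```python
# hypothetical sketch of a YAML-driven pipeline builder (requires PyYAML)
import yaml
from clearml.automation import PipelineController

def build_pipeline(config_path: str) -> PipelineController:
    with open(config_path) as f:
        config = yaml.safe_load(f)

    pipe = PipelineController(
        name=config["name"],
        project=config["project"],
        version=config.get("version", "1.0.0"),
    )
    # each YAML "step" entry maps to an existing template task
    for step in config["steps"]:
        pipe.add_step(
            name=step["name"],
            base_task_project=step["task_project"],
            base_task_name=step["task_name"],
            parents=step.get("parents", []),
        )
    return pipe
```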
As for the imports, what I meant is that I encounter...
I can see the task in the UI, it is not archived, and that's pretty much the snippet, but in full I do e.g.
Thanks AgitatedDove14, I'll give it a try. Perhaps additional documentation is needed for that extra_layout
AgitatedDove14 the issue was that we'd like the remote task to be able to spawn new tasks, which it cannot do if I use Task.init
before override_current_task_id(None).
When would this callback be called? I'm not sure I understand the use case.
Since this is a single process, most of these are only needed once when our "initializer" task starts and loads.
Basically when running remotely, the first argument to any configuration (whether object or string, or whatever) is ignored, right?
packages an entire folder as zip
What if I have multiple files that are not in the same folder? (That is the current use-case)
It otherwise makes sense I think 🙂
Our workaround now, for using a Dataset as we do, is to store the dataset ID as a configuration parameter, so it's always included too 🙂
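Concretely, that workaround looks roughly like this (project and dataset names are placeholders):

```python
from clearml import Dataset, Task

task = Task.init(project_name="example", task_name="example")
ds = Dataset.get(dataset_project="example", dataset_name="my-dataset")
# storing the ID as a connected parameter means it shows up (and is editable) in the UI
task.connect({"dataset_id": ds.id}, name="dataset")
```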
Am I making sense?
No, not really. I don't see how task.connect_configuration interacts with our existing CLI. Additionally, the documentation for task.connect_configuration says the second argument is the name of a file, not the path to it? So something is off.
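If I understand the docs correctly, the intended usage is something like this (file name is a placeholder):

```python
from clearml import Task

task = Task.init(project_name="example", task_name="example")
# when given a file path, connect_configuration returns the path to read from
# (locally the original file, remotely a copy fetched from the backend)
config_path = task.connect_configuration("config.yaml", name="config")
```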
Aw you deleted your response fast CostlyOstrich36 xD
Indeed it does not appear in ps aux, so I cannot simply kill it (or at least, find it).
I was wondering if it's maybe just a zombie in the server API or similar
I'll try a hacky way around it with sed -i 's/include-system-site-packages = false/include-system-site-packages = true/g' clearml_agent_venv/pyvenv.cfg and report back.
I also tried adding agent.package_manager.system_site_packages = true to ensure these virtual environments have access btw, still to no avail.
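In clearml.conf terms, that's this setting (assuming I've put it in the right place):

```
agent {
    package_manager {
        # let the agent's virtualenvs see the system site-packages
        system_site_packages: true
    }
}
```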
Now my extra_vm_bash_script looks like so:
```
deactivate
apt-get install -y gfortran libopenblas-dev liblapack-dev libpq-dev python-is-python3 python3-pip python3-dev proj-bin libgraphviz-dev graphviz graphviz-dev libgdal-dev
apt-get install software-properties-common -y
add-apt-repository ppa:deadsnakes/ppa -y
apt update
apt install python3.7 python3.8 python3.9 python3.7-distutils python3.8-distutils python3.9-distutils python3.10-distutils python3.7-dev python3.8-dev python3.9-dev pyt...
```
Sorry, not necessarily RBAC (although that is tempting 🙂), but for now I was just wondering if an average Joe user has access to see the list of "registered users"?
(in the current version, that is; we'd very much like to use them obviously :D)
I'm trying to decide if ClearML is a good fit for my team 🙂
Right now we're not looking for a complete overhaul to new tools, just some enhancements (specifically, a model repository and data versioning).
We've been burnt by DVC and the like before, so I'm trying to minimize the pain for my team before we set out to explore ClearML.
But to be fair, I've also tried with python3.X -m pip install poetry etc. I get the same error.
I'll also post this on the main channel -->
```python
# test_clearml.py
import shutil

import clearml
import pytest


@pytest.fixture
def clearml_task():
    clearml.Task.set_offline_mode(True)
    task = clearml.Task.init(project_name="test", task_name="test")
    yield task
    shutil.rmtree(task.get_offline_mode_folder())
    clearml.Task.set_offline_mode(False)


# note: pytest only collects classes named Test*, so "ClearMLTests" would be skipped
class TestClearML:
    def test_something(self, clearml_task):
        assert True
```

run with

```
pytest test_clearml.py
```
I guess the thing that's missing from offline execution is being able to load an offline task without uploading it to the backend.
Or is that functionality provided by setting offline mode and then importing an offline task?
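As far as I can tell, the only built-in way to load an offline session is to import it, which does create a task in the backend; something like this (the path is a placeholder):

```python
from clearml import Task

# imports the offline session folder/zip into the backend,
# returning the id of the newly created task
task_id = Task.import_offline_session("/path/to/offline-session.zip")
```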
After setting the sdk.development.default_output_uri in the configs, my code kinda looks like:
```python
task = Task.init(project_name=..., task_name=..., tags=...)
logger = task.get_logger()
# report with logger freely
```
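For completeness, the config setting I mean is this one (bucket URI is a placeholder):

```
sdk {
    development {
        # all task artifacts/models get uploaded here by default
        default_output_uri: "s3://my-bucket/clearml"
    }
}
```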
Seems like Task.create is the correct use case then, since again this is about testing flows using e.g. pytest, so the task is not the current process.
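i.e. something like this (project/task names are placeholders), since Task.create doesn't attach to the running process the way Task.init does:

```python
from clearml import Task

# creates a new task entry without binding it to the current process
task = Task.create(project_name="test", task_name="flow-under-test")
```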
I've at least seen references in dataset.py's code that seem to apply to offline mode (e.g. in Dataset.create there is if output_uri and not Task._offline_mode:, so someone did consider datasets in offline mode).
I know, that should indeed be the default behaviour, but at least from my tests the use of --python ... was consistent, whereas for some reason this old virtualenv decided to use python2.7 otherwise 🤨
Still failing with 1.2.0rc3 😞 AgitatedDove14 any thoughts on your end?
Maybe it's better to approach this the other way: if one uses Task.force_requirements_env_freeze(), then the locally updated packages aren't reflected in poetry 🤔
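For context, this is how we invoke it (as I understand it, it has to run before Task.init to take effect; project/task names are placeholders):

```python
from clearml import Task

# freeze the full pip environment instead of letting the agent resolve from poetry
Task.force_requirements_env_freeze(force=True)
task = Task.init(project_name="example", task_name="example")
```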