Thanks @<1523701868901961728:profile|ReassuredTiger98>
From the log this is what conda is installing, it should have worked
/tmp/conda_env1991w09m.yml:
channels:
- defaults
- conda-forge
- pytorch
dependencies:
- blas~=1.0
- bzip2~=1.0.8
- ca-certificates~=2020.10.14
- certifi~=2020.6.20
- cloudpickle~=1.6.0
- cudatoolkit~=11.1.1
- cycler~=0.10.0
- cytoolz~=0.11.0
- dask-core~=2021.2.0
- decorator~=4.4.2
- ffmpeg~=4.3
- freetype~=2.10.4
- gmp~=6.2.1
- gnutls~=3.6.13
- imageio~=2.9.0
-...
Hmm, maybe this is the issue:
Conda error: UnsatisfiableError: The following specifications were found to be incompatible with a past
explicit spec that is not an explicit spec in this operation (cudatoolkit):
- pytorch~=1.8.0 -> cudatoolkit[version='>=10.1,<10.2|>=10.2,<10.3']
This makes no sense: conda is saying pytorch~=1.8.0 needs cudatoolkit >=10.1,<10.3, but it actually needs cudatoolkit 11.1
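If it helps to double-check, a quick sketch you can run on the machine (assuming PyTorch is already installed there) to see which CUDA toolkit the installed build was compiled against:

import torch

print(torch.__version__)          # e.g. 1.8.0
print(torch.version.cuda)         # e.g. "11.1" for the CUDA 11.1 builds
print(torch.cuda.is_available())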
Where again does clearml place the venv?
Usually ~/.clearml/venvs-builds/<python version>/
Multiple agents will use venvs-builds.1, venvs-builds.2, and so on
@<1657918706052763648:profile|SillyRobin38> out of curiosity did you compare performance of tensorrt-llm vs vllm ?
(the jury is still out on that, just wondered if you had a chance)
PompousParrot44 Enterprise license pricing is usually custom-tailored to the size of the company and based on usage. If you are interested, feel free to leave your details in the "contact us" form on the website, and someone from sales will contact you shortly after.
And is this repo installed on the pipeline-creating machine?
Basically I'm asking how come it did not automatically detect it?
Hi TenderCoyote78
I'm trying to clearml-agent in my dockerfile,
I'm not sure I'm following. Are you trying to create a docker container with the agent inside? For what purpose?
(notice that the agent can spin up any off-the-shelf container, there is no need to add the agent into the container, it will take care of that itself when running it)
Specifically regarding your dockerfile:
RUN curl -sSL
| sh
No need for this line
COPY clearml.conf ~/clearml.conf
Try the ab...
One additional thing to notice: docker will not actually limit the "view of the memory", it will just kill the container if you exceed the memory limit; this is a limitation of the docker runtime
they are just neighboring modules to the function I am importing.
So I think that if you specify the repo, on the remote machine you will end up with the code of the component sitting at the root folder of the repo, and from there I assume you can import the rest; the root git path should be part of your PYTHONPATH automatically.
wdyt?
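For reference, a minimal sketch of what I mean (the repo URL, branch and module names here are placeholders):

from clearml import PipelineDecorator

@PipelineDecorator.component(repo="https://github.com/your-org/your-repo.git", repo_branch="main")
def my_component(x):
    # on the agent the component code sits at the repo root,
    # so neighboring modules can be imported from there
    from my_neighboring_module import helper  # hypothetical module
    return helper(x)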
AdventurousButterfly15 this one is quite self-contained:
https://github.com/allegroai/clearml/blob/master/examples/reporting/scalar_reporting.py
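The gist of it, if you just want the one-liner (project/task names and values below are arbitrary):

from clearml import Task

task = Task.init(project_name="examples", task_name="scalar reporting")
logger = task.get_logger()
for i in range(10):
    logger.report_scalar(title="loss", series="train", value=1.0 / (i + 1), iteration=i)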
So I guess pip install finished working
But the task is evidently not being executed.
This is very odd... you can run the agent with --debug --foreground to see all the outputs and logs
Oh, yes, that might be it (the threshold is 3 minutes if there are no reports), but you can change that: task.set_resource_monitor_iteration_timeout(seconds_from_start=10)
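i.e. something along these lines (a minimal sketch, the 10 seconds is arbitrary):

from clearml import Task

task = Task.init(project_name="examples", task_name="short task")
# lower the "no reports" threshold mentioned above from 3 minutes to 10 seconds
task.set_resource_monitor_iteration_timeout(seconds_from_start=10)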
GrittyStarfish67
I do not wish for data duplication. Any Idea how to do this with clearml-data CLI/GUI/python?
At least in theory creating a new version with parents from multiple Datasets should just work out of the box.
wdyt?
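Something like this is what I had in mind (a rough sketch, the names/ids/paths are placeholders):

from clearml import Dataset

parents = ["<dataset_id_a>", "<dataset_id_b>"]  # hypothetical parent dataset ids

child = Dataset.create(
    dataset_name="merged-dataset",
    dataset_project="datasets",
    parent_datasets=parents,  # inherits the files of both parents, no data duplication
)
child.add_files(path="/path/to/new/files")  # add only the new/changed files
child.upload()
child.finalize()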
main clearml repo?
Yep that sounds right 🙂 thank you!
PlainSquid19 Trains will analyze the entire repository if the code is in a git repo, and a single script file if no repository is found.
It will not analyze an entire folder if it is not in a git repository, because it would not be able to recreate this folder anyhow. Make sense?
What does spin mean in this context?
This line: docker-compose --env-file example.env -f docker-compose-triton-gpu.yml up
But these have: different task ids, same endpoints (from looking through the tabs)
So I am not sure why they are here and why not somewhere else
You can safely ignore them for the time being 🙂
but is it true that I can have multiple models on the same docker instance with different endpoints?
Yes! this is exactly the idea (and again I'm not sure ...
Are you aware of any other way then (other than the secure: false flag)?
Actually self-signing and providing a certificate file is already supported by boto (and thus clearml)
AWS_CA_BUNDLE
https://boto3.amazonaws.com/v1/documentation/api/latest/guide/configuration.html
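i.e. something like this before any storage access (the bundle path is a placeholder):

import os

# point boto3 (and therefore clearml) at the self-signed CA bundle
os.environ["AWS_CA_BUNDLE"] = "/path/to/ca-bundle.pem"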
But first I want to make sure the verify argument is actually used, hence False
In your code, can you print the following:
import os
print(os.environ.keys())
There should be a few keys the Pycharm plugin is sending from the local machine, pointing to the git repo
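for example, to narrow it down (just an assumption that the relevant keys contain "CLEARML", adjust the filter if they don't):

import os

print([k for k in os.environ if "CLEARML" in k.upper()])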
not sure what is the "right way" 🙂
But I do pkill -f "trains-agent --gpus 0"
This will kill a process that started "trains-agent --gpus 0". Notice it matches the cmd pattern, so it has to match the way you executed the agent. You can check it with ps -Af | grep trains-agent
yey 🙂 notice that when executed by the agent, the call to execute_remotely is skipped, and so is the if statement I added (since running_locally will return False when the process is executed by the agent)
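For reference, the pattern looks roughly like this (queue name is a placeholder):

from clearml import Task

task = Task.init(project_name="examples", task_name="remote run")

if task.running_locally():
    # when the agent executes the task, running_locally() returns False,
    # so this branch (and the enqueue call) is skipped
    task.execute_remotely(queue_name="default")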
Is task.parent something that could help?
Exactly 🙂 something like:
# my step is running here
the_pipeline_task = Task.get_task(task_id=task.parent)
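and from there you can pull whatever you need off the pipeline task, for example (just an illustration):

print(the_pipeline_task.name)
print(the_pipeline_task.get_parameters())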
FierceHamster54 are you saying that inside the container it took 20 min to run, or that spinning up the GCP instance until it registered as an agent took 20 min?
Most of the time is taken by building wheels for numpy and pandas ...
BTW: this happens if there is a version mismatch and pip decides it needs to build numpy from source. Can you send the full logs of that? Maybe we can somehow avoid it?
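One workaround, if pip keeps building from source, is to explicitly pin versions that ship prebuilt wheels for your python version (a sketch, the version numbers are examples only):

from clearml import Task

# must be called before Task.init(); versions here are just examples
Task.add_requirements("numpy", "==1.21.6")
Task.add_requirements("pandas", "==1.3.5")

task = Task.init(project_name="examples", task_name="pinned requirements")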
Is there a helper function option at all that means you can flush the clearml-agent working space automatically, or by command?
On every Task execution the agent clears the venv (packages are cached locally, but the actual venv is cleared). If you want, you can turn on the venv cache, but there is no need to manually clear the agent's cache.
Hi RoughTiger69
Interesting question, maybe something like:
@PipelineDecorator.component(...)
def process_sub_list(things_to_do=[0, 1, 2]):
    r = []
    for i in things_to_do:
        print("doing", i)
        r.append("done{}".format(i))
    return r

@PipelineDecorator.pipeline(...)
def pipeline():
    # create some stuff to do:
    results = []
    for step in range(10):
        r = process_sub_list(list(range(step*10, (step+1)*10)))
        results.append(r)
    # push into one list with all result, this will ac...
Hi @<1581454875005292544:profile|SuccessfulOtter28>
Why would you archive an experiment?
Because you do not want to see it any longer (i.e. not very important), but you do not want to lose the ability to later do some forensics and look into it (meaning you do not want to completely delete it)
does that make sense ?