Hi EcstaticPelican93
Sure, the model deployment itself (i.e. the serving engine) can be executed on any private network (basically like any other agent)
Make sense?
So you have two options:
- Build the container from your Dockerfile and push it to your container registry. Note that if you built it on the machine running the agent, that machine can use it as the Task's base container.
- Use the FROM container as the Task's base container and put the rest in a docker startup bash script (see the sketch below). Wdyt?
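For example, a rough sketch of option 2 (assuming a clearml version whose Task.set_base_docker supports docker_setup_bash_script; the image name and commands below are just placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="custom base container")

# Use the image from your Dockerfile's FROM line as the Task's base container,
# and run the remaining Dockerfile steps as a startup bash script.
task.set_base_docker(
    "python:3.10-slim",  # placeholder: the image from your FROM line
    docker_setup_bash_script=[
        "apt-get update && apt-get install -y git",
        "pip install -r /opt/extra_requirements.txt",  # placeholder path
    ],
)
```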
I'm just trying to see what default server is set, and whether it is responsive
I'm assuming you mean your own server, not the demo server, is that correct?
And then the second part is to check if it is up and alive.
Yes, you can curl to the ping endpoint:
https://clear.ml/docs/latest/docs/references/api/debug#post-debugping
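For example, a minimal check in Python (assuming a self-hosted API server; replace the address, the default API port on a docker-compose install is usually 8008):

```python
import requests

# debug.ping is a lightweight liveness endpoint on the ClearML API server
api_server = "http://localhost:8008"  # replace with your own API server address
resp = requests.post(f"{api_server}/debug.ping")
print(resp.status_code, resp.json())
```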
It would be nice to have some documentation explaining how randomness behaves when running tasks (in all their variations). E.g. should I trust seeds to be reset, or should I not assume anything and do my own control over seeds?
That is a good point, I'll make sure we mention it somewhere in the docs. Any thoughts on where?
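In the meantime, if you'd rather not rely on any implicit behavior, a simple sketch of explicit seed control at the top of the script (plain Python/NumPy; the seed value is arbitrary):

```python
import random

import numpy as np

SEED = 42  # any fixed value you choose

random.seed(SEED)
np.random.seed(SEED)
# add your framework's own call as well, e.g. torch.manual_seed(SEED) for PyTorch
```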
No, after that, do you see the poetry.lock removed in the uncommitted changes?
Hmm, it should not make a difference.
Could you verify it still doesn't work with TF 2.4 ?
EnviousStarfish54 could you send the conda / pip environment?
Maybe that's the difference between machines A and B?
EnviousStarfish54 thanks again for the reproducible code, it seems this is a Web UI bug, I'll keep you updated.
EnviousStarfish54 generally speaking, hyperparameters are flat key/value pairs. You can have as many sections as you like, but inside each section they are key/value pairs. If you pass a nested dict, it will be stored as path/to/key:value (as you witnessed).
If you need to store a more complicated configuration dict (nesting, lists, etc.), use connect_configuration; it will convert your dict to text (in HOCON format) and store that.
In both cases you can edit the configuration and then when ru...
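For example, a short sketch of both approaches (project/section names here are just illustrative):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="params vs configuration")

# Flat key/value hyperparameters (stored under a single section)
params = {"lr": 0.001, "batch_size": 32}
task.connect(params)

# Nested configuration object -> stored as editable text (HOCON)
config = {"model": {"layers": [64, 64], "dropout": 0.1}, "data": {"augment": True}}
config = task.connect_configuration(config, name="model config")
```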
clearml should detect the "main" packages used in the repository (not just in the main script); the derivatives will be installed automatically by pip when the agent is installing the environment. Once the agent is done setting up the environment, it updates the Task back with the full list of packages, including all required packages.
Hi John, sort of. It seems that archiving pipelines does not also archive the tasks that they contain, so
This is correct, the rationale is that the components (i.e. Tasks) might be used (or already used) as cached steps ...
I believe that happens natively thanks to pyhocon? No idea why it fails on Mac.
That's the only explanation ...
But the weird thing is, it did not work on my Linux box?!
Sounds good, let's work on it after the weekend 🙂
Could you test with the latest "clearml": pip install git+
Task.add_requirements(".") should be supported now 🙂
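For example (a minimal sketch; add_requirements should be called before Task.init):

```python
from clearml import Task

# Register the local package (the "." requirement) before creating the Task
Task.add_requirements(".")

task = Task.init(project_name="examples", task_name="local package requirement")
```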
Hi @<1618780810947596288:profile|ExuberantLion50>
I'm trying to containerize a task using clearml-agent build, following instructions from the docs online.
Do you mean to create a container with the Task's environment for debugging ?
If this is for running the Task there is no need to create a specific container for it, both code and python env are cached.
A quick fix would be:
```python
import dotenv
dotenv.load_dotenv('~/.env')

from clearml import Task  # Now we can load it.
import argparse

if __name__ == "__main__":
    # do stuff
    ...
```
wdyt?
Hi @<1529633468214939648:profile|CostlyElephant1>
what seems to be the issue? I could not locate anything in the log
"Environment setup completed successfully
Starting Task Execution:"
Do you mean it takes a long time to setup the environment inside the container?
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL and CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL,
It seems to be working; as you can see, no virtual environment is created, and the only thing that is installed is the clearml-agent that i...
Try: task.flush(wait_for_uploads=True)
Should do the trick 🙂
IdealPanda97 hmmm interesting, what exactly is the scenario here?
Meanwhile, check CreateFromFunction.create_task_from_function(...)
It might be better suited than execute_remotely for your specific workflow 🙂
Any comments/ideas on how to make it better will be more than welcome 🙂
Hmm, can you try with an additional configuration? Next to "secure: true" in your clearml.conf, can you add "verify: false"?
Yes, I think writer.add_figure somehow crops the image.
Hi @<1523701260895653888:profile|QuaintJellyfish58>
Is there a way or a trigger to detect when the number of workers in a queue reaches zero?
You mean to spin them down? What's the rationale?
I'd like to implement a notification system that alerts me when there are no workers left in the queue.
How are they "dropping"?
Specifically to your question, let me check; I'm sure there is an API that gets that data, because you can see it in the UI 🙂
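As a rough sketch (not an official recipe: it assumes the APIClient wrapper and the workers.get_all endpoint, and the exact response fields may differ between server versions), you could poll the workers and alert when none are serving the queue:

```python
from clearml.backend_api.session.client import APIClient

client = APIClient()
queue_name = "default"  # hypothetical queue name

# List all registered workers and keep the ones listening on our queue
workers = client.workers.get_all()
serving = [
    w for w in workers
    if any(getattr(q, "name", None) == queue_name
           for q in (getattr(w, "queues", None) or []))
]

if not serving:
    print(f"No workers left on queue '{queue_name}' - trigger your alert here")
```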
(the payload is not in the correct form, can that be a problem?)
It might, but I assume you will get a different error
LudicrousParrot69 there is already Task.add_tags:
https://github.com/allegroai/clearml/blob/2d561bf4b3598b61525511a1a5f72a9dba74953e/clearml/task.py#L964
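For example, a minimal sketch (project/tag names are just placeholders):

```python
from clearml import Task

task = Task.init(project_name="examples", task_name="tagging example")

# Add one or more tags to the current task
task.add_tags(["reviewed", "best-model"])
```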