
No worries, you open the issue on pypa/pip and I will do my best to push it forward 🙂
We also have to be realistic: I have a PR that has been waiting for almost a year now (that said, it is a major one and needed to wait until a few more features were merged). Basically, what I'm saying is that the best-case scenario is a month to get a PR merged.
Hi @<1523701079223570432:profile|ReassuredOwl55>
I want to kick off the pipeline and then check completion outside of the pipeline task. (edited)
Basically the pipeline is a Task (of a certain type).
You do the "standard" thing: you clone the pipeline Task, you enqueue it, and you wait for its status
task = Task.clone(source_task="<pipeline ID here>")
Task.enqueue(task, queue_name="services")
task.wait_for_status(...)
wdyt?
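The waiting step above boils down to polling the task status until it reaches a terminal state. A minimal self-contained sketch of that loop (the wait_for_terminal helper, the status strings, and the fake status source are illustrative, not part of the ClearML API):

```python
import time

TERMINAL_STATES = {"completed", "failed", "aborted"}

def wait_for_terminal(get_status, poll_sec=0.0):
    """Poll get_status() until it returns a terminal state, then return it."""
    while True:
        status = get_status()
        if status in TERMINAL_STATES:
            return status
        time.sleep(poll_sec)

# Fake status source standing in for reading `task.status` from a real server
statuses = iter(["queued", "in_progress", "in_progress", "completed"])
result = wait_for_terminal(lambda: next(statuses))
print(result)  # completed
```

With a real ClearML server you would call task.wait_for_status(...) instead, which does this polling for you.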
and when you remove the "." line does it work?
Hmm, this is odd. When you press on the parent dataset in the UI and go to full-details, then the INFO tab, can you copy everything here?
None
This seems like the same discussion, no?
Hi JumpyDragonfly13
I don't know why I'm getting 172.17.0.2
I think it (the remote jupyter Task) fails to get the correct IP address of the server.
You can manually correct it by going to the DevOps project, looking for the running Task there, and then under Configuration/Properties changing external_address to the actual IP 10.19.20.15
Once that is done, re-run clearml-session; it will suggest connecting to the running session, and it should work....
BTW:
I'd like...
Hi @<1547028031053238272:profile|MassiveGoldfish6>
The issue I am running into is that this command does not give me the dataset version number that shows up in the UI.
Oh no, I think you are correct, it will not return the version per dataset 🙂 (I will make sure we add it)
But with the dataset ID you can grab all the properties:
Dataset.get(dataset_id="aabbcc").version
wdyt
for example train.py & eval.py under the same repo
If you do not have a lot of workers, then I would guess console outputs
I guess it's on me to check whether this slowdown is negligible or not
Usually performance is negligible, especially with GPU
But if you really want the best:
Add --security-opt seccomp=unconfined
to the extra_docker_arguments
See details:
https://betterprogramming.pub/faster-python-in-docker-d1a71a9b9917
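On the agent side this can go into clearml.conf; a sketch, assuming the agent.extra_docker_arguments key (check your clearml-agent config reference for the exact section):

```
agent {
    # Extra arguments passed to `docker run` for every container the agent starts
    extra_docker_arguments: ["--security-opt", "seccomp=unconfined"]
}
```

Note that disabling the seccomp profile trades a bit of container sandboxing for syscall performance, so apply it only where you trust the workloads.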
There may be cases where failure occurs before my code starts to run (and, perhaps, after it completes)
Yes that makes sense, especially from IT failure perspective
HandsomeCrow5
So using the _edit method you have the ability to add/edit the execution.script field without worrying about the API version (I guess the name edit is misleading, as it also does add :)
The address is valid. If I just go to the files server address on my browser,
@<1729309131241689088:profile|MistyFly99> what is the exact address of those files? (including the http prefix) and what is the address of the web application ?
Do you have python 3.7 in the docker ?
Legit, if you have a cached_file (i.e. exists and accessible), you can return it to the caller
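A minimal sketch of that cache check (the get_file helper and the fetch fallback are hypothetical names for illustration, not ClearML API):

```python
from pathlib import Path

def get_file(cached_file, fetch):
    """Return the cached copy if it exists and is accessible, otherwise fetch it.

    `fetch` is a hypothetical callable that downloads the file and returns its path.
    """
    path = Path(cached_file)
    if path.is_file():
        return str(path)
    return fetch()

# Usage with a fake fetch fallback (no file exists at the cache path)
result = get_file("/nonexistent/cache/file.bin", fetch=lambda: "/tmp/downloaded.bin")
print(result)  # /tmp/downloaded.bin
```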
If we have the time maybe we could PR a fix?!
VexedCat68 I think this is the issue described here:
https://github.com/allegroai/clearml/issues/491
Can you test with the latest RC:
pip install clearml==1.1.5rc1
UptightCoyote42 nice!
BTW: make sure to clear requirements["conda"]
(not visible in the UI, but it tells the agent which packages were used by conda; our effort to try and see if we could do pip/conda interoperability, not sure if it actually paid off 🙂)
I'm assuming these are only the packages that are imported directly (i.e. pandas requires other packages, but the code imports pandas, so this is what is listed).
The way ClearML detects packages: it first tries to understand whether this is a "standalone" script; if it is, then only imports in the main script are logged. If it "thinks" this is not a standalone script, it will analyze the entire repository.
make sense ?
WickedGoat98 what's the clearml version you are using?
Awesome! Any chance you feel like contributing it? I'm sure ppl would be thrilled 🙂
Hi GreasyPenguin14
Could you tell me what the differences are and why we should use ClearML data?
The first difference is in the approach itself, DVC ties the data with the code (i.e. git repo), where we (ClearML - but not just us) actually think data should be abstracted from the Code-Base and become a standalone argument, allowing users to build/execute against different dataset/versions. ClearML Data becomes part of the workflow as it is visible from the UI including the abili...
Oh, did you try task.connect_configuration?
https://allegro.ai/docs/examples/reporting/model_config/#using-a-configuration-file
Okay, verified, it won't work with the demo server. Give me a minute 🙂
Hi GreasyPenguin66
Is this for the client side? If it is, why not set them in the clearml.conf?