BitterStarfish58 could you open a GitHub issue on it? I really want to make sure we support it (and I think it should not be very difficult)
When using the UI regex search for experiments, because the search runs on every keystroke, it consistently pops up the "ERROR Fetch Experiments failed" window as soon as you start using groups in the regex (that is, parentheses of any kind).
hmm that is a good point (i.e. only on enter it would actually search)
Could it be updated so that if an invalid regex pattern is given, it simply highlights the search bar in red (or similar) rather than stopping us while writing the search pattern?
...
I'm hoping we are ready to release
I find it quite difficult to explain these ideas succinctly, did I make any sense to you?
Yep, I think we are totally on the same wavelength 🙂
However, it also seems to be not too prescriptive,
One last question, what do you mean by that?
Any recommended way to make a task/pipeline “pause” until some external condition is met?
RoughTiger69 I would set up a trigger on the Dataset (i.e. new version)
https://github.com/allegroai/clearml/blob/df3d3b269acd2df0f31bfe804eb54ddc84d807c0/examples/scheduler/trigger_example.py#L44
wdyt?
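A rough sketch of what that could look like (the project/queue names, task ID, and pooling frequency here are placeholder assumptions, not taken from the example):
```
from clearml.automation import TriggerScheduler

# poll the backend every few minutes for new dataset versions
trigger = TriggerScheduler(pooling_frequency_minutes=3)

# when a new Dataset version appears in the given project,
# clone the prepared task and enqueue it for execution
trigger.add_dataset_trigger(
    name="on-new-dataset-version",          # placeholder trigger name
    schedule_task_id="<task-id-to-clone>",  # placeholder task ID
    schedule_queue="default",
    trigger_project="datasets",             # placeholder dataset project
)

trigger.start_remotely(queue="services")
```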
Hi SharpDove45
what was suggested about how it fails on bad/missing credentials
Yes, this is correct. Since you specifically set the hosts, worst case you will end up with wrong credentials 🙂
Correct.
It starts with the initial script (entry point); if it is self-contained (i.e. does not interact with the rest of the repo) it will only analyze that script, otherwise it will analyze the entire repo's code.
Can you try running the example code and see if that works for you?
I was wondering what the use of PipelineController.create_draft is, if you can't use it to clone and run tasks, as we have seen
I think the initial thought was to allow creating a pipeline from a pipeline programmatically. Then once you have the "pipeline" you can manually enqueue it and modify it. Think of a pipeline constructing other pipelines in flight based on some logic, then launching them in parallel.
make sense ?
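To make that concrete, a minimal sketch (the project/task names are placeholders I'm assuming, not a definitive recipe):
```
from clearml.automation import PipelineController

# build the pipeline definition programmatically
pipe = PipelineController(
    name="generated-pipeline", project="examples", version="1.0.0"
)
pipe.add_step(
    name="stage_a",
    base_task_project="examples",   # placeholder project
    base_task_name="step 1",        # placeholder task name
)

# create the pipeline Task as a draft instead of launching it;
# it can then be modified and enqueued (from the UI or programmatically)
pipe.create_draft()
```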
Feel free to open an issue on GitHub to make sure this is not forgotten
Also, the IDs as an entry in the Configuration will not be clickable in the web interface, right?
No, but on the other hand, it will be editable if you clone the Task.
Which brings me to a different scenario,
In the original one, the Main Task created the Dataset, i.e. Output Dataset (and stored it both ways).
I can think of a situation where the Task is using the Dataset as input (say preprocessing or training); then we might want to enable users to clone and change the input Dataset. wdyt?
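Something along these lines, as a sketch (the names and config layout are my assumptions):
```
from clearml import Task, Dataset

task = Task.init(project_name="examples", task_name="preprocess")  # placeholder names

# keep the input dataset ID in a connected dict so it stays editable
# when the Task is cloned from the UI
config = {"input_dataset_id": "<dataset-id>"}  # placeholder ID
config = task.connect(config)

local_copy = Dataset.get(dataset_id=config["input_dataset_id"]).get_local_copy()
```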
SkinnyPanda43 could it be the clearml.conf is too large? how come it exceeds 16kb ?
Any hint on how you start the AWS autoscaler ?
Has anyone done this exact use case - updates to datasets triggering pipelines?
Hi TrickySheep9 seems like this is following a different thread, am I missing something ?
Seems like passing the Task object is not working as expected (I'll make sure it is fixed).
Try: dataset._task.set_parent(Task.current_task().id)
I'm not sure about the intended use of connect_configuration now.
Basically here is the rationale behind it:
I have a config file that I want to log on the Task, and I also want to be able to change this configuration file externally when launching using an agent (i.e. edit the content). I have a nested dictionary that I do not want to flatten and push as hyper-parameters because it is not very readable, so I want to store it in a more human-readable form and edit it a...
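Roughly, the usage looks like this (the file, project, and section names below are placeholders I'm assuming):
```
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")  # placeholder names

# log a local config file on the Task; when executed by an agent the call
# returns a path to the configuration as stored (and possibly edited) on the server
config_path = task.connect_configuration("config.yaml", name="my config")

# or connect a nested dict as-is, without flattening it into hyper-parameters
params = {"model": {"layers": 4, "dropout": 0.1}}
params = task.connect_configuration(params, name="model config")
```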
Hi SubstantialElk6
1. clearml-agent was just updated, it should solve the issue.
2. Notice that "torch" / "torchvision" packages are resolved by the agent based on the PyTorch compatibility table. Is there a way to reproduce the issue where it fails resolving the torch version? Could you send a full log?
3. If you want a specific torch version , you can put a direct link to the torch wheel, for example: https://download.pytorch.org/whl/cu102/torch-1.6.0-cp37-cp37m-linux_x86_64.whl
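For example, in a requirements.txt (a sketch; the wheel URL is the cu102/py37 build above, swap in whichever build matches your setup):
```
# pin torch to an exact wheel instead of a version spec,
# so the agent installs this file directly
https://download.pytorch.org/whl/cu102/torch-1.6.0-cp37-cp37m-linux_x86_64.whl
```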
The issue is the 400 returned from the server, let me check with the backend guys
I cannot modify an autoscaler currently running
Yes this is a known limitation, and I know they are working on fixing it for the next version
We basically have Flask commands allowing us to trigger specific behaviors. ...
Oh I see now, I suspect the issue is that the flask command is not executed from within the git project?!
Hi @<1792364603552829440:profile|TestyBeetle31>
Yeah, so sorry, we finally changed the repository name.
Where is this broken link coming from? We will fix it (we are working on it, and some of the services do not auto-forward)
Okay, let me see...
RoughTiger69
1. Move the files locally (i.e. based on the example, move folder 'b' into folder 'a')
2. Create a new version with two parents ('a' and 'b')
3. Sync the local root folder ('a' in your case)
Only the meta-data should change (because the referenced files are already in one of the datasets). wdyt?
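As a sketch of steps 2-3 (the dataset names, project, and IDs are placeholders I'm assuming):
```
from clearml import Dataset

# create the merged version with both existing datasets as parents
merged = Dataset.create(
    dataset_name="a",              # placeholder name
    dataset_project="datasets",    # placeholder project
    parent_datasets=["<id-of-a>", "<id-of-b>"],
)

# sync against the local root folder; unchanged files are only referenced,
# so only the meta-data should actually change
merged.sync_folder(local_path="a")
merged.upload()
merged.finalize()
```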
For example, the Task object is heavily overloaded and its documentation would benefit from being separated into logical units of work. It would also make it easier for the ClearML team to spot any formatting issues.
This is a very good point (the current documentation is basically docstrings, but we should create a structured one)
... but some visualization/inline code with explanation is also very much welcome.
I'm assuming this is connected with the previous po...
ContemplativeCockroach39 unfortunately not directly as part of ClearML 😞
I can recommend the NVIDIA Triton Inference Server (I'm hoping we will have the out-of-the-box integration soon)
Meanwhile you can run it manually, see the docs:
https://developer.nvidia.com/nvidia-triton-inference-server
Docker image here:
https://ngc.nvidia.com/catalog/containers/nvidia:tritonserver
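Running it manually looks roughly like this (a sketch; the image tag and model-repository path are placeholders, check the Triton docs for the current tag):
```
# pull and run the Triton server container, exposing HTTP/gRPC/metrics ports
docker run --gpus all --rm \
  -p 8000:8000 -p 8001:8001 -p 8002:8002 \
  -v /path/to/model_repository:/models \
  nvcr.io/nvidia/tritonserver:21.08-py3 \
  tritonserver --model-repository=/models
```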
Are you running the agent in docker mode or venv mode?
a task of queue B if the next task is of type A it will have to wait,
It seems you imply there are two types of Tasks and they need to be executed one after the other ?
CheerfulGorilla72 my guess is the Slack token does not have credentials for the private channel, could that be ?