Ohh yes, if the execution script is not in git but a git repo exists, it will not be added (it will be added if it is a tracked file, via the uncommitted changes section)
ZanyPig66, in order to expand the support to your case, can you explain exactly which files are in git and which are not?
And it is not working? What's the Working Dir you have under the Execution tab?
Why can we even change the pip version in the clearml.conf?
LOL mistakes learned the hard way 🙂
Basically, too many times in the past pip versions were a bit broken, which is fine if they are used manually and users can reinstall a different version, but horrible when you have an automated process like the agent, so we added a "freeze version" option with greater control. Make sense?
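As a sketch, pinning the pip version in clearml.conf looks like this (the `agent.package_manager.pip_version` key; the version spec here is just an example, adjust to your needs):

```
# clearml.conf (agent section)
agent {
    package_manager {
        # pin the pip version the agent installs into task environments
        pip_version: "<20.2"
    }
}
```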
is there a way for me to get a link to the task execution? I want to write a message to slack, containing the URL so collaborators can click and see the progress
WackyRabbit7 Nice!
basically you can use this one: task.get_output_log_web_page()
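For example, a minimal sketch that posts the task's page URL to a Slack incoming webhook (the webhook URL and message wording are placeholders, not part of the ClearML API):

```python
import json
import urllib.request


def build_slack_payload(task_name: str, task_url: str) -> dict:
    # Compose a simple Slack message pointing collaborators at the task page
    return {"text": f"Task '{task_name}' is running, follow progress here: {task_url}"}


def notify_slack(webhook_url: str) -> None:
    # clearml import kept local so the helper above stays dependency-free
    from clearml import Task

    task = Task.current_task()
    payload = build_slack_payload(task.name, task.get_output_log_web_page())
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)
```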
WickedGoat98 Nice!!!
BTW: The fix should solve both (i.e. no need to manually cast), I'll make sure the fix is on GitHub so you'll be able to verify 🙂
So you have two options
- Build the container from your docker file and push it to your container registry. Notice that if you built it on the machine with the agent, that machine can use it as the Task's base container
- Use the FROM image as the Task's base container and put the rest in the docker startup bash script. Wdyt?
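As an illustration of the second option (image name and packages here are placeholders): the FROM line becomes the Task's base docker image, and the remaining Dockerfile steps move into the Task's docker setup shell script, e.g.:

```
# Task "Base Docker image" field:
#   python:3.9-slim            <- the FROM line of your Dockerfile
# Task docker setup shell script (runs inside the container before the task):
apt-get update && apt-get install -y git
pip install -r extra_requirements.txt
```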
If I try to connect a dictionary of type dict[str, list] with task.connect, when retrieving this dictionary with
Wait, this should work out of the box, do you have any specific example?
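Meanwhile, if the retrieved values do come back as strings, a best-effort cast sketch (not an official API, just a workaround until a fix removes the need for manual casts) could look like:

```python
import ast


def cast_param(value):
    # Best-effort: turn a stringified Python literal (e.g. "[1, 2, 3]")
    # back into its original type; leave anything else untouched
    if not isinstance(value, str):
        return value
    try:
        return ast.literal_eval(value)
    except (ValueError, SyntaxError):
        return value
```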
Hi ResponsiveHedgehong88
With clearml-task the assumption is that you are using argparse. Does that make sense? You can also manually access it with task.get_parameters
https://clear.ml/docs/latest/docs/references/sdk/task#get_parameters
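A small sketch of pulling the argparse values back out (clearml stores argparse parameters under the "Args/" section; the task id is a placeholder):

```python
def argparse_params(all_params: dict) -> dict:
    # clearml stores argparse values under the "Args/" section of the parameters
    return {k.split("/", 1)[1]: v for k, v in all_params.items() if k.startswith("Args/")}


def fetch_argparse_params(task_id: str) -> dict:
    from clearml import Task  # requires a configured clearml environment

    task = Task.get_task(task_id=task_id)
    return argparse_params(task.get_parameters())
```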
Hi @<1691620877822595072:profile|FlutteringMouse14>
Yes, feast has been integrated by at least a couple if I remember correctly.
Basically there are two modes: offline and online feature transformation. For offline, your pipeline is exactly what would be recommended. The main difference is online transformation, where I think feast is a great start
That didn't give useful info; it turned out docker was not installed on the agent machine x)
JitteryCoyote63 you mean "docker" was not installed and it did not throw an error ?
Hi OddShrimp85
I think numpy 1.24.x is broken in a lot of places we have noticed scikit breaks on it, TF and others 😞
I will make sure we fix this one
is there a way to increase the size of the text input for fields or a better way to handle lists?
No 😞
Maybe an easier way to use connect_configuration instead ? it will take an entire dict and store it as text (format is hocon, which is YAML/Json compatible, which means it is hard to break when editing)
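A sketch of that approach (the section name "labels" is made up; HOCON is a superset of JSON, so a JSON dump is a fair stand-in for what the UI shows as editable text):

```python
import json


def to_config_text(config: dict) -> str:
    # Roughly what ends up as the editable text blob in the CONFIGURATION tab
    return json.dumps(config, indent=2)


def connect_labels(task, classes):
    # `task` is a live clearml Task; the dict is stored as text, so long
    # lists stay readable and editable in the UI
    return task.connect_configuration({"classes": classes}, name="labels")
```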
You'll just need the user to name them as part of loading them in the code (in case they are loading multiple datasets/models).
Exactly! (and yes UI visualization is coming 🙂 )
RobustSnake79 I have not tested, but I suspect that currently all the reports will stay in TB and not passed automagically into ClearML
It seems like something you would actually want to do with TB (i.e. drill into the graphs etc.) no?
Hi @<1524560082761682944:profile|MammothParrot39>
By default you have the last 100 iterations there (not sure why you are only seeing the last 3), but this is configurable:
None
Hmm so the SaaS service ? and when you delete (not archive) a Task it does not ask for S3 credentials when you select delete artifacts ?
HandsomeCrow5 check the latest RC, I just run the same code and it worked 🙂
Hi JitteryCoyote63
I would like to switch to using a single auth token.
What is the rationale behind that?
Hmm... any idea on what's different with this one ?
Hi SkinnyPanda43
I realized that the params are not being saved anymore
Could you test with clearml==1.0.4 ?
Hi ReassuredTiger98
I think it used to be the default and then it was removed; it has no real effect on performance, but it removes all asserts ... what is your use case? do you see any performance gains?
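Assuming the flag in question is Python's -O (or PYTHONOPTIMIZE) mode, here is a small check showing that it strips assert statements:

```python
import subprocess
import sys


def assert_is_stripped() -> bool:
    # Under -O the interpreter drops `assert` statements entirely,
    # so this child process exits cleanly instead of raising
    proc = subprocess.run([sys.executable, "-O", "-c", "assert False"])
    return proc.returncode == 0
```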
I see, would having this feature solve it (i.e. base docker + bash init script)?
https://github.com/allegroai/trains/issues/236
the parameter datatypes are not being changed when loading them up.
These are the auto logged parameters , inside YOLO, correct?
Just to make sure, you can actually see the value None in the UI, is that correct? (if everything works as expected, you should see an empty string there)
Hi @<1547028031053238272:profile|MassiveGoldfish6>
What is the use case? The gist is that you want each component to be running on a different machine, and you want clearml to do the routing of data and logic between them.
How would that work in your use case?
PlainSquid19 No worries 🙂
btw: If you could check whether the mangling of the working dir / script path happens with the latest trains, that would be appreciated, because if you were running the script from "stages/" in the first place, then trains should have caught it ...
We actually added a specific call to stop the local execution and continue remotely , see it here: https://github.com/allegroai/trains/blob/master/trains/task.py#L2409
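A minimal sketch of using it (the queue name "default" and the --remote flag are just illustrative choices, not part of the API):

```python
def wants_remote(argv) -> bool:
    # tiny opt-in helper: run remotely only when --remote is passed
    return "--remote" in argv


def main(argv) -> None:
    from clearml import Task

    task = Task.init(project_name="examples", task_name="remote demo")
    if wants_remote(argv):
        # stops the local process here and enqueues the task for an agent
        task.execute_remotely(queue_name="default", exit_process=True)
    # from this point on, the code runs on the agent machine when remote
```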
Hi CleanPigeon16
Yes there is, when you are cloning the pipeline in the UI, go to the Configuration/Pipeline/continue_pipeline and change it to True
Hi MinuteWalrus85
This is a great question, and super important when training models. This is why we designed a whole system to manage datasets (including storage querying, balancing data, and caching). Unfortunately this is only available in the paid tier of Allegro... You are welcome to contact the sales guys: https://allegro.ai/enterprise/
🙂
BTW:
Error response from daemon: cannot set both Count and DeviceIDs on device request.
Googling it points to a docker issue (which makes sense considering):
https://github.com/NVIDIA/nvidia-docker/issues/1026
What is the host OS?
I have a task where I create a dataset, but I also create a set of matplotlib figures, some numeric statistics, and a pandas table that describe the data, which I wish to have associated with the dataset and viewable from the ClearML web page for the dataset.
Oh sure, use https://clear.ml/docs/latest/docs/references/sdk/dataset#get_logger ; they will be visible on the Dataset page for the version in question
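For instance, a sketch attaching a stats summary, a table, and a figure to the dataset version (the report_* calls follow the regular Logger API; titles and series names are illustrative):

```python
def summary_stats(values):
    # simple numeric summary to attach alongside the dataset
    n = len(values)
    return {"count": n, "mean": sum(values) / n, "min": min(values), "max": max(values)}


def attach_reports(dataset, dataframe, figure, values):
    # `dataset` is a clearml Dataset; reports show up on its version page
    logger = dataset.get_logger()
    logger.report_table(title="preview", series="head", iteration=0, table_plot=dataframe)
    logger.report_matplotlib_figure(title="distribution", series="hist", figure=figure, iteration=0)
    for name, value in summary_stats(values).items():
        logger.report_single_value(name=name, value=value)
```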