CleanWhale17 per your request :)
An automated ML Pipeline 👍
Automated Data Source Integration 👍
Data Pooling and Web Interface for Manual Annotation of Images (Seg. / Classif.) [Allegro Enterprise], or users integrate with open-source
Storage of Annotation output files (versioned JSON) 👍
Online-Training Support (for Dataset Shifts) [Not sure what you mean]
Data Pre-processing (filter/augment) [Allegro Enterprise], or users integrate with open-source
Data-set visualization (stats...
No (this is deprecated and was removed because it was confusing)
https://github.com/allegroai/clearml-agent/blob/cec6420c8f40d92ab1cd6cbe5ca8f24cf351abd8/docs/clearml.conf#L101
error in my-package setup command:
Okay this seems like an error in the setup.py you have in the "mypackage" folder
Yes, there is no real limit; I think the only requirement is docker v19+
Yes, only task.execute_remotely()
should be the last call, because it will literally stop the local run before you add the Args section
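A minimal sketch of the ordering (project/queue names here are just placeholders):
```python
from clearml import Task

task = Task.init(project_name='examples', task_name='remote run')  # hypothetical names

args = {'batch_size': 32}  # hypothetical arguments
task.connect(args)  # populate the Args section while still running locally

# last call: stops the local process and enqueues the task for remote execution
task.execute_remotely(queue_name='default')
```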
UnevenDolphin73 FYI: clearml-data is documented, unfortunately only on GitHub:
https://github.com/allegroai/clearml/blob/master/docs/datasets.md
StraightDog31 how did you get these?
It seems like it is coming from matplotlib, no?
Wait ResponsiveHedgehong88 I'm confused: if you integrated your code with clearml, didn't you run it manually even once (on any machine, local/remote)?
@<1535793988726951936:profile|YummyElephant76> oh you mean the jupyter server was running, then inside the notebook you would start a new venv, and in that venv the "notebook" package was missing, hence it failed to detect the notebook?
I would suggest deleting them immediately when they're no longer needed,
This is the idea for the next RC: it will delete them after it is done using them 🙂
Hi JumpyDragonfly13
I don't know why I'm getting
172.17.0.2
I think it (the remote jupyter Task) fails to get the correct IP address of the server.
You can manually correct it by going to the DevOps project, looking for the running Task there, then under Configuration/Properties changing external_address
to the actual IP 10.19.20.15
Once that is done, re-run the clearml-session; it will suggest connecting to the running session, and it should work...
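If it helps, the same property change can be scripted instead of done through the UI; a rough sketch (the task/project names are hypothetical, and I'm assuming set_user_properties is available in your clearml version):
```python
from clearml import Task

# hypothetical lookup of the remote jupyter Task in the DevOps project
task = Task.get_task(project_name='DevOps', task_name='interactive session')
task.set_user_properties(external_address='10.19.20.15')  # the actual server IP
```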
BTW:
I'd like...
Hi CheerfulGorilla72
I guess this is a documentation bug; is there a stable link for the latest docker-compose?
This is an odd error; could it be conda is not installed in the container (or not in the PATH)?
Are you trying with the latest RC?
Hmmm.
could you change the `api_server: http://localhost:8008` to your host IP?
for example:
```
api_server: http://192.168.1.11:8008
```
Yep that will fix it, nice one!!
BTW I think we should add the ability to continue aborted datasets, wdyt?
Okay that might explain the issue...
MysteriousBee56 so what you are saying is `python3 -m trains-agent --help` does NOT work, but `trains-agent --help` does work?
Why would you need to manually change the current run? You just provided the values with either defaults or the command line, no?
what am I missing here?
ResponsiveHedgehong88 I'm not sure I stated it, but the argparser arguments and values are collected automatically from your current run and put on the Task; there is no need to manually set them if you have the argparser running on your machine. Basically it collects the current (i.e. the process running on your machine) settings, and "copies" them ...
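For example, a minimal sketch (project and argument names are made up):
```python
import argparse
from clearml import Task

# Task.init hooks argparse, so parsed values show up under the Task's Args section
task = Task.init(project_name='examples', task_name='argparse demo')  # hypothetical names

parser = argparse.ArgumentParser()
parser.add_argument('--lr', type=float, default=0.01)  # hypothetical argument
args = parser.parse_args()
# no manual task.connect(args) needed; defaults and command-line overrides are collected
```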
if they're mission critical, but rather the clearml cache folder?
Hmmm... they are important, but only when starting the process. Any specific suggestion?
(and they are deleted after the Task is done, so they are temporary)
Hi FiercePenguin76
https://allegro.ai/clearml/docs/rst/references/clearml_python_ref/model_module/model_outputmodel.html
Basically:
```python
from clearml import OutputModel

model = OutputModel()
model.update_weights(weights_filename='local_file_here.bin')
```
Hi @<1571308003204796416:profile|HollowPeacock58>
I'm assuming this is the arm support (i.e. you are running on a new mac) fix we released in one of the last clearml-agent versions. Could you update to the latest clearml-agent?
```
pip3 install clearml-agent==1.6.0rc2
```
somehow set docker_args and docker_bash_setup_script equivalent??
```python
task.set_base_docker(...)
# somehow setup repo and branch to download to remote instance before running
```
This is automatically detected based on your local commit/branch, as well as uncommitted changes
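Something along these lines should work; a sketch assuming a recent clearml version (argument names may differ slightly in older releases, and the image/commands are just examples):
```python
from clearml import Task

task = Task.init(project_name='examples', task_name='docker demo')  # hypothetical names

task.set_base_docker(
    docker_image='nvidia/cuda:11.8.0-runtime-ubuntu22.04',  # example image
    docker_arguments='--ipc=host',                          # example docker args
    docker_setup_bash_script=['apt-get update', 'apt-get install -y git'],
)
# repo/branch and uncommitted changes are picked up automatically at Task.init,
# so there is no need to set them here
```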
I suspect it's the localhost - and the trains-agent is trying too hard to access the port, but for some reason does not report an error ...
I'm checking now to see where the extra ' could come from
```python
param = {'arg': value}
task.connect(param, section='new section')

# create pipeline here
pipeline
```
Hi CleanWhale17 let me see if I can address them all
Email Alert for finished Job (I'm not sure if it's already there).
Slack integration will be public by the end of the weekend 🙂
It is fully customizable / extendable, I'll be happy to help.
DVC
Full dataset tracking is supported using the artifacts and the ability to integrate with any central storage (shared folders / S3 / GS / Azure etc.)
From my experience, it is easier to work with artifacts from Data-Processing Tasks...
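For example, a data-processing Task can register its output as an artifact; a minimal sketch (names/paths are placeholders):
```python
from clearml import Task

task = Task.init(project_name='data', task_name='preprocess')  # hypothetical names

# the artifact is uploaded to whatever central storage the task is configured with
task.upload_artifact(name='dataset', artifact_object='/path/to/dataset.csv')  # hypothetical path
```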
Hi RattyBat71
Do you tend to create separate experiments for each fold?
If you really want to parallelize the workload, then splitting it into multiple executions (i.e. passing the CV fold index as an argument) makes sense; then you can compare / sort the results based on a specific metric. That said, if speed is not important, just having a single script with multiple CVs might be easier to implement?!
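Something like this per-fold entry point is what I mean (project/argument names are placeholders):
```python
import argparse
from clearml import Task

parser = argparse.ArgumentParser()
parser.add_argument('--fold', type=int, default=0)  # hypothetical: the CV fold index
args = parser.parse_args()

# one experiment per fold, so runs can later be compared/sorted by a metric
task = Task.init(project_name='cv-example', task_name='cv-fold-{}'.format(args.fold))
# ... train/evaluate on the split selected by args.fold and report the metric ...
```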
BTW, we figured out that the ' belongs to the echo
yep, when seeing the full command it is apparent