Hi CheerfulGorilla72
is it ideological...
Lol, no 🙂
Since some of the comparisons are done client side (in the browser, mostly the text comparisons) it is a bit heavy, so we added a limit. We want to change it so some of it is done on the backend, but in the meantime we can actually expand the limit, and maybe only lazy-compare the text areas. Hopefully in the next version 🤞
Hi Guys,
I hear you guys, and I know this is planned but it was probably bumped down in priority.
I know the main issue is the "Execution Tab" comparison, the rest is not an issue.
Maybe a quick hack to only compare the first 10 in the Execution tab, and remove the limit on the others? (The main issue with the execution is the git-diff / installed-packages comparison, which is quite taxing on the FE)
Thoughts?
task.connect
is two-way, it does everything for you:
base_params = dict(param1=123, param2='text')
task.connect(base_params)
print(base_params)
If you run this code manually, then print shows exactly what you initialized base_params with. But when the agent is running it, it will take the values from the UI (including casting to the correct type), so print will show the values/types from the UI.
Make sense ?
Hi ImpressionableRaven99
Yes, it is 🙂
Call this one before Task.init, and it will run offline (at the end of the execution, you will get a link to the local zip file of the execution):
Task.set_offline(True)
Then later you can import it to the system with:
Task.import_offline_session('./my_task_aaa.zip')
The import process actually creates a new Task on every import; that said, if you take a look here:
https://github.com/allegroai/trains/blob/10ec4d56fb4a1f933128b35d68c727189310aae8/trains/task.py#L1733
you can pass a pre-existing Task ID to "import_task" https://github.com/allegroai/trains/blob/10ec4d56fb4a1f933128b35d68c727189310aae8/trains/task.py#L1653
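Putting the pieces together, a minimal end-to-end sketch of the offline flow (the file path and project/task names here are just placeholders):

from clearml import Task

# enable offline mode before creating the Task
Task.set_offline(True)
task = Task.init(project_name='examples', task_name='offline demo')
# ... your training / reporting code runs as usual ...
task.close()  # prints the path to the local zip of this session

# later, on a machine that can reach the server:
Task.import_offline_session('./offline_session.zip')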
Hmm DepressedChimpanzee34 my bad, it seems the loading is done via a YAML loader, but the dumping is straightforward str casting...
https://github.com/allegroai/clearml/blob/6e6271fb91f2aeb2aa7a13c6d07d4e635baaa670/clearml/backend_interface/task/task.py#L934
What would you expect to get? (BTW "value\blah" is not the literal text it looks like in Python, since \b is an escape character (backspace); to get the text value\blah you would write "value\\blah")
It should preserve the order, as the order of the update back (i.e. when executed by the agent) is the same as the order of the keys (obviously py3.7+ only, because it creates a plain dict, not an OrderedDict, and plain dicts preserve insertion order only from python 3.7 on)
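For instance (a tiny sketch; the parameter names are made up):

params = {'lr': 0.01, 'batch_size': 32, 'epochs': 10}
task.connect(params)
# on py3.7+ the dict keeps insertion order, so the agent writes the
# UI values back in the same order: lr, batch_size, epochs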
It's in my local conda environment though.
Meaning, is this a wheel installed manually in conda? Or is it a folder inside the conda environment?
we concluded that we don't want to run it through ClearML after all, so we ran it standalone
out of curiosity, what was the conclusion and why?
(you can find it in the pipeline component page)
Hi AstonishingSwan80 , what do you mean by "ec2 API"?
No, Task.create is for creating an external Task, not logging your own process.
That said, you can probably override the git repo with env vars:
None
JitteryCoyote63 did you add the bash script here: https://github.com/allegroai/trains-agent/blob/master/docs/trains.conf#L99
I think you can watch it after GTC on the NVIDIA website, and a week after that we will be able to upload it to the YouTube channel 🙂
Hi GiganticTurtle0
I have found that clearml does not automatically detect the imports specified within the decorated function
The pipeline decorator will automatically detect the imports inside the function, but not outside (i.e. global), to allow better control of packages (think for example: one step needs the huge torch package, and the other does not).
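For example (a sketch using the decorator-based pipeline API; the step names and packages are placeholders):

from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=['model_path'])
def train_step(data_path):
    # imported inside the function body, so it is auto-detected
    # and installed only for this step's environment
    import torch
    ...
    return 'model.pt'

@PipelineDecorator.component(return_values=['stats'])
def analyze_step(data_path):
    # this step never imports torch, so it gets a much lighter environment
    import pandas as pd
    ...
    return {}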
Make sense ?
How can I tell clearml I will use the same virtual environment in all steps...
for example, one notebook will be dedicated to explore columns, spot outliers and create transformations for specific column values.
This actually implies each notebook is a standalone "process", which makes a ton of sense. But this is where notebooks and proper SW design break: in traditional SW the notebooks would be python files, and then of course you could import one from another; unfortunately this does not work with notebooks...
If you are really keen on using notebooks I wou...
Hi PanickyLion56
Yep, savefig also works. You can also do:
from clearml import Logger
Logger.current_logger().report_matplotlib_figure(title="My Plot Title", series="My Plot Series", iteration=10, figure=plt)
https://github.com/allegroai/clearml/blob/0c5d12b830987aa9bb8d44d81e92ff9198008f29/examples/frameworks/matplotlib/matplotlib_example.py#L25
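And the savefig route, as a minimal sketch (once Task.init is called, ClearML hooks matplotlib; the project/task/file names are placeholders):

from clearml import Task
import matplotlib.pyplot as plt

task = Task.init(project_name='examples', task_name='matplotlib demo')
plt.plot([1, 2, 3], [4, 5, 6])
plt.savefig('my_plot.png')  # captured and reported to the task automatically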
I'll try to create a more classic image.
That is always better, though I remember we have some flag to allow that. You can try with:
CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=1 clearml-agent ...
ohh, could it be a 32-bit version of Python?
JitteryCoyote63 I think this only holds for the conda distribution.
(Actually quite interesting, I wonder what happens if you already installed cudatoolkit...)
We used subprocess for it, ...
Popen? os.system? fork?
(as I see it, the services worker is only in the services-queue, and not my default queue (where my other servers/workers are))
So basically the services-mode is just a flag passed to the agent, and the services queue is the name of the queue it will pull from.
If I want a normal worker also
You can just add another section to the docker-compose, or run it manually after you spin up the docker-compose. For example:
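Something along these lines (the queue names are placeholders):

# the services agent: a single worker that can run many lightweight service tasks
clearml-agent daemon --services-mode --queue services --docker --detached

# a regular worker pulling from your default queue
clearml-agent daemon --queue default --docker --detached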
LazyFox65 wdyt ?
@<1523706266315132928:profile|DefiantHippopotamus88> seems like you are missing the ports 🙂 (the server defaults are 8080 for web, 8008 for api, 8081 for files):
CLEARML_WEB_HOST="http://<server>:8080"
CLEARML_API_HOST="http://<server>:8008"
CLEARML_FILES_HOST="http://<server>:8081"
It all depends on how we store the metadata on the performance. You could actually retrieve it from, say, the val metric and deduce the epoch based on that.
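Something like this sketch (assuming the metric was reported under the title/series shown; those names and the task id are made up):

from clearml import Task

task = Task.get_task(task_id='<task-id>')
scalars = task.get_reported_scalars()
# e.g. scalars['val']['loss'] -> {'x': [iterations...], 'y': [values...]}
series = scalars['val']['loss']
best_idx = min(range(len(series['y'])), key=series['y'].__getitem__)
best_iteration = series['x'][best_idx]  # map this back to an epoch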
Hi MotionlessSeagull22
Hmm, I'm not sure this is possible in the UI.
You can compare multiple experiments and view the images as thumbnails one next to the other, but full view will be a single image...
You can however right-click on the image and get a direct link, then open it in a new tab ... :(
So I have a task that just loads a model, but I don't see it as an artifact in the UI
You should see it under Artifacts, Input Model, if you are calling the Keras load function (or similar)
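A minimal sketch of what should trigger that (relying on the automatic Keras logging; the project/task names and file path are placeholders):

from clearml import Task
from tensorflow.keras.models import load_model

task = Task.init(project_name='examples', task_name='load model demo')
model = load_model('my_model.h5')  # picked up automatically as the task's Input Model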
PanickyAnt52 when the docker is loaded, it will search for the highest python version to use for the agent. Then, when it is launching the Task itself, it will first try to match the python version requested by the Task. It does so by looking for "python3.7".
What are you getting when running "which python3.7" inside the docker? Could it be you have a venv inside the docker with a different python version?