
This should have worked with the latest clearml RC.
And you verified it is not working?
Switching to a process Pool might be a bit of overkill here (I think)
wdyt?
Okay, now let's try the final lines:
$LOCAL_PYTHON -m virtualenv /root/venv
/root/venv/bin/python3 -m pip install git+
Can you see it on the console ?
Here you go 🙂
(using trains_agent for easier access to all the data)
from trains_agent import APIClient

client = APIClient()
log_events = client.events.get_scalar_metric_data(task='11223344aabbcc', metric='valid_average_dice_epoch')
print(log_events)
Just curious, if
is a value I can set, where is it used?
It is used when creating a dataset from inside the cluster (i.e. when launching using the clearml k8s glue);
it will have no effect on what users have on their local machines,
i.e. they can always point to a different server.
That said, when users create their initial clearml.conf and copy-paste the info from the web UI, this value (or it might be another one, I'll double-check later) will set the initial configuration the c...
GiganticTurtle0 what's the Dataset Task status?
send the agent's logs to a log management and monitoring service,
These are stored in ELK, which was built to store large amounts of logs; I cannot see any reason why one would want to remove them.
Maybe if there would be a way to change their format, it could also help filtering them from my side.
You mean in the UI?
Hi GracefulDog98
Any guess why the password is "incorrect" for me?
Basically the clearml-session CLI needs to be able to SSH into the host (clearml-agent) machine,
is that possible?
I think this is the issue, it was search-and-replaced. The thing is, I'm not sure the helm chart is updated to clearml. Let me check
For reporting the console logs you can use:
logger.report_text("my log line here", print_console=False)
https://github.com/allegroai/clearml/blob/b4942321340563724bc16f60ea5dd78c9161778d/clearml/logger.py#L120
Can you send the console output of this entire session please?
Yes this is a misleading title
What is the difference to
file_history_size
The number of unique files per title/series combination (i.e. how many images to store in the history when the iteration is constantly increasing)
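For example, a minimal sketch (the project/task names are placeholders, and the per-call max_image_history override is my assumption for the same setting):

import numpy as np
from clearml import Task

task = Task.init(project_name="examples", task_name="image history demo")
logger = task.get_logger()

for iteration in range(100):
    img = (np.random.rand(64, 64, 3) * 255).astype("uint8")
    # Only the last N unique files are kept for this title/series combination,
    # even though the iteration keeps increasing.
    logger.report_image(
        title="debug samples",
        series="random",
        iteration=iteration,
        image=img,
        max_image_history=5,  # assumed per-call override of file_history_size
    )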
Thanks ScantChimpanzee51 !
Let me see what I can find, should be easy enough to fix now 🙂
Hi ExuberantParrot61
Is the pipeline logic code running from inside the repo?
Sorry if it's something trivial. I recently started working with ClearML.
No worries, this has actually more to do with how you work with Dask
The Task ID is the unique ID of any Task in the system (task.id will return the UID string)
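For example, a quick sketch (the ID below is just a placeholder):

from clearml import Task

task = Task.get_task(task_id="11223344aabbcc")  # placeholder ID
print(task.id)    # the unique ID string
print(task.name)  # human-readable name, not unique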
Can you post some toy Dask code here? I'll explain how to make it compatible with clearml 🙂
Ohh yes, if the execution script is not in git but a git repo exists, it will not add it (it would add it if it were a tracked file, via the uncommitted changes section)
ZanyPig66, in order to expand the support to your case, can you explain exactly which files are in git and which are not?
Hi @<1603198134261911552:profile|ColossalReindeer77>
When you select poetry as the package manager, the agent passes control to poetry. This means poetry needs to decide on the correct torch wheel based on your CUDA version. I do not think poetry can do that, but I do think you can specify the extra index URL to take the torch wheel from:
WickedGoat98 the mechanism of cloning and parameter overriding only works when the trains-agent
is launching the experiment. Think of it this way:
Manual execution: trains sends data to server
Automatic (trains-agent) execution: trains pulls data from the server
This applies to argparse, connect, and connect_configuration.
The trains code itself acts differently when it is executed from the 'trains-agent' context.
Does that help clear things up?
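A minimal sketch of what that means in code (names and values are placeholders):

from clearml import Task

task = Task.init(project_name="examples", task_name="override demo")

params = {"learning_rate": 0.01, "batch_size": 32}
# Manual run: these defaults are sent to the server and stored on the Task.
# Agent run (cloned Task): the same call pulls the values back from the server,
# so whatever was edited in the UI overrides the local defaults.
params = task.connect(params)
print(params["learning_rate"])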
FiercePenguin76, in the Task's Execution tab, under "script path", change it to "-m filprofiler run catboost_train.py".
It should work (assuming "catboost_train.py" is in the working directory).
Copy-paste it here 🙂
Hi PanickyMoth78
Hmm, I think it might be that it overrides it with the environment variables it sets ...
Option one, add:
sdk.development.default_output_uri: ""
https://github.com/allegroai/clearml-agent/blob/d96b8ff9068233103053bfe8305fb88274c2c9bf/docs/clearml.conf#L404
Option two (which should work as well):
environment { CLEARML_FILES_HOST: "" }
https://github.com/allegroai/clearml-agent/blob/d96b8ff9068233103053bfe8305fb88274c2c9bf/docs/clearml.conf#L421
task.models["outputs"][-1].tags
(plural, a list of strings) and yes I mean the UI 🙂
I get the n_saved
what's missing for me is how you would tell the TrainsLogger/Trains that the current one is the best. Or are we assuming the last saved model is always the best? (In that case there is no need for a tag, you just take the last one in the list.)
If we are going with "I'm only saving the model if it is better than the previous checkpoint", then just always use the same name, i.e. " http:/...
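A minimal sketch of the "always use the same name" approach (PyTorch-style saving is an assumption here; the framework auto-logging picks up the saved file either way):

import torch

best_metric = float("-inf")

def maybe_save(model, metric):
    global best_metric
    if metric > best_metric:
        best_metric = metric
        # Same filename every time, so the tracked output model always points
        # to the best checkpoint so far.
        torch.save(model.state_dict(), "best_model.pt")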
We are planning an RC later this week, I'll make sure this fix is part of it
Basically what I want is a
clearml-session
but with a docker container running JupyterHub instead of JupyterLab.
I missed that 🙂
The idea of clearml-session
is to launch a container with jupyterlab (or vscode) on a remote machine, and connect the user's machine (i.e. the machine that executed the clearml-session
CLI) directly into the container.
Replacing the jupyterlab with JupyterHub would be meaningless here, because the idea is that it spins an instance (contai...