You are using CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL the wrong way
Deployed a fresh instance and ran nginx -T in the container:
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# configuration file /etc/nginx/nginx.conf:
user www-data;
worker_processes auto;
pid /run/nginx.pid;
include /etc/nginx/modules-enabled/*.conf;
error_log stderr notice;
events {
    worker_connections 768;
    # multi_accept on;
}
http {
    client_max_body_size 100M;
    rewrite_l...
If you want plots, you can simply generate them with matplotlib, and ClearML can upload them to the Plots or Debug Samples section
Please refer to here None
The doc needs to be a bit clearer: one requires a path, and not just true/false
with
import pandas as pd
import clearml

df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
                   'num_wings': [2, 0, 0, 0],
                   'num_specimen_seen': [10, 2, 1, 8]},
                  index=['falcon', 'dog', 'spider', 'fish'])

task = clearml.Task.current_task()
task.get_logger().report_table(title='table example', series='pandas DataFrame', iteration=0, table_plot=df)
I know that the git clone and "pip verify all installed" steps are normal. But for some reason, in Michael's screenshot, I don't see those steps ...
@<1523701087100473344:profile|SuccessfulKoala55> I managed to make this work by:
concatenating the existing OS CA bundle and the Zscaler certificate, and setting REQUESTS_CA_BUNDLE to that bundle file
Python libraries don't always use OS certificates ... typically, we have to set REQUESTS_CA_BUNDLE=/path/to/custom_ca_bundle.crt because requests ignores OS certificates
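A small Python sketch of that workaround (the helper name and all paths are assumptions; the OS bundle location and the exported Zscaler root cert depend on your system):

```python
import os

def build_ca_bundle(os_bundle: str, extra_cert: str, out_path: str) -> str:
    """Concatenate the OS CA bundle with an extra root cert (e.g. a Zscaler
    root CA), then point requests at the result via REQUESTS_CA_BUNDLE."""
    with open(out_path, "wb") as out:
        for path in (os_bundle, extra_cert):
            with open(path, "rb") as f:
                out.write(f.read())
    os.environ["REQUESTS_CA_BUNDLE"] = out_path  # requests reads this env var
    return out_path

# On Debian/Ubuntu this might look like (paths are assumptions):
# build_ca_bundle("/etc/ssl/certs/ca-certificates.crt",
#                 "/usr/local/share/zscaler_root_ca.pem",
#                 "/tmp/custom_ca_bundle.crt")
```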
The ClearML staff may have a better solution, as I am not familiar with docker mode
Please share your .service content too, as there are a lot of ways to "spawn" things in systemd
You can upload the df as an artifact.
Or compute the statistics as a DataFrame and upload that as an artifact?
Most of the time, "users" would expect ClearML to handle the caching by itself
What error do you have in the Console log tab in the Web UI?
Sounds like your docker image is missing some package. This is unrelated to ClearML.
As for which package is missing, see here
on the same or a different machine!
Wow, I did not know that VS Code has an HTTP "interface"!!! Makes some sense, as VS Code is just Chrome rendering a webpage behind the scenes?
Please provide the full logs and error message.
By manually updating it, like for any app that is on an offline computer?
Interesting, the issue happens with a mamba venv. Now I use a native Python venv and it is detected correctly
From the logs, it feels like after git clone, it spends minutes without outputting anything. @<1523701205467926528:profile|AgitatedDove14> Do you know what the agent is supposed to do after git clone?
I guess a check that all packages are installed? But then with CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1, what is the agent doing??
Ok, I think I found the issue. I had to point the file server to Azure storage:
api {
    # Notice: 'host' is the api server (default port 8008), not the web server.
    api_server:
    web_server:
    files_server: ""
    credentials {"access_key": "REDACTED", "secret_key": "REDACTED"}
}
Are you running within a zero-trust environment like Zscaler?
It feels like your issue is not ClearML itself, but an issue with HTTPS/SSL and certificates from your zero-trust system
Are you using the agent in docker mode?
Do I need to make changes to clearml.conf so that it doesn't ask for my credentials, or is there another way around it?
You have 2 options:
- set credentials inside clearml.conf: I am not familiar with this and never tested it.
- or set up passwordless SSH with a public key None
(I never played with the pipeline feature, so I am not really sure it works as I imagined ...)
Try to set CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=true in the terminal, then start clearml-agent
See None
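Something like this sketch, if you prefer launching from Python (the queue name is a placeholder, and note the point made earlier in the thread that this variable may need to be a path rather than true/false):

```python
import os
import subprocess

# Skip the venv/package installation step entirely; the agent then runs
# the task with whatever Python environment is already present.
env = dict(os.environ, CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL="1")

# Placeholder queue name; uncomment to actually start the agent daemon.
# subprocess.run(["clearml-agent", "daemon", "--queue", "default"], env=env)
```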
Not sure if it's related, but clearml 1.14 tends to not "show" the gpu_type
Nice ! That is handy !!
thanks !