We use task.export_task()
and a hacked version of it to get the console log:
def save_console_log(task: clearml.Task, fs, remote_path, number_of_reports=10000):
    from clearml.backend_api.services import events
    from clearml.backend_api import Session
    # Stolen from Task.get_reported_console_output()
    if Session.check_min_api_version('2.9'):
        request = events.GetTaskLogRequest(
            task=task.id,
            order='asc',
            navigate_earlier=True,
            ...
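And for the rest of the task data we just dump the dict that export_task() returns; a minimal sketch (the task id and file name are placeholders):

import json
import clearml

# grab the task and dump its full definition next to the console log
task = clearml.Task.get_task(task_id="<task-id>")  # placeholder id
with open("task_backup.json", "w") as f:
    json.dump(task.export_task(), f, indent=2)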
Are you talking about this: None
It seems not to be doing anything about the database data ...
some clearml cache folder
oh, looks like I need to empty the Installed Packages before enqueuing the cloned task
what is the difference between vscode via clearml-session and vscode via the remote ssh extension?
For #2: it's a pull rather than a push system: you need a script that does the pulling at a regular interval and keeps track of what is new and what is not.
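Something like this rough polling sketch (the project name and interval are made up):

import time
import clearml

seen = set()
while True:
    # pull the list of completed tasks and report any we haven't seen yet
    tasks = clearml.Task.get_tasks(project_name="my_project",  # made-up project
                                   task_filter={'status': ['completed']})
    for t in tasks:
        if t.id not in seen:
            seen.add(t.id)
            print(f"new completed task: {t.name} ({t.id})")
    time.sleep(60)  # poll once a minute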
@<1523701868901961728:profile|ReassuredTiger98> I found that you can set the file_server
in your local clearml.conf
to your own cloud storage. In our case, we use something like this in our clearml.conf:
api {
    file_server: "azure://<account>..../container"
}
All non-artifact models are then stored in our Azure storage. In our self-hosted ClearML setup, we don't even have a file server running at all
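The credentials for that storage go in the same clearml.conf; for Azure it's roughly like this (account/key/container are placeholders, check the docs for the exact layout):

sdk {
    azure.storage {
        containers: [
            {
                account_name: "<account>"
                account_key: "<key>"
                container_name: "<container>"
            }
        ]
    }
}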
You are using CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL the wrong way
An artifact can be anything that you can upload to storage with the ClearML SDK. Which storage is used is defined by your clearml.conf (with its credentials). The ClearML web and api servers do not store those files
A model is a special artifact: None
For example, there is the lineage feature: if you train model B using model A as a starting point (aka pre-trained), and model C from model B, ... the lineage will track that model C was built on...
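A minimal artifact upload looks roughly like this (project/task names are made up); the bytes land in whatever storage clearml.conf points at:

import clearml
import pandas as pd

task = clearml.Task.init(project_name="examples", task_name="artifact demo")  # made-up names
df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})
# uploaded to the storage configured in clearml.conf, not to the api/web server
task.upload_artifact(name='my_dataframe', artifact_object=df)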
you should be able to use as many agents as you want, on the same or different queues.
Found a trick to have an empty Installed Packages: clearml.Task.force_requirements_env_freeze(force=True, requirements_file="/dev/null")
Not sure if this is the right way or not ...
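For reference, roughly how I call it; it has to run before Task.init to take effect (project/task names are made up):

import clearml

# must be called before Task.init for the freeze override to apply
clearml.Task.force_requirements_env_freeze(force=True, requirements_file="/dev/null")
task = clearml.Task.init(project_name="examples", task_name="empty packages")  # made-up names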
you may want to share your config (with credentials redacted) and the full docker-compose startup log?
with

import clearml
import pandas as pd

df = pd.DataFrame({'num_legs': [2, 4, 8, 0],
                   'num_wings': [2, 0, 0, 0],
                   'num_specimen_seen': [10, 2, 1, 8]},
                  index=['falcon', 'dog', 'spider', 'fish'])

task = clearml.Task.current_task()
task.get_logger().report_table(title='table example', series='pandas DataFrame',
                               iteration=0, table_plot=df)
I don't have it, so I don't know how things are set up and how to pass on credentials in this case
About the caching: how does it work? Does ClearML maintain its own cache and monitor if any of your code changes? Even code that gets changed inside an import?
ok, so if the git commit or uncommitted changes differ from the previous run, then the cache is "invalidated" and the step will be run again?
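If I understand it right, that's what cache_executed_step does on a pipeline step; a rough sketch (project/task names are made up):

from clearml import PipelineController

pipe = PipelineController(name="example pipeline", project="examples", version="0.1")
# with cache_executed_step=True the step is skipped when the same code
# (git commit + uncommitted diff) and the same parameters were already run
pipe.add_step(name="preprocess",
              base_task_project="examples",       # made-up project
              base_task_name="preprocess task",   # made-up base task
              cache_executed_step=True)
pipe.start()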
in my case, I set everything up inside the container, including the agent, and don't use docker mode at all.
When my container starts, it starts the agent inside it in "normal" mode
are you using the agent in docker mode?
you should be able to test your credentials first using something like rclone or azure-cli
I am more curious about how to migrate all the information stored in the local clearml server to the clearml server in the cloud
I need to do a git clone
You need to do it to test if it works. clearml-agent will run it itself when it takes on a task
do you mean having the ClearML FileServer store on azure blob instead of on the local drive?
Yes, that is what I wanted.
If so, that's not possible. You can however point the fileserver data folder to some mounted folder - if you have something that can create a mount from a filesystem folder to azure blob, it will work (the file server will always treat it as a local file system)
Thanks for confirming that it's the only solution. 👍
Nevermind: None
By default, the File Server is not secured even if Web Login Authentication has been configured. Using an object storage solution that has built-in security is recommended.
My bad
Found the issue: my bad import practice 😛
You need to import clearml before creating the argument parser. Bad way:
import argparse

def handleArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config-file', type=str, default='train_config.yaml',
                        help='train config file')
    parser.add_argument('--device', type=int, default=0,
                        help='cuda device index to run the training')
    args = parser.parse_args()
    return args
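Good way (import clearml first, then build the parser):

import clearml
import argparse

def handleArgs():
    parser = argparse.ArgumentParser()
    parser.add_argument('-c', '--config-file', type=str, default='train_config.yaml',
                        help='train config file')
    parser.add_argument('--device', type=int, default=0,
                        help='cuda device index to run the training')
    args = parser.parse_args()
    return args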
had you made sure that the agent inside the GCP VM has access to your repository? Can you ssh into that VM and try to do a git clone?
Can you paste here what's inside "Installed Packages" to double-check?
With an ssh public key: if I can do a git clone from a terminal, then so can the clearml-agent, as it runs on behalf of a local user. That applies to both local machines and VMs
In summary:
Spin down the local server
Back up the data folder
In the cloud, extract the data backup
Spin up the cloud server
wow, did not know that vscode has an http "interface"!!! Makes kind of sense, as vscode is just Chrome rendering a webpage behind the scenes?