Hi ThickDove42 ,
The SETUP SHELL SCRIPT is the bash script to run at the beginning of the docker before launching the Task itself.
You can just try edit it, for example:
apt update
apt-get install -y gcc
Hi UnevenDolphin73
I’m not sure I understand - can you share the use case you are looking for? You want to interact with the ClearML-agent?
Hi SubstantialElk6 ,
You can configure S3 credentials in your ~/clearml.conf file, or with environment variables:
os.environ['AWS_ACCESS_KEY_ID'] = "***"
os.environ['AWS_SECRET_ACCESS_KEY'] = "***"
os.environ['AWS_DEFAULT_REGION'] = "***"
try:
dataset = Dataset.create(data_name, project_name)
dataset_id = dataset.id
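For the ~/clearml.conf route, the S3 credentials go under the sdk.aws.s3 section — a minimal sketch following the default clearml.conf template (replace the placeholders with your own values):

```
sdk {
    aws {
        s3 {
            key: "***"
            secret: "***"
            region: "***"
        }
    }
}
```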
Which storage are you using? ClearML files server?
One of the following objects: numpy.array, pandas.DataFrame, PIL.Image, dict (JSON), or pathlib2.Path.
Also, if you used pickle, the pickle.load return value is returned, and for strings a txt file (as it is stored).
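To illustrate the pickle case above — whatever object you pickled is what you get back. A minimal stdlib-only sketch (no ClearML server involved, just the round-trip semantics):

```python
import pathlib
import pickle
import tempfile

# round-trip demo: the object you pickled is what comes back on load
obj = {"accuracy": 0.9, "epochs": 10}
path = pathlib.Path(tempfile.mkdtemp()) / "artifact.pkl"
path.write_bytes(pickle.dumps(obj))          # stored on disk, like an uploaded artifact file
restored = pickle.loads(path.read_bytes())   # same result as pickle.load on the open file
print(restored)
```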
From the UI, clone the task you have, and then hit edit in the uncommitted changes section (if you can send this file, that would be great 🙂 )
try pip install clearml==0.17.6rc1
Hi ElegantCoyote26 ,
- cleanup_period_in_days (float): The time period between cleanups. Default: 1.
- run_as_service (bool): The script will be executed remotely (default queue: "services"). Default: True.
So run_as_service will not run the script locally on your machine, but will just enqueue the script to the services queue (you should have a clearml-agent in services mode listening to this queue, and the agent will run this service).
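As a small sketch, the two parameters above amount to a config like this (names and defaults quoted from the cleanup-service description; how you pass them depends on your script):

```python
# defaults quoted from the cleanup-service parameters above
cleanup_params = {
    "cleanup_period_in_days": 1.0,  # float: time period between cleanups
    "run_as_service": True,         # bool: enqueue to the "services" queue instead of running locally
}
print(cleanup_params)
```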
Hi SpotlessLeopard9 ,
You can disable the joblib connection with ClearML in your Task.init call (you need to disable scikit):
task = Task.init(project_name='example project', task_name='task without joblib binding', auto_connect_frameworks={'scikit': False})
The fileserver will store the debug samples (if you have any).
You'll have cache too.
"btw my site packages is false - should it be true?" — You pasted that, but I'm not sure what you're asking: in the paste it is false, but you are asking about true.
It is false by default; when you change it to true it should use the system packages. Do you have this package installed in the system? What do you have under installed packages for this task?
DefeatedCrab47 can you share model.hparams
format?
You can send "yet_another_property_name": 1 too, or you can do "another_property_name": {"description": "This is another user property", "value": "1", "type": "int"}
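For reference, a sketch of both payload shapes from the answer above — the plain name/value form and the full spec dict. (The actual call to set them on a task, e.g. via the user-properties API, is left out since it needs a running ClearML setup.)

```python
# the two user-property shapes described above
simple_prop = {"yet_another_property_name": 1}
full_prop = {
    "another_property_name": {
        "description": "This is another user property",
        "value": "1",
        "type": "int",
    }
}
print(simple_prop, full_prop)
```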
How do you load the file? Can you find this file manually?
For the trains-agent, you have an option to specify the trains.conf file you want it to run with. Just start the trains-agent with trains-agent --config ~/trains_agent.conf (where ~/trains_agent.conf is your ~/trains.conf file for the agent run).
how could I configure this in the docker compose?
Do you mean env vars?
The state folder is not affected
this is the /mnt/machine_learning/datasets folder?
Hi RoughTiger69 , when you click on the app "3 dots" link, you can open the configuration, and the View details button will open the original task.
it has like 10 fields of json configurations
under configuration objects, you can find the pipeline configuration.
CostlyOstrich36 did you manage to reproduce this issue? RoughTiger69 have you made any changes in your workspace (shared with someone? removed sharing?)?
So just after the clone, before creating the env?
Hi EnviousStarfish54 ,
You can add environment vars in your code, and trains will use those (no configuration file is needed):
import os
os.environ["TRAINS_API_HOST"] = "YOUR API HOST"
os.environ["TRAINS_WEB_HOST"] = "YOUR WEB HOST"
os.environ["TRAINS_FILES_HOST"] = "YOUR FILES HOST"
Can this do the trick?
TrickySheep9 you can also add the queue to execute this task:
task.execute_remotely(queue_name="default")
So it will enqueue it too 🙂
you need to run it, but not actually execute it. You can execute it on the ClearML agent with task.execute_remotely(queue_name='YOUR QUEUE NAME', exit_process=True).
With this, the task won't actually run from your local machine, but will just register in the ClearML app and run with the ClearML agent listening to 'YOUR QUEUE NAME'.
Is this the only empty line in the file?
It doesn't, but if your issue is
"the task requirements are not logged correctly, and then when it is cloned it fails"
then this should log the same requirements as you have on your machine, without any analysis. With the same environment, the agent shouldn't have this issue.
Hi CheekyToad28 ,
None of the options https://clear.ml/docs/latest/docs/deploying_clearml/clearml_server#deployment works for you?