Is there a simple way to get a response from the MinIO instance? Then I could verify whether the problem is the MinIO instance or my client.
I will debug this myself a little more.
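One quick way to rule out the client is MinIO's unauthenticated liveness endpoint (`/minio/health/live`). A minimal sketch, assuming the instance is reachable at `http://myhost:9000` (adjust host and port to your setup):

```python
from urllib.request import urlopen
from urllib.error import URLError

def minio_alive(base_url: str, timeout: float = 3.0) -> bool:
    """Return True if the MinIO liveness probe answers HTTP 200."""
    try:
        # MinIO exposes this health endpoint without authentication.
        with urlopen(f"{base_url}/minio/health/live", timeout=timeout) as resp:
            return resp.status == 200
    except URLError:
        # Connection refused / unreachable host means the server is not up.
        return False

# Example (hypothetical host): minio_alive("http://myhost:9000")
```

If this returns True but uploads still fail, the issue is more likely on the client/credentials side.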
```yaml
name: core
channels:
- pytorch
- anaconda
- conda-forge
- defaults
dependencies:
- _libgcc_mutex=0.1
- _openmp_mutex=4.5
- blas=1.0
- bzip2=1.0.8
- ca-certificates=2020.10.14
- certifi=2020.6.20
- cloudpickle=1.6.0
- cudatoolkit=11.1.1
- cycler=0.10.0
- cytoolz=0.11.0
- dask-core=2021.2.0
- decorator=4.4.2
- ffmpeg=4.3
- freetype=2.10.4
- gmp=6.2.1
- gnutls=3.6.13
- imageio=2.9.0
- jpeg=9b
- kiwisolver=1.3.1
- lame=3.100
- lcms2=2.11
- ...
```
Thank you very much! 😃
Setting `api.files_server: s3://myhost:9000/clearml` in clearml.conf works!
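For reference, a clearml.conf fragment for a MinIO-backed files server might look like the following sketch (host, bucket, and credentials are placeholders, not the actual values used here):

```
api {
    files_server: "s3://myhost:9000/clearml"
}
sdk {
    aws {
        s3 {
            credentials: [
                {
                    # MinIO endpoint, no scheme prefix
                    host: "myhost:9000"
                    key: "<access-key>"
                    secret: "<secret-key>"
                    multipart: false
                    secure: false
                }
            ]
        }
    }
}
```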
Nvm, that does not seem to be a problem. I added a part to the logs in the post above. It shows that some packages are found from conda.
Is there a way to capture uncommitted changes with Task.create, just like Task.init does? Actually, I would like to populate the repo, branch and packages automatically...
When I passed specific arguments (for example --steps) it ignored them...
I am not sure what you mean by this. It should not ignore anything.
One question: Does clearml resolve the CUDA Version from driver or conda?
```python
args = parser.parse_args()
print(args)  # args PRINTED HERE ON LOCAL
command = args.command
enqueue = args.enqueue
track_remote = args.track_remote
preset_name = args.preset
type_name = args.type
environment_name = args.environment
nvidia_docker = args.nvidia_docker

# Initialize ClearML Task
task = (
    Task.init(
        project_name="reinforcement-learning/" + type_name,
        task_name=args.name or preset_name,
        tags=...
```
That seems to be the case. After parsing the args I run `task = Task.init(...)` and then `task.execute_remotely(queue_name=args.enqueue, clone=False, exit_process=True)`.
Python 3.8.8, clearml 1.0.2
Good, at least now I know it is not a user-error 😄
Thank you, good to know!
(btw: the simulator is called carla, not clara :))
I use this snippet:
```python
Logger.current_logger().set_default_upload_destination(
    ""  # or
)
```
Artifact Size: 74.62 MB
Thank you very much!
So if I understand correctly, something like this should work?
```python
task = Task.init(...)
task.connect_configuration(
    {"agent.package_manager.system_site_packages": False}
)
task.execute_remotely(queue_name, clone=False, exit_process=True)
```
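For comparison, the same option also lives in the agent's own configuration file; a sketch of the relevant clearml.conf section (key names per the ClearML agent defaults):

```
agent {
    package_manager {
        # when true, the agent's virtualenv inherits the system site-packages
        system_site_packages: false
    }
}
```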
Exactly. I don't want people to circumvent the queue 🙂
But I do not have anything linked correctly, since I rely on conda installing cuda/cudnn for me.
You can add and remove clearml-agents to/from the clearml-server anytime.
So actually, deleting from the client (e.g. a dataset with clearml-data) works.
I am not sure what happened, but my experiments are gone. However, the data directory is still filled.
Here it is