
You can get all tasks: https://clear.ml/docs/latest/docs/references/sdk/task#taskget_all
You can search tasks: https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk#querying--searching-tasks
And you can get the status:
https://clear.ml/docs/latest/docs/references/sdk/task#get_status
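For example, a minimal sketch combining search and status checks (the project and task names here are just placeholders):
from clearml import Task

# find tasks matching a project/name filter (placeholder names) and print their status
tasks = Task.get_tasks(project_name="examples", task_name="my_experiment")
for t in tasks:
    print(t.id, t.name, t.get_status())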
ExasperatedCrab78 do you know how this could be?
And pandas is in your requirements.txt?
This ^
If you're not getting any errors, it should work just fine 🙂
In https://github.com/thepycoder/urbansounds8k/blob/main/preprocessing.py I'm seeing dataset_task.get_logger().report_image, dataset_task.get_logger().report_table, dataset_task.get_logger().report_histogram and dataset_task.get_logger().report_media, which are all manual logging calls. That's probably why the author didn't rely on any automatic logging.
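For reference, a minimal sketch of that kind of manual logging (project/task names and the data are placeholders):
from clearml import Task
import pandas as pd

task = Task.init(project_name="examples", task_name="manual logging")  # placeholder names
logger = task.get_logger()

# report a table built from a DataFrame
logger.report_table(title="data preview", series="head", iteration=0,
                    table_plot=pd.DataFrame({"a": [1, 2], "b": [3, 4]}))

# report a histogram of values
logger.report_histogram(title="values", series="dist", iteration=0,
                        values=[0.1, 0.5, 0.9])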
I'm afraid what you're trying to do isn't a supported implementation.
You'll have to choose between using docker mode to have one virtual environment for everything, or using pip mode, where you can use the cached virtual environments but can't reuse the one you currently have.
If you're looking for what docker volumes were used, that's in the docker compose file:
https://github.com/allegroai/clearml-server/blob/master/docker/docker-compose.yml
It should, or you might need to nest the objects.
Edit: I asked, and it won't; there's a difference in the configs that I mixed up.
There seems to be a discrepancy in the docs I'm trying to figure out and solve.
Most of the statuses are explained here: https://clear.ml/docs/latest/docs/fundamentals/task/#task-states
Closed isn't covered there yet.
Close is normally for manually closing a task: https://clear.ml/docs/latest/docs/references/sdk/task#close
You'll find more info here: https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk/ and here: https://clear.ml/docs/latest/docs/guides/advanced/multiple_tasks_single_process
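A minimal sketch (placeholder names) of manually closing a task so another one can be started in the same process:
from clearml import Task

task = Task.init(project_name="examples", task_name="first task")
# ... do some work ...
task.close()  # manually close the current task

# a second task can now be initialized in the same process
task2 = Task.init(project_name="examples", task_name="second task")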
I'm not sure if you can delete it when using pipelines, but I would say try it on a new project?
Well, seems like you have a solution for now?
If you still want to run it as a notebook, the following should make pip install the required packages:
import sys
!{sys.executable} -m pip install -r requirements.txt
I'll check if this something we need to update in our documentation or if it's a bug.
Is it after you've started the ClearML server that you can't find the experiments?
That's pretty weird. I don't see any clear indications something is wrong, it simply doesn't execute the rest it would seem. Did it successfully run the first time before cloning it?
Also have a look at --memory-swap
It seems you might not have anticipated this usage:
If --memory-swap is unset, and --memory is set, the container can use as much swap as the --memory setting, if the host container has swap memory configured. For instance, if --memory="300m" and --memory-swap is not set, the container can use 600m in total of memory and swap.
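So, if I read the Docker docs right, something like docker run --memory=300m --memory-swap=1g would cap RAM at 300 MB and memory plus swap at 1 GB total, while leaving --memory-swap unset gives the 600m behaviour described above.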
Could it be multiple metrics that were combined into a single metric later on? Before the optimizer?
Could you test the following:
Without reusing the virtual environment you made manually:
Can you run a task twice and see if the second run is at least reusing the virtual environment of the first run?
So for notebooks, requirements are indeed not checked elsewhere.
You can however include them by using this line before Task.init:
Task.force_requirements_env_freeze(requirements_file="requirements.txt")
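Something like this (placeholder project/task names), with the call made before Task.init:
from clearml import Task

# freeze the environment from an explicit requirements file instead of auto-detection
Task.force_requirements_env_freeze(requirements_file="requirements.txt")

task = Task.init(project_name="examples", task_name="notebook run")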
Both server and agent can be configured with different ports. Which is it you're looking for?
ThoughtfulBadger56 Have you uncommented the existing venvs_cache section in the config file?
https://clear.ml/docs/latest/docs/clearml_agent#virtual-environment-reuse
You can use the compression parameter in dataset.upload. The supported values are:
ZipFile.ZIP_STORED (no compression)
ZipFile.ZIP_DEFLATED (requires zlib)
ZipFile.ZIP_BZIP2 (requires bz2)
ZipFile.ZIP_LZMA (requires lzma)
Note that you need to import ZipFile beforehand: from zipfile import ZipFile
You're probably looking for ZIP_BZIP2, but I'm not sure about that.
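A minimal sketch (the dataset name and project are placeholders):
from zipfile import ZipFile
from clearml import Dataset

dataset = Dataset.create(dataset_name="my_dataset", dataset_project="examples")
dataset.add_files(path="data/")
# upload the dataset archive using bzip2 compression
dataset.upload(compression=ZipFile.ZIP_BZIP2)
dataset.finalize()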
Well, you could let ClearML create the config file by calling set_credentials with store_conf_file=True: https://clear.ml/docs/latest/docs/references/sdk/task#taskset_credentials
And then go edit the file.
But it's probably easier in your case to use https://clear.ml/docs/latest/docs/references/sdk/task#connect_configuration
and pass it your full configuration?
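For example (the dict contents and names here are just placeholders; connect_configuration also accepts a path to a config file):
from clearml import Task

task = Task.init(project_name="examples", task_name="config demo")  # placeholder names
config = {"batch_size": 32, "learning_rate": 0.001}
config = task.connect_configuration(configuration=config, name="my_config")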
Did you use --git-credentials ?
https://clear.ml/docs/latest/docs/apps/clearml_session#accessing-a-git-repository
Just checking, are you just trying to use a different docker image in a task? Because then you might want to use this: https://clear.ml/docs/latest/docs/apps/clearml_task/#docker
https://clear.ml/docs/latest/docs/clearml_agent#docker-mode
PIP can install from git repositories!
So you can point to your own repository or even a specific commit hash.
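For example, a requirements.txt line like git+https://github.com/your-org/your-repo.git@<commit-hash>#egg=your-package (the repo and hash are placeholders) pins pip to a specific commit.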
You can configure what to log and what not in the task init: https://clear.ml/docs/latest/docs/clearml_sdk/task_sdk/#automatic-logging
You can turn it all off by setting auto_connect_frameworks to False, but you can get finer-grained control of the logged frameworks with framework-boolean pairs.
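For example (placeholder project/task names, and the framework choices are just illustrative):
from clearml import Task

task = Task.init(
    project_name="examples",
    task_name="logging control",
    # keep PyTorch logging, disable Matplotlib logging
    auto_connect_frameworks={"pytorch": True, "matplotlib": False},
)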
Or you can just load a config file or object: https://clear.ml/docs/latest/docs/references/sdk/task/#connect_configuration
I'm not sure about the preview part, but after uploading I think you might find the images with list and --filter:
https://clear.ml/docs/latest/docs/clearml_data/clearml_data_cli#list
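For instance (the dataset id is a placeholder): clearml-data list --id <dataset_id> --filter '*.png' should only show the matching files.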
I'm not sure if that helps?
Do you get any error when uploading?
It looks like it can upload but can't download afterwards.