You need to use the API to export experiments to CSV/Excel. I am preparing an example for you
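In the meantime, here is a rough sketch of the kind of thing the example will do, assuming you want the latest scalar values per experiment (the project name and output file are placeholders):
` import pandas as pd
from clearml import Task

# 'my_project' is a placeholder - use your own project name
tasks = Task.get_tasks(project_name='my_project')

rows = []
for t in tasks:
    row = {'id': t.id, 'name': t.name, 'status': t.get_status()}
    # flatten the last reported value of every scalar series
    for title, series in t.get_last_scalar_metrics().items():
        for series_name, values in series.items():
            row[f'{title}/{series_name}'] = values.get('last')
    rows.append(row)

pd.DataFrame(rows).to_csv('experiments.csv', index=False)  # or .to_excel(...) `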
yes it is 🙂 did you manage to upgrade?
We also brought a lot of new features to datasets in version 1.6.2!
btw can you send a screenshot of your clearml-agent list output and of the UI please?
hi FiercePenguin76
Can you also send your clearml package versions?
I would like to sum your issue up, so that you can check I got it right:
you have a task that uses a model to run inference on a dataset. You clone the task and would like to run inference on the dataset, but with another model. The problem is that the cloned task still references the first model...
How have you registered the second model? Also, can you share your logs?
Hi,
It would be great if you could also send your clearml package version 🙂
it works locally and not on remote execution: can you check that the machine the agent is executed from is correctly configured? The agent there needs to be provided with the correct credentials. Also, the autologger uses the file extension to determine what it is reporting - can you try to use the regular .pt extension?
Hello DepravedSheep68,
In order to store your info in the S3 bucket you will need two things:
- specify the URI where you want to store your data when you initialize the task (search for the parameter output_uri in the Task.init function https://clear.ml/docs/latest/docs/references/sdk/task#taskinit ) - see the sketch after this list
- specify your S3 credentials in the clearml.conf file (what you did)
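For example, a minimal sketch of the Task.init call (the project/task names and bucket path are placeholders):
` from clearml import Task

# the S3 credentials themselves are read from clearml.conf
task = Task.init(
    project_name='my_project',          # placeholder
    task_name='my_task',                # placeholder
    output_uri='s3://my-bucket/models'  # placeholder bucket/prefix
) `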
hi OutrageousSheep60
sounds like the agent is in reality... dead. That would be logical, because you cannot see it using ps
however, it would be worth checking whether you can still see it in the UI
Hi MoodySparrow34
We have a user who wrote this example: https://github.com/marekcygan/clearml-slurm-workers
It is simple glue code to spin up SLURM workers when tasks are enqueued. Hope it helps
when you spin up a container, you map a host port to a container port using the -p parameter:
` docker run -v ~/clearml.conf:/root/clearml.conf -p 8080:8080 -e CLEARML_SERVING_TASK_ID=<service_id> -e CLEARML_SERVING_POLL_FREQ=5 clearml-serving-inference:latest `
Here you map your computer's port 8080 to the container's port 8080. If your port 8080 is already in use, you can use another one, for example -p 8081:8080
Hey TartSeagull57
We have released a version that fixes the bug. It is an RC but it is stable. The version number is 1.4.2rc1
Of course. Here it is
https://github.com/allegroai/clearml/issues/684
I'll keep you updated
i have found some threads that deal with your issue and propose interesting solutions. Can you have a look at this?
hey UnevenDolphin73
you can mount your S3 bucket to a local folder and point your clearml.conf file at that folder.
I used s3fs to mount my S3 bucket as a folder. Then I modified agent.venvs_dir and agent.venvs_cache
(As mentioned here https://clear.ml/docs/latest/docs/clearml_agent#environment-caching )
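For reference, the relevant part of clearml.conf would look roughly like this - assuming the bucket is mounted at /mnt/s3-bucket (adjust to your own mount point):
` agent {
    # build the task virtual environments on the mounted bucket
    venvs_dir: /mnt/s3-bucket/venvs-builds
    venvs_cache: {
        # cache the built environments on the mounted bucket as well
        path: /mnt/s3-bucket/venvs-cache
        max_entries: 10
        free_space_threshold_gb: 2.0
    }
} `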
In the meantime, it is also possible to create a figure that contains two or more histograms and then report it to the logger using report_plotly.
You can have a look there :
https://plotly.com/python/histograms/#overlaid-histogram
https://plotly.com/python/histograms/#stacked-histograms
` import numpy as np
import plotly.graph_objects as go

log = task.get_logger()
x0 = np.random.randn(1500)
x1 = np.random.randn(1500) - 1
fig = go.Figure()
fig.add_trace(go.Histogram(y=x0))
fig.add_trace(go.Histogram(y=x1))
fig.update_layout(barmode='overlay')
# report the combined figure to ClearML
log.report_plotly(title='Histograms', series='overlaid', figure=fig) `
Hi SmugSnake6
I might have found you a solution 🎉 I answered on the GH thread https://github.com/allegroai/clearml-agent/issues/111
this is because the server is considered a bucket too - the root, to be precise. Thus you will always have at least one subfolder created in local_folder, corresponding to the bucket found at the server root
Hi TenderCoyote78
Here is a snippet to illustrate how to retrieve the scalars and the plots from a task
` from clearml.backend_api.session.client import APIClient
from clearml import Task
task = Task.get_task(project_name=xxxx, task_name=xxxx)  # or task_id=xxxx
client = APIClient()
#retrieving the scalars
client.events.scalar_metrics_iter_histogram(task=task.id)
#retrieving the plots
client.events.get_task_plots(task=task.id) `
can you try to create an empty text file and provide its path to Task.force_requirements_env_freeze(requirements_file=your_empty_txt_file)?
Hi SmugTurtle78
We currently don't support GitHub deploy keys, but there might be a way to make the task use SSH (and not HTTPS), so that you could put the SSH key on the AWS machine. Please let me check if I can find such a solution and get back to you.
Hi UnevenDolphin73
I am going to try to reproduce this issue, thanks for the details. I'll keep you updated
hey
"when cloning an experiment via the WebUI, shouldn't the cloned experiment have the original experiment as a parent? It seems to be empty"
you are right, I think there is a bug here. We will release a fix ASAP 🙂
hey Maximilian,
which version of clearml are you using?
If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script
You can initiate your task as usual. When a dataset is used in it - for example, it could start by retrieving one using Dataset.get - the dataset will be registered in the Info section (check it in the UI) 😊
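For example, a minimal sketch (all the names are placeholders):
` from clearml import Task, Dataset

task = Task.init(project_name='my_project', task_name='my_task')  # placeholders

# retrieving the dataset here is what registers it in the task's Info section
dataset = Dataset.get(dataset_project='my_project', dataset_name='my_dataset')
local_path = dataset.get_local_copy() `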
You can force the agent to install only the packages that you need using a requirements.txt file. Type into it what you want the agent to install (pytorch and, if needed, clearml). Then call this function before Task.init:
` Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt') `
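Something along these lines (the paths and names are placeholders):
` from clearml import Task

# requirements.txt lists only the packages the agent should install,
# e.g. torch and clearml
Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')

task = Task.init(project_name='my_project', task_name='my_task')  # placeholders `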
hi PanickyMoth78
from within your function my_pipeline_function, here is how to access the project and task names:
` task = Task.current_task()
task_name = task.name
full_project_path = task.get_project_name()
project_name = full_project_path.split('/')[0] `
Note that you could also use full_project_path to get both the project and the task name: ` task_name = full_project_path.split('/')[-1] `