hey H4dr1en
You just specify the packages that you want installed (no need to list their dependencies), and the version if needed.
Something like:
pytorch==1.10.0
hi WickedElephant66
You can log your models as artifacts on the pipeline task, from any pipeline step. Have a look here:
https://clear.ml/docs/latest/docs/pipelines/pipelines_sdk_tasks#models-artifacts-and-metrics
I am trying to find you an example, hold on 🙂
hey TenderCoyote78
Here is an example of how to dump the plots to jpeg files
` from clearml.backend_api.session.client import APIClient
from clearml import Task
import plotly.io as plio

# get the task whose plots we want to export
task = Task.get_task(task_id='xxxxxx')

# fetch the plot events from the server
client = APIClient()
t = client.events.get_task_plots(task=task.id)

# rebuild each plotly figure from its JSON and dump it to a jpeg file
# (plotly's write_image requires the kaleido package)
for i, plot in enumerate(t.plots):
    fig = plio.from_json(plot['plot_str'])
    plio.write_image(fig=fig, file=f'./my_plot_{i}.jpeg') `
Hi MotionlessCoral18
You need to run some scripts when migrating, to update your old experiments. I am going to try to find you some examples.
Hi Max
You can configure a ClearML agent to pull your docker image from ECR and run the experiment inside it. Does that answer your question?
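If it helps, here is a rough sketch of the task side. The image URI, project and task names are placeholders, and it assumes the agent runs in docker mode (clearml-agent daemon --docker ...) on a machine that is already authenticated against your ECR registry:
`
from clearml import Task

task = Task.init(project_name='examples', task_name='train inside ECR image')

# placeholder ECR image URI - the agent will docker-pull it before running the task
task.set_base_docker('123456789012.dkr.ecr.us-east-1.amazonaws.com/my-training:latest')

# hand the task over to the queue the docker-mode agent listens on
# (this stops the local run and lets the agent execute it)
task.execute_remotely(queue_name='default')
`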
hi SoggyBeetle95
I reproduced the issue, could you confirm that this is what you are seeing?
Here is what I did:
I declared some secret env vars in the agent section of clearml.conf and used extra_keys to have them hidden in the console. They are indeed hidden there, but in the Execution -> Container section they appear in clear text.
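For reference, the relevant part of my clearml.conf looks roughly like this (the variable name is just an example):
`
agent {
    # mask the values of these environment variables in the console output
    hide_docker_command_env_vars {
        enabled: true
        extra_keys: ["MY_SECRET_TOKEN"]
    }
}
`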
Oops, yes, you are right. output_uri is used for the artifacts.
for the logger it is https://clear.ml/docs/latest/docs/references/sdk/logger#set_default_upload_destination
btw what do you get when you do task.get_logger().get_default_upload_destination() ?
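For reference, a minimal sketch of how I would set and read it (the bucket path is just a placeholder):
`
from clearml import Task

task = Task.init(project_name='examples', task_name='logger destination')
logger = task.get_logger()

# debug samples / images reported through the logger will be uploaded here
logger.set_default_upload_destination('s3://my-bucket/debug_samples')

print(logger.get_default_upload_destination())
`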
hey SoggyBeetle95
You're right, that's an error on our part 🙂
Could you please open an issue in https://github.com/allegroai/clearml-server/issues so we can track it?
We'll update there once a fix for that issue is released! 🙂
It is for the sake of the example. It allows you to fire the agents in the background, and thus to have several agents launched from the same terminal.
If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script
Hi MotionlessCoral18
Have these threads been useful to solve your issue? Do you still need some support? 🙂
hi GentleSwallow91
Concerning the warning message, there is an entry in the FAQ. Here is the link :
https://clear.ml/docs/latest/docs/faq/#resource_monitoring
We are working on reproducing your issue
can you please provide the apiserver log and the elasticsearch log?
hey
You can allocate resources to a worker by adding the --gpus parameter to the command line when you fire the agent. The GPUs are designated by their index.
Example: spin two agents, one per GPU, on the same machine:
` clearml-agent daemon --detached --gpus 0 --queue default
clearml-agent daemon --detached --gpus 1 --queue default `
hi VexedKoala41
Your agent is running inside a docker container that may have a different version of Python installed. It tries to install a version of the package that doesn't exist for that Python version.
Try to specify the latest matching version: Task.add_requirements('ipython', '7.16.3')
You can force the agent to install only the packages that you need by using a requirements.txt file. Type into it what you want the agent to install (pytorch and possibly clearml). Then call this function before Task.init: Task.force_requirements_env_freeze(force=True, requirements_file='path/to/requirements.txt')
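Something like this, assuming requirements.txt sits next to your script (paths and names are placeholders):
`
from clearml import Task

# make the agent install only what is listed in requirements.txt
# (must be called before Task.init)
Task.force_requirements_env_freeze(force=True, requirements_file='requirements.txt')

task = Task.init(project_name='examples', task_name='pinned requirements')
`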
Can you try to add the flag auto_create=True when you call Dataset.get?
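That is, something like this (project and dataset names are yours):
`
from clearml import Dataset

# auto_create=True creates the dataset if it does not exist yet,
# instead of raising an error
ds = Dataset.get(dataset_project='my_project',
                 dataset_name='my_dataset',
                 auto_create=True)
`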
Yep, I am working on it. I have something that I suspect does not work as expected, nothing sure though.
for the step that reports the model :
`
@PipelineDecorator.component(return_values=['res'],
                             parents=['step_one'],
                             cache=False,
                             monitor_models=['mymodel'])
def step_two():
    import torch
    from clearml import Task
    import torch.nn as nn

    class nn_model(nn.Module):
        def __init__(self):
            ...
Hello DepravedSheep68,
In order to store your info in the S3 bucket, you will need two things:
- specify the URI where you want to store your data when you initialize the task (search for the parameter output_uri in the Task.init function https://clear.ml/docs/latest/docs/references/sdk/task#taskinit )
- specify your S3 credentials in the clearml.conf file (which you did)
To provide an upload destination for the artifacts, you can:
- add the parameter output_uri to Task.init ( https://clear.ml/docs/latest/docs/references/sdk/task#taskinit )
- set the destination in clearml.conf: sdk.development.default_output_uri ( https://clear.ml/docs/latest/docs/configs/clearml_conf#sdkdevelopment )
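For example (the bucket path is a placeholder):
`
from clearml import Task

# artifacts and models produced by this task will be uploaded to this destination
task = Task.init(project_name='examples',
                 task_name='upload to s3',
                 output_uri='s3://my-bucket/clearml')
`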
To enqueue the pipeline, you simply call it, without run_locally or debug_pipeline
You will have to provide the parameter execution_queue to your steps, or defau...
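Here is a rough sketch of what I mean (queue and names are just examples):
`
from clearml.automation.controller import PipelineDecorator

@PipelineDecorator.component(return_values=['res'], execution_queue='default')
def step_one():
    res = 42
    return res

@PipelineDecorator.pipeline(name='my pipeline', project='examples', version='1.0')
def my_pipeline():
    print(step_one())

if __name__ == '__main__':
    # no PipelineDecorator.run_locally() / debug_pipeline() call here,
    # so the steps are enqueued on their execution_queue and picked up by agents
    my_pipeline()
`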
Hi EnormousWorm79
The PyCharm test runner wraps the script into a local script (the jb pytest runner), and that's what you are getting. Because it is local, you lose the source info.
Let me check if I have a workaround or solution for you. I'll keep you updated.
Hi WickedElephant66
When you are in the Projects section of the WebApp (second icon on the left), enter either "All Experiments" or any project you want to access. At the top center is the Models section. You can find the URL the model can be downloaded from in the details section.
Yes, everything that is downloaded is cached. The cache folder is set in your config file:
` sdk {
# ClearML - default SDK configuration
storage {
cache {
# Defaults to system temp folder / cache
default_base_dir: "~/.clearml/cache"
size {
# max_used_bytes = -1
min_free_bytes = 10GB
# cleanup_margin_percent = 5%
}
}
direct_access: [
# Objects matching are...
hi MoodySheep3
I think that you are using ParameterSet the way it is supposed to be used 🙂
When I run my examples, I also get this warning, which is weird, because:
- it is just a warning, the script continues anyway (and reaches the end without issue)
- those hyperparameters exist, and all the sub-tasks corresponding to a given parameter set find them!
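For reference, this is roughly the pattern I am testing with (metric and parameter names are just examples, they must match what your base task reports):
`
from clearml.automation import HyperParameterOptimizer, ParameterSet

param_set = ParameterSet(parameter_combinations=[
    {'General/batch_size': 32, 'General/lr': 0.01},
    {'General/batch_size': 64, 'General/lr': 0.001},
])

optimizer = HyperParameterOptimizer(
    base_task_id='xxxxxx',            # the task cloned for every combination
    hyper_parameters=[param_set],
    objective_metric_title='validation',
    objective_metric_series='accuracy',
    objective_metric_sign='max',
)
optimizer.start_locally()
optimizer.wait()
optimizer.stop()
`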
I have found some threads that deal with your issue and propose interesting solutions. Can you have a look at them?
Thanks! We have added quite a lot of new dataset features in our latest releases. I would encourage you to update your clearml packages 🙂