hey RoughTiger69
Can you describe how you are setting up the environment variable, please ?
Setting that flag will skip the virtual env installation : the agent will use your environment and the packages installed in it.
Using Task.add_requirements(requirements.txt) allows you to add specific packages at will. Note that this function will be executed even with the CLEARML_AGENT_SKIP_PIP_VENV_INSTALL flag set
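Something like this, for instance (a minimal sketch - the project and task names are just placeholders):
` from clearml import Task

# call add_requirements before Task.init so the packages listed in
# requirements.txt are added on top of your current environment
Task.add_requirements('requirements.txt')

task = Task.init(project_name='my_project', task_name='my_task') `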
you can try something like this - it reproduces the GUI behavior
` import math
from datetime import datetime
from clearml.backend_api.session.client import APIClient

client = APIClient()
# get the queue object by name
q = client.queues.get_all(name='queue-2')[0]
# current timestamp, in seconds
n = math.floor(datetime.now().timestamp())
res = client.queues.get_queue_metrics(from_date=n-1, to_date=n, interval=1, queue_ids=[q.id]) `
Be careful though of the null values in the results. They appear when there are fewer values in the res than intervals between start...
Concerning how to use ParameterSet :
I first declare the set :
my_param_set = ParameterSet([ {'General/batch_size': 32, 'General/epochs': 30}, {'General/batch_size': 64, 'General/epochs': 20}, {'General/batch_size': 128, 'General/epochs': 10} ])
This is a very basic example; it is also possible to use more complex things in the set (see https://clear.ml/docs/latest/docs/references/sdk/hpo_parameters_parameterset/ for UniformParameterRange usage in ParameterSet).
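For instance, mixing fixed values with a sampled range could look roughly like this (a sketch - the parameter names and ranges are illustrative):
` from clearml.automation.parameters import ParameterSet, UniformParameterRange

my_param_set = ParameterSet([
    # a combination with fixed values only
    {'General/batch_size': 32, 'General/epochs': 30},
    # a combination where one value is sampled from a range
    {'General/batch_size': 64,
     'General/lr': UniformParameterRange('General/lr', min_value=1e-4, max_value=1e-2)},
]) `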
Then I do ...
hi ObedientToad56
the API will return raw objects, not dictionaries
you can use the SDK. For example, if task_id is your pipeline main task id, then you can retrieve the configuration objects this way :
` from clearml import Task

task = Task.get_task(task_id=task_id)
config = task.get_configuration_object_as_dict('Pipeline')
for k in list(config.keys()):
    print(f'Step {k} has job id {config[k]["job_id"]}') `
Hi VexedPeacock35
can you share some more details about what is happening ?
What are you trying to do (or, to be precise, when did this error appear) ?
What are your package versions (clearml, and the server version if you are self-hosted) ?
I guess so. Run your tests and please keep us updated if you still encounter issues 🙂
for the scalars :
` import pandas as pd
import plotly.graph_objects as go
import plotly.io as plio

# client is the APIClient instance and task is your Task object
scalars = client.events.scalar_metrics_iter_histogram(task=task.id).to_dict()['metrics']
for graph in scalars.keys():
    for i, metric in enumerate(scalars[graph].keys()):
        df = pd.DataFrame(scalars[graph][metric]).iloc[:, 1:]
        fig = go.Scatter(
            scalars[graph][metric],
            mode='lines',
            name=metric,
            showlegend=True
        )
        plio.write_image(fig=go.Fi...
it is a bit old - I recommend testing again with the latest version, 1.4.1
can you please give me some more details about what you intend to do ? it would then be easier to reproduce the issue
Hi
could you please share the logs for that issue (without the creds 🙂 ) ?
can you try to create an empty text file and provide its path to Task.force_requirements_env_freeze(requirements_file=your_empty_txt_file) ?
what do you mean ? the average time that tasks are waiting before being executed by an agent ? that is to say, the average difference between the enqueue time and the start time ?
If the AWS machine has an ssh key installed, it should work - I assume it's possible to either use a custom AMI for that, or you can use the autoscaler instance startup bash script
Hi Jax
A worker is the set of resources allocated to one agent. An active worker is one that is working on an enqueued job.
This graph shows that some of your agents are working on tasks, and some are idling. The idle ones might be assigned to empty queues. Does that sound logical ?
SubstantialElk6
Can you provide us a higher-resolution screenshot, so we can check the ratio of total workers to active workers ?
Hi SmugTurtle78
We currently don't support GitHub deploy keys, but there might be a way to make the task use SSH (and not HTTPS), so that you could put the SSH key on the AWS machine. Please let me check if I can find such a solution, and come back to you after.
Hi WittyOwl57 ,
The function is :
task.get_configuration_object_as_dict(name="name")
with task being your Task object.
You can find a bunch of pretty similar functions in the docs. Have a look here : https://clear.ml/docs/latest/docs/references/sdk/task#get_configuration_object_as_dict
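For instance (the task id here is just a placeholder):
` from clearml import Task

task = Task.get_task(task_id='your_task_id')
config = task.get_configuration_object_as_dict(name='name')
print(config) `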
hey GiganticMole91
you can use the logger to set your bucket as the default upload destination :
task.get_logger().set_default_upload_destination('s3://xxxxx')
Last (very) little thing : could you please open a GitHub issue for this irrelevant warning 🙏 ? It makes sense to register those bugs on GH, because our code and releases are hosted there.
Thank you !
http://github.com/allegroai/clearml/issues
you also have to set your agent to use your partner's credentials
simply do a :
clearml-agent init
and paste your partner's credentials
Hi CrabbyKoala94
I am working on your issue, I will update you asap. Thanks
JuicyFox94
Hey Igor
I am not the expert on this topic. Someone who knows it better will get back to you right after his meeting 🙂
you can specify the upload destination like this :
when you initiate a task, you can set the output_uri parameter. If you set it to True, then the model will be uploaded to the uri specified in your conf file. You can also directly specify a URL, or you can use OutputModel.set_default_upload_uri or set_upload_destination ( https://clear.ml/docs/latest/docs/references/sdk/model_outputmodel#outputmodelset_default_upload_uri or https://clear.ml/docs/latest/docs/references/sdk/model_...
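A quick sketch of both options (the project/task names and the bucket are placeholders):
` from clearml import Task, OutputModel

# option 1: give the task an explicit upload destination
# (output_uri=True would use the uri from your conf file instead)
task = Task.init(project_name='my_project', task_name='my_task',
                 output_uri='s3://my-bucket/models')

# option 2: set the destination on the output model itself
output_model = OutputModel(task=task)
output_model.set_upload_destination('s3://my-bucket/models') `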
You need to use the API to export experiments to CSV/Excel. I am preparing an example for you
Hi CourageousKoala93
Yes, you can use Google as a storage. You can have a look at the docs https://clear.ml/docs/latest/docs/integrations/storage/#configuring-google-storage
Basically, this part of the doc will show you how to set the credentials in the configuration file.
You will also have to specify the destination uri, by adding to Task.init() : output_uri="path to my bucket"
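Roughly like this (the bucket path is just a placeholder):
` from clearml import Task

# with the Google Storage credentials set in your configuration file,
# the task output will be uploaded to your bucket
task = Task.init(project_name='my_project', task_name='my_task',
                 output_uri='gs://my-bucket') `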
Do not hesitate to ask if you need more details
Hi RobustRat47
Is your issue solved ? 🙂
hi OutrageousSheep60
sounds like the agent is in reality ... dead. It sounds logical, because you cannot see it using ps
however, it would be worth checking whether you can still see it in the UI