But if you like, you can connect a remote interpreter and debug locally with PyCharm, without clearml-agent
🙂
I can't see the google package, can you try cloning it and adding it manually? You can always add any package you like to any task with Task.add_requirements('package name', 'package version')
Do you have a toy example so I can reproduce it on my side (using google.cloud while the package is not listed in the task)?
ok, I think I missed something on the way then.
you need to have some diffs, because
Applying uncommitted changes
Executing: ('git', 'apply', '--unidiff-zero'): b"<stdin>:11: trailing whitespace.\n task = Task.init(project_name='MNIST', \n<stdin>:12: trailing whitespace.\n task_name='Pytorch Standard', \nwarning: 2 lines add whitespace errors.\n"
can you re-run this task from your local machine again? you shouldn't have anything under UNCOMMITTED CHANGES
this time (as we ...
Hi LethalCentipede31
You can report plotly with task.get_logger().report_plotly
, like in https://github.com/allegroai/clearml/blob/master/examples/reporting/plotly_reporting.py
For seaborn, once you call plt.show()
it will appear in the UI (example https://github.com/allegroai/clearml/blob/master/examples/frameworks/matplotlib/matplotlib_example.py#L48 )
Hi SmarmySeaurchin8 ,
The trains-agent by default uses the ~/trains.conf file for credentials; can you verify the api section in this file?
Hi MysteriousBee56 .
What trains-agent version are you running? Do you run it in docker mode (e.g. trains-agent daemon --queue <your queue name> --docker)?
Can you check the api version?
from trains.backend_api import Session
print(Session.api_version)
Hi RoughTiger69 , when you click on the app's '3 dots' link, you can open the configuration, and the View details
button will open the original task.
it has about 10 fields of JSON configurations;
under configuration objects, you can find the pipeline configuration.
CostlyOstrich36 did you manage to reproduce this issue? RoughTiger69 have you made any changes in your workspace (shared it with someone? removed sharing?)?
Hi UnevenDolphin73 ,
If the ec2 instance is up and running but no clearml-agent is running, something in the user data script failed.
Can you share the logs from the instance (you can send in DM if you like)?
With this scenario, your data should be updated when running the pipeline
thanks SmugTurtle78 , checking it
Hi GhastlySquirrel83 ,
You can specify the repository (the repo=None, repo_branch=None, repo_commit=None
parameters) in add_function_step
to connect a specific repo to the step. You can view all the options with some examples here - https://clear.ml/docs/latest/docs/references/sdk/automation_controller_pipelinecontroller#add_function_step
in the agent's clearml.conf
file, set agent.docker_force_pull
to true.
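For example, the relevant part of clearml.conf would look like this (just a sketch of the one key, the rest of your agent section stays as it is):

```
agent {
    # always pull the docker image, even if it already exists locally
    docker_force_pull: true
}
```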
You can also try, on the machine running the ClearML agent, to run:
docker pull nvidia/cuda:10.1-runtime-ubuntu18.04
can you try
print(task.data.hyperparams)
instead of the last line?
UnevenDolphin73 I can't reproduce this issue on my side 🙂 can you give me some hints how to?
I can help you with that 🙂
task.set_base_docker("dockerrepo/mydocker:custom --env GIT_SSL_NO_VERIFY=true")
Not sure I'm getting that; if you are loading the last dataset task in your experiment task code, it should take the most updated one.
Hi ProudChicken98 ,
You want the agent NOT to create a new env?
according to the logs, the issue is when installing the inplace-abn
package. let me check the error
Hi MinuteWalrus85 .
Good news about fastai
: the integration is almost done and a version will be released in the coming days :)
In the resources configuration you have the subnet ID and the security group ID, and it failed with those?
In the self-hosted server we do not have user permissions, so every user sees all the data.
yes, you could also use the container's SETUP SHELL SCRIPT
and run a command to install your python version (e.g. sudo apt install python3.8
for example)
So running the docker with "--device=0,1" works? We will check that
you can use this description as the preview, can this help?
task.upload_artifact(name='artifact name', artifact_object=artifact_object, preview="object description")
The preview will show text 🙂
Is this image for debugging? Can the debug samples section help with that? Or is it used for data?
When you are not using the StorageManager, you don't get the OSError: [Errno 9] Bad file descriptor
errors?
Unfortunately, it is not possible to delete an experiment using the UI. You can run the script as a service like in the example, or run it with a job scheduler (for example, crontab on Linux) to execute it.
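For example, a crontab entry (the script path and the schedule below are placeholders, adjust to where you keep the cleanup script) that runs it once a day:

```
# m h dom mon dow  command - run the cleanup script every day at 03:00
0 3 * * * /usr/bin/python3 /path/to/cleanup_service.py
```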
Can this do the trick?
not yet, will update here once this is fixed