How do you currently save artifacts now?
Reports is a separate area; it's between the 'Pipelines' and 'Workers & Queues' buttons in the bar on the left 🙂
maybe SuccessfulKoala55 might have some input here. But this docker image is designed to be run from the k8s glue, from my understanding. To run it standalone you'd have to play with it a bit, I think. Maybe try adding -it and /bin/bash at the end
ContemplativeGoat37 , Hi 🙂
You can do the following configuration in your ~/clearml.conf: sdk.development.default_output_uri: "s3://my_bucket/"
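For reference, that setting sits in the sdk.development section of ~/clearml.conf; a minimal sketch (the bucket name is a placeholder):

```
# ~/clearml.conf (HOCON) - bucket name is a placeholder
sdk {
    development {
        # task outputs (artifacts, models) are uploaded here by default
        default_output_uri: "s3://my_bucket/"
    }
}
```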
When looking at the base task, do you have that metric there?
If it's metrics why not report them as scalars?
https://clear.ml/docs/latest/docs/references/sdk/logger#report_scalar
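A minimal sketch of scalar reporting (project/task names are placeholders; assumes a working ClearML setup):

```python
def report_metrics(values, title="loss", series="train"):
    """Report a list of metric values as ClearML scalars, one per iteration."""
    # Lazy import so the function can be defined without clearml installed
    from clearml import Task

    task = Task.init(project_name="examples", task_name="scalar reporting")
    logger = task.get_logger()
    for iteration, value in enumerate(values):
        # Each value shows up as a point on the scalar plot in the UI
        logger.report_scalar(title=title, series=series,
                             value=value, iteration=iteration)
```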
Hi @<1523701083040387072:profile|UnevenDolphin73> , not in the open source
Hi @<1819543688414498816:profile|ScatteredOctopus61> , what are your and your colleague's user IDs?
docker-compose.yml file you used to set up the server
MuddySquid7 , I couldn't reproduce case 4.
It didn't detect sklearn in any of the cases.
Did you put anything inside __init__.py ?
Can you please zip up the folder from scenario 4 and post it here?
MuddySquid7 , Yes! Reproduced like a charm. We're looking into it 🙂
Hi @<1587977852635058176:profile|FloppyTurtle49> , yes, the same would be applicable. Regarding communication: it is one-way communication from the agent to the ClearML server, done directly to the API server - basically what is defined in clearml.conf
Hope this clears things up
Hi @<1785841629471444992:profile|CluelessSheep59> , looks OK. Give it a try and see what happens 🙂
You can clone it via the UI and enqueue it to a queue that has a worker running against it. You should get a perfect 1:1 reproduction
Hi GrittyHawk31 , can you elaborate on what you mean by metadata? Regarding models, you can achieve this by passing Task.init(output_uri="<S3_BUCKET>")
Hmmm, but what should be the default task state? What is the use case by the way?
If you upgrade to the latest versions and this issue still occurs, it will be possible to debug, but your current version is very old
Hi CrookedWalrus33 , I think this is what you're looking for:
https://github.com/allegroai/clearml-agent/blob/master/docs/clearml.conf#L78
Hi @<1659005876989595648:profile|ExcitedMouse44> , you can simply configure the agent not to install anything and just use the existing environment 🙂
The relevant env variables for this are: CLEARML_AGENT_SKIP_PIP_VENV_INSTALL
CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL
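Putting it together, the agent invocation could look something like this (a sketch; the interpreter path and queue name are placeholders):

```
# Point the agent at an existing interpreter and skip venv creation
export CLEARML_AGENT_SKIP_PIP_VENV_INSTALL=/usr/bin/python3
export CLEARML_AGENT_SKIP_PYTHON_ENV_INSTALL=1
clearml-agent daemon --queue default
```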
Hi @<1590514584836378624:profile|AmiableSeaturtle81> , you need to add the port to the credentials when you input them in the webUI
Can you give a small snippet to play with? Just to understand, when you run on local machine everything works fine? What do you do with Google Colab?
Hi @<1603560525352931328:profile|BeefyOwl35> , can you please elaborate on what you mean by running the build command?
Can you add a full log?
ScaryBluewhale66 , please look in:
https://clear.ml/docs/latest/docs/references/sdk/task#taskinit
The relevant section for you is auto_connect_frameworks
The usage would be along these lines: Task.init(..., auto_connect_frameworks={'matplotlib': False})
Hi @<1545216070686609408:profile|EnthusiasticCow4> , start_locally() has the run_pipeline_steps_locally parameter for exactly this 🙂
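For illustration, a sketch of running the controller and its steps locally (pipeline/step names are placeholders; assumes a working ClearML setup):

```python
def make_answer():
    """Placeholder pipeline step."""
    return 42

def run_pipeline_locally():
    """Build a pipeline and run both the controller and its steps locally."""
    # Lazy import so the function can be defined without clearml installed
    from clearml import PipelineController

    pipe = PipelineController(name="local debug pipeline",
                              project="examples", version="0.0.1")
    pipe.add_function_step(name="step_one", function=make_answer,
                           function_return=["answer"])
    # run_pipeline_steps_locally=True executes each step in the local
    # process instead of enqueuing it to an agent
    pipe.start_locally(run_pipeline_steps_locally=True)
```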
You're looking to avoid running an agent this entire time though, correct?
Hi @<1797438038670839808:profile|PanickyDolphin50> , can you please elaborate? What is this accelerate functionality?
Hi @<1580367711848894464:profile|ApprehensiveRaven81> , for a frontend application you basically need to build something that will have access to the serving solution.
DeliciousBluewhale87 , I believe so, yes 🙂