It returns false. Just to share a bit more: I have the requirements.txt in GitLab with my code, organized in folders. Do I need to provide a GitLab path?
Hi ExasperatedCrab78, I managed to get it working. It was due to the IP address set in examples.env.
Hi Bart, yes. Running with the inference container.
This is what I got, and that's when I see the HTTP 400 error in the console.
Hi TimelyPenguin76, nope, I don't see any errors. That's why I'm not sure what went wrong.
Cool, thanks guys. It's clearer to me now; I was confused by the obsolete info. Thanks for the clarification.
Hi SuccessfulKoala55, thanks for pointing me to this repo. I was using this repo.
I couldn't find anything in this repo about whether we still need to label the node app=clearml, as was mentioned in the deprecated repo, although in values.yaml the node selector is empty. Would you be able to advise?
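In case it helps, this is the kind of values.yaml override I was expecting, assuming the chart exposes a standard Kubernetes nodeSelector per component (the exact key path, and the app=clearml label itself, come from the deprecated repo, so treat them as assumptions):

# label the node first, e.g.: kubectl label nodes <node-name> app=clearml
apiserver:
  nodeSelector:
    app: clearml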
How is the ClearML data handled now, then? Thanks
@<1526734383564722176:profile|BoredBat47> Just to check: did you need to run update-ca-certificates or an equivalent?
SuccessfulKoala55 I tried commenting out the fileserver; the ClearML dockers started, but it doesn't seem to start up properly. When I access ClearML via the web browser, the site cannot be reached.
Just to confirm, I commented out these in docker-compose.yaml.
apiserver:
  command:
  - apiserver
  container_name: clearml-apiserver
  image: allegroai/clearml:latest
  restart: unless-stopped
  volumes:
  - /opt/clearml/logs:/var/log/clearml
...
Thanks. The examples use upload_artifact, which stores the files in output_uri. What if I don't want to save the file but simply pass it to the next step? Is there a way to do so?
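For context, here is a minimal sketch of what I had in mind with the decorator-based pipelines, where a step's return value is handed to the next step without an explicit upload_artifact call (the project and pipeline names are placeholders; as I understand it, ClearML still serializes the returned object behind the scenes):

from clearml import PipelineDecorator

@PipelineDecorator.component(return_values=['data'])
def step_one():
    # produce something in the first step
    return {'a': 1, 'b': 2}

@PipelineDecorator.component(return_values=['total'])
def step_two(data):
    # consume the previous step's return value directly
    return sum(data.values())

@PipelineDecorator.pipeline(name='pass-between-steps', project='examples', version='0.1')
def my_pipeline():
    data = step_one()
    total = step_two(data)
    print(total)

if __name__ == '__main__':
    PipelineDecorator.run_locally()  # run all steps in-process for testing
    my_pipeline()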
SuccessfulKoala55 Nope. I didn't even get to enter my name. I suspect there is some mistake in the data folder mapping.
I was using the template in https://github.com/allegroai/clearml-helm-charts to deploy.
JuicyFox94 and SuccessfulKoala55, thanks a lot. It was indeed caused by stale cookies.
ClearML 1.1.1. Yes, I have boto3 installed too.
It gets rerouted to http://app.clearml.home.ai/dashboard with the same network error.
I figured out that it may be possible to do this:
experiment_task = Task.current_task()
OutputModel(task=experiment_task).update_weights('model.pt')
to attach it to the ClearML experiment task.
When I run it as a regular remote task it works, but when I run it as a step in a pipeline, it cannot access the same folder on my local machine.
Just to add: when I run the pipeline locally, it works as well.
Ah, I think I was not very clear about my requirement. I was looking at porting over a single project, not the entire ClearML data. Is that possible instead?
By the way, will downloading still happen if the dataset is already available in the cache folder? Are there any specific settings to add to Dataset.get_local_copy()?
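As a side note, here is a minimal sketch of what I was testing (the dataset name and project are placeholders). My understanding is that get_local_copy() checks the local cache first, so no re-download should happen if the dataset is already cached, and no extra settings are needed:

from clearml import Dataset

# Look up the dataset (name/project below are placeholders)
dataset = Dataset.get(dataset_name='my_dataset', dataset_project='examples')

# Returns a path to a read-only cached copy; if the dataset is already
# in the local cache, this should not trigger a new download.
local_path = dataset.get_local_copy()
print(local_path)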
Hi @<1523701070390366208:profile|CostlyOstrich36> , basically:
- I uploaded a dataset using ClearML Datasets. The output_uri points to my S3, so the dataset is stored in S3. My S3 is set up with HTTP only.
- When I retrieve the dataset for training using Dataset.get(), I encounter an SSL cert error because the URL used to retrieve the data is https://<s3url>/... instead of s3://<s3url>/..., which is HTTP (see the config sketch after this list). This is weird, as the dataset URL is without https.
- I am not too sure why and I susp...
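For anyone hitting the same thing, here is a minimal clearml.conf sketch for pointing the SDK at a non-HTTPS S3 endpoint. The host, key, and secret values are placeholders, and secure: false is my understanding of how to tell the SDK to use http:// rather than https://:

sdk {
  aws {
    s3 {
      credentials: [
        {
          host: "my-s3.example.com:9000"  # placeholder endpoint
          key: "ACCESS_KEY"               # placeholder
          secret: "SECRET_KEY"            # placeholder
          secure: false                   # use http:// rather than https://
        }
      ]
    }
  }
}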
@<1523701070390366208:profile|CostlyOstrich36> Yes. I'm running on k8s
Thanks AgitatedDove14. Specifically, I wanted to use my own ClearML server and Triton, so I attempted to use --engine-container-args during launch, but I got an error saying there is no such flag. I looked into --help, but I guess it is not updated yet.
CostlyOstrich36 I mean the dataset object in ClearML, as well as the data tied to that object.
The intent is to bring it over to another ClearML setup and keep some form of traceability.
And just a suggestion, which maybe I can post as a GitHub issue too: it is not very clear what the purposes of the project name and name arguments are, even after reading --help. Perhaps this is something that can be made clearer when the docs are updated?