
@<1523701070390366208:profile|CostlyOstrich36> Yes, I know. Above I posted a link where there's a solution: a DB request to Elasticsearch to change those URLs. My question is: where do I send this DB request? What endpoint? The request provided in the FAQ is incomplete. It lacks the URL to send the request to.
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "script": {
      "source": "ctx._source.url = ctx._source.url.replace(\".<OLD_ADDRESS>\", \"...
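For anyone else hitting this: the _update_by_query API itself is standard Elasticsearch, so the request goes straight to the Elasticsearch container. A minimal sketch, assuming the default docker-compose deployment (Elasticsearch on localhost:9200) and the events-* index pattern; both of those are my assumptions, not from the FAQ:

# sketch only: localhost:9200 and the events-* pattern are assumptions
curl --header "Content-Type: application/json" \
  --request POST \
  --data '{
    "script": {
      "source": "ctx._source.url = ctx._source.url.replace(\".<OLD_ADDRESS>\", \".<NEW_ADDRESS>\")"
    }
  }' \
  "http://localhost:9200/events-*/_update_by_query"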
from random import random
from clearml import Task, TaskTypes

args = {}
task: Task = Task.init(
    project_name="My Proj",
    task_name='Sample task',
    task_type=TaskTypes.inference,
    auto_connect_frameworks=False
)
task.connect(args)
task.execute_remotely(queue_name="default")
value = random()
task.get_logger().report_single_value(name="sample_value", value=value)
with open("some_artifact.txt", "w") as f:
    f.write(f"Some random value: {value}\n")
task.upload_artifact(name="test...
@<1523701435869433856:profile|SmugDolphin23> Hello again! I tried to fill in the values following your example. Still no luck. I noticed the console log on my task says I have a certificate error. I disabled verification in the api section of clearml.conf like this: verify_certificate = false
and I still get an SSL error. Any clues why that would be?
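For context, this is the relevant part of my clearml.conf (a sketch of just that section):

api {
    # disable SSL certificate verification for the API client
    verify_certificate = false
}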
After I run my experiment I get a console error saying I am missing security headers. It is a custom XML response; I get the same behaviour when I curl the endpoint or open it in the browser. When I run e.g. a boto3 client where I explicitly specify the endpoint, access key, secret key and bucket, I can do whatever I want. So it seems ClearML is trying to reach this endpoint in some incorrect way.
Thanks for the reply! We have a custom S3 server with a URL endpoint like https://<some-domain>.<sub-domain>. I've read in the docs that when you provide credentials.host, a port must be specified. @<1523701070390366208:profile|CostlyOstrich36>
Docstring from inside the boto3 lib says:
:param endpoint_url: The complete URL to use for the constructed
client. Normally, botocore will automatically construct the
appropriate URL to use when communicating with a service. You
can specify a complete URL (including the "http/https" scheme)
to override this behavior. If this value is provided,
then ``use_ssl`` is ignored.
I want ClearML to use my endpoint
@<1523701087100473344:profile|SuccessfulKoala55> No port is needed when accessing this URL from things like boto3 or the s3 client CLI
@<1523701070390366208:profile|CostlyOstrich36>
@<1523701087100473344:profile|SuccessfulKoala55> So I have to provide a host for it to work and no other way around it?
Thanks a lot. I see that the ClearML apiserver has been up for 7 months; could it be that it is running a version that was current 7 months ago?
SuccessfulKoala55 So my question is how to set up auto-detection properly so the worker knows which git repo to pull from
Sorry guys, maybe I am not expressing myself clearly, or there's something I am missing; I am not a native speaker, so I'll try to reformulate. What we have is an enterprise solution built on S3 technology. I don't have access to the servers it runs on, and I don't have a port. All I have been provided with are a secret key, an access key, an endpoint that looks like a regular web URL, and a bucket name. Using these credentials I can access this cloud storage just fine by any means except ClearML
He tried to help me in another thread but I still couldn't make things work
My current setup is:
sdk.development.default_output_uri=< None >  # no port, no bucket
sdk.aws.s3.key=<my-access-key>
sdk.aws.s3.secret=<my-secret-key>
sdk.aws.s3.region=<my-region>  # I think this can be skipped, but somewhere in the clearml code it says it must be specified if it's not a default like us-east-1
sdk.aws.s3.credentials.bucket=<my-bucket>  # just the bucket name
sdk.aws.s3.credentials.host=< None : 443>  # the same as output...
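In case anyone wants to compare, the equivalent structured section in clearml.conf would look roughly like this. A sketch only: my-s3.example.com, the keys and the multipart/secure flags are placeholders and assumptions based on the docs for custom S3 servers:

sdk {
    aws {
        s3 {
            key: "<my-access-key>"
            secret: "<my-secret-key>"
            region: "<my-region>"
            credentials: [
                {
                    # per the docs, host must include an explicit port
                    host: "my-s3.example.com:443"
                    key: "<my-access-key>"
                    secret: "<my-secret-key>"
                    multipart: false
                    secure: true
                }
            ]
        }
    }
}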
@<1523701087100473344:profile|SuccessfulKoala55> It's the URL I use when creating a boto3 session from Python, like this for example:
s3 = self.session.client(
    service_name='s3',
    endpoint_url=endpoint,
    verify=False
)
@<1523701087100473344:profile|SuccessfulKoala55> I reloaded the agent a couple of times, cleared the cache, and for some reason it works now! Anyway, thanks for your help!
Thank you, got it. I tried it because I couldn't figure out how to make auto-detection work. When I run a task from my local project folder (which is also a git repo) via Task.init, it says that no repository was found. There is also the Task.create method, which lets you pass a git URL, but I suspect Task.init is the preferable method.
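What I tried with Task.create looked roughly like this (a sketch; the repo URL, branch and script are placeholders, not my real values):

from clearml import Task

# sketch: repo/branch/script values are placeholders
task = Task.create(
    project_name="My Project",
    task_name="Created from an explicit repo URL",
    repo="ssh://git@example.com/team/my-repo.git",
    branch="main",
    script="main.py",
)
# hand it to an agent listening on the "default" queue
Task.enqueue(task, queue_name="default")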
@<1523701087100473344:profile|SuccessfulKoala55>
from random import random
from clearml import Task, TaskTypes
import pandas as pd

args = {}  # define args before connecting them to the task
task: Task = Task.init(
    project_name="My Project",
    task_name='Sample task',
    task_type=TaskTypes.inference
)
task.connect(args)
task.execute_remotely(queue_name="default")
value = random()
task.get_logger().report_single_value(name="sample_value", value=value)
df = pd.DataFrame.from_dict({'col_1': [3, 2, 1, 0], 'col_2': ['a', 'b', 'c', 'd']})...
@<1523701087100473344:profile|SuccessfulKoala55> I run it from my local machine, that's right. When I run the task it says it can't clone the repository. In the web UI my task has a REPOSITORY string. It's a correct SSH URL to my repo, but it's missing git@ after ssh://. If I add the git part by editing the task and queuing it again, it works. In my config file I have the option force_git_ssh_user: git enabled.
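For reference, the option sits in the agent section of my clearml.conf:

agent {
    # force the given username on SSH git URLs (adds git@ after ssh://)
    force_git_ssh_user: git
}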
All log entries have "level": "INFO"
@<1722061389024989184:profile|ResponsiveKoala38> Sure, I'll get back to you when it finishes
The terminal hangs on the command
@<1523701070390366208:profile|CostlyOstrich36> I understand, but the description of the error seems to point not to database conflicts but to a connectivity problem between the apiserver and Elasticsearch. I couldn't find info about this on the internet. I think I have ruled out inconsistent image versions. Are there any more suggestions? Thanks.
Sorry, forgot to mention. I used the command with the --foreground flag. It is the same: the terminal just sits at a new line, no logs, no worker in the UI
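The command in question was along these lines (the queue name is a placeholder):

clearml-agent daemon --queue default --foreground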
I'll get back to you in a minute
@<1523701070390366208:profile|CostlyOstrich36>
Should I leave them as-is or fill in the values in the docker-compose for agent-services? I set it to localhost since agent-services runs together with the other clearml containers on one machine. I'm not sure why you have to fill in those values.
CLEARML_HOST_IP: "<my_clearml_server_ip>"
CLEARML_WEB_HOST: " None "
CLEARML_API_HOST: " None "
CLEARML_FILES_HOST: "http://127.0.0.1...
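For a single-machine deployment, my understanding is that values along these lines should work; a sketch, with <my_clearml_server_ip> a placeholder and apiserver:8008 assuming the default compose network:

agent-services:
  environment:
    CLEARML_HOST_IP: <my_clearml_server_ip>
    CLEARML_WEB_HOST: http://<my_clearml_server_ip>:8080
    CLEARML_API_HOST: http://apiserver:8008   # internal compose service name
    CLEARML_FILES_HOST: http://<my_clearml_server_ip>:8081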
@<1523701087100473344:profile|SuccessfulKoala55> Right