Aren't they two different auth systems? One for humans and one for machines?
But I actually wish the interface were more like the apiserver.conf file -- specifically, that you can define hard-coded credentials in this file in advance. Except I wish you could define API keys this way (or some other way):
auth {
    # Fixed users login credentials
    # No other user will be able to login
    fixed_users {
        enabled: true
        pass_hashed: false
        users: [
            {
                username: "test"
                password: "test"
                name: "Test User u:test p:test"
            }
        ]
    }
}
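(For reference, as far as I understand from the server docs, that fixed_users block goes in an apiserver.conf under the /opt/clearml/config directory that the docker-compose setup mounts into the containers.)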
I ultimately resorted to creating a selenium script combined with docker-compose. Not a beautiful solution, but I can confirm that it works 😕
The goal is to be able to run docker-compose up in CI, which starts a clearml-server, and then make several API calls to the started ClearML server to prove that the VS Code extension code is working.
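A minimal sketch of the first of those API calls, just to show what I mean -- it assumes the apiserver is published on localhost:8008 and that its debug.ping endpoint answers without credentials (worth verifying), and simply polls until the freshly started server responds:

import time
import requests

def wait_for_apiserver(url="http://localhost:8008/debug.ping", timeout=120):
    """Poll the apiserver until it responds, or give up after `timeout` seconds."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        try:
            if requests.get(url, timeout=5).ok:
                return
        except requests.ConnectionError:
            pass
        time.sleep(2)
    raise TimeoutError(f"ClearML apiserver at {url} did not come up within {timeout}s")

wait_for_apiserver()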
Oh I see, if this is a CI workflow, why not run in offline mode?
I could potentially write a selenium script to make a set of keys, but I'd prefer to avoid that 😅
Okay, I think this might be a bit of overkill, but I'll entertain the idea 🙂
Try passing the user as key, and password as secret?
For now, I've written a headless selenium script to generate credentials for the fresh ClearML instance in CI.
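Roughly, the script does something like the following (sketch only: the URLs assume the webapp is on localhost:8080 with the fixed user test/test, and every selector marked "hypothetical" is a placeholder that has to be taken from the actual ClearML webapp pages):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = webdriver.ChromeOptions()
options.add_argument("--headless=new")
driver = webdriver.Chrome(options=options)
try:
    wait = WebDriverWait(driver, 30)
    # Log in as the fixed user defined in apiserver.conf.
    driver.get("http://localhost:8080/login")
    wait.until(EC.presence_of_element_located((By.NAME, "name"))).send_keys("test")  # hypothetical selector
    driver.find_element(By.NAME, "password").send_keys("test")                       # hypothetical selector
    driver.find_element(By.CSS_SELECTOR, "button[type=submit]").click()              # hypothetical selector
    # Open the workspace settings page and generate a fresh key/secret pair.
    driver.get("http://localhost:8080/settings/workspace-configuration")             # hypothetical route
    wait.until(EC.element_to_be_clickable(
        (By.XPATH, "//button[contains(., 'Create new credentials')]"))).click()      # hypothetical selector
    creds_text = wait.until(EC.presence_of_element_located(
        (By.TAG_NAME, "sm-create-credential-dialog"))).text                          # hypothetical selector
    print(creds_text)  # parse the access_key / secret_key pair out of this blob
finally:
    driver.quit()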
Oh, I wasn't aware of that. I don't think it'd work for this use case, though. We're trying to test the behavior you can see here in this extension: https://share.descript.com/view/g0SLQTN6kAk -- so basically the examples I mentioned in that earlier message.
Does this mean that none of the credentials in this file can be used with the clearml SDK when the docker-compose.yaml starts up with a fresh state?
Is there any way to achieve such behavior? Or are manual steps simply required to get a working set of keys? I'm trying to prepare a docker-compose file that I can use for automated tests of our VS Code extension.
You described getting a secret key pair from the UI and feeding it back into the compose file. Does this mean it's not possible to seed the secrets in the compose file, starting from a clean state? If so, that would explain why I can't get it to work.
Long story short, no. This would basically mean you have pre-built credentials in the Docker image, and that sounds dangerous 🙂
I'm not sure I'm following the use case here, what exactly are we trying to do?
(or maybe I missed something here?)
When you log in with user/pass in the UI, the same "process" happens and you get a token to work with; this is the same as secret/key.
Since in both cases you provide credentials and get back an access token, it should work.
(This is of course only if you are setting user/pass manually and disabling pass_hashed, as you have.)
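To sanity-check that, something like this is probably the quickest test (a sketch, assuming the apiserver is on localhost:8008 and that auth.login accepts the fixed user/pass as HTTP basic auth the same way it accepts a key/secret pair):

import requests

# Fixed user from apiserver.conf, passed as if it were an access_key/secret_key pair.
resp = requests.post("http://localhost:8008/auth.login", auth=("test", "test"))
resp.raise_for_status()
token = resp.json()["data"]["token"]  # assumed response shape: {"meta": ..., "data": {"token": ...}}
print("got token:", token[:16], "...")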
Oh interesting. Is the hope that doing that would somehow result in being able to use those credentials to make authenticated API calls?
Oh wow. If this works, that will be insanely cool. Like, I guess what I'm going for is that if I specify "username: test" and "password: test" in that file, then I can specify "api.access_key: test" and "api.secret_key: test" in the clearml.conf used for CI. I'll give it a try tonight!
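Concretely, what I'm hoping will work in CI is something like this (a sketch using the CLEARML_API_* environment variables instead of clearml.conf, and assuming the default docker-compose ports; whether the fixed user/pass is actually accepted as key/secret is exactly the open question):

import os

# Point the SDK at the freshly started docker-compose server ...
os.environ["CLEARML_API_HOST"] = "http://localhost:8008"
os.environ["CLEARML_WEB_HOST"] = "http://localhost:8080"
os.environ["CLEARML_FILES_HOST"] = "http://localhost:8081"
# ... and reuse the fixed user/pass as the key/secret pair.
os.environ["CLEARML_API_ACCESS_KEY"] = "test"
os.environ["CLEARML_API_SECRET_KEY"] = "test"

# Import after the env vars are set, in case the config is read at import time.
from clearml import Task

# If auth works, this creates (and immediately closes) a task on the server.
task = Task.init(project_name="ci-smoke-test", task_name="auth-check")
task.close()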
A CI for the vscode extension? So spin up a server + agent and connect to it as part of the CI?
But the extension will need credentials to connect to it.
I don't know that you'd have to pre-build credentials into the Docker image. If you could specify a set of credentials as environment variables to the docker run ... command or something, that would work just fine.
The goal is to be able to run docker-compose up in CI, which starts a clearml-server, and then make several API calls to the started ClearML server to prove that the VS Code extension code is working.
Examples:
- Assert that the extension can auth with ClearML
- Assert that the extension can create, list, and delete ClearML Sessions
Each of these requires ClearML credentials (see the rough sketch below).
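For the session assertions, the rough shape I have in mind is something like this (a sketch: the APIClient calls and the name filter for clearml-session tasks are assumptions on my part and would need verifying against a real server):

from clearml.backend_api.session.client import APIClient

client = APIClient()  # picks up credentials from clearml.conf / CLEARML_API_* env vars

# Assumption: clearml-session launches tasks whose names contain "session",
# so filtering on that stands in for "list ClearML Sessions".
sessions = client.tasks.get_all(name=".*session.*", only_fields=["id", "name", "status"])
assert isinstance(sessions, list)

# Clean up anything the test created (assumes tasks.delete accepts a task id).
for s in sessions:
    client.tasks.delete(task=s.id, force=True)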
Hi @<1541954607595393024:profile|BattyCrocodile47>, setting the initial keys for the apiserver component is indeed a part of the initial setup and works as you described; it's just that this is for internal system components, not user entities.