Update Key Vault Secrets README #2047
Conversation
…ames have been switched from secret_name to secret-name
See https://github.com/Azure/azure-sdk-for-rust/pull/2051/files#diff-e68c23fe7f799586bbbcaf760ba0cd25ac804e07b08c0c4519192f0ae3d00a2a for some examples. The pager needs work, especially if we're making all fields …
I'm almost never a fan of SDK magic. The service offers a "get a page" operation and we need to expose this. The SDK can expose it with a pageable type that can optionally be used to get the nextLink page. In a single app, it is likely (but not mandatory; see the example at the bottom) that the app will loop over the page of items and then get the next page to loop over its results, and so on. But knowing where in the code the high-latency operations with many possible failures that might need recovery are happening (the get-page operations), and where in the code the low-latency, failure-free work is happening (iterating over the page of items), is critical, especially in a language like Rust. Obfuscating this with some magical iterator that combines these two very different things is a mistake. If a customer wants to write code that conflates these two things to make their code "easier" (while hiding failure modes and latency issues), that is OK, but our SDK should not encourage or force this obfuscation.

I'll also add that a page with no items is actually how the service works, and we should not hide this either. We MUST document it so customers write code that properly deals with an empty page, but we do not fix it for them (as doing so hides additional latency and failures).

Another reason not to magically conflate get-a-page and iterate-over-a-page is that it is also fairly common to have one PC initiate listing and process the page, and then another PC process the next page. This is why our pager types have a way of getting a resume token and rehydrating a pager from this resume token.

Here is a real-life, concrete example using two PCs and a resume token. Imagine an e-commerce website (like Amazon). The customer searches for some product; a server PC initiates the search, gets results, and constructs an HTML page that it returns to the browser. This webpage has the resume token embedded in it. The customer now clicks a button to look at the next page of results. When they click this button, a new web request is sent back to the service with the resume token, where it goes through a load balancer and reaches a different server PC. This different server PC needs to rehydrate the pager type from the resume token and get the next page in order to return a new HTML page with the second page of results and the latest resume token embedded in it, so the customer can click "next page" again if they desire.

For Key Vault, imagine a portal (like the Azure Portal) showing one page of secrets/certificates at a time. My example above is exactly how this would be implemented.
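The two-PC, resume-token flow described above can be sketched roughly as follows. This is an illustrative mock, not the actual azure-sdk-for-rust API: the `Pager`/`SecretPage` names, the resume-token encoding, and the synchronous, infallible `next_page` call (a real client would make a fallible async HTTP request) are all assumptions, with a tiny in-memory list standing in for the Key Vault service.

```rust
// Illustrative sketch only; names and shapes are assumptions, not the
// real azure_security_keyvault API.
pub struct SecretPage {
    pub items: Vec<String>,           // a page may legitimately be empty
    pub continuation: Option<String>, // nextLink, if the service sent one
}

pub struct Pager {
    continuation: Option<String>,
    started: bool,
}

impl Pager {
    pub fn new() -> Self {
        Pager { continuation: None, started: false }
    }

    // Rehydrate a pager on a different machine from a resume token.
    pub fn from_resume_token(token: String) -> Self {
        Pager { continuation: Some(token), started: true }
    }

    // Enough state to continue paging elsewhere (e.g. embedded in HTML).
    pub fn resume_token(&self) -> Option<String> {
        self.continuation.clone()
    }

    // The explicit, high-latency "get a page" operation. A real client
    // would issue a fallible async HTTP request here; this mock pages
    // over a fixed list, two items at a time.
    pub fn next_page(&mut self) -> Option<SecretPage> {
        let all = ["secret-a", "secret-b", "secret-c"];
        let start = match (&self.continuation, self.started) {
            (Some(c), _) => c.parse::<usize>().unwrap_or(0),
            (None, true) => return None, // no more pages
            (None, false) => 0,
        };
        self.started = true;
        if start >= all.len() {
            return None;
        }
        let end = (start + 2).min(all.len());
        self.continuation =
            if end < all.len() { Some(end.to_string()) } else { None };
        Some(SecretPage {
            items: all[start..end].iter().map(|s| s.to_string()).collect(),
            continuation: self.continuation.clone(),
        })
    }
}
```

With this shape, the portal scenario is: PC 1 calls `Pager::new()` and `next_page()`, then embeds `resume_token()` in the returned HTML; when the next request lands on PC 2, it calls `Pager::from_resume_token(token)` and `next_page()` to produce the second page.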
@JeffreyRichter wrote,
Based on what? Even if we ignore client applications, many service schedulers I've seen documented seem to shuffle clients off to one machine or another for persistent connections. While I agree we should support rehydratable pagers, the fact that we don't have them in all languages (and for a time didn't in any language) and they still worked proves that they, well, work. While anecdotal, I didn't see any user feedback for .NET about wanting rehydratable pagers either. As stated in another issue in which we discussed this, I agree we shouldn't hide getting page by page, but iterating over a … Still, part of my feedback above was also that returning an …
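For comparison, the "flattened" item-by-item convenience layer being debated here could be built on top of an explicit page fetcher rather than replacing it. The sketch below is hypothetical (the `ItemIter` name and the `FnMut() -> Option<Vec<String>>` page-fetcher shape are illustrative assumptions, not any shipped SDK API); note that it has to tolerate empty pages, exactly as the service allows.

```rust
// Hypothetical convenience iterator layered over an explicit page
// fetcher; not a real azure-sdk-for-rust type.
pub struct ItemIter<F>
where
    F: FnMut() -> Option<Vec<String>>,
{
    fetch_page: F,         // the explicit, possibly-failing page call
    current: Vec<String>,  // items of the current page
    pos: usize,
}

impl<F: FnMut() -> Option<Vec<String>>> ItemIter<F> {
    pub fn new(fetch_page: F) -> Self {
        ItemIter { fetch_page, current: Vec::new(), pos: 0 }
    }
}

impl<F: FnMut() -> Option<Vec<String>>> Iterator for ItemIter<F> {
    type Item = String;

    fn next(&mut self) -> Option<String> {
        loop {
            if self.pos < self.current.len() {
                let item = self.current[self.pos].clone();
                self.pos += 1;
                return Some(item);
            }
            // Current page exhausted: fetch the next one. Empty pages
            // are legal, so keep looping until items arrive or the
            // fetcher reports there are no more pages.
            self.current = (self.fetch_page)()?;
            self.pos = 0;
        }
    }
}
```

The design point either way: the page-level call stays visible and is the only place latency and failures live; the item iterator is optional sugar on top.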
@ronniegeraghty I updated …
…to ronniegeraghty/keyvault_secrets_readme_update
…ct parameter names, and improve clarity
A few nits but otherwise LGTM
… when the additional docs actually exist.
Best practice is to clone credentials, which is why we return a Result<Arc<T>>.
Unnecessary here, but we should show best practices regardless.
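A minimal sketch of why returning Result<Arc<T>> matters: cloning the Arc is a cheap refcount bump, so every client shares the same credential instance (and any cached tokens) instead of duplicating it. The `Credential` struct and `create_credential` function below are hypothetical stand-ins, not the azure_identity API.

```rust
use std::sync::Arc;

// Hypothetical stand-in for a token credential; real azure_identity
// types differ. The field stands in for shared token-cache state.
pub struct Credential {
    pub cache_id: String,
}

// Returning Result<Arc<Credential>> lets callers share one credential
// across many clients via cheap Arc clones, as the comment above
// recommends.
pub fn create_credential() -> Result<Arc<Credential>, String> {
    Ok(Arc::new(Credential { cache_id: "shared-cache".to_string() }))
}
```

A caller would then pass Arc::clone(&credential) to each client it constructs, so a secrets client and a keys client both reuse the same underlying credential and cache.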
Adding initial Key Vault Secrets crate README.md. Fixes #2040