Adding a competing outbound calls example which uses streaming #103
Conversation
Differs from the concurrent outbound examples as the server returns once the first response has been completed, but continues waiting to handle the second response server-side. Signed-off-by: Mikkel Mørk Hegnhøj <[email protected]>
Making this a draft PR to get feedback on the relevance and maintainability of this sample, as it is essentially a combination of two existing samples. @fibonacci1729 @itowlson I would like to hear your opinions and feedback on the code.
Signed-off-by: Mikkel Mørk Hegnhøj <[email protected]>
Force-pushed from f69dcb0 to 5e3be4c
itowlson left a comment
I'd suggest that we either replace the existing "concurrent outbound" sample with this, or find names that more clearly distinguish them. (How does a developer looking for a how-to know if they want "concurrent" or "competing" outbound calls?)
I do prefer this sample to the existing one, although it's significantly more complex and so maybe there's value in keeping the simpler one? Not sure. However there are a few quirks and style nits that it might be good to iron out, although I can do that after merge if that will facilitate landing this.
```rust
) -> impl IntoResponse {
    println!("Handling reuqest");

    // A lot of code taken from: https://github.com/spinframework/spin-rust-sdk/blob/main/examples/wasip3-streaming/src/lib.rs
```
I'd drop this - if we were crediting a third party then for sure but not really needed here I feel.
```rust
async fn handle_competing_outbound_http_calls(
    _req: spin_sdk::http_wasip3::Request,
) -> impl IntoResponse {
    println!("Handling reuqest");
```
Suggested change:

```diff
-println!("Handling reuqest");
+println!("Handling request");
```
or drop it altogether
```rust
// Use wit_bindgen::spawn to allow the async block to keep running
// after the handler returns.
spin_sdk::http_wasip3::wasip3::wit_bindgen::spawn(async move {
    // The two outbound calls
```
I don't think setting up the futures needs to be in the spawn block.
```rust
// Handles the secons request (future) as it returns a reponse
let second_response = first_result.1.await.expect("Failed to get second response");

// And printing stats to the server console, as the client connection is already closed
```
The async block is quite long - by this point I'd kinda lost track that this was all in the spawn. Consider pulling the spawned task out to a separate function?
```rust
// hence instantiating a new variable for the ContentLength struct
let first_response = first_result.0.unwrap();

// Let's print some stats to the server console
```
Suggested change:

```diff
-// Let's print some stats to the server console
+// Print some stats to the server console
```
```rust
    first_response.url, first_response.ms_taken, first_response.content_length
);

// Serializing the struct as JSON represented as bytes
```
Suggested change:

```diff
-// Serializing the struct as JSON represented as bytes
+// Serialize the struct as JSON represented as bytes
```
I'm not sure what "represented as bytes" is trying to say here. Everything is represented as bytes: is it alluding to the use of the `Bytes` struct? In that case maybe "represented as a `Bytes`"?
```rust
// Serializing the struct as JSON represented as bytes
let bytes: Bytes = serde_json::to_vec_pretty(&first_response)
    .expect("Failed to serialize!")
    .try_into()
```
This appears to be converting a `Vec<u8>` to a `Bytes`? If so, you should be able to `into()` here rather than `try_into()`.
```rust
let first_result = futures::future::select(spin, book).await.factor_first();

// We need to keep the future around to also handle the second response,
// hence instantiating a new variable for the ContentLength struct
```
The names here are not helping me. Line 39 implies that `select().factor_first()` has given us the "first result", which the doc comment explains as the "first response". But then line 43 unpacks a "first response". What's the difference between the first result and the first response? What "future" are we keeping around? Context suggests it's maybe "first result", but we're keeping it around for the "second response", and yet the name says it's "first"?
Deconstructing the tuples would help a lot here I think, e.g.

```rust
let first_completion = futures::future::select(...).await; // okay this is a crappy name too
let (first_response, second_fut) = first_completion.factor_first();
```

then later you can await `second_fut` instead of `first_result.1`, which feels more explanatory to me.
```rust
// Handles the secons request (future) as it returns a reponse
let second_response = first_result.1.await.expect("Failed to get second response");

// And printing stats to the server console, as the client connection is already closed
```
Is there a pedagogical purpose in closing the response rather than streaming race results as they come in? (E.g. we want to demonstrate how to continue processing after sending the response, e.g. the Slack slash command scenario.) If so, call that purpose out.
```toml
authors = ["mikkelhegn <[email protected]>"]
description = ""
version = "0.1.0"
rust-version = "1.90"                                    # required for `wasm32-wasip2` target to build WASIp3
```
Is there a reason for this to be sooooooooo far over to the right?
Oh, re relevance and maintainability, I think it's highly relevant and I'd have no concerns about maintaining it. It feels like finishing the concurrency sample I started. The more I think about it the more I think we should replace my sample with this, or a slightly simplified variant of it - showing processing of all the responses, not just the first as I did, is something we should 100% have. @mikkelhegn If you prefer I could update the existing sample with all the good good stuff from this one. But don't want to steal your thunder!
@itowlson Thanks for all the feedback and thoughts. Feel free to update your existing example. That will probably be much faster than you having to deal with reviewing my code 😁
Closed in favor of #105
This is essentially a combined example of wasip3-concurrent-outbound-http-calls and wasip3-streaming.