Have you put any thought into how this could be used with an async IO model, such as with Tokio? Right now it seems like it's all implicitly synchronous, but perhaps it's just a matter of making the various functions take/return Futures?
Thanks
theduke commented on Jul 11, 2017
The big thing here is that this would also enable an implementation of something like dataloader, which Facebook has deemed the best practice for handling remote data fetching for a GraphQL query.
Based on my (limited) experience with GraphQL, it's the only sane way to support large/complex backends and multiple queries per request.
mhallin commented on Jul 12, 2017
I've made a few attempts locally at integrating futures-rs into Juniper, but there are a lot of stumbling blocks that need to be overcome. There are also some nice properties the current synchronous implementation has that we'll probably lose, particularly around memory usage and heap allocations. This might not be a big issue, but I like the "you don't pay for what you don't use" mentality of Rust and C++ in general.
So, these are some issues that have prevented me from building this. There are probably more :)
Many AST nodes, particularly identifiers, use references to the original query string to avoid copying. Rust statically guarantees that no AST nodes will outlive the query string, and it's easy to reason about when execute just parses and executes the query synchronously.
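To illustrate that property, here is a minimal sketch of the borrowing involved (the names are illustrative, not Juniper's actual AST types):

```rust
// AST nodes borrow from the query string, so the compiler proves they
// cannot outlive it.
struct Name<'a>(&'a str);

fn execute(query: &str) -> usize {
    let name = Name(query); // `name` is tied to the lifetime of `query`
    // A boxed future typically has to be `'static`, so it could not capture
    // `name` by reference; for async execution the query text would have to
    // be owned or reference-counted instead.
    name.0.len()
}
```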
GraphQLType::resolve* would need to return a Box<Future>, which means that all fields will cause extra heap allocations, even if it's just field like_count() -> i32 { self.like_count }. Now that I think about it, maybe you could change FieldResult to something like enum { Immediate(Value), Err(String), Deferred(Box<Future<Item=Value, Error=String>>) }.
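A rough sketch of that enum, in futures 0.1-era terms (Value here is just a placeholder; this is not Juniper's actual definition):

```rust
extern crate futures; // futures 0.1
use futures::Future;

type Value = String; // placeholder for a resolved GraphQL value

enum FieldResult {
    // Synchronous fields resolve without any heap allocation.
    Immediate(Value),
    // Resolution failed.
    Err(String),
    // Only async fields pay for a boxed future.
    Deferred(Box<dyn Future<Item = Value, Error = String>>),
}
```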
The core execution logic (https://github.com/mhallin/juniper/blob/master/src/types/base.rs#L291) would need to be transformed into something that joins multiple futures together and emits a result. No fundamental problem here, it was just a very difficult programming task :)
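As a very rough sketch of the shape of that transformation (placeholder types and futures 0.1-style combinators, not Juniper's actual code):

```rust
extern crate futures; // futures 0.1
use futures::future::join_all;
use futures::Future;

type Value = String; // placeholder for a resolved GraphQL value

// Resolve all fields of an object concurrently and collect the results.
fn resolve_object(
    fields: Vec<Box<dyn Future<Item = (String, Value), Error = String>>>,
) -> impl Future<Item = Vec<(String, Value)>, Error = String> {
    join_all(fields)
}
```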
The futures-rs crate kept changing while I was working on this. This shouldn't be the case anymore, but I would be wary of releasing Juniper 1.0 while depending on a pre-1.0 release of futures-rs.
I've been dabbling with futures-rs and Tokio in another project that I'm working on, and the general feeling I get is that it's pretty unergonomic to use. I've stumbled upon cases where I couldn't break out a big and_then callback into a separate function for various reasons: the type was "untypeable" (i.e. it contained closures), BoxFuture requires Send despite the name, Box<Future> did not work because of lifetime issues, and even when switching to nightly and returning impl Future there have been cases where lifetimes in the return type were an issue.
Despite all of this, I still think this is a feature we want to have! :) However, there are some constraints on the implementation:
Minimal impact on the non-async path. Ideally, Juniper should only create a new heap-allocated future for each object to resolve, not each field. Scalars should not cause any overhead at all.
No reliance on unstable compiler features. Juniper should not require a nightly compiler.
This became a long response with little actual content :) I might open up a branch to do some work on this, but it's been hard trying to work in incremental pieces since it's such a cross-cutting change.
Mange commented on Jul 12, 2017
> Juniper should only create a new heap-allocated future for each object to resolve, not each field.

What about fields that require an expensive calculation that you'd rather define with an async method? After all, fields are how you *get* objects in the first place. :-)
mhallin commented on Jul 12, 2017
I maybe expressed myself a bit ambiguously there - if a user defines an async field, then it should obviously allocate a future for that field. I meant that no futures should be allocated for synchronous fields.
srijs commented on Sep 27, 2017
@mhallin @theduke In your opinion, how feasible is it to ship this?
Lack of async support is currently the thing that prevents me from using Juniper for more projects. I appreciate that all in all it would be a huge refactor, but maybe this is something that could be shipped in increments?
I've been gathering a lot of futures/tokio experience lately; would you be motivated to help me get the PRs reviewed and merged if I got started on this? Or do you feel like it's not the right time for this feature?
mhallin commented on Sep 30, 2017
@srijs Just out of curiosity, do you already have a futures-based Rust codebase that you want to expose with Juniper, or are you looking at GraphQL servers with async support in general?
To be honest, the more I work with Promises/futures-rs and other concurrency systems such as Erlang/Go, the less interested I am in working on this. Just compare the work of integrating futures-rs with, say, Rayon: it would be trivial to make the execution logic parallel without running into any of the problems I listed in my first reply here. If the Rust community's async efforts were directed at something with that kind of API, I'd be more interested.
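For comparison, a minimal sketch of what the Rayon-style approach looks like (illustrative types only, not Juniper's execution code):

```rust
extern crate rayon;
use rayon::prelude::*;

// Resolve each field on Rayon's thread pool; the call blocks until all
// fields are done, so no futures or lifetimes are involved.
fn resolve_fields(field_names: Vec<String>) -> Vec<(String, String)> {
    field_names
        .par_iter()
        .map(|name| (name.clone(), format!("resolved {}", name)))
        .collect()
}
```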
That said, I will of course help you out if you decide to tackle this! I'm not even sure where to begin, but keeping AST nodes alive for the duration of the entire execution without using references with lifetimes is something that needs to be solved first. That might require putting all nodes under Arc, which is kind of unfortunate...
srijs commented on Oct 3, 2017
@mhallin I do have existing Rust codebases (that use tokio extensively) where I would love to be able to use Juniper.
It seems that with generator functions/coroutines we're moving in the direction you're describing, but of course that will not be usable in stable Rust for quite a while. So while I agree that it would be simpler to leverage coroutines, I don't think it's practical to wait. At least personally, I'd like to see Juniper support async before coroutines land in stable Rust.
I have started to play around with the AST/parser parts, to use reference-counted string objects as you suggested (although I'm not 100% sure they would need to be Arc; Rc might be sufficient). Servo is using tendril for this purpose, but it seems rather unstable at the moment. I wonder if there is a crate like bytes just for text, or whether we could actually use bytes for this purpose...
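A rough sketch of what such a reference-counted text slice could look like (an assumption about the shape of a "bytes-for-text" type, not an existing crate's API):

```rust
use std::sync::Arc;

// A cheaply clonable, owned slice of the original query string.
#[derive(Clone)]
struct Text {
    buf: Arc<str>, // shared ownership of the whole source text
    start: usize,
    end: usize,
}

impl Text {
    fn as_str(&self) -> &str {
        &self.buf[self.start..self.end]
    }
}
```

Cloning such a value only bumps a reference count, so AST nodes built from it could be held by 'static futures without copying the underlying string.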
dcabrejas commented on Apr 8, 2018
Hi, can someone give me an update on this? I recently started a project using Juniper, but the data to fulfil the GraphQL request comes from an external web API, so I need to take advantage of futures for performance. Is futures support being worked on, and if so, when will it be ready? If not, how do people go about doing what I am trying to do without using async I/O?
Thanks
thedodd commented on Apr 17, 2018
Mostly in response to, or in addition to, @mhallin's comment above, here are some thoughts based on the recent advances on the Rust futures & async front, especially drawing on these sources:
heap allocations
With conservative impl trait and, less importantly, universal impl trait slated for Rust 1.26, we could experiment with field signatures having a return type of impl Future<...> as opposed to Box<Future<...>>, to save on heap allocations (we'll see when it actually lands on stable).
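A small sketch of the difference, using futures 0.1-style types (illustrative names only):

```rust
extern crate futures; // futures 0.1
use futures::future::{self, Future};

// Boxed: one heap allocation per resolved field.
fn like_count_boxed() -> Box<dyn Future<Item = i32, Error = String>> {
    Box::new(future::ok::<i32, String>(42))
}

// impl Trait: the concrete future type is hidden but not boxed,
// so no extra allocation is needed.
fn like_count_impl() -> impl Future<Item = i32, Error = String> {
    future::ok::<i32, String>(42)
}
```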
Per some comments above about reference issues even inside a returned impl Future<...>, perhaps the upcoming Pin API will be useful.
AST sharing
With futures-rs 0.3, the Pin API will be leveraged for futures, which will allow reference sharing between futures &c. I haven't looked at the code in Juniper where the AST sharing is taking place, so the Pin API may or may not make any difference. Need to investigate a bit more.
futures stability
futures-rs has just recently gone through a pretty massive revamp with the 0.2 release. Apparently the design is more long term, with a 0.3 coming soon:
> we anticipate a 0.3 release relatively soon. That release will set a stable foundation for futures-core, after which we can focus on iterating the rest of the stack to take full advantage of async/await!
So, once 0.3 lands, that may be a good time to start experimenting more aggressively.
Thoughts?
rivertam commented on Feb 24, 2020
Should this issue not be closed then?
kiljacken commented on Feb 24, 2020
I believe it wouldn't make sense to close it until async support is in a release on crates.io.
LegNeato commented on Mar 6, 2020
Interfaces still don't work and we need to rip out the sync code. Keeping this open until those are done and a release is made.
repomaa commented on Jul 11, 2020
Hey! Thanks for the awesome work! I managed to integrate master with dataloader-rs! I noticed that the batch loading isn't working on deeper levels of the tree though. The following example will make it clear:
This will result in the following db queries (all of which are done by dataloader batch loading functions).
SELECT name FROM recipes
SELECT ingredient_id, amount FROM recipe_ingredients WHERE recipe_id IN ($1, $2, $3)
  parameters: $1='3', $2='1', $3='2'
SELECT name FROM ingredients WHERE id IN ($1)
  parameters: $1='1'
SELECT name FROM ingredients WHERE id IN ($1)
  parameters: $1='2'
SELECT name FROM ingredients WHERE id IN ($1)
  parameters: $1='4'
So the second level (recipe -> recipe_ingredients) of resolvers is batched but the third isn't (recipe_ingredient -> ingredient). This could just as well be a bug in dataloader-rs or, even more likely, just my incompetence, but I thought I'd post it here in case someone has come across this and solved it already.
EDIT:
OK, it seems that if I set data_loader.with_yield_count(very_high_number) it will successfully batch the third level as well. But this results in very long-running requests (a second or so).
LegNeato commented on Jul 11, 2020
Do you have a link to code?
repomaa commented on Jul 11, 2020
@LegNeato sure: https://p.jokke.space/5l2FU/
tyranron commented on Oct 6, 2020
With #682 landed, we're fully async compatible now!
wongjiahau commented on Oct 9, 2020
@tyranron Are there any examples?
tyranron commented on Oct 9, 2020
@wongjiahau check the book on master and the examples in the repository.
wongjiahau commented on Oct 11, 2020
I found the fix by using juniper = { git = "https://github.com/graphql-rust/juniper", rev = "68210f5" } instead of juniper = 0.14.2.
LegNeato commented on Dec 10, 2020
crates.io has been updated with Juniper's async support, sorry for the delay. Any future bugs or API changes can get their own issues.
Note that we still support synchronous execution via juniper::execute_sync.
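A minimal sketch of what calling it can look like, assuming the 0.15-era API (exact paths and signatures may differ; treat this as an illustration rather than the definitive usage):

```rust
use juniper::{graphql_object, EmptyMutation, EmptySubscription, RootNode, Variables};

struct Query;

#[graphql_object]
impl Query {
    fn hello() -> &'static str {
        "world"
    }
}

fn main() {
    let schema = RootNode::new(
        Query,
        EmptyMutation::<()>::new(),
        EmptySubscription::<()>::new(),
    );
    // Synchronous execution: no async runtime is needed for purely sync resolvers.
    let (value, errors) =
        juniper::execute_sync("{ hello }", None, &schema, &Variables::new(), &())
            .expect("query failed");
    assert!(errors.is_empty());
    let _ = value;
}
```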
Thank you to all the contributors who made this possible, especially @nWacky, @tyranron, @theduke, and @davidpdrsn 🍻 🎉 🥇