Make join throw if called synchronously from a closed spark #59
I still see an issue when using batch joins:

primus.join([spark1, spark2, spark3], 'foo bar');

The above example should throw an error if any of the sparks is closed, but it doesn't now. I wonder if we should change strategy. To handle these errors the developers need to use a callback.
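The callback-based handling described above can be sketched as follows. This is a minimal mock, not the plugin's real implementation: `mockBatchJoin` and the spark objects are illustrative stand-ins that only model the open/closed check.

```javascript
// Hedged sketch: a failed batch join reported through the completion
// callback instead of a synchronous throw. `mockBatchJoin` is a
// stand-in, not the plugin's real API.
function mockBatchJoin(sparks, room, fn) {
  var err = null;
  sparks.forEach(function (spark) {
    // Report the first closed spark; a real implementation would also
    // skip or abort the remaining joins.
    if (!spark.open && !err) err = new Error('spark ' + spark.id + ' is closed');
  });
  fn(err);
}

var sparks = [
  { id: 'spark1', open: true },
  { id: 'spark2', open: false }, // closed: triggers the error path
  { id: 'spark3', open: true }
];

var result;
mockBatchJoin(sparks, 'foo bar', function (err) {
  result = err ? err.message : 'all joined';
});
// result === 'spark spark2 is closed'
```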
@lpinca is there a reason why this PR isn't merged??
It's an incomplete solution. It doesn't work for batch joins.
As said above, I think the best approach would be to just document the behavior of calling join on a closed spark.
I agree, however there might be a way of getting this to work. According to async's documentation, if any task passes an error to its callback, the main callback is immediately called with that error. So instead of removing the throw, keep it there and just pass a callback when doing the batch call here. Assuming I am understanding the goal of this PR correctly, the following should work, and from what I could tell it is only used when doing join and leave. (It is too early for me right now, so I might have missed something or be misunderstanding what is going on.)
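The idea above can be illustrated with a small wrapper: keep the synchronous throw in the spark's join, but wrap each batch task so the error is funneled into the task callback. `wrapTask` and the mock spark are illustrative, not the plugin's actual code.

```javascript
// Hedged sketch: convert a synchronous throw into a callback error so
// a batch runner (e.g. async.parallel) sees it through `done`.
function wrapTask(spark, method, room) {
  return function task(done) {
    try {
      spark[method](room, done);
    } catch (err) {
      done(err); // route the synchronous throw to the callback
    }
  };
}

// Mock spark that throws synchronously when closed, as this PR proposes.
var closedSpark = {
  join: function (room, done) {
    throw new Error('cannot join room, spark is closed');
  }
};

var captured;
wrapTask(closedSpark, 'join', 'news')(function (err) {
  captured = err && err.message;
});
// captured === 'cannot join room, spark is closed'
```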
@fadeenk it makes sense. I'll merge this. Are you up for creating a follow-up PR for batch joins?
Actually I think I can just do it in this PR, I'm just too lazy to add a test haha. |
Hmm two thoughts:
To follow up on the last comment:
If the goal is to check if sparks are closed then yes, it's possible.

diff --git a/lib/rooms.js b/lib/rooms.js
index 9662dc8..922c300 100644
--- a/lib/rooms.js
+++ b/lib/rooms.js
@@ -469,16 +469,27 @@ Object.defineProperty(Rooms.prototype, 'connections', {
Rooms.prototype.batch = function batch(method, spark, room, fn) {
var sparks = Array.isArray(spark) ? spark : [spark]
- , tasks = [];
+ , tasks = []
+ , i = 0
+ , err;
- fn = fn || noop;
-
- sparks.forEach(function each(spark) {
- if ('string' === typeof spark) spark = this.primus.spark(spark);
+ function addTask(spark) {
tasks.push(function task(done) {
spark[method](room, done);
});
- }, this);
+ }
+
+ for (; i < sparks.length; i++) {
+    spark = this.primus.spark(
+      'string' === typeof sparks[i] ? sparks[i] : sparks[i].id
+    );
+    if (!spark) {
+      err = new RoomsError('One or more closed sparks');
+      if (fn) return setImmediate(fn, err), this;
+      throw err;
+ }
+ addTask(spark);
+ }
parallel(tasks, fn);
   return this;
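To make the control flow of the patch above concrete, here is a hedged standalone re-implementation with the Primus spark lookup mocked as a plain registry. As in the diff, a missing (closed) spark is either reported to the callback or thrown when no callback is given; the registry and names are illustrative only.

```javascript
// Mocked version of the patched batch(): registry stands in for
// this.primus.spark(id); real code would then run the queued tasks.
function batch(registry, sparkIds, fn) {
  var tasks = [];
  for (var i = 0; i < sparkIds.length; i++) {
    var spark = registry[sparkIds[i]];
    if (!spark) {
      var err = new Error('One or more closed sparks');
      if (fn) return fn(err); // the diff defers this with setImmediate
      throw err;
    }
    tasks.push(spark);
  }
  if (fn) fn(null, tasks.length); // all sparks alive
}

var registry = { a: {}, b: {} }; // 'c' is not connected

var viaCallback;
batch(registry, ['a', 'c'], function (err) {
  viaCallback = err && err.message;
});

var thrown;
try {
  batch(registry, ['a', 'c']);
} catch (e) {
  thrown = e.message;
}
// Both paths report 'One or more closed sparks'.
```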
I honestly don't know how useful it is. It will raise awareness, but to avoid those errors the end users should check if the sparks are closed in their code, so we are only duplicating everything. I also wonder how useful this batch feature is and if it makes sense to remove it. This plugin does too much imo :)
The diff you posted looks good, that will not even call the actual join if there is a disconnected socket. That may be good but it might cause issues since the behavior is a bit opinionated. Another way to implement it would have been to join the connected ones and return the status for each socket join/leave. Also, the err is handled in the callback, however the… I do agree the batch join/leave kind of doesn't make sense in the scope of the plugin; we are literally just iterating through the sockets and joining them. I feel it would make more sense to just allow for a single join and have the user implement their own logic of how to do and handle multiple socket joins.
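The alternative mentioned above (join only the connected sparks and report a per-spark status) can be sketched like this. All names here are illustrative, not the plugin's API.

```javascript
// Hedged sketch: instead of failing the whole batch, return a status
// entry for every spark, closed ones included.
function batchJoinWithStatus(sparks, room) {
  return sparks.map(function (spark) {
    if (!spark.open) return { id: spark.id, joined: false, reason: 'closed' };
    spark.rooms.push(room); // stand-in for the real join
    return { id: spark.id, joined: true };
  });
}

var sparks = [
  { id: 'a', open: true, rooms: [] },
  { id: 'b', open: false, rooms: [] }
];
var statuses = batchJoinWithStatus(sparks, 'news');
// statuses[0].joined === true, statuses[1].joined === false
```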
This. |
Should I just make a PR for that? Or should we wait on a response from @cayasso first?
Hi guys, thanks for bringing this up again. I agree with you all about the batch function; this was one of the initial requirements that I needed for one of my clients. But I have been thinking on a refactoring to make the plugin smaller and more scoped, so in the spirit of simplicity… What do you guys think? JB
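As a speculative illustration of a smaller, more scoped API along the lines cayasso describes, a promise-based single-spark join could look like this. Rejections would replace both the synchronous throw and the error-first callback. This is purely a sketch, not the plugin's actual or planned API.

```javascript
// Hypothetical promise-based join: a closed spark rejects instead of
// throwing or invoking an error-first callback.
function join(spark, room) {
  return new Promise(function (resolve, reject) {
    if (!spark.open) return reject(new Error('spark is closed'));
    spark.rooms.push(room); // the executor runs synchronously
    resolve(room);
  });
}

var openSpark = { id: 'a', open: true, rooms: [] };
var closedSpark = { id: 'b', open: false, rooms: [] };

join(openSpark, 'news').then(function (room) {
  console.log('joined', room);
});
join(closedSpark, 'news').catch(function (err) {
  console.log('failed:', err.message);
});
```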
Works for me. I have a special love for callbacks but can live with promises :) |
I have started some work on refactoring…
@lpinca yea I still love callbacks but I am falling in love with promises.
@cayasso promises are fine, no problem. |
Follow up of #58.
This makes Rooms#join throw an error if it is called without a callback and the spark is closed.
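The guard this PR adds can be sketched as below: without a callback a closed spark throws synchronously; with one, the error is passed along. The mock spark and error message are illustrative, not the plugin's exact code.

```javascript
// Hedged sketch of the Rooms#join guard described in this PR.
function join(spark, room, fn) {
  if (!spark.open) {
    var err = new Error('spark is closed');
    if (fn) return fn(err); // callback given: report the error
    throw err;              // no callback: fail loudly, per this PR
  }
  spark.rooms.push(room);
  if (fn) fn(null);
}

var closed = { open: false, rooms: [] };

var reported;
join(closed, 'news', function (err) { reported = err.message; });

var thrown;
try { join(closed, 'news'); } catch (e) { thrown = e.message; }
// reported === thrown === 'spark is closed'
```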