Pr0Ger/PyAPNs2

send several messages to the same device in a batch

vsmelov opened this issue · 1 comment

Here:

https://github.com/Pr0Ger/PyAPNs2/blob/master/apns2/client.py#L164

def send_notification_batch(...) -> Dict[str, Union[str, Tuple[str, str]]]:
        """
        ...
        The function returns a dictionary mapping each token to its result. The result is "Success"
        if the token was sent successfully, or the string returned by APNs in the 'reason' field of
        the response, if the token generated an error.
        """

What if I want to send several messages to the same device token in a batch?
Then the responses get merged into a single record in the returned dictionary!
Can we instead return a list of results, one per message, in the same order as the notifications argument?
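
For example (a minimal sketch: the certificate path, token and topic are made up), two notifications for the same token collapse into a single dictionary entry:

    from apns2.client import APNsClient, Notification
    from apns2.payload import Payload

    client = APNsClient('cert.pem')
    token = '0123456789abcdef' * 4  # hypothetical 64-character device token
    notifications = [
        Notification(token=token, payload=Payload(alert='first message')),
        Notification(token=token, payload=Payload(alert='second message')),
    ]
    results = client.send_notification_batch(notifications, topic='com.example.app')
    # The dict has only one key for this token, e.g. {token: 'Success'},
    # so the outcome of one of the two messages is lost.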

My idea is to rewrite the function as follows:

    def send_notification_batch(
            self,
            notifications: Iterable[Notification],
            topic: Optional[str] = None,
            priority: NotificationPriority = NotificationPriority.Immediate,
            expiration: Optional[int] = None,
            collapse_id: Optional[str] = None,
            push_type: Optional[NotificationType] = None
    ) -> List[Union[str, Tuple[str, str]]]:
        """ copy-paste from the library, but returns list """
        notification_iterator = iter(enumerate(notifications))
        next_notification_index: Optional[int]
        next_notification: Optional[Notification]
        stream_id2notification_index: typing.Dict[int, int] = {}
        next_notification_index, next_notification = next(notification_iterator, (None, None))
        # Make sure we're connected to APNs, so that we receive and process the server's SETTINGS
        # frame before starting to send notifications.
        self.connect()

        results: typing.Dict[int, Union[str, Tuple[str, str]]] = {}  # notification_index -> result
        open_streams = collections.deque()  # type: typing.Deque[RequestStream]
        # Loop on the tokens, sending as many requests as possible concurrently to APNs.
        # When reaching the maximum concurrent streams limit, wait for a response before sending
        # another request.
        while len(open_streams) > 0 or next_notification is not None:
            # Update the max_concurrent_streams on every iteration since a SETTINGS frame can be
            # sent by the server at any time.
            self.update_max_concurrent_streams()
            # yeah, we access a private field here (APNsClient's __max_concurrent_streams, via its name-mangled form)!
            if next_notification is not None and len(open_streams) < self._APNsClient__max_concurrent_streams:
                logger.info('Sending to token %s', next_notification.token)
                stream_id = self.send_notification_async(next_notification.token, next_notification.payload, topic,
                                                         priority, expiration, collapse_id, push_type)
                open_streams.append(RequestStream(stream_id, next_notification.token))
                stream_id2notification_index[stream_id] = next_notification_index

                next_notification_index, next_notification = next(notification_iterator, (None, None))
                if next_notification is None:
                    # No tokens remaining. Proceed to get results for pending requests.
                    logger.info('Finished sending all tokens, waiting for pending requests.')
            else:
                # We have at least one request waiting for response (otherwise we would have either
                # sent new requests or exited the while loop.) Wait for the first outstanding stream
                # to return a response.
                pending_stream = open_streams.popleft()
                result = self.get_notification_result(pending_stream.stream_id)
                logger.info('Got response for %s: %s', pending_stream.token, result)
                results[stream_id2notification_index[pending_stream.stream_id]] = result

        return [v for k, v in sorted(results.items())]
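
To actually use it, I put the method on a subclass of APNsClient (it has to live outside the original class, which is why the name-mangled private field is accessed). A rough sketch of the wiring, with the imports the method above needs; the subclass name is just an example, and RequestStream / NotificationType are imported from apns2.client, which is where they live in my copy of the library:

    import collections
    import logging
    import typing
    from typing import Iterable, List, Optional, Tuple, Union

    from apns2.client import (APNsClient, Notification, NotificationPriority,
                              NotificationType, RequestStream)
    from apns2.payload import Payload

    logger = logging.getLogger(__name__)


    class ListResultAPNsClient(APNsClient):
        # paste send_notification_batch() from above here, unchanged
        ...


    token = '0123456789abcdef' * 4  # hypothetical device token
    notifications = [
        Notification(token=token, payload=Payload(alert='first message')),
        Notification(token=token, payload=Payload(alert='second message')),
    ]
    client = ListResultAPNsClient('cert.pem')
    results = client.send_notification_batch(notifications, topic='com.example.app')
    # results[i] is the outcome for notifications[i], even when tokens repeat
    for notification, result in zip(notifications, results):
        print(notification.token, result)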

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.