Client Test Cases

Revision as of 14:39, 14 November 2019 by Ge0rg (→‎MAM)

This is a list of different test cases that client developers should apply to their code base.

Tool Suite

TODO: create and document an automated tool suite.

An ideal tool suite would be one of (or all of):

  • A hosted component with well-defined JIDs exhibiting the test case behavior
  • An XMPP component that can be self-hosted
  • A set of bot scripts / tools that can connect to a given (set of) account(s)

Message Errors

A client should properly support the following cases:

  1. A message is rejected by the other side - display an error on the message, and show a text message describing the failure in the details
  2. A message is accepted by the other side (you receive a receipt) - display it normally or with an additional checkmark
  3. A message is accepted (receipt), then bounced (you receive an error) - the error probably comes from a stale session and should be ignored - display the checkmark
  4. A message is bounced, then accepted - first display the error, then the checkmark
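Cases 3 and 4 imply that the order of receipts and errors must not corrupt the displayed state. A minimal sketch of that logic, with invented state names and function (not any real client API):

```python
# Minimal sketch of the receipt/error ordering from cases 1-4 above; the
# state names and update function are invented, not a real client API.

PENDING, DELIVERED, FAILED = "pending", "delivered", "failed"

def update_state(state, event):
    """Apply a receipt or bounce to a message's delivery state."""
    if event == "receipt":
        return DELIVERED          # a receipt always wins (cases 2 and 4)
    if event == "error":
        # Case 3: an error after a receipt is stale and must be ignored.
        return state if state == DELIVERED else FAILED
    return state

# Case 3: receipt, then bounce -> checkmark stays
assert update_state(update_state(PENDING, "receipt"), "error") == DELIVERED
# Case 4: bounce, then receipt -> error first, then checkmark
assert update_state(update_state(PENDING, "error"), "receipt") == DELIVERED
```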

There is a hosted version of test 1. at

Impersonation attacks

  1. Roster push impersonation CVE-2015-8688
  2. Carbon sender impersonation CVE-2017-5589
  3. MAM impersonation: a <message> from a remote JID containing a <result> with a wrapped <message>
  4. Impersonation via XEP-0297 Stanza Forwarding: similar to the MAM impersonation, but with a top-level <forwarded> element. Clients are supposed to clearly indicate that a message has been forwarded; misbehaving clients might instead show the forwarded message as if it came directly from that person. There is also no guarantee that a forwarded message is not in fact a forgery.
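One way to exercise case 3 is to feed the client a forged MAM result from a remote JID. A hedged sketch of the check a client might perform, using the stdlib XML parser - a real <result> must come from the archive the client queried, normally the user's own bare JID (the function is illustrative, not a complete MAM implementation):

```python
# Hedged sketch of rejecting a forged MAM result (item 3 above).
import xml.etree.ElementTree as ET

MAM_NS = "urn:xmpp:mam:2"

def accept_mam_result(stanza_xml, own_bare_jid):
    """Accept a <result> wrapper only if it comes from our own archive."""
    msg = ET.fromstring(stanza_xml)
    result = msg.find(f"{{{MAM_NS}}}result")
    if result is None:
        return False  # no MAM wrapper at all
    sender = msg.get("from", own_bare_jid)  # a missing 'from' means own server
    return sender.split("/")[0] == own_bare_jid

forged = ('<message xmlns="jabber:client" from="attacker@evil.example">'
          f'<result xmlns="{MAM_NS}" id="1"/></message>')
assert not accept_mam_result(forged, "me@example.org")
```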

Session Management

  1. A client performs all session initiation operations in the correct order
  2. A client properly accepts a different resource than requested during binding, and responds to IQs sent to it
  3. Stream Management (XEP-0198)
    1. A client resumes a session with the same session-id if disconnected without closing the XML stream
    2. A client submits the correct number of stanzas on resume, after it was fed with some messages, IQs, and other (non-stanza) stream elements
    3. A client aborts the resume if the server requests retransmission of stanzas that were already acked before the connection loss (<resumed h> with a value smaller than that of the last <a/> element)
    4. A client aborts the resume if the server claims to have received more stanzas than the client ever sent (<resumed h> with a value significantly larger than the client's sent count)
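The checks in items 3 and 4 boil down to counter arithmetic, with counters wrapping at 2^32 per XEP-0198. An illustrative sketch, with invented function and parameter names:

```python
# Sanity checks for <resumed h='...'/> described in items 3 and 4 above.
MOD = 2 ** 32  # XEP-0198 counters wrap at 2**32

def validate_resume(server_h, last_acked_h, client_sent):
    """Return the number of stanzas to retransmit, or None to abort.

    server_h:     h value from the server's <resumed/>
    last_acked_h: h value of the last <a/> seen before the connection loss
    client_sent:  total count of stanzas the client has sent
    """
    progress = (server_h - last_acked_h) % MOD   # stanzas acked since last <a/>
    window = (client_sent - last_acked_h) % MOD  # stanzas sent since last <a/>
    if progress > window:
        # server_h went backwards (item 3) or exceeds what we sent (item 4)
        return None
    return window - progress                     # unacked stanzas to resend

assert validate_resume(7, 5, 8) == 1      # one stanza left to retransmit
assert validate_resume(4, 5, 8) is None   # h smaller than last <a/>: abort
assert validate_resume(10, 5, 8) is None  # h larger than we ever sent: abort
```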

Multi User Chats

Joining

  1. A join is not responded to at all by the MUC
  2. A join is responded to with an error presence
  3. A join is responded to with a captcha challenge message
  4. After sending the captcha challenge response, the MUC responds with a "not-authorized" error presence (which in this case does *not* mean the MUC is password protected)
  5. Captcha messages may be archived (MAM) by the server; a client should ignore them
  6. The join response does not contain a subject
  7. The join response does not contain a self-presence
  8. The join response neither contains a subject nor a self-presence
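A sketch of how the join outcomes above might be classified; the event and outcome names are invented for illustration, not a real client API:

```python
# Illustrative classifier for the MUC join outcomes listed above.

def classify_join(events, timed_out=False):
    """events: kinds of stanzas received in answer to the join presence,
    e.g. ["captcha"] or ["self-presence", "subject"]."""
    if timed_out and not events:
        return "no-response"       # case 1: the MUC never answered
    if "error" in events:
        return "join-failed"       # case 2, and case 4 after a captcha reply
    if "captcha" in events:
        return "captcha-required"  # case 3 (live challenge, not an archived copy)
    if "self-presence" not in events:
        return "incomplete"        # cases 7/8: no self-presence, join is not done
    return "joined"                # a missing subject alone is fine (case 6)
```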

Staying inside

  1. The client gets kicked by the MUC, with or without a message
  2. The client gets banned by the MUC, with or without a message
  3. The MUC join completes, but the occupant is then silently removed, all subsequent messages get rejected (see XEP-0410)
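For the silent-removal case, XEP-0410 has the client ping its own occupant JID and rejoin when the room rejects the ping. A hedged sketch of the rejoin decision (the condition strings follow XEP-0410; the function itself is illustrative):

```python
# XEP-0410 self-ping sketch for the silent-removal case above.

def must_rejoin(ping_error):
    """ping_error: error condition from the self-ping IQ, or None on success."""
    if ping_error is None:
        return False  # pong received: we are still an occupant
    if ping_error in ("service-unavailable", "feature-not-implemented"):
        return False  # the pinged client just doesn't answer pings
    # <not-acceptable/> means the room no longer considers us joined;
    # for ambiguous errors (e.g. timeouts) rejoining is the safe default here.
    return True
```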

Moderation

  1. The client gets muted by the MUC, with or without a message

Other occupants

  1. Another occupant sends an invalid presence to the room (I'm looking at you, old Gajim)

HTTP File Upload

A testing component could reject the file slot request IQ with different errors based on the requested file name / file size. A client developer would have a set of corresponding files to trigger different conditions.
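A sketch of how such a testing component might map file names and sizes to slot-request errors; the trigger file names, the size limit, and the chosen error conditions are invented for illustration:

```python
# Hypothetical slot-request handling for the testing component described above.

ERROR_BY_NAME = {
    "forbidden.bin": ("auth", "forbidden"),        # user may not upload
    "quota.bin": ("wait", "resource-constraint"),  # quota exceeded
}

def slot_response(filename, size, max_size=10 * 1024 * 1024):
    """Return None to grant the slot, or (error_type, condition) to reject."""
    if size > max_size:
        return ("modify", "not-acceptable")        # file too large
    return ERROR_BY_NAME.get(filename)

assert slot_response("picture.jpg", 4096) is None
assert slot_response("forbidden.bin", 4096) == ("auth", "forbidden")
```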

TLS

TODO: all kinds of handshake failures

MAM

TODO: incomplete archive, incoherent IDs, duplicates

  1. Client attempts to fetch MAM with a MAM-ID that is unknown to the server (because it expired)
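When the stored anchor ID has expired, a XEP-0313 server answers the query with an <item-not-found/> error, and the client can fall back to an unanchored fetch. An illustrative sketch, with invented function and field names:

```python
# Illustrative fallback for the expired-ID case above.

def next_query(last_known_id, error_condition=None):
    """Build RSM-style paging parameters for the next MAM request."""
    if error_condition == "item-not-found":
        return {"after": None}        # anchor expired: refetch from the start
    return {"after": last_known_id}   # normal catch-up query

assert next_query("09af3-cc343-b409f") == {"after": "09af3-cc343-b409f"}
assert next_query("09af3-cc343-b409f", "item-not-found") == {"after": None}
```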