There are at least two problems with OnionProtocol:
- The innermost TLSMemoryBIOProtocol becomes the wrappedProtocol when it should be the outermost one.
- ProtocolWithoutConnectionLost never pops any TLSMemoryBIOProtocol off OnionProtocol's stack, because connectionLost is only called after a FileDescriptor's doRead or doWrite returns the reason for the disconnection.
We cannot solve the first problem without changing the way OnionProtocol manages its stack, and we cannot solve the second until we figure out what the new stack implementation should be. Naturally, the right design is a direct consequence of the way data flows within Twisted, so we'll start by analyzing that data flow.
Twisted represents an established connection with an instance of twisted.internet.tcp.Server or twisted.internet.tcp.Client. Since the only interactivity in our program happens in stoptls_client, we will consider only the data flow to and from the Client instance.
Let's warm up with a minimal LineReceiver client that simply echoes back the lines it receives from a local server on port 9999:
from twisted.protocols import basic
from twisted.internet import defer, endpoints, protocol, task


class LineReceiver(basic.LineReceiver):
    def lineReceived(self, line):
        self.sendLine(line)


def main(reactor):
    clientEndpoint = endpoints.clientFromString(
        reactor, "tcp:localhost:9999")
    connected = clientEndpoint.connect(
        protocol.ClientFactory.forProtocol(LineReceiver))

    def waitForever(_):
        return defer.Deferred()

    return connected.addCallback(waitForever)


task.react(main)
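To actually run this client you need something listening on port 9999. Any TCP peer that speaks line-delimited text will do; purely as an illustration, a hypothetical companion server (the Greeter name and its behavior are assumptions, not part of the original example) might look like this:

from twisted.protocols import basic
from twisted.internet import defer, endpoints, protocol, task


class Greeter(basic.LineReceiver):
    # Hypothetical peer for the client above; names are invented here.
    def connectionMade(self):
        # Start the exchange; the client echoes every line back to us.
        self.sendLine(b"Hello, client!")

    def lineReceived(self, line):
        print("client echoed: %r" % (line,))


def serve(reactor):
    serverEndpoint = endpoints.serverFromString(reactor, "tcp:9999")
    listening = serverEndpoint.listen(
        protocol.ServerFactory.forProtocol(Greeter))
    # Keep the reactor running once the port is bound.
    return listening.addCallback(lambda port: defer.Deferred())


task.react(serve)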
Once the connection is established, a Client becomes our LineReceiver protocol's transport and mediates its input and output:

New data from the server causes the reactor to call Client's doRead method, which in turn passes what it received to LineReceiver's dataReceived method. Finally, LineReceiver.dataReceived calls LineReceiver.lineReceived once at least one complete line is available.
Our application sends a line of data back to the server by calling LineReceiver.sendLine. This calls write on the transport bound to the protocol instance, which is the same Client instance that handled the incoming data. Client.write arranges for the data to be sent by the reactor, while Client.doWrite actually writes the data to the socket.
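As a rough illustration of that round trip, the read and write paths look something like the following. This is a deliberately simplified sketch with an invented class name, not how twisted.internet.tcp.Client is actually written:

class SimplifiedClient(object):
    # Illustrative only: none of Twisted's buffering, scheduling, or
    # error handling is shown here.
    def __init__(self, sock, protocol):
        self.socket = sock
        self.protocol = protocol        # e.g. our LineReceiver
        self._buffer = b""

    def doRead(self):
        # The reactor calls this when the socket is readable.
        data = self.socket.recv(65536)
        self.protocol.dataReceived(data)    # LineReceiver splits out lines

    def write(self, data):
        # The protocol calls this via self.transport.write(...).
        self._buffer += data                # queued until the socket is writable

    def doWrite(self):
        # The reactor calls this when the socket is writable.
        sent = self.socket.send(self._buffer)
        self._buffer = self._buffer[sent:]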
We are now ready to look at the behavior of OnionClient, which never calls startTLS:

OnionClients are wrapped in OnionProtocols, which are the crux of our attempted nested TLS. As a subclass of twisted.protocols.policies.ProtocolWrapper, an OnionProtocol instance is a kind of protocol-transport sandwich: it acts as a protocol toward the lower-level transport and as a transport toward the protocol it wraps, an impersonation arranged at connection time by WrappingFactory.
Now Client.doRead calls OnionProtocol.dataReceived, which proxies the data on to OnionClient. As OnionClient's transport, OnionProtocol.write accepts the strings to be sent from OnionClient.sendLine and passes them on to Client, its own transport. This is the normal interaction between a ProtocolWrapper, its wrapped protocol, and its own transport, so data naturally flows to and from each of them without any problems.
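A stripped-down sketch of the ProtocolWrapper behavior being relied on here might look as follows; the real class in twisted.protocols.policies does considerably more bookkeeping, so treat this only as an illustration:

class SketchedProtocolWrapper(object):
    # Simplified illustration of twisted.protocols.policies.ProtocolWrapper.
    def __init__(self, factory, wrappedProtocol):
        self.factory = factory
        self.wrappedProtocol = wrappedProtocol

    def makeConnection(self, transport):
        # As a protocol: remember the lower-level transport, then present
        # ourselves as the transport to the protocol we wrap.
        self.transport = transport
        self.wrappedProtocol.makeConnection(self)

    def dataReceived(self, data):
        # As a protocol: hand incoming bytes up to the wrapped protocol.
        self.wrappedProtocol.dataReceived(data)

    def write(self, data):
        # As a transport: pass outgoing bytes down to our own transport.
        self.transport.write(data)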
OnionProtocol.startTLS does something trickier. It attempts to interpose a new ProtocolWrapper (which happens to be a TLSMemoryBIOProtocol) between an established protocol-transport pair. This seems simple enough: a ProtocolWrapper stores the upper-level protocol in its wrappedProtocol attribute and proxies write and other attributes through to its own transport. startTLS should therefore be able to interpose a new TLSMemoryBIOProtocol that wraps OnionClient into the connection, by patching that instance over its own wrappedProtocol and transport:
def startTLS(self):
    ...
    connLost = ProtocolWithoutConnectionLost(self.wrappedProtocol)
    connLost.onion = self
Here's the data flow after the first call to startTLS:

As expected, new data arriving at OnionProtocol.dataReceived is routed to the TLSMemoryBIOProtocol stored on _tlsStack, which passes the decrypted plaintext on to OnionClient.dataReceived. OnionClient.sendLine likewise passes its data to TLSMemoryBIOProtocol.write, which encrypts it and passes the resulting ciphertext to OnionProtocol.write and then Client.write.
Unfortunately, this scheme breaks down after a second startTLS call. The root cause is this line:
self.wrappedProtocol = self.transport = tlsProtocol
Each startTLS call replaces wrappedProtocol with the innermost TLSMemoryBIOProtocol, even though the data received by Client.doRead was encrypted by the outermost one:

The transports, however, are nested correctly. OnionClient.sendLine can only call its transport's write, that is, OnionProtocol.write, so OnionProtocol must replace its transport with the innermost TLSMemoryBIOProtocol to ensure that writes are wrapped in successively more layers of encryption.
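In text form, the two paths after two startTLS calls look roughly like this, with "outer" and "inner" standing for the first and second TLSMemoryBIOProtocol (the labels are invented here purely for illustration):

reads (broken):
    Client.doRead -> OnionProtocol.dataReceived -> inner TLS layer
    (the bytes on the wire were encrypted by the outer layer,
     so the inner layer cannot decrypt them)

writes (correct):
    OnionClient.sendLine -> inner TLS layer encrypts
        -> outer TLS layer encrypts again -> Client.write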
The solution, then, must ensure that incoming data passes from the first TLSMemoryBIOProtocol on _tlsStack to the next, and so on down the stack, so that each layer of encryption is peeled off in the reverse of the order in which it was applied:

Representing _tlsStack as a list seems less natural given this new requirement. Fortunately, drawing the incoming data flow as a line suggests a new data structure:

Both the erroneous and the correct incoming data flows resemble a singly linked list, with wrappedProtocol serving as each ProtocolWrapper's next link and protocol serving as Client's. The list should grow downward from OnionProtocol and always end with OnionClient. The bug arises because that ordering invariant is violated.
A singly linked list works well for pushing protocols onto the stack, but it is awkward for popping them off, because removing a node requires traversing the list from its head down to that node. Of course, that traversal already happens every time data is received, so the concern is the additional complexity implied by the extra traversal, not its time complexity. Fortunately, the list is effectively doubly linked:

The transport attribute links each nested protocol to its predecessor, so that transport.write can apply successively lower layers of encryption before the data is finally sent over the network. We also have two sentinels to help manage the list: Client must always be at the head, and OnionClient must always be at the tail.
Combining the two, we get the following:
from twisted.python.components import proxyForInterface
from twisted.internet.interfaces import ITCPTransport
from twisted.protocols.tls import TLSMemoryBIOFactory, TLSMemoryBIOProtocol
from twisted.protocols.policies import ProtocolWrapper, WrappingFactory


class PopOnDisconnectTransport(proxyForInterface(ITCPTransport)):
    """
    L{TLSMemoryBIOProtocol.loseConnection} shuts down the TLS session
    and calls its own transport's C{loseConnection}.  A zero-length
    read also calls the transport's C{loseConnection}.  This proxy
    uses that behavior to invoke a C{pop} callback when a session has
    ended.  The callback is invoked exactly once because
    C{loseConnection} must be idempotent.
    """

    def __init__(self, pop, **kwargs):
        super(PopOnDisconnectTransport, self).__init__(**kwargs)
        self._pop = pop

    def loseConnection(self):
        self._pop()
        self._pop = lambda: None


class OnionProtocol(ProtocolWrapper):
    """
    OnionProtocol is both a transport and a protocol.  As a protocol,
    it can run over any other ITransport.  As a transport, it
    implements stackable TLS.  That is, whatever application traffic
    is generated by the protocol running on top of OnionProtocol can
    be encapsulated in a TLS conversation.  Or, that TLS conversation
    can be encapsulated in another TLS conversation.  Or **that** TLS
    conversation can be encapsulated in yet *another* TLS
    conversation.

    Each layer of TLS can use different connection parameters, such
    as keys, ciphers, certificate requirements, etc.  At the remote
    end of this connection, each has to be decrypted separately,
    starting at the outermost and working in.  OnionProtocol can do
    this itself, of course, just as it can encrypt each layer starting
    with the innermost.
    """

    def __init__(self, *args, **kwargs):
        ProtocolWrapper.__init__(self, *args, **kwargs)
(This is also available on GitHub.)
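The listing above stops short of the stack-management code, so here is a hedged, minimal sketch of how pushing and popping layers on such a doubly linked structure might work. The names Layer, push, and pop are invented for illustration, and for the Client sentinel the inward link is really its protocol attribute rather than wrappedProtocol; the actual implementation on GitHub differs:

class Layer(object):
    # Hypothetical node in the doubly linked protocol stack: each layer
    # points outward (toward Client) via transport and inward (toward
    # OnionClient) via wrappedProtocol.
    def __init__(self):
        self.transport = None
        self.wrappedProtocol = None


def push(outer, new, inner):
    # Splice `new` in between `outer` and `inner`, preserving the
    # invariant that Client stays at the head and OnionClient at the tail.
    outer.wrappedProtocol = new
    new.transport = outer
    new.wrappedProtocol = inner
    inner.transport = new


def pop(layer):
    # Unlink a finished layer so that its neighbours become adjacent again.
    outer, inner = layer.transport, layer.wrappedProtocol
    outer.wrappedProtocol = inner
    inner.transport = outer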
The solution to the second problem lies in PopOnDisconnectTransport. The original code tried to pop a TLS session off the stack via connectionLost, but because only a closed file descriptor leads to connectionLost being called, it could not remove TLS sessions that had stopped without closing the underlying socket.
At the time of this writing, TLSMemoryBIOProtocol calls its transport's loseConnection in exactly two places: _shutdownTLS and _tlsShutdownFinished. _shutdownTLS is called on active closes (loseConnection, abortConnection, unregisterProducer, and again by loseConnection after all pending writes have been flushed), while _tlsShutdownFinished is called on passive closes (handshake failures, empty reads, read errors, and write errors). All of this means that both sides of a closing connection can pop stopped TLS sessions off the stack during loseConnection. PopOnDisconnectTransport does this idempotently because loseConnection is usually idempotent, and TLSMemoryBIOProtocol certainly expects it to be.
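As a rough illustration of how this might be wired up (a hypothetical helper, not the code from the repository), each TLS layer can be connected to a PopOnDisconnectTransport rather than to the bare lower transport, so that whichever code path reaches loseConnection also pops the layer exactly once:

def connectTLSLayer(tlsProtocol, lowerTransport, popLayer):
    # Hypothetical wiring: popLayer is whatever callable unlinks this
    # layer from the stack; it will be invoked at most once, on either
    # an active or a passive close.
    guarded = PopOnDisconnectTransport(pop=popLayer, original=lowerTransport)
    tlsProtocol.makeConnection(guarded)
    return guarded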
The drawback of placing the stack-management logic in loseConnection is that it depends on the particulars of TLSMemoryBIOProtocol's implementation. A generic solution would require new APIs at many levels of Twisted.
Until then, we're stuck with another example of Hyrum's Law.