First of all, in both Samza and Kafka Streams you can choose whether or not to place an intermediate topic between these two tasks (processors); that is, the topology can be either:
Input (Kafka topic X) → (Processor A) → (Processor B) → Output (Kafka topic Y)
or
Input (Kafka topic X) → (Processor A) → Intermediate (Kafka topic Z) → (Processor B) → Output (Kafka topic Y)
In both Samza and Kafka Streams, in the first case you have to deploy processors A and B together, while in the second case you can deploy processor A and processor B separately, since the frameworks only communicate between tasks through intermediate topics and there are no TCP-based communication channels.
In Samza, for the first case you need to implement both filters inside a single task, and for the second case you need to specify the input and output topic for each task: for processor A the input is X and the output is Z; for processor B the input is Z and the output is Y. You can then start/stop the deployed processors independently.
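For illustration, the second case might look roughly like the two Samza job configs sketched below. The job names, task class names, and topic names are hypothetical, and note that in Samza the output topic is chosen in the task code (via a SystemStream) rather than in the config:

```properties
# Job for processor A: reads topic X; the task code writes its output to topic Z
job.name=filter-a
task.class=samza.examples.FilterATask   # hypothetical task class
task.inputs=kafka.topicX

# Job for processor B: reads topic Z; the task code writes its output to topic Y
job.name=filter-b
task.class=samza.examples.FilterBTask   # hypothetical task class
task.inputs=kafka.topicZ
```

Because each job only names its input topics, the two jobs can be submitted, restarted, and scaled independently of each other.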
In Kafka Streams, for the first case you can simply concatenate these processors, e.g.
stream1.filter(...).filter(...)
and as a result, as Lucas noted, each output record of the first filter will be passed immediately to the second filter (you can think of each input record from topic X as traversing the topology depth-first, with no buffering between any directly connected processors);
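A minimal sketch of this depth-first behavior in plain Java (this models the idea, not the actual Kafka Streams internals; the class name, predicates, and input values are made up):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Predicate;

public class DepthFirstPipeline {
    // Records the order in which the two "processors" see records.
    static final List<String> trace = new ArrayList<>();

    static List<Integer> run(int[] records) {
        Predicate<Integer> filterA = x -> { trace.add("A(" + x + ")"); return x % 2 == 0; };
        Predicate<Integer> filterB = x -> { trace.add("B(" + x + ")"); return x > 2; };

        List<Integer> output = new ArrayList<>();
        // Each input record traverses the whole chain before the next record is read:
        for (int r : records) {
            if (filterA.test(r) && filterB.test(r)) {
                output.add(r);
            }
        }
        return output;
    }

    public static void main(String[] args) {
        List<Integer> out = run(new int[] {1, 2, 3, 4});
        System.out.println(trace);  // A sees every record; B sees only what A let through
        System.out.println(out);
    }
}
```

The trace shows the depth-first order: record 1 is dropped by A, record 2 reaches B immediately and is dropped there, and only record 4 survives both filters; no record is buffered between the two filters.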
And for the second case, you can indicate that the intermediate stream should be "materialized" in another topic, i.e.:
stream1.filter(...).through("topicZ").filter(...)
and each output record of the first filter will be sent to topic Z, from which it will then be piped to the second filter. In this case, the two filters can potentially be deployed on different hosts, or in different threads on the same host.
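The decoupling this buys can be sketched in plain Java, using an in-memory queue as a stand-in for topic Z (again a model of the idea, not real Kafka Streams code; names and values are made up):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

public class MaterializedPipeline {
    // Stage A: consumes "topic X" and publishes every record it passes to "topic Z".
    static Queue<Integer> stageA(int[] topicX) {
        Queue<Integer> topicZ = new ArrayDeque<>();
        for (int r : topicX) {
            if (r % 2 == 0) topicZ.add(r);   // filter A
        }
        return topicZ;
    }

    // Stage B: consumes "topic Z" independently -- potentially on another host or thread.
    static List<Integer> stageB(Queue<Integer> topicZ) {
        List<Integer> topicY = new ArrayList<>();
        for (int r : topicZ) {
            if (r > 2) topicY.add(r);        // filter B
        }
        return topicY;
    }

    public static void main(String[] args) {
        Queue<Integer> z = stageA(new int[] {1, 2, 3, 4});
        System.out.println(stageB(z));
    }
}
```

Because the only contract between the two stages is the contents of topic Z, stage B never needs a direct reference to stage A, which is exactly what lets the framework place them in different threads or on different hosts.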