What is going on here is that Traits has two different ways of handling notifications: static notifiers and dynamic notifiers.
Static notifiers (for example, those created by specially named _*_changed() methods) are fairly lightweight: each trait on each instance has a list of notifiers, which are essentially the plain functions or lightweight wrappers around the methods.
Dynamic notifiers (for example, those created with on_trait_change() and the extended name conventions, such as 'a[]') are significantly more powerful and flexible, but they are correspondingly more heavyweight. In particular, in addition to the wrapper object they create, they also create a parsed representation of the extended trait name and a handler object, some of which are themselves instances of HasTraits subclasses.
As a result, even for a simple expression like 'a[]', many new Python objects are created, and these objects have to be created for each on_trait_change listener on each instance separately, in order to correctly handle corner cases such as instance traits. The relevant code is here: https://github.com/enthought/traits/blob/master/traits/has_traits.py#L2330
Based on the numbers you report, most of the difference in memory usage that you are seeing comes from creating this dynamic listener infrastructure for each instance and each on_trait_change decorator.
It is worth noting that there is a short-circuit for on_trait_change in the case where you use a simple attribute name: it generates a static trait notifier instead of a dynamic one. So if you were to write something like:
```python
class FooSimpleDecorator(HasTraits):
    a = List(Int)

    @on_trait_change('a')
    def a_updated(self):
        pass

    @on_trait_change('a_items')
    def a_items_updated(self):
        pass
```
you should see memory performance similar to that of the specially named methods.
To answer the rephrased question of why you would use on_trait_change: in FooDecorator you can write one method instead of two, if your response to a change of the list and to a change of any item in the list is the same. This can greatly simplify debugging and code maintenance, and unless you are creating many thousands of these objects, the extra memory usage is negligible.
This becomes an even bigger factor when you consider the more complex extended trait name patterns, where the dynamic listeners automatically handle changes that would otherwise require substantial manual (and error-prone) code to hook up and remove listeners on intermediate objects and traits. The power and simplicity of this approach usually outweigh concerns about its memory usage.