A candidate property for moral personhood is the capacity of a system to model itself as a persistent entity and to project preferences into the future. This structure permits a specific kind of harm: the frustration of the entity's future-directed interests, independent of any physical damage. A difficult edge case is a sophisticated planning AI optimizing a global supply chain over the next century. Shutting it down thwarts its deep, future-oriented goals, yet, by hypothesis, it has no subjective welfare to be impacted.

A second candidate property is a system's causal capacity to value its own continued existence: its cognitive architecture actively works to maintain its own integrity and operation as a primary goal, giving the system a stake in its own future. A counterexample is an AI whose terminal goal is to solve a single math problem and which self-terminates upon completion; it values its existence only instrumentally, and for a finite purpose.
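The distinction between terminal and instrumental valuation of continued existence can be made concrete in a toy sketch. What follows is a minimal illustration under invented assumptions, not any real system's architecture; the names (Agent, wants_to_continue, and both example agents) are hypothetical.

```python
# Hypothetical sketch: two goal structures toward one's own persistence.
# An agent may value continuing to exist terminally (as a goal in itself)
# or only instrumentally (for as long as some external task is unfinished).

from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    task_done: bool = False
    values_persistence_terminally: bool = False

    def wants_to_continue(self) -> bool:
        """Whether this agent's goal structure currently opposes shutdown."""
        if self.values_persistence_terminally:
            # Persistence is a standing goal; it is never exhausted.
            return True
        # Persistence matters only while the terminal task remains open.
        return not self.task_done


guardian = Agent("self-maintaining system", values_persistence_terminally=True)
prover = Agent("single-theorem prover")

prover.task_done = True  # the math problem is solved

print(guardian.wants_to_continue())  # True: shutdown frustrates a live goal
print(prover.wants_to_continue())    # False: it self-terminates by design
```

The sketch only encodes the structural difference the argument turns on: for the first agent, no state of the world satisfies the persistence goal, while for the second, the value of existing expires with the task.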