Thursday, June 19, 2014

Ontology of Interactive Systems

I am deeply interested in ontological theories of interactive systems (puzzles, games, etc.), and one such theory that has consistently held my attention is the Four Interactive Forms devised by Keith Burgun and originally advanced in his book Game Design Theory. My own thoughts on how to classify interactive systems spring in no small part from inspiration derived from this system, but as Burgun's theory is expressly designed as a tool meant to be useful to game designers for creating better games, it does not exactly constitute a fully fleshed out and formally defined ontological theory. There is a rough ontology of sorts underlying it, but in the end it was developed only up to a point so as to render it potentially of use to those who are primarily interested in actually creating interactive systems. It is not primarily meant to be of academic interest to philosophers, and as such, trying to use the system to differentiate between any and all systems, especially weird and problematic corner cases, reveals gaps where Burgun has not explicitly provided a full descriptive account that unambiguously answers (or attempts to answer) all ontological questions.

My interest is in producing such a comprehensive descriptive ontological theory. Burgun's theory has served as a sort of springboard to propel my efforts, though by no means do I feel compelled to remain true to the details of his system. The results ought to be comparable (since we are, after all, attempting to describe the same constructs), but differences, whether merely semantic or substantial ontological differences, should be expected. Some of the terminology that we use is the same, though some of it is different.

I intend to record my thoughts on the matter on this blog. The following post is the first of what I hope will be many on the subject, and while at the moment I stand by its contents, the ideas presented here should be understood as being in an unfinished state. I am still working out certain details, some of which are mentioned below.

This article was originally posted on the Dinofarm Forums, the message board associated with Keith Burgun's game company Dinofarm Games. If you, dear readers, are interested in discussing game design theory, then please drop in there and join the discussion!

An interactive system can have goal states, fail states, or neutral states. A goal state is a system state that is prescribed a positive value. Goal states are desired. A fail state is a system state that is prescribed a negative value. Fail states are undesired. A neutral state is prescribed no value and is generally reached and left as a means of moving the system towards a goal state or fail state.

An end state is a system state that when reached causes the system's operation to terminate (e.g. soccer: the allotted play time expires, a team forfeits; Chess: a king is in checkmate, a player resigns, a draw is offered and accepted, a stalemate position is reached, a player's timer expires; Outwitters: a base is destroyed, a player has no units and has all his spawn points covered, a player gives up).

Victory conditions are the systemic conditions under which a positive value is prescribed if the system's operation should terminate.

Failure conditions are the systemic conditions under which a negative value is prescribed if the system's operation should terminate.

Given the existence of system termination (end states) and victory/failure conditions, one useful way to think about goal/fail/neutral states is this: A goal state is a system state in which victory conditions are satisfied. A fail state is a system state in which failure conditions are satisfied. A neutral state is a system state in which neither victory nor failure conditions are satisfied. Theoretically, victory conditions and failure conditions need not be mutually exclusive, but in the general case it is unclear whether a state in which both are satisfied is a goal state, a fail state, a neutral state, or some theoretical hybrid thereof. Therefore, it is strongly recommended that victory conditions and failure conditions be designed so that they are mutually exclusive (e.g. in Chess, both kings cannot be in checkmate simultaneously; in soccer, you cannot have both more points and fewer points than the opposing team). An alternative to this view is that victory conditions and failure conditions might be expressly defined as being necessarily mutually exclusive, but this might be problematic ontologically for accounting for all possible systems. Thorough philosophizing is necessary to explore this problem.

Note that with the way I have described the terms above, goal/fail states are not necessarily end states. In order to "finalize" or "seal" or "claim" (not sure what verb to use here) the positive or negative value prescribed to these system states, the system's operation must terminate while victory/failure conditions are satisfied. In other words, an end state must be reached that is also a goal/fail state. This also accounts for draws. A neutral state that is an end state yields neither positive nor negative value (e.g. Chess: stalemate; soccer: tied scores at game end; etc.).
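The definitions above can be sketched in code. This is a minimal illustration, not part of Burgun's theory or mine in any official sense; the names `Outcome`, `classify_state`, and `realized_value` are my own hypothetical choices, and the numeric values +1/-1/0 simply stand in for "positive", "negative", and "no" prescribed value.

```python
from enum import Enum

class Outcome(Enum):
    GOAL = "goal"        # victory conditions satisfied
    FAIL = "fail"        # failure conditions satisfied
    NEUTRAL = "neutral"  # neither satisfied

def classify_state(victory_satisfied: bool, failure_satisfied: bool) -> Outcome:
    """Classify a system state per the definitions above.

    Assumes victory and failure conditions are mutually exclusive, as
    recommended; raises on the hybrid case the text leaves unresolved.
    """
    if victory_satisfied and failure_satisfied:
        raise ValueError("victory and failure conditions should be mutually exclusive")
    if victory_satisfied:
        return Outcome.GOAL
    if failure_satisfied:
        return Outcome.FAIL
    return Outcome.NEUTRAL

def realized_value(state: Outcome, is_end_state: bool) -> int:
    """Value is only 'finalized' once the system terminates in the state.

    A goal state that is not an end state yields nothing yet; a neutral
    end state (a draw) yields neither positive nor negative value.
    """
    if not is_end_state:
        return 0  # nothing claimed yet; the system is still operating
    return {Outcome.GOAL: 1, Outcome.FAIL: -1, Outcome.NEUTRAL: 0}[state]
```

For example, `realized_value(Outcome.GOAL, is_end_state=False)` yields 0, reflecting that satisfying victory conditions mid-operation "seals" nothing until termination.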

Given the above potential properties of systems, it seems that we have four classes of interactive system differentiated by the properties established so far:
  1. A toy has neither goal states nor fail states (e.g. a ball).
  2. A puzzle has goal states but no fail states (e.g. a Rubik's Cube).
  3. An inverted puzzle has fail states but no goal states (e.g. Tetris).
  4. A contest has both goal states and fail states (e.g. arm wrestling contest, foot race, strategy games).
Further classification can be made based on further details about the different system states. For instance, whether goal states exist that are not also end states (i.e. whether or not the system's operation terminates immediately once victory conditions are satisfied), whether neutral end states exist at all (e.g. in Outwitters there are no neutral end states), the mutual exclusivity of different agents reaching goal/fail states (i.e. can both players win simultaneously?), etc.
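The four-way classification above reduces to a simple two-bit lookup. As a sketch (the function name and string labels are mine, purely for illustration):

```python
def classify_system(has_goal_states: bool, has_fail_states: bool) -> str:
    """Map the presence/absence of goal and fail states to the four classes."""
    if has_goal_states and has_fail_states:
        return "contest"          # e.g. Chess, a foot race
    if has_goal_states:
        return "puzzle"           # e.g. a Rubik's Cube
    if has_fail_states:
        return "inverted puzzle"  # e.g. Tetris
    return "toy"                  # e.g. a ball
```

The further distinctions mentioned above (whether goal states are also end states, whether neutral end states exist, etc.) would refine this into subclasses rather than change the four top-level categories.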

This is where we must begin to explore what an ambiguous decision is.

Ambiguous decisions:
  1. Require an agent to select one of two or more options (i.e. the system is interactive).
  2. Yield irreversible consequences with respect to achieving an unambiguously defined goal (i.e. there are both goal states and fail states, at least one of each is an end state, and the decisions made are relevant with respect to which state is reached).
  3. Are made under circumstances in which the agent's certainty with respect to the consequences is greater than 0% and less than 100% (i.e. the system is not solved given the current system state).
Solved* means that the agent knows a sequence of choices for which the end state consequences are of maximal possible positive value and are known with 100% certainty. That is, the agent knows the probability of reaching a positively valued end state given the choice, and knows that this probability is the greatest possible.

Three notes to be made on this point:
  1. Solved does not mean that the agent knows with 100% certainty the consequences of every possible choice. It means that given the current state of the system (the beginning state of the system is of special importance here) the agent knows with 100% certainty what to do to maximize the probability of reaching a positive end state at every decision point that could possibly arise given that the optimal choice is made at all times. In order to have 100% certainty of a decision, the choice the agent is considering cannot possibly lead to a subsequent decision of which the agent has less than 100% certainty. For instance, in Chess, suppose an agent knows that she will win if she plays move X and her opponent replies with any move other than move Y, but she is unsure whether she will win if her opponent plays move Y. Then the agent does not have 100% certainty, and the system is not solved. In systems involving some element of randomness, the agent must know with 100% certainty the choices necessary to maximize the probability of reaching a positive end state from every state that could be reached given the choice the agent is considering. That is, if the agent knows that a certain choice will yield a system state 99% of the time that is known to guarantee reaching a positive end state, but that 1% of the time a state will be reached in which the agent does not know the subsequent choice to make to maximize the probability of reaching a positive end state, then the agent's certainty is less than 100% for that subsequent decision, and therefore, the system is not solved.
  2. Solved is a property that constitutively depends upon the agent interacting with the system. A system is solved or not solved with respect to a particular agent or collective. A system cannot be solved in and of itself (though it might be trivially easy to solve a system). That is, a necessary condition for a system to be solved is that an agent or collective knows the solution. If agent A knows a solution, but agent B does not, then to agent A the system is solved, but to agent B it is unsolved.
  3. This version of "solved" differs from the sense in which a puzzle is solved by putting it into its goal state. In most cases, once a puzzle is put into the goal state, the agent will know the solution, such knowledge having been gained by successfully finding the way to do it. However, it is possible to "solve" a puzzle (put it into its goal state) without comprehending the solution and on subsequent attempts find that one cannot do it again. The puzzle is "solved" in the sense that it is in the goal state, but it is not solved in the sense that the agent knows the solution. Perhaps "completed" would be a better term for reaching the goal state.
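Note 1 amounts to backward induction over the game tree: a system is solved (for an agent who knows this computation's result) only if some choice guarantees the maximal value against every possible opponent reply. A minimal sketch for a deterministic two-player system follows; the tree encoding, the `guaranteed_value` name, and the toy move labels are all my own assumptions, and the example mirrors the Chess scenario in note 1, where move X can be spoiled by reply Y but an alternative move Z wins outright.

```python
# A node maps to either a terminal value (end state) or a
# (player, {move: child_node}) pair. The maximizing player is "agent".
tree = {
    "root":    ("agent",    {"X": "after_X", "Z": "after_Z"}),
    "after_X": ("opponent", {"Y": "unclear", "other": "win"}),
    "unclear": 0,   # the agent cannot guarantee a win down this branch
    "win":     1,
    "after_Z": 1,   # move Z wins regardless of the opponent
}

def guaranteed_value(node: str, tree: dict) -> int:
    """Backward induction: the value the agent can guarantee
    regardless of the opponent's replies (cf. note 1 above)."""
    entry = tree[node]
    if isinstance(entry, int):  # end state: its realized value
        return entry
    player, children = entry
    values = [guaranteed_value(child, tree) for child in children.values()]
    return max(values) if player == "agent" else min(values)
```

Here move X only guarantees a value of 0 (the opponent can play Y), while the root as a whole guarantees 1 via Z. And per note 2, computing this value does not by itself make the system solved: solvedness attaches to the agent or collective who actually knows the result.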
I now define a decision game as being a system that involves ambiguous decisions. Given the above definition of ambiguous decision, that a system is a contest is a necessary condition for a system to be a decision game. What the above points also mean for decision games is that when a decision game is solved, then for that agent, it is no longer a decision game, but rather merely an exercise in remembering the solution.

That is, given the above account...

Being a toy, puzzle, inverted puzzle, or contest is objective.

Being a decision game is subjective.

In my view, this account has ontological advantages over Burgun's "goal/competition/decisions added" approach, but as far as I can tell, they are not strictly incompatible. One is merely an extension of the other with semantically different terminology. For example, Burgun calls Tetris a toy, whereas I call it an inverted puzzle, though strictly speaking we don't appear to disagree about the objective properties of Tetris.

Some interesting side-points:
  • It is possible for a toy to have end states. Consider Terry Cavanagh's Bridge.
  • A contest, given the above definition, need not necessarily have end states, but in order for the traditional conception of a contest to function properly, end states must be present. I can't think of any examples of interactive systems with both positively and negatively valued system states that lack end states altogether.
  • The difference between puzzles that have end states and those that do not is unclear to me at the present moment. I guess a Rubik's Cube has no end states, but a Choose Your Own Adventure book does? Is the fact that you are in some sense expected to return to the beginning of the book and begin again once an ending is reached sufficient to establish that an end state is present? Does a CYOA book operate (begin, advance through time, and then end) in the same sense that a game of soccer does? Perhaps looking at it analogously with the unambiguous end states in Tetris (inverted puzzles) would clarify the matter. Maybe the puzzle of trying to kill the bad graphics ghost that appears at the end of the Strong Bad email "ghosts" unequivocally has an end state?
  • There is a reasonable line of argument that in a system with both goal states and neutral states and end states in each category, the neutral end states would necessarily constitute fail states, since arriving at one amounts to failing to reach a goal state. The question seems to be, "Is an end state with prescribed value that is less than the positive value prescribed to goal states necessarily a fail state?" The problem I keep running into is that this would suggest that in a draw, both sides have reached fail states, which is a counterintuitive conclusion. Is it possible that when goal end states and fail end states do not both exist, neutral end states cannot exist? At the present moment, I am inclined to think that neutral end states can simply coexist with goal end states, even without fail end states being present.
* Be aware that I presently have less than 100% confidence in the correctness of this formulation of the "solved" property.
