Why is my Minimax not expanding and making the right moves?

I am implementing minimax in Python 2.7.11 for the Pacman game. Pacman is the maximizing agent, and one or more ghosts (depending on the test layout) are the minimizing agent(s).

I have to implement minimax so that there can potentially be more than one minimizing agent, and so that it can build a tree of n plies (depth). For example, in ply 1 every ghost takes a turn, minimizing the terminal-state utilities of its possible moves, while Pacman takes his turn, maximizing what the ghosts have already minimized. Graphically, ply 1 looks like this:

[Figure: a ply-1 minimax tree]

If we had the following arbitrary utilities assigned to the green terminal states (from left to right):

-10, 5, 8, 4, -4, 20, -7, 17

Pacman needs to return -4 and then move in that direction, creating a completely new minimax tree based on that decision.
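
To make the one-ply rule concrete: every ghost minimizes over the terminal utilities beneath its moves, and Pacman then maximizes over those minimums. A toy sketch of that rule with a single ghost and made-up values (deliberately not the tree or the numbers from the figure above):

def one_ply_value(groups):
    # groups: one list of terminal utilities per Pacman move; the ghost minimizes
    # within each group, then Pacman maximizes over the group minimums
    return max(min(group) for group in groups)

# For example, one_ply_value([[3, -2], [7, 1]]) evaluates to 1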

First, here is a list of the variables and functions needed to make sense of my implementation:

# Stores everything about the current state of the game
gameState

# A globally defined depth that varies depending on the test cases.
#     It could be as little as 1 or arbitrarily large
self.depth

# A locally defined depth that keeps track of how many plies deep I've gone in the tree
self.myDepth

# A function that assigns a numeric value as a utility for the current state
#     How this is calculated is moot
self.evaluationFunction(gameState)

# Returns a list of legal actions for an agent
#     agentIndex = 0 means Pacman, ghosts are >= 1
gameState.getLegalActions(agentIndex)

# Returns the successor game state after an agent takes an action
gameState.generateSuccessor(agentIndex, action)

# Returns the total number of agents in the game
gameState.getNumAgents()

# Returns whether or not the game state is a winning (terminal) state
gameState.isWin()

# Returns whether or not the game state is a losing (terminal) state
gameState.isLose()
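
As I understand it, getLegalActions and generateSuccessor are paired with the same agentIndex: an action returned for one agent is only valid when generating that agent's successor. A minimal usage sketch for a single agent (not part of my solver, just the intended pattern):

# Expand one agent's successors, keeping agentIndex consistent across both calls
agentIndex = 0  # 0 is Pacman; ghosts are 1, 2, ...
for action in gameState.getLegalActions(agentIndex):
    successor = gameState.generateSuccessor(agentIndex, action)
    # each successor can then be scored, e.g. with self.evaluationFunction(successor)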

This is my implementation:

""" 
getAction takes a gameState and returns the optimal move for pacman,
assuming that the ghosts are optimal at minimizing his possibilities
"""
def getAction(self, gameState):
    self.myDepth = 0

    def miniMax(gameState):
        if gameState.isWin() or gameState.isLose() or self.myDepth == self.depth:
            return self.evaluationFunction(gameState)

        numAgents = gameState.getNumAgents()
        for i in range(0, numAgents, 1):
            legalMoves = gameState.getLegalActions(i)
            successors = [gameState.generateSuccessor(j, legalMoves[j]) for j, move 
                                                           in enumerate(legalMoves)]
            for successor in successors:
                if i == 0:
                    return maxValue(successor, i)
                else:
                    return minValue(successor, i)

    def minValue(gameState, agentIndex):
        minUtility = float('inf')
        legalMoves = gameState.getLegalActions(agentIndex)
        succesors = [gameState.generateSuccessor(i, legalMoves[i]) for i, move 
                                                      in enumerate(legalMoves)]
        for successor in successors:
            minUtility = min(minUtility, miniMax(successor))

        return minUtility

    def maxValue(gameState, agentIndex):
        self.myDepth += 1
        maxUtility = float('-inf')
        legalMoves = gameState.getLegalActions(agentIndex)
        successors = [gameState.generateSuccessor(i, legalMoves[i]) for i, move
                                                       in enumerate(legalMoves)]
        for successor in successors:
            maxUtility = max(maxUtility, miniMax(successor))

        return maxUtility

    return miniMax(gameState)

Does anyone have any idea why my code is behaving this way? I am hoping a few minimax / artificial-intelligence experts out there can spot my problems. Thanks in advance.

UPDATE: by initializing self.myDepth to 0 instead of 1, I fixed the exception that was being thrown. However, the general incorrectness of my implementation still remains.

1 Answer

The main problem is how depth is tracked. Instead of incrementing it inside maxValue, pass it down as a parameter and increase it only when the recursion re-enters maxValue. There are other problems as well, such as numAgents not being used to cycle through the agents, and the fact that miniMax never returns a move. Here is a version that works:

def getAction(self, gameState):

    self.numAgents = gameState.getNumAgents()
    self.myDepth = 0
    self.action = Directions.STOP # Imported from a class that defines 5 directions

    def miniMax(gameState, index, depth, action):
        maxU = float('-inf')
        legalMoves = gameState.getLegalActions(index)
        for move in legalMoves:
            tempU = maxU
            successor = gameState.generateSuccessor(index, move)
            maxU = minValue(successor, index + 1, depth)
            if maxU > tempU:
                action = move
        return action

    def maxValue(gameState, index, depth):
        if gameState.isWin() or gameState.isLose() or depth == self.depth:
            return self.evaluationFunction(gameState)

        index %= (self.numAgents - 1)
        maxU = float('-inf')
        legalMoves = gameState.getLegalActions(index)
        for move in legalMoves:
            successor = gameState.generateSuccessor(index, move)
            maxU = max(maxU, minValue(successor, index + 1, depth))
        return maxU

    def minValue(gameState, index, depth):
        if gameState.isWin() or gameState.isLose() or depth == self.depth:
            return self.evaluationFunction(gameState)

        minU = float('inf')
        legalMoves = gameState.getLegalActions(index)
        if index + 1 == self.numAgents:
            for move in legalMoves:
                successor = gameState.generateSuccessor(index, move)
                # Where depth is increased
                minU = min(minU, maxValue(successor, index, depth + 1))
        else:
            for move in legalMoves:
                successor = gameState.generateSuccessor(index, move)
                minU = min(minU, minValue(successor, index + 1, depth))
        return minU

    return miniMax(gameState, self.index, self.myDepth, self.action)
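
The key point is that the ply depth travels down the recursion as a parameter and only increases after every agent has moved, instead of living in self.myDepth. For comparison, the same bookkeeping can be written as a single helper method on the agent; this is only a sketch using the same GameState API, and minimaxValue is a hypothetical name rather than part of the framework:

def minimaxValue(self, state, agentIndex, depth):
    # Terminal test: win, lose, or the requested number of plies has been searched
    if state.isWin() or state.isLose() or depth == self.depth:
        return self.evaluationFunction(state)

    nextAgent = (agentIndex + 1) % state.getNumAgents()
    # The ply depth only increases once every agent has taken a turn
    nextDepth = depth + 1 if nextAgent == 0 else depth

    values = [self.minimaxValue(state.generateSuccessor(agentIndex, move), nextAgent, nextDepth)
              for move in state.getLegalActions(agentIndex)]
    # Pacman (agent 0) maximizes; every ghost minimizes
    return max(values) if agentIndex == 0 else min(values)

getAction would then pick the legal Pacman move whose successor scores highest under minimaxValue(successor, 1, 0), which mirrors what the miniMax wrapper above does with its action variable.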
