There are 8 prison cells in a row and each cell is either occupied or vacant.

Each day, whether the cell is occupied or vacant changes according to the following rules:

  • If a cell has two adjacent neighbors that are both occupied or both vacant, then the cell becomes occupied.
  • Otherwise, it becomes vacant.

Note that because the prison is a row, the first and the last cells in the row can’t have two adjacent neighbors.

You are given an integer array cells where cells[i] == 1 if the ith cell is occupied and cells[i] == 0 if the ith cell is vacant, and you are given an integer n.

Return the state of the prison after n days (i.e., n such changes described above).

Example 1:

Input: cells = [0,1,0,1,1,0,0,1], n = 7
Output: [0,0,1,1,0,0,0,0]
Explanation: The following table summarizes the state of the prison on each day:
Day 0: [0, 1, 0, 1, 1, 0, 0, 1]
Day 1: [0, 1, 1, 0, 0, 0, 0, 0]
Day 2: [0, 0, 0, 0, 1, 1, 1, 0]
Day 3: [0, 1, 1, 0, 0, 1, 0, 0]
Day 4: [0, 0, 0, 0, 0, 1, 0, 0]
Day 5: [0, 1, 1, 1, 0, 1, 0, 0]
Day 6: [0, 0, 1, 0, 1, 1, 0, 0]
Day 7: [0, 0, 1, 1, 0, 0, 0, 0]
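The day-by-day table above can be reproduced with a minimal sketch of the transition rule (the helper name `next_day` is ours, not part of the problem):

```python
def next_day(cells):
    # an interior cell becomes occupied iff its two neighbors match;
    # the first and last cells always become vacant
    return [0] + [int(cells[i - 1] == cells[i + 1])
                  for i in range(1, len(cells) - 1)] + [0]

state = [0, 1, 0, 1, 1, 0, 0, 1]  # Day 0
for day in range(7):
    state = next_day(state)
print(state)  # → [0, 0, 1, 1, 0, 0, 0, 0], the Day 7 row of the table
```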

Example 2:

Input: cells = [1,0,0,1,0,0,1,0], n = 1000000000
Output: [0,0,1,1,1,1,1,0]

Constraints:

  • cells.length == 8
  • cells[i] is either 0 or 1.
  • 1 <= n <= 10^9

Solution

Overview

First of all, one can consider this problem a simplified version of the Game of Life, invented by the British mathematician John Horton Conway in 1970.

As a simplification, this problem is played on a one-dimensional array (rather than the 2D grid of the Game of Life), and it has fewer rules.

Due to the nature of the game, one of the most intuitive ways to solve this problem is to play the game, i.e. to simply run the simulation.

Starting from the initial state of the prison cells, we could evolve the states following the rules defined in the problem step by step.

In the following sections, we present some approaches to running the simulation efficiently.


Approach 1: Simulation with Fast Forwarding

Intuition

One important observation about the Game of Life is that we are bound to encounter some already-seen state over time, simply because there is only a limited number of states.

The above observation applies to our problem as well. Given K cells, there can be at most 2^K possible states. If the number of steps is larger than the number of possible states (i.e. N > 2^K), we are destined to repeat ourselves sooner or later.

In fact, we encounter a repeated state much sooner than the theoretical bound estimated above. For instance, with the initial state [1,0,0,0,1,0,0,1], after just 15 steps we encounter a previously seen state. Once we encounter a state seen before, history repeats itself again and again, for as long as the simulation continues.

All states between two occurrences of the same state form a cycle, which repeats over time. Therefore, once we encounter any repeated state, we can fast-forward the simulation rather than going step by step.
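The repetition claim is easy to check empirically. The sketch below (all names are ours) counts how many steps pass before any state repeats, starting from the example state above; with 8 cells, a repeat is guaranteed within 2^8 = 256 steps:

```python
def next_day(cells):
    # interior cell becomes occupied iff its two neighbors match; ends become vacant
    return [0] + [int(cells[i - 1] == cells[i + 1])
                  for i in range(1, len(cells) - 1)] + [0]

def steps_until_repeat(cells):
    seen = {}
    step = 0
    while tuple(cells) not in seen:
        seen[tuple(cells)] = step
        cells = next_day(cells)
        step += 1
    # cycle length = current step minus the step at which this state was first seen
    return step, step - seen[tuple(cells)]

steps, cycle = steps_until_repeat([1, 0, 0, 0, 1, 0, 0, 1])
print(steps, cycle)  # → 15 14: a repeat after 15 steps, with a cycle of length 14
```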

Algorithm

Here is the overall idea to implement our fast-forward strategy.

  • First of all, we record the state at each step, together with the index of the current step, i.e. a mapping of state -> step_index.
  • Once we discover a repeated state, we can determine the length of the cycle (denoted as C) with the help of the hashmap we recorded.
  • Starting from this repeated state, the prison cells would play out the states within the cycle over and over, until we run out of steps.
  • In other words, if the number of remaining steps is N, we can fast-forward to the step N \mod C.
  • Then, from the step N \mod C, we continue the simulation step by step.

Note: we only need to fast-forward once, if at all.

Here are some sample implementations based on the above idea.

class Solution:
    def prisonAfterNDays(self, cells: List[int], N: int) -> List[int]:

        seen = dict()
        is_fast_forwarded = False

        while N > 0:
            # we only need to run the fast-forward once at most
            if not is_fast_forwarded:
                state_key = tuple(cells)
                if state_key in seen:
                    # the length of the cycle is seen[state_key] - N 
                    N %= seen[state_key] - N
                    is_fast_forwarded = True
                else:
                    seen[state_key] = N

            # check whether there are still steps remaining,
            # with or without the fast-forwarding.
            if N > 0:
                N -= 1
                next_day_cells = self.nextDay(cells)
                cells = next_day_cells

        return cells


    def nextDay(self, cells: List[int]):
        ret = [0]      # head
        for i in range(1, len(cells)-1):
            ret.append(int(cells[i-1] == cells[i+1]))
        ret.append(0)  # tail
        return ret
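To see the fast-forwarding pay off, the standalone sketch below (function names are ours) runs the same algorithm on Example 2, where n = 10^9 steps would be impractical to simulate one by one:

```python
def next_day(cells):
    return [0] + [int(cells[i - 1] == cells[i + 1])
                  for i in range(1, len(cells) - 1)] + [0]

def prison_after_n_days(cells, n):
    seen = {}
    fast_forwarded = False
    while n > 0:
        if not fast_forwarded:
            key = tuple(cells)
            if key in seen:
                n %= seen[key] - n  # skip ahead by whole cycles
                fast_forwarded = True
            else:
                seen[key] = n
        if n > 0:
            n -= 1
            cells = next_day(cells)
    return cells

print(prison_after_n_days([1, 0, 0, 1, 0, 0, 1, 0], 10**9))
# → [0, 0, 1, 1, 1, 1, 1, 0], matching the output of Example 2
```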

Complexity Analysis

Let K be the number of cells, and N be the number of steps.

  • Time Complexity: \mathcal{O}\big(K \cdot \min(N, 2^K)\big)
    • As we discussed before, at most we could have 2^K possible states. While we run the simulation with N steps, we might need to run \min(N, 2^K) steps without fast-forwarding in the worst case.
    • For each simulation step, it takes \mathcal{O}(K) time to process and evolve the state of cells.
    • Hence, the overall time complexity of the algorithm is \mathcal{O}\big(K \cdot \min(N, 2^K)\big).
  • Space Complexity:
    • The main memory consumption of the algorithm is the hashmap that we used to keep track of the states of the cells. The maximal number of entries in the hashmap would be 2^K as we discussed before.
    • If we encode the state as a single integer value (as the bitmap approach in the next section does), the space complexity would be \mathcal{O}(2^K), assuming that K does not exceed 32 so that a state can fit into a single integer number.
    • In the Python implementation above, we keep the states of the cells as tuples in the hashmap. As a result, each entry takes \mathcal{O}(K) space. In total, the space complexity becomes \mathcal{O}(K \cdot 2^K).

Approach 2: Simulation with Bitmap

Intuition

In the above approach, we implemented the function nextDay(state), which runs an iteration to calculate the next state of cells, given the current state.

Given that the state can be encoded as a single integer (a bitmap), a more efficient way to implement the nextDay function is to apply bit operations (e.g. AND, OR, XOR) directly to that bitmap.

The next state of a cell depends on its left and right neighbors. To align the states of its neighbors with the cell, we can shift the bitmap left and right respectively. On the shifted bitmaps, we then apply the XOR and NOT operations, which yields the next state of each cell.

Here we show how it works with a concrete example.


Note that the head and tail cells are special: they remain vacant once the simulation starts. Therefore, we reset the head and tail bits by applying the bitwise AND operation with the bitmask 01111110 (i.e. 0x7e).
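Working through Day 0 of Example 1 in this encoding (assuming the leftmost cell maps to the most significant of the 8 bits):

```python
s = 0b01011001                 # [0,1,0,1,1,0,0,1], leftmost cell = most significant bit
left = s << 1                  # aligns each cell's left neighbor with the cell
right = s >> 1                 # aligns each cell's right neighbor with the cell
nxt = ~(left ^ right) & 0x7e   # occupied iff neighbors are equal; clear head and tail bits
print(format(nxt, '08b'))      # → 01100000, i.e. [0,1,1,0,0,0,0,0], the Day 1 row
```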

Algorithm

We could reuse the bulk of the previous implementations, and simply rewrite the nextDay function with the bit operations as we discussed.

Additionally, at the end of the simulation, we should decode the states of the cells from the final bitmap.

class Solution:
    def prisonAfterNDays(self, cells: List[int], N: int) -> List[int]:

        seen = dict()
        is_fast_forwarded = False

        # step 1). convert the cells to bitmap
        state_bitmap = 0x0
        for cell in cells:
            state_bitmap <<= 1
            state_bitmap = (state_bitmap | cell)

        # step 2). run the simulation with hashmap
        while N > 0:
            if not is_fast_forwarded:
                if state_bitmap in seen:
                    # the length of the cycle is seen[state_bitmap] - N
                    N %= seen[state_bitmap] - N
                    is_fast_forwarded = True
                else:
                    seen[state_bitmap] = N
            # check whether there are still steps remaining,
            #   with or without the fast-forwarding.
            if N > 0:
                N -= 1
                state_bitmap = self.nextDay(state_bitmap)

        # step 3). convert the bitmap back to the state cells
        ret = []
        for i in range(len(cells)):
            ret.append(state_bitmap & 0x1)
            state_bitmap = state_bitmap >> 1

        return list(reversed(ret))  # reversed() yields an iterator; return a list


    def nextDay(self, state_bitmap: int):
        # a cell becomes occupied iff its two neighbors are equal, i.e. NOT(left XOR right);
        # note that ~a ^ b == ~(a ^ b), so the NOT can be applied to the shifted bitmap first
        state_bitmap = ~ (state_bitmap << 1) ^ (state_bitmap >> 1)
        state_bitmap = state_bitmap & 0x7e  # set the head and tail cells to vacant
        return state_bitmap
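As a sanity check, one can confirm that the bit-operation transition agrees with the array-based nextDay of Approach 1 on every one of the 2^8 possible states (the helper names below are ours):

```python
def next_day_array(cells):
    return [0] + [int(cells[i - 1] == cells[i + 1])
                  for i in range(1, len(cells) - 1)] + [0]

def next_day_bits(s):
    s = ~(s << 1) ^ (s >> 1)  # ~a ^ b == ~(a ^ b): occupied iff neighbors are equal
    return s & 0x7e           # head and tail become vacant

for s in range(2 ** 8):
    cells = [(s >> (7 - i)) & 1 for i in range(8)]  # decode, most significant bit first
    nxt = next_day_bits(s)
    assert [(nxt >> (7 - i)) & 1 for i in range(8)] == next_day_array(cells)
print("bit-op and array transitions agree on all 256 states")
```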

Complexity Analysis

Let K be the number of cells, and N be the number of steps.

  • Time Complexity: \mathcal{O}\big(\min(N, 2^K)\big) assuming that K does not exceed 32.
    • As we discussed before, at most we could have 2^K possible states. While we run the simulation, we need to run \min(N, 2^K) steps without fast-forwarding in the worst case.
    • For each simulation step, it takes a constant \mathcal{O}(1) time to process and evolve the states of cells, since we applied the bit operations rather than iteration.
    • Hence, the overall time complexity of the algorithm is \mathcal{O}\big(\min(N, 2^K)\big).
  • Space Complexity: \mathcal{O}(2^K)
    • The main memory consumption of the algorithm is the hashmap that we used to keep track of the states of the cells. The maximal number of entries in the hashmap would be 2^K as we discussed before.
    • This time we adopted the bitmap encoding, so that each state consumes a constant \mathcal{O}(1) space.
    • To sum up, the overall space complexity of the algorithm is \mathcal{O}(2^K).
