SSP HowTo

Synopsis

This code implements the Stochastic Shortest Path (SSP) solver proposed in [1] and is based on the Focused Real Time Dynamic Programming algorithm of [2]. The program can be used to obtain the results of [1]. In addition, the code can be easily modified (see instructions below) to solve any other SSP problem. The code uses the Boost C++ Libraries, which are required to successfully compile it.

How to use the code

The code can be used "as is" to solve the problem presented in [1] for random network topologies. To this end, compile the code using the command:

$ g++ -I BOOST_PATH main.cpp -o main

and then execute it with:

$ ./main NUMBER_OF_NODES EPSILON DELTA MAX_TRANS MAX_X MAX_Y ALPHA H_UPPER FILENAME N_SAMPLE

where:

  NUMBER_OF_NODES  total number of nodes in the network;
  EPSILON          tunes the quality of the solution;
  DELTA            tunable parameter used to prune the state space (see [1]);
  MAX_TRANS        maximum number of nodes allowed to cooperate by simultaneously transmitting the packet in a time slot;
  MAX_X, MAX_Y     size of the area where the nodes are randomly placed;
  ALPHA            controls the tradeoff between energy consumption and delay;
  FILENAME         name of the file where the results are printed;
  N_SAMPLE         number of random network topologies considered in the optimization: the optimal policy is found for each topology, and the corresponding performance metrics are averaged over the N_SAMPLE topologies to obtain the desired statistics.
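
For example, a run with 10 nodes placed in a 100 x 100 area, allowing at most 3 cooperating transmitters and averaging over 50 random topologies, could be launched as follows (all values below, including the H_UPPER setting, are purely illustrative):

$ ./main 10 0.01 0.001 3 100 100 0.5 100 results.txt 50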

How to modify the code to solve general SSP problems

The code can be easily modified to solve any SSP problem. The necessary steps are listed below (a consolidated code sketch of these pieces follows the list):

  1. Define the struct STATE to reflect the state of the Markov system. For example, in the current implementation the state is represented by a bitset, where each bit corresponds to a node of the network: a bit is 1 if the corresponding node has correctly decoded the packet, and 0 otherwise. Thus, the optimal action, the next state with maximum priority, and the lists of next states and possible actions for each system transition are also represented through bitsets.

  2. Define how to compare two STATE structures through the function:

    int mycmp(void *litem, void *ritem)

    which is used to retrieve an element from the hash table.

  3. Define the transition probability function for your Markov decision process through the following function:

    double trans_prob(CURRENT, NEXT, ACTION)

    The function returns the probability of going from state CURRENT to state NEXT when action ACTION is taken.

  4. Define the cost associated with the transition of the previous step:

    double cost(CURRENT, NEXT, ACTION)

    The function returns the cost of going from state CURRENT to state NEXT when action ACTION is taken.

  5. Define how to generate the list of next states and the list of possible actions for any given state. This is done through the function:

    void prune(STATE *s)

  6. Define the final (absorbing) state through the function:

    bool isFinal(STATE *s)
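
As a reference, the skeleton below sketches how these six pieces could fit together for a toy problem with the same bitset-based state representation. Only the names STATE, mycmp, trans_prob, cost, prune and isFinal come from the steps above; the member names, the MAX_NODES constant, the decoding probability p and the tradeoff weight alpha are illustrative placeholders, and the actual implementation is more elaborate.

    #include <bitset>
    #include <vector>

    const int MAX_NODES = 16;                       // illustrative network size

    // Step 1: the Markov state. One bit per node: 1 if the node has correctly
    // decoded the packet, 0 otherwise. The candidate actions and the pruned
    // list of successor states are stored inside the state itself.
    struct STATE {
        std::bitset<MAX_NODES> decoded;                  // nodes holding the packet
        std::vector< std::bitset<MAX_NODES> > actions;   // sets of cooperating transmitters
        std::vector<STATE *> next;                       // pruned successor states
    };

    // Step 2: comparison used by the hash table to retrieve a stored state.
    // Returns 0 when the two items represent the same state.
    int mycmp(void *litem, void *ritem)
    {
        STATE *l = (STATE *) litem;
        STATE *r = (STATE *) ritem;
        return (l->decoded == r->decoded) ? 0 : 1;
    }

    // Step 3: transition probability P(NEXT | CURRENT, ACTION). In this toy
    // model every node that has not yet decoded the packet succeeds
    // independently with probability p (a placeholder channel model that
    // ignores the transmitter set in ACTION, unlike the model of [1]);
    // nodes that already hold the packet keep it.
    double trans_prob(STATE *cur, STATE *nxt, std::bitset<MAX_NODES> action)
    {
        const double p = 0.8;                       // illustrative decoding probability
        double prob = 1.0;
        for (int i = 0; i < MAX_NODES; i++) {
            if (cur->decoded[i]) {
                if (!nxt->decoded[i]) return 0.0;   // decoded nodes never lose the packet
                continue;
            }
            prob *= nxt->decoded[i] ? p : (1.0 - p);
        }
        return prob;
    }

    // Step 4: cost of the same transition, e.g. a weighted sum of transmission
    // energy (number of cooperating transmitters) and delay (one time slot),
    // mimicking the energy/delay tradeoff controlled by ALPHA.
    double cost(STATE *cur, STATE *nxt, std::bitset<MAX_NODES> action)
    {
        const double alpha = 0.5;                   // illustrative tradeoff weight
        return alpha * (double) action.count() + (1.0 - alpha);
    }

    // Step 5: generate (and prune) the lists of candidate actions and successor
    // states for s. Here, trivially, the only action is "all decoded nodes
    // transmit"; a real implementation enumerates and prunes many more.
    void prune(STATE *s)
    {
        s->actions.clear();
        s->actions.push_back(s->decoded);
    }

    // Step 6: the absorbing state, e.g. the destination (last node) has decoded.
    bool isFinal(STATE *s)
    {
        return s->decoded[MAX_NODES - 1];
    }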

References

[1] Michele Rossi, Cristiano Tapparello and Stefano Tomasin, On Optimal Cooperator Selection Policies for Multi-Hop Ad Hoc Networks, IEEE Transactions on Wireless Communications, Vol. 10, No. 2, February 2011, pp. 506-518.

[2] T. Smith and R. G. Simmons, Focused real-time dynamic programming for MDPs: squeezing more out of a heuristic, in Proc. National Conference on Artificial Intelligence, Boston, MA, July 2006.
