Detecting an Odd Restless Markov Arm with a Trembling Hand
In this paper, we consider a multiarmed bandit in which each arm is a Markov process evolving on a finite state space. The state space is common across the arms, and the arms are independent of each other. The transition probability matrix of one of the arms (the odd arm) is different from the common transition probability matrix of all the other arms. A decision maker, who knows these transition probability matrices, wishes to identify the odd arm as quickly as possible, while keeping the probability of decision error small. To do so, the decision maker collects observations from the arms by pulling the arms in a sequential manner, one at each discrete time instant. However, the decision maker has a trembling hand, and the arm that is actually pulled at any given time differs, with a small probability, from the one he intended to pull. The observation at any given time is the arm that is actually pulled and its current state. The Markov processes of the unobserved arms continue to evolve. This makes the arms restless. For the above setting, we derive the first known asymptotic lower bound on the expected stopping time, where the asymptotics is of vanishing error probability. The continued evolution of each arm adds a new dimension to the problem, leading to a family of Markov decision problems (MDPs) on a countable state space. We then stitch together certain parameterised solutions to these MDPs and obtain a sequence of strategies whose expected stopping times come arbitrarily close to the lower bound in the regime of vanishing error probability. Prior works dealt with independent and identically distributed (across time) arms and rested Markov arms, whereas our work deals with restless Markov arms.
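The model described above can be sketched in a short simulation. The sketch below is illustrative and not from the paper: the arm count, state space, transition matrices, and the trembling probability `eta` are assumed values, and the trembling hand is modelled as pulling a uniformly random arm with probability `eta`. Note that every arm takes a Markov step at every time instant, whether observed or not, which is what makes the arms restless.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed illustrative parameters (not from the paper):
# K arms, S states per arm, trembling probability eta.
K, S, eta = 3, 2, 0.1
odd_arm = 0

P_common = np.array([[0.9, 0.1],
                     [0.2, 0.8]])  # common transition matrix of the typical arms
P_odd = np.array([[0.5, 0.5],
                  [0.5, 0.5]])     # transition matrix of the odd arm

P = [P_odd if k == odd_arm else P_common for k in range(K)]
states = rng.integers(0, S, size=K)  # hidden current state of every arm

def step(intended):
    """One time instant: the hand may tremble, the pulled arm and its
    current state are observed, then ALL arms evolve (restlessness)."""
    global states
    # With probability eta the hand trembles and a uniformly random arm
    # (one simple modelling choice) is pulled instead of the intended one.
    actual = int(intended if rng.random() > eta else rng.integers(K))
    observed_state = int(states[actual])
    # Every arm takes one Markov step, observed or not.
    states = np.array([rng.choice(S, p=P[k][states[k]]) for k in range(K)])
    return actual, observed_state

# Round-robin intended pulls for a few steps; each observation is
# (arm actually pulled, its state at that instant).
obs = [step(t % K) for t in range(12)]
```

A decision rule would feed these `(actual arm, state)` observations into a sequential test; the point of the sketch is only the observation model, in which the intended arm, the pulled arm, and the unobserved evolution of the other arms are all distinct ingredients.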