Wednesday, December 11, 2019

Variance Annotation in Scala 2.x

Target audience: Intermediate
Estimated reading time: 3'

The purpose of this post is to introduce the basics of parameterized classes, upper and lower bounds, and variance sub-typing in Scala.

Note: The code associated with this article is written using Scala 2.12.4

Overview

Scala is a statically typed language, like Java and C++. Most of these concepts were already introduced in Java through generics and type-parameter wildcards. However, Scala adds useful refinements to sub-typing, namely covariance and contra-variance. API designers and library developers face the dilemma of choosing between parameterized types and abstract types, and of deciding which type parameters should be invariant, covariant or contra-variant.


Sub-classing

As in Java, generic (parameterized) types were introduced to overcome the limitations of polymorphism. Moreover, understanding the concepts behind the Scala type system helps developers decipher and fix the occasional compilation failure related to typing.
Let's consider a simple collection class, Container, that adds and retrieves arrays of items of type Item as follows.

class Item
case class SubItem() extends Item
 
class Container {
  // Adds every element of the array to the container
  def add(items: Array[Item]): Unit = items.foreach(add(_))

  // Fills the array with items retrieved from the container
  def retrieve(items: Array[Item]): Unit =
    Range(0, items.size).foreach(items.update(_, retrieve))

  def add(item: Item): Unit = {}
  def retrieve: Item = new SubItem
}

The client code can invoke the add and retrieve methods with an array of elements of type Item. However, passing an array of SubItem, a class that inherits from Item, generates a compilation error:

val subItemsContainer = new Container
val array = Array[SubItem](new SubItem())
 
// Compile error:
// "type mismatch;  found   : Array[SubItem]
//  required: Array[Item]
//  Note: SubItem <: Item, but class Array is
//  invariant in type T..."
subItemsContainer.add(array)
subItemsContainer.retrieve(Array[SubItem](new SubItem))


The Scala compiler throws an error because the class Container has no information relating the type Item to its subtype SubItem when add is invoked with an Array[SubItem]. The invocation of the retrieve method with an argument of type Array[SubItem] fails for the same reason.
Scala provides developers with upper and lower type bounds for sub-typing, which are quite similar to the bounded wildcards available in Java.

Variance to the Rescue

A type parameter T upper bounded by the type Item, written T <: Item, can be viewed as a 'producer' of the type Item: the notation tells the Scala compiler that any subtype of Item may be used as the element type of the array passed to the add method.
Similarly, a type parameter U with lower bound SubItem, written
   U >: SubItem
can be viewed as a 'consumer' of the type SubItem in the retrieve method.

class Container[T <: Item] {
  def add(items: Array[T]): Unit = items.foreach(add(_))

  def retrieve[U >: SubItem](items: Array[U]): Unit =
    Range(0, items.size).foreach(items.update(_, retrieve))

  def add(item: T): Unit = {}

  // The no-argument retrieve returns a SubItem, which can be stored in
  // any Array[U] as long as U is a super type of SubItem
  def retrieve: SubItem = new SubItem
}

Container[T] behaves covariantly with respect to Item for the add methods: any subtype of Item is accepted. The method retrieve[U] behaves contra-variantly with respect to SubItem: any supertype of SubItem is accepted.
This time around the compiler has all the necessary information about the subtypes of Item and the supertypes of SubItem, so the following code snippet won't generate a compilation error:

val itemsContainer = new Container[SubItem]
itemsContainer.add(Array[SubItem](new SubItem))
itemsContainer.retrieve[Item](Array[Item](new SubItem))


Scala has dedicated annotations for invariance (T), covariance (+T) and contra-variance (-T) on class and trait type parameters. Using the covariance annotation, the container class can be expressed as follows. Note that a covariant type parameter cannot appear as the type of a method argument, so the add methods now rely on a lower-bounded type parameter U >: T.

class Container[+T] {
  // A covariant type parameter cannot appear as a method argument type,
  // so the add methods introduce a lower-bounded type parameter U >: T
  def add[U >: T](items: Array[U]): Unit = items.foreach(add(_))

  def retrieve[U >: SubItem](items: Array[U]): Unit =
    Range(0, items.size).foreach(items.update(_, retrieve))

  def add[U >: T](item: U): Unit = {}

  def retrieve: SubItem = new SubItem
}

In a nutshell, the variance rules can be summarized as follows (see the snippet below for an illustration with standard library types):
Covariance: if B is a subtype of A, then Container[B] is a subtype of Container[A].
Contra-variance: if B is a subtype of A, then Container[A] is a subtype of Container[B].
Invariance: no sub-typing relation exists between Container[A] and Container[B].
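
The snippet below is a minimal illustration of the three rules, using the Item and SubItem classes defined earlier together with types from the standard library (List is covariant, Function1 is contra-variant in its argument, Array is invariant):

// Covariance: List[+A], so a List[SubItem] can be used as a List[Item]
val items: List[Item] = List[SubItem](SubItem())

// Contra-variance: Function1[-T, +R], so a function accepting any Item
// can be used where a function accepting only SubItem is expected
val describeItem: Item => String = (i: Item) => i.toString
val describeSubItem: SubItem => String = describeItem

// Invariance: Array[T] is invariant, so the assignment below does not compile
// val itemArray: Array[Item] = Array[SubItem](SubItem())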
 

The sub-typing rules above can be represented graphically 


Thank you for reading this article. For more information ...

---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Sunday, October 20, 2019

Contextual Thompson Sampling

Target audience: Advanced
Estimated reading time: 5'

In this article, we delve into the notion of the multi-armed bandit, also known as the K-armed bandit, with a particular focus on Thompson sampling as applied to the contextual bandit.


What you will learn: How to add contextual information as a prior to the Thompson sampling algorithm.

Multi-armed bandit problem

The multi-armed bandit problem is a well-known reinforcement learning setting, widely used because of its simplicity. Let's consider a slot machine with n arms (bandits), with each arm having its own biased probability distribution of success. In its simplest form, pulling any one of the arms gives you a reward of either 1 for success or 0 for failure.

Our objective is to pull the arms one-by-one in sequence such that we maximize our total reward collected in the long run. 
Data scientists have developed several solutions to tackle the multi-armed bandit problem for which the 3 most common algorithms are:
  • Epsilon-Greedy
  • Upper confidence bounds
  • Thompson sampling

Epsilon-greedy

The Epsilon-greedy algorithm stands as the most straightforward method for navigating the exploration-exploitation dilemma. In the exploitation phase, it consistently chooses the lever with the highest recorded payoff. Nevertheless, occasionally (with a frequency of ε, where ε is typically less than 0.1), it opts for a random lever to 'explore', which allows for the investigation of levers whose payouts are not well known. Levers with established or confirmed high payouts are selected the majority of the time, specifically (1-ε) of the instances.
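
As a rough sketch (my own illustration, not code from the original post), an ε-greedy selection over empirical mean rewards could look like this in Scala:

import scala.util.Random

// Minimal epsilon-greedy arm selection over empirical mean rewards.
// counts and sums accumulate the pulls and rewards observed so far.
class EpsilonGreedy(numArms: Int, epsilon: Double = 0.1) {
  private val counts = Array.fill(numArms)(0)
  private val sums = Array.fill(numArms)(0.0)
  private val rand = new Random

  def selectArm: Int =
    if (rand.nextDouble() < epsilon)
      rand.nextInt(numArms)                    // explore: random lever
    else
      (0 until numArms).maxBy(arm =>           // exploit: best mean so far
        if (counts(arm) == 0) 0.0 else sums(arm) / counts(arm))

  def update(arm: Int, reward: Double): Unit = {
    counts(arm) += 1
    sums(arm) += reward
  }
}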

Upper Confidence Bound (UCB)

The Upper Confidence Bound (UCB) algorithm is arguably the most popular approach to solving the multi-armed bandit problem. It is occasionally described by the principle of 'Optimism in the Face of Uncertainty': It operates under the assumption that the uncertain average rewards of each option will be on the higher end, according to past outcomes.
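
For reference, the classic UCB1 variant (a common formulation, shown here as my own sketch rather than the author's implementation) adds an exploration bonus to the empirical mean of each arm:

// UCB1: pick the arm with the highest empirical mean reward plus the
// exploration bonus sqrt(2 ln t / n). Untried arms are played first.
def selectUcb1(counts: Array[Int], sums: Array[Double], t: Int): Int = {
  val untried = counts.indexWhere(_ == 0)
  if (untried >= 0) untried
  else counts.indices.maxBy { arm =>
    sums(arm) / counts(arm) + math.sqrt(2.0 * math.log(t) / counts(arm))
  }
}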

Thompson sampling

The Thompson sampling algorithm is essentially a Bayesian optimization method. Its central tenet, known as the probability matching strategy, can be distilled to the concept of 'selecting a lever based on its likelihood of being the optimal choice.' This means the frequency of choosing a specific lever should correspond to its actual chances of being the best lever available.
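
For Bernoulli rewards, probability matching is typically implemented with one Beta distribution per arm. The sketch below (my own, dependency-free illustration) samples Beta(a, b) as the a-th order statistic of a + b - 1 uniform draws, which is equivalent for integer parameters:

import scala.util.Random

// Thompson sampling for Bernoulli rewards with a Beta(1, 1) prior per arm
class BernoulliThompson(numArms: Int) {
  private val successes = Array.fill(numArms)(0)
  private val failures = Array.fill(numArms)(0)
  private val rand = new Random

  // Beta(a, b) sample for integer a, b via the order-statistic identity
  private def sampleBeta(a: Int, b: Int): Double =
    Array.fill(a + b - 1)(rand.nextDouble()).sorted.apply(a - 1)

  // Play the arm whose sampled success probability is the highest
  def selectArm: Int =
    (0 until numArms).maxBy(arm => sampleBeta(successes(arm) + 1, failures(arm) + 1))

  def update(arm: Int, reward: Int): Unit =
    if (reward == 1) successes(arm) += 1 else failures(arm) += 1
}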

The following diagram illustrates the difference between the upper confidence bound approach and Thompson sampling.


Fig. 1 Confidence Bounds for Multi-Armed bandit problem

Context anyone?

The methods discussed previously do not presume any specifics about the environment or the agent that is choosing the options. There are two scenarios:
  • Context-Free Bandit: In this scenario, the choice of the option with the highest reward is based entirely on historical data, which includes prior outcomes and the record of rewards (both successes and failures). This data can be represented using a probability distribution like the Bernoulli distribution. For example, the chance of rolling a '6' on a die is not influenced by who is rolling the die.
  • Contextual Bandit: Here, the current state of the environment, also known as the context, plays a role in the decision-making process for selecting the option with the highest expected reward. The algorithm takes into account a new context (state), makes a choice (selects an arm), and then observes the result (reward). For instance, in online advertising, the choice of which ad banner to display on a website is influenced by the historical reward data linked to the user's demographic information.
All of the strategies suited to the multi-armed bandit problem can be adapted for use with or without context. The following sections will concentrate on the problem of the contextual multi-armed bandit.

Contextual Thompson sampling (CTS)

Let's dive into the key characteristics of Thompson sampling:
  • We assume a prior distribution on the parameters of the (unknown) reward distribution for each arm.
  • At each step t, the arm is selected according to its posterior probability of being the optimal arm.
The components of contextual Thompson sampling are:
1. A model of parameters w
2. A prior distribution p(w) over the model parameters
3. A history H consisting of contexts x and rewards r
4. The likelihood p(r|x, w) of a reward given a context x and parameters w
5. The posterior distribution computed with Bayes' rule \[p\left( \tilde{w} | H\right ) = \frac{p\left( H | \tilde{w} \right ) p(\tilde{w})}{p(H)}\]

But how can we model a context?

The process of selecting the most rewarding arm is, in fact, a predictor, or learner. Each predictor takes the context, defined as a vector of features, and predicts which arm will generate the highest reward.
The predictor is a model that can be defined as
- Linear model
- Generalized linear model (e.g. logistic regression)
- Neural network

The algorithm is implemented as a linear model (with weights w) that estimates the reward from a context x as \[w^{T}x\]
The ultimate objective of any reinforcement learning model is to extract a policy, which quantifies the behavior of the agent. Given a set X of contexts x and a set A of arms, the policy π is defined by the selection of an arm given a context x
\[\pi : X\rightarrow A\]
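In code, this policy abstraction reduces to a simple mapping from context to arm (a hypothetical trait, shown only to fix ideas):

// Hypothetical abstraction of a policy: a context is mapped to an arm
trait Policy[Context, Arm] {
  def selectArm(context: Context): Arm
}
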
Variables initialization \[V_{0} = \lambda I_{d}, \quad \hat{w}_{0}=0, \quad b_{0}=0\]
Iteration
FOR t = 1, ... T
\[\begin{aligned} &1:\quad \tilde{w}_{t} \sim \mathcal{N}\left(\hat{w}_{t},\ v^{2}\,V_{t}^{-1}\right) \\ &2:\quad a_{t}^{*} = \arg \max_{j}\ x_{j,t}^{T}\,\hat{w}_{t} \\ &3:\quad a_{t} = \arg \max_{j}\ x_{j,t}^{T}\,\tilde{w}_{t} \\ &4:\quad \text{observe the reward } r_{t} \\ &5:\quad V_{t+1} = V_{t} + x_{a_{t},t}\,x_{a_{t},t}^{T} \\ &6:\quad b_{t+1} = b_{t} + x_{a_{t},t}\,r_{t} \\ &7:\quad \hat{w}_{t+1} = V_{t+1}^{-1}\,b_{t+1} \\ &8:\quad R_{t} = x_{a_{t}^{*},t}^{T}\,\tilde{w}_{t} - x_{a_{t},t}^{T}\,\tilde{w}_{t} \end{aligned}\]
The sampling of the normal distribution (line 1) is described in detail in the post Multivariate Normal Sampling with Scala. The algorithm identifies the arm with the highest estimated reward under the current weight estimate (line 2) and plays the arm that maximizes the reward under the sampled weights (line 3).
The parameters V and b are updated with the selected context and the observed reward (lines 5 and 6), the weight estimate is recomputed (line 7), and the regret Rt is computed from the difference between the two arms (line 8).
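
The Scala sketch below is my own compact rendition of steps 1 to 7 above (it is not the implementation from the original post, and it assumes the breeze library is available for the linear algebra):

import breeze.linalg.{DenseMatrix, DenseVector, cholesky, inv}
import scala.util.Random

// One round of linear contextual Thompson sampling. contexts(j) is the
// feature vector x(j,t) of arm j at time t; v scales the exploration.
class LinearThompsonSampler(dim: Int, lambda: Double = 1.0, v: Double = 1.0) {
  private var V = DenseMatrix.eye[Double](dim) * lambda   // V(0) = lambda.I
  private var b = DenseVector.zeros[Double](dim)          // b(0) = 0
  private val rand = new Random

  def step(contexts: Array[DenseVector[Double]], observe: Int => Double): Int = {
    val wHat = inv(V) * b                                 // step 7 of the previous round
    val z = DenseVector(Array.fill(dim)(rand.nextGaussian()))
    val wTilde = wHat + cholesky(inv(V) * (v * v)) * z    // step 1: sample w~ from N(w^, v2.V-1)
    val arm = contexts.indices.maxBy(j => contexts(j) dot wTilde)   // step 3: arm to play
    val x = contexts(arm)
    val r = observe(arm)                                  // step 4: observed reward
    V = V + x * x.t                                       // step 5
    b = b + x * r                                         // step 6
    arm
  }
}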

Thank you for reading this article. For more information ...


---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3