Saturday, September 8, 2018

Scala 2.x Implicit Classes to Extend Libraries

Target audience: Beginner
Estimated reading time: 3'

Implicit methods are very useful for defining global type conversions, provided their semantics are clearly understood. But where do implicit classes fit in this spectrum?
This article reviews the theory and practice of Scala's implicit classes, highlighting their ability to extend existing Scala standard library classes.

Notes
  • For readability, the implementation of the algorithms omits non-essential code such as error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers and imports.
  • The code associated with this article is written using Scala 2.11.8.

Overview

Implicit methods are quite useful for defining global type conversions (as long as the semantics are clearly understood). But what about implicit classes?
Implicit classes can be used to extend existing Scala standard library classes. Most Scala classes are declared final or implement a sealed trait. Composition is a viable alternative to inheritance in terms of reusability: the class to extend is wrapped into a helper or utility class. However, a helper/container class adds an indirection and pollutes the namespace.
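An implicit class offers a third option: it adds methods to an existing type without an explicit wrapper at the call site. As a refresher, here is a minimal sketch (the RichString name and toWords method are purely illustrative) that extends the final class String:

object StringExtensions {
    // Adds a toWords method to the final class String
  implicit class RichString(s: String) {
      def toWords: Seq[String] = s.split("\\s+").toSeq
  }
}

import StringExtensions._
"implicit classes extend final types".toWords  // Seq(implicit, classes, extend, final, types)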

 
Let's look at an example of extending the standard library.

Example

The use case consists of extending the Try class, scala.util.Try, with an Either semantic: execute one function if the code throws an exception, and another function if the computation succeeds. This simple design allows computation flows to be routed according to unexpected states.


The main construct is the declaration of an implicit class that wraps Try. A recovery function rec is invoked if an exception is thrown; a computation f is performed otherwise.

import scala.util._
 
implicit class Try2Either[T](_try: Try[T]) {
    
    // Applies the function f on success, the recovery function rec on failure
  def toEither[U](rec: () => U)(f: T => T): Either[U, T] = 
     _try match {
         case Success(res) => Right(f(res))
         case Failure(e) => 
             println(e.toString)
             Left(rec())  
     }
}

You may notice that the method toEither is curried, to segregate the two parameterized types T and U. It also complies with the semantics of Either, with the Left element for errors (and recovery) and the Right element dedicated to the successful outcome.

Let's take the example of the normalization of a large array of floating point values by their sum. A small sum will generate a rounding error during the normalization, so an exception is thrown. However, we do not want to burden the client code with handling the exception (the client method may not know the context of the exception, after all). In this example, the recovery function rec instantiates the class Recover, which is responsible for a retry, potentially from a different state.


Implicit classes have an important limitation in terms of reusability: you cannot override a default method without sub-classing the original Scala standard library class, which is not an option in our example because Try is a sealed abstract class.
As with implicit methods, the name of the class is never actually used in the code, but it should reflect the intention of the developer. Let's apply this implicit class:

type DVector = Array[Double]

  // Arbitrary recovery class
class Recover {
     def reTry(x: DVector): DVector = Array[Double](0.0)
}
  
  // Normalization of a vector. Proceeds to the
  // next computation (log) if the normalization succeeds
  // or invokes the recovery procedure otherwise
def normalize(x: DVector): Either[Recover, DVector] = Try {
    val sum = x.sum 

    if(sum < 0.01) 
        throw new IllegalStateException(s"sum $sum")
    x.map(_ / sum) 
}
 .toEither(() => new Recover)((v: DVector) => v.map(Math.log(_)))

The implementation is quite elegant: there is no need for a new semantics or naming convention, and the return type Either is well understood in the Scala development community.
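Here is a minimal usage sketch (the input values are arbitrary):

val rawValues = Array[Double](0.9, 1.3, 4.6)

normalize(rawValues) match {
   case Right(logValues) => println(logValues.mkString(", "))  // log of the normalized values
   case Left(recover) => recover.reTry(rawValues)              // sum too small: retry
}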

Thank you for reading this article. For more information ...


Monday, July 9, 2018

Speed up Fibonacci Sequence with Tail Recursion

Target audience: Intermediate
Estimated reading time: 5'

The Fibonacci sequence is characterized by each number being the summation of its two immediate predecessors. In this article, we delve into an examination and comparison of tail recursion's efficiency in Scala, specifically when applied to the Fibonacci sequence.



There are many ways to skin a cat and implement the Fibonacci recursion. This post illustrates the power of tail recursion in Scala relative to alternative solutions such as recursion without tail elimination and iteration. It compares the performance of the simple, out-of-the-box implementation of the Fibonacci recursion with a Scala implementation using tail recursion.

Note: This post relies on Scala 2.11.8


Non-tail recursive Fibonacci

The Fibonacci algorithm is defined as
   f(n) = f(n-1) + f(n-2)
   f(0) = 0
   f(1) = 1
Let's look at the textbook recursive implementation.

 
def fibonacci(n: Int): Long = {
    if(n == 0) 0
    else if(n == 1) 1
    else fibonacci(n-1) + fibonacci(n-2)
}

The duration of execution is recorded for values between 2 and 50, each run 10,000 times.
The performance profile of the non-tail-recursive Fibonacci should not be a surprise. After all, the time complexity of this implementation is a staggering O(2^n).
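For reference, here is a sketch of a simple timing harness; the time helper below is illustrative, not the exact benchmark used to generate the charts.

def time(iterations: Int)(f: => Long): Double = {
    val start = System.nanoTime
    (0 until iterations).foreach(_ => f)
    (System.nanoTime - start) * 1e-6 / iterations   // average duration in milliseconds
}

(2 to 50).foreach(n => println(s"fibonacci($n): ${time(10000)(fibonacci(n))} ms"))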


Tail recursion

Let's look at an implementation of the Fibonacci formula using Scala tail recursion.

import scala.annotation.tailrec

def fibonacci(n: Int): Long = {
   
   @tailrec
   def _fibonacci(i: Int, next: Long, sum: Long): Long = 
       if(i == 0) sum
       else _fibonacci(i-1, sum + next, next)

   _fibonacci(n, 1, 0)
}

The performance of the tail recursion is measured against a plain-vanilla iterative execution. The duration of execution is recorded for values between 5 and 200, each run 100,000 times.

 
def fibonacci(n: Int): Long = {
    var s1 = 0L
    var s2 = 1L
 
    (n-1 to 0 by -1).foreach(
       _ => {
           val tmp = s2 + s1
           s1 = s2
           s2 = tmp    
      }
   )
   
   s1
}

The following chart illustrates that the performance of the implementation of Fibonacci using the tail recursion and the iterative execution is almost identical.

Iterative computation

For good measure, I thought it would be interesting to measure the performance of an iterative approach using a fold.

def fibonacci(n: Int): Long = 
 
  (n-1 to 0 by -1)./:((0, 1)) {  
      case ((s1, s2), n) => { 
          val tmp = s2 + s1
          (s2, tmp) 
      }
   }._1

Here is the performance profile for the iterative computation.
Note: The fold (/:) can be implemented as a closure using a recursive formula for the function argument.

Conclusion

Scala tail recursion is indeed as effective as the iterative implementation at processing the Fibonacci formula. The performance of the recursion without tail elimination is extremely poor. Aggregators or reducers such as fold, reduce and scan add some overhead to the computation although the performance is still decent.

Thank you for reading this article. For more information ...




---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Thursday, May 24, 2018

Streaming Moving Averages with Apache Spark

Target audience: Intermediate
Estimated reading time: 5'

Spark Streaming offers an abstraction called Discretized Stream, or DStream, which signifies a continuous flow of data. This can either stem from an initial source or emerge as a result of transforming the input data. Essentially, a DStream is characterized by a consistent sequence of Resilient Distributed Datasets (RDD) [1].


Important notes
  • This post describes the streaming features as implemented in Apache Spark 2.0.x 
  • DStreams should not be confused with structured streaming [2].

Use case

The creation of models through supervised learning from a given training set usually requires the application of pre-processing algorithms such as smoothing, filtering or extrapolating existing data for missing values.

Computing the moving average of a time series is a simple and convenient technique used to filter out noise and reduce the impact of outliers on trend lines. For this post, we consider the simplest form of moving average over a period of p values, as defined in the following formula \[\tilde{x}_{t}= \frac{1}{p}\sum_{j=t-p+1}^{t}x_{j}\] 
The iterative (or recursive) form is defined as \[\tilde{x}_{t}=\tilde{x}_{t-1}+\frac{1}{p}(x_{t}-x_{t-p})\] 
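Before introducing streams, here is a minimal sketch of the iterative formula applied to an in-memory sequence (the movingAverage name and its signature are illustrative):

def movingAverage(x: Seq[Double], p: Int): Seq[Double] = {
    require(p > 0 && p <= x.size, s"Period $p is out of range")

      // Average over the first p values, then the iterative update
      // avg(t) = avg(t-1) + (x(t) - x(t-p))/p for each subsequent value
    val first = x.take(p).sum / p
    x.indices.drop(p).scanLeft(first)((prev, t) => prev + (x(t) - x(t - p)) / p)
}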

Spark DStreams

This section refers to discretized streams that were introduced in Spark 1.x. Structured streaming will be discussed in a future post.
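As an appetizer, here is a minimal, hedged sketch of a sliding moving average over a DStream with the Spark 2.0.x API; the socket source, host, port, batch interval and window durations are illustrative assumptions.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf().setAppName("MovingAverage").setMaster("local[2]")
val ssc = new StreamingContext(conf, Seconds(2))

  // One floating point value per line, received on an arbitrary socket
val values = ssc.socketTextStream("localhost", 9999).map(_.toDouble)

  // Average over a sliding window of 8 seconds, recomputed every 2 seconds
val movingAvg = values.window(Seconds(8), Seconds(2))
    .map(x => (x, 1L))
    .reduce { case ((sum1, count1), (sum2, count2)) => (sum1 + sum2, count1 + count2) }
    .map { case (sum, count) => sum / count }

movingAvg.print()
ssc.start()
ssc.awaitTermination()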



Thank you for reading this article. For more information ...

References



---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Monday, April 23, 2018

Overload Operators in C++ & Scala

Target audience: Beginner
Estimated reading time: 4'

Operator overloading allows a developer to reinterpret existing operators by giving them a distinct meaning for specific types, while preserving their original function. It is an example of compile-time polymorphism.
While C++ offers the capability to assign unique meanings to operators for specific data types, how does its approach compare with Scala's operator overloading?



Overview

In object-oriented programming, operator overloading is a specific case of polymorphism, where different operators may have different implementations depending on their arguments. The main benefit of operator overloading is to let the developer program using notation relevant to the domain (e.g., summation of vectors), giving user-defined types a level of syntactic support similar to built-in types.

Both C++ and Scala support the redefinition of operators.
There is however a distinction between the two languages: in Scala, operators are simply methods whose names the programmer chooses freely, including symbolic names, while in C++ only a predefined set of operators can be overloaded.
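For instance, a Scala developer may introduce an entirely new symbolic method, which has no C++ equivalent since C++ limits overloading to its predefined operators. A minimal sketch (the Amount class and the |+| name are illustrative):

final class Amount(val value: Double) {
     // |+| is an ordinary method: Scala places no restriction on symbolic names
   def |+|(that: Amount): Amount = new Amount(value + that.value)
}

val total = new Amount(5.0) |+| new Amount(7.5)   // total.value == 12.5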

Notes
  • For readability, the implementation of the algorithms omits non-essential code such as error checking, comments, exceptions, validation of class and method arguments, scoping qualifiers and imports.
  • The code snippets in this article use Scala 2.11.8.

C++ operator overloading

In C++, operator overloading instructs the compiler to perform a certain operation when its corresponding operator is used on one or more variables. However, the scope resolution (::), member access (.), member pointer access (.*) and ternary conditional (?:) operators cannot be overloaded.

#include <iostream>

class Point {
private:
   int x;
   int y;
public:
      // Constructor
   Point(int x = 0, int y = 0) : x(x), y(y) { }

      // Overloading the += operator
   Point& operator+=(const Point& p) { 
       x += p.x;
       y += p.y;
       return (*this);
    }
  
      // Overloading the + operator
   Point operator+(const Point& p) const {
       Point result;
       result.x = x + p.x;
       result.y = y + p.y;
       return result; 
   }

      // Overloading the stream insertion operator
   friend std::ostream& operator<<(std::ostream& out, const Point& p) {
        out << "(" << p.x << "," << p.y << ")";
        return out;
   }
};


int main() {
    Point p1(4, 5), p2(5, 11);
    std::cout << "p1=" << p1 << " p1 + p2=" << p1 + p2;
    return 0;
}

Scala overloading by proxy

Scala does not technically have operator overloading because it does not have operators in the usual sense. Operators are simply method names defined by the programmer: Scala treats all operators as methods, and thus allows operator overloading by proxy.

final class Point(val x: Int, val y: Int) {
 
   // Binary operator
   def +(p : Point) : Point = {
       require(p != null, "cannot add a null point")
       new Point(x + p.x, y + p.y)
    }
    
    override def toString : String = s"($x,$y)"           
}
 
object Overload extends App {
    val p1 = new Point(2, 0)
    val p2 = new Point(4, -2)
       
    Console.println(s"p1=$p1 and p1+p2=${(p1 + p2)}")
}

The class Point is immutable, as the coordinates x and y are declared as values. It is highly recommended to create immutable objects and containers because immutability reduces the complexity of the object state lifecycle and avoids the need for complex and error-prone concurrent updates of the state of an object (race conditions).
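For completeness, prefix operators map onto methods named with the reserved unary_ prefix. For instance, the Point class above could define a unary minus (a short, illustrative sketch):

   def unary_- : Point = new Point(-x, -y)   // enables the prefix notation -p1

Console.println(s"-p1=${-p1}")   // -p1=(-2,0)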

Thank you for reading this article. For more information ...

References

  • Scala for the Impatient - C. Horstmann - Addison-Wesley, 2012
  • The Annotated C++ Reference Manual - M. Ellis, B. Stroustrup - Addison-Wesley, 1991
  • github.com/patnicolas

---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3

Friday, March 16, 2018

Fisher-Yates Shuffle in Scala

Target audience: Intermediate
Estimated reading time: 4'


This post describes the Fisher-Yates algorithm to randomly shuffle a sequence of objects.

Note: The code associated with this article is written using Scala 2.12.4

Overview

The generation of a sequence of randomly ordered elements from an existing sequence is a common problem. How can you be sure that the new sequence of items is actually randomly generated?
The Fisher-Yates shuffle algorithm (a.k.a. Knuth shuffle) produces unbiased permutations: every permutation is equally likely.
One important benefit of Fisher-Yates is its ability to shuffle the elements of the sequence in place.


Note:
There are several variants of the shuffling algorithm. This post describes the implementation of the most common Fisher-Yates algorithm.


Fisher-Yates

Here is the Fisher-Yates algorithm in a nutshell. Let's consider an array arr of n elements:
For each array index k from n-1 down to 1
    Select a random index m between 0 and k
    Swap arr[m] and arr[k]
import scala.collection.mutable
import scala.util.Random.{setSeed, nextInt}

def fisherYates[T](t: mutable.Seq[T]): Seq[T] = {

   setSeed(System.currentTimeMillis)
  
   (0 until t.size).foreach (n => {
       // Select a random index in the remaining slice [n, t.size)
       val randomIdx = n + nextInt(t.size - n)
       // Swap the randomly selected element with the current one
       val tmp = t(randomIdx)
       t.update(randomIdx, t(n))
       t.update(n, tmp)
   })
   t
} 

The Fisher-Yates algorithm executes as an iteration with the following two steps:
  • Select a random element in the portion of the sequence not yet processed
  • Swap the randomly selected element with the current element

Fast Fisher-Yates implementation

The performance of the Fisher-Yates algorithm can be improved by implementing the swap of the randomly selected item with the current item in place, using bit manipulation operators.

import scala.collection.mutable
import scala.util.Random.{setSeed, nextInt}

def fisherYates2(seq: mutable.Seq[Int]): IndexedSeq[Int] = {
  require(seq.nonEmpty, "Undefined argument")

  setSeed(System.currentTimeMillis)
  (0 until seq.size).map(i => {
     val randomIdx = i + nextInt(seq.size - i)
     // XOR-based in-place swap; the guard is required because
     // x ^= x would zero the element when the two indices are equal
     if(randomIdx != i) {
        seq(randomIdx) ^= seq(i)
        seq(i) = seq(randomIdx) ^ seq(i) 
        seq(randomIdx) ^= seq(i)
     }
     seq(i)
  })
}

The following code snippet tests the two implementations of the Fisher-Yates shuffling algorithm.
  val input = Array[Char](
    'a', 't', '4', '&', '9', '2', 'z', 'y', 'r', '8', '*', '?'
  )
  println(fisherYates[Char](input).mkString(","))
  // t,&,*,2,a,r,4,z,?,y,8,9

  println(fisherYates2(Array.tabulate(34)(n => n)).mkString(","))
  // A random permutation of the integers 0 to 33
Note: The Scala standard library also provides a shuffling method, Random.shuffle.
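For comparison, a one-line usage sketch of the built-in alternative, applied to the same input as above (the output is, of course, random):

  import scala.util.Random
  println(Random.shuffle(input.toSeq).mkString(","))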

Thank you for reading this article. For more information ...



---------------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design and end-to-end deployment and support with extensive knowledge in machine learning. 
He has been director of data engineering at Aideo Technologies since 2017 and he is the author of "Scala for Machine Learning" Packt Publishing ISBN 978-1-78712-238-3