Tuesday, 2 August 2011

How to write asynchronously executing code in Java

Performance-critical systems should execute side-line operations, such as database inserts for audit logging, asynchronously wherever possible.

1) Worker Threads: The main thread should continue with whatever it is doing, and the task of making the database entry (or any other time-consuming operation) should be left to a separate worker thread.

2) Blocking Queue: The operation of inserting records into the database should be added to a queue or pipe from which worker threads can pick it up. A blocking queue prevents the worker threads from continuously polling the queue for new tasks: if the queue is empty, the worker threads go into a wait state (see the sketch after this list).

3) Thread Pool of Worker Threads: For better efficiency, a thread pool should be used, because it saves the cost of repeatedly creating and destroying worker threads.
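As a rough sketch of points 2 and 3 combined (the class name, queue contents and printed message are only illustrative placeholders), a worker thread can block on a java.util.concurrent.BlockingQueue while the main thread simply enqueues the work and moves on:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AuditLogWorker implements Runnable {

    private final BlockingQueue<String> queue;

    public AuditLogWorker(BlockingQueue<String> queue) {
        this.queue = queue;
    }

    public void run() {
        try {
            while (true) {
                // take() blocks while the queue is empty, so the worker
                // waits instead of continuously polling for new tasks
                String auditSql = queue.take();
                // placeholder for the actual database insert
                System.out.println("Inserting audit record: " + auditSql);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<String>();
        new Thread(new AuditLogWorker(queue)).start();
        // the main thread only enqueues the task and moves on
        queue.offer("INSERT INTO audit_log ...");
    }
}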

Java's concurrency API provides the ExecutorService interface, which represents an asynchronous execution mechanism capable of executing tasks in the background.



ExecutorService executorService = Executors.newFixedThreadPool(10);

executorService.submit(new Runnable() {
    public void run() {
        // the time-consuming operation, e.g. the audit-log insert
        licDao.executeQuery("SQL query");
    }
});

To test this code, you can make the worker thread sleep for some time and see if the main thread has moved on.
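Here is a minimal, self-contained sketch of such a test (the class name, sleep times and messages are illustrative): the submitted task sleeps to simulate a slow database insert, while the main thread prints immediately and moves on.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class AsyncTest {

    public static void main(String[] args) throws InterruptedException {
        ExecutorService executorService = Executors.newFixedThreadPool(10);

        executorService.submit(new Runnable() {
            public void run() {
                try {
                    // simulate the slow database insert
                    Thread.sleep(3000);
                    System.out.println("Worker: slow operation finished");
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });

        // this should print immediately, without waiting for the worker
        System.out.println("Main: task submitted, moving on");

        executorService.shutdown();
        executorService.awaitTermination(5, TimeUnit.SECONDS);
    }
}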

Thursday, 28 July 2011

Multithreading - Understanding Thread Safety in Java

Thread Safe: Code that is safe to call from multiple threads simultaneously is called thread safe.


Local Primitive Variables: Local variables are stored on each thread's own stack. That means local variables are never shared between threads, and all local primitive variables are therefore thread safe. Here is an example of a thread safe local primitive variable:

public void someMethod(){
      long threadSafeInt = 0;
      threadSafeInt++;
}

Local Object References: All objects are stored on the shared heap. If an object created locally never escapes the method it was created in, it is thread safe. In fact, you can also pass it on to other methods and objects, as long as none of those methods or objects make the passed object available to other threads.
Thread safe local object:

public void someMethod(){
      LocalObject localObject = new LocalObject();
      localObject.callMethod();
      method2(localObject);
}

public void method2(LocalObject localObject){
      localObject.setValue("value");
}
The LocalObject instance in this example is not returned from the method, nor is it passed to any other objects that are accessible from outside the someMethod() method. Each thread executing someMethod() will create its own LocalObject instance and assign it to the localObject reference. Therefore the use of the LocalObject here is thread safe. In fact, the whole someMethod() is thread safe. Even if the LocalObject instance is passed as a parameter to other methods in the same class, or in other classes, its use is thread safe. The only exception is, of course, if one of the methods called with the LocalObject as a parameter stores the LocalObject instance in a way that allows access to it from other threads.


Object Members: Object members are stored on the heap along with the object. Therefore, if two threads call a method on the same object instance and this method updates object members, the method is not thread safe. Here is an example of a method that is not thread safe:

public class NotThreadSafe{

     StringBuilder builder = new StringBuilder();

     public void add(String text){
          this.builder.append(text);
     }
}
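One common way to make such a method thread safe (shown here only as a minimal sketch, not the only option) is to synchronize access to the shared member so that only one thread at a time can append:

public class ThreadSafeAppender {

    private final StringBuilder builder = new StringBuilder();

    // synchronized ensures that only one thread at a time
    // appends to the shared StringBuilder
    public synchronized void add(String text) {
        this.builder.append(text);
    }
}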
Even if the use of an object is thread safe, if that object points to a shared resource like a file or database, your application as a whole may not be thread safe. For instance, if thread 1 and thread 2 each create their own database connections, connection 1 and connection 2, the use of each connection itself is thread safe. But the use of the database the connections point to may not be thread safe. For example, if both threads execute code like this:
check if record X exists
if not, insert record X

If two threads execute this simultaneously, and the record X they are checking for happens to be the same record, there is a risk that both of the threads end up inserting it. This is how:

Thread 1 checks if record X exists. Result = no
Thread 2 checks if record X exists. Result = no
Thread 1 inserts record X
Thread 2 inserts record X

This could also happen with threads operating on files or other shared resources. Therefore it is important to distinguish between whether an object controlled by a thread is the resource, or if it merely references the resource.

Immutable objects are thread safe. But the reference to an immutable object may not be thread safe; i.e., a class that has an immutable object as a member may not itself be thread safe.
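A small illustrative sketch (both class names are made up): ImmutableValue itself is thread safe because its state never changes, but Calculator, which holds a reference to it, is not, because two threads can interleave the read-and-replace of that reference and lose an update.

public final class ImmutableValue {

    private final int value;

    public ImmutableValue(int value) {
        this.value = value;
    }

    public int getValue() {
        return value;
    }

    // returns a new instance instead of modifying this one
    public ImmutableValue add(int amount) {
        return new ImmutableValue(this.value + amount);
    }
}

class Calculator {

    // the ImmutableValue itself is thread safe, but this mutable shared reference is not
    private ImmutableValue currentValue = new ImmutableValue(0);

    // not thread safe: two threads can read the same currentValue,
    // and one of the additions can be lost
    public void add(int amount) {
        this.currentValue = this.currentValue.add(amount);
    }
}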


Thursday, 14 July 2011

Multithreading concepts

Race Condition: The situation where two threads compete for the same resource, and the sequence in which the resource is accessed is significant, is called a race condition. A code section that can lead to race conditions is called a critical section.

Mutex: Mutual exclusion (often abbreviated to mutex) algorithms are used in concurrent programming to avoid the simultaneous use of unshareable resources by pieces of code called critical sections.
 
Critical section: In concurrent programming, a critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution.

Semaphore: A synchronization tool.
A semaphore S is an integer variable that, apart from initialization, is accessed only through two standard atomic operations: wait() and signal().


Definition of wait():

wait(S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}

Definition of signal():

signal(S) {
    S++;
}
 
Mutual Exclusion Implementation by Semaphores:
 
do {
    wait(mutex);
        // critical section
    signal(mutex);
        // remainder section
} while (TRUE);

Semaphores can be counting or binary.
The value of a counting semaphore can range over an unrestricted domain.

The value of a binary semaphore can range only between 0 and 1.
Binary semaphores are also known as mutex locks, as they are locks that provide mutual exclusion.
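In Java the same idea is available as java.util.concurrent.Semaphore. A minimal sketch of mutual exclusion with a binary semaphore (the shared counter is only an illustrative critical section):

import java.util.concurrent.Semaphore;

public class SemaphoreMutexExample {

    // a binary semaphore (1 permit) used as a mutex lock
    private static final Semaphore mutex = new Semaphore(1);
    private static int sharedCounter = 0;

    public static void increment() throws InterruptedException {
        mutex.acquire();        // wait(mutex)
        try {
            sharedCounter++;    // critical section
        } finally {
            mutex.release();    // signal(mutex)
        }
    }
}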

Producer-Consumer Problem:
In the Producer-consumer problem, one process (the producer) generates data items and another process (the consumer) receives and uses them. They communicate using a queue of maximum size N. Obviously the consumer has to wait for the producer to produce something if the queue is empty. Perhaps more subtly, the producer has to wait for the consumer to consume something if the buffer is full.


The problem is easily solved if we model the queue as a series of boxes which are either empty or full, and regard empty boxes as one type of resource and full boxes as another type of resource. The producer "removes" an empty box and then "creates" a full one, whilst the consumer does the reverse.
Given that emptyCount and fullCount are counting semaphores, and emptyCount is initially N whilst fullCount is initially 0, the producer does the following repeatedly:
produce:


P(emptyCount)
putItemIntoQueue(item)
V(fullCount)

The consumer does the following repeatedly:
consume:

P(fullCount)
item ← getItemFromQueue()
V(emptyCount)

It is important to note that the order of operations is essential. For example, if the producer places the item in the queue after incrementing fullCount, the consumer may obtain the item before it has been written. If the producer places the item in the queue before decrementing emptyCount, the producer might exceed the size limit of the queue.
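A rough Java sketch of this scheme using java.util.concurrent.Semaphore (the class name, item type and capacity are illustrative assumptions): acquire() plays the role of P, release() plays the role of V, and an extra binary semaphore protects the queue itself:

import java.util.LinkedList;
import java.util.Queue;
import java.util.concurrent.Semaphore;

public class BoundedBuffer {

    private static final int N = 10;

    private final Queue<String> queue = new LinkedList<String>();
    private final Semaphore emptyCount = new Semaphore(N); // free slots
    private final Semaphore fullCount = new Semaphore(0);  // filled slots
    private final Semaphore mutex = new Semaphore(1);      // protects the queue itself

    public void produce(String item) throws InterruptedException {
        emptyCount.acquire();          // P(emptyCount)
        mutex.acquire();
        queue.add(item);               // putItemIntoQueue(item)
        mutex.release();
        fullCount.release();           // V(fullCount)
    }

    public String consume() throws InterruptedException {
        fullCount.acquire();           // P(fullCount)
        mutex.acquire();
        String item = queue.poll();    // getItemFromQueue()
        mutex.release();
        emptyCount.release();          // V(emptyCount)
        return item;
    }
}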


Monitor : A monitor is like a building that contains one special room that can be occupied by only one thread at a time. The room usually contains some data. From the time a thread enters this room to the time it leaves, it has exclusive access to any data in the room. Entering the monitor building is called "entering the monitor." Entering the special room inside the building is called "acquiring the monitor." Occupying the room is called "owning the monitor," and leaving the room is called "releasing the monitor." Leaving the entire building is called "exiting the monitor."

Java's monitor supports two kinds of thread synchronization: mutual exclusion and cooperation. Mutual exclusion, which is supported in the Java virtual machine via object locks, enables multiple threads to independently work on shared data without interfering with each other. Cooperation, which is supported in the Java virtual machine via the wait and notify methods of class Object, enables threads to work together towards a common goal.

The form of monitor used by the Java virtual machine is called a "Wait and Notify" monitor. (It is also sometimes called a "Signal and Continue" monitor.) In this kind of monitor, a thread that currently owns the monitor can suspend itself inside the monitor by executing a wait command. When a thread executes a wait, it releases the monitor and enters a wait set. The thread will stay suspended in the wait set until some time after another thread executes a notify command inside the monitor. When a thread executes a notify, it continues to own the monitor until it releases the monitor of its own accord, either by executing a wait or by completing the monitor region. After the notifying thread has released the monitor, the waiting thread will be resurrected and will reacquire the monitor.


The kind of monitor used in the Java virtual machine is sometimes called a Signal and Continue monitor because after a thread does a notify (the signal) it retains ownership of the monitor and continues executing the monitor region (the continue). At some later time, the notifying thread releases the monitor and a waiting thread is resurrected. Presumably, the waiting thread suspended itself because the data protected by the monitor wasn't in a state that would allow the thread to continue doing useful work. Also, the notifying thread presumably executed the notify command after it had placed the data protected by the monitor into the state desired by the waiting thread. But because the notifying thread continued, it may have altered the state after the notify such that the waiting thread still can't do useful work. Alternatively, a third thread may have acquired the monitor after the notifying thread released it but before the waiting thread acquired it, and the third thread may have changed the state of the protected data. As a result, a notify must often be considered by waiting threads merely as a hint that the desired state may exist. Each time a waiting thread is resurrected, it may need to check the state again to determine whether it can move forward and do useful work. If it finds the data still isn't in the desired state, the thread could execute another wait or give up and exit the monitor.
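This is why wait() in Java is normally called inside a loop that re-checks the condition after every wake-up. A minimal sketch (the ready flag is just an illustrative condition):

public class Signal {

    private final Object monitor = new Object();
    private boolean ready = false;

    public void doWait() throws InterruptedException {
        synchronized (monitor) {
            // re-check the condition every time the thread is woken up,
            // because a notify is only a hint that the state may have changed
            while (!ready) {
                monitor.wait();
            }
            ready = false;
        }
    }

    public void doNotify() {
        synchronized (monitor) {
            ready = true;
            monitor.notifyAll();
        }
    }
}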



Blocking Queue: Suppose we have one producer thread and one or more consumer threads. The producer thread gets a data object, obtains exclusive access to the queue, enqueues the data object, and then sleeps for 100 milliseconds. The consumer thread loops, getting exclusive access to the queue and checking whether there are any objects to dequeue. If there are, the objects are dequeued and processed. If there are none, the thread sleeps for 100 milliseconds and then tries again.


The downside to this is that if there is a lot of time between the enqueueing of objects, the consumer thread will spend a lot of CPU time just checking whether there is anything to do. It would be more efficient if the consumer thread were blocked from executing until there was an object in the queue.

This is the purpose of a blocking queue. With this type of queue, the thread that calls the dequeue method is blocked until there is an object in the queue.
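Java provides this behaviour out of the box in java.util.concurrent. A minimal sketch with an ArrayBlockingQueue (capacity, sleep time and the item are illustrative), where take() blocks the consumer until the producer puts something in:

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class BlockingQueueExample {

    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> queue = new ArrayBlockingQueue<String>(10);

        Thread consumer = new Thread(new Runnable() {
            public void run() {
                try {
                    // take() blocks until an element is available - no polling loop needed
                    String item = queue.take();
                    System.out.println("Consumed: " + item);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        consumer.start();

        Thread.sleep(1000);        // simulate a slow producer
        queue.put("data");         // unblocks the waiting consumer
        consumer.join();
    }
}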


Future: A Future represents the result of an asynchronous computation. Methods are provided to check if the computation is complete, to wait for its completion, and to retrieve the result of the computation.
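A short illustrative sketch with ExecutorService and Callable (the task and values are made up):

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class FutureExample {

    public static void main(String[] args) throws InterruptedException, ExecutionException {
        ExecutorService executor = Executors.newSingleThreadExecutor();

        // submit() returns a Future immediately; the computation runs in the background
        Future<Integer> future = executor.submit(new Callable<Integer>() {
            public Integer call() throws Exception {
                Thread.sleep(1000);   // simulate a slow computation
                return 42;
            }
        });

        System.out.println("Done yet? " + future.isDone()); // most likely false
        Integer result = future.get();                      // blocks until the result is ready
        System.out.println("Result: " + result);

        executor.shutdown();
    }
}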

Tuesday, 5 July 2011

What is cglib?

cglib is a powerful, high-performance code generation library. It is used to extend Java classes and implement interfaces at runtime.

To create a working example, we'll need to open a regular Java project and add two jars as dependencies (the latest versions available at the time of writing):

- cglib-2.2.jar
- asm-all-3.2.jar


We will 'proxify' the mock Executable class and add transaction management to it.

public class Executable {

    public void executeMe() {
        System.out.println("execution begins");
        // real time operation here
        System.out.println("execution ends");
    }
}

We'll create a class that adds the 'transactions'. This class will be used by CGLIB to proxify the Executable class, so it should implement net.sf.cglib.proxy.MethodInterceptor.

The class looks like this:
import java.lang.reflect.Method;

import net.sf.cglib.proxy.MethodInterceptor;
import net.sf.cglib.proxy.MethodProxy;

public class MyInterceptor implements MethodInterceptor {

    // the real object
    private Object realObj;

    // constructor - the supplied parameter is an
    // object whose proxy we would like to create
    public MyInterceptor(Object obj) {
        this.realObj = obj;
    }

    // this method will be called each time
    // the object proxy calls any of its methods
    public Object intercept(Object o,
                            Method method,
                            Object[] objects,
                            MethodProxy methodProxy) throws Throwable {
        // just print that we're about to execute the method
        System.out.println("Before");
        // Begin Transaction
        System.out.println("transaction begins");
        // invoke the method on the real object with the given params
        Object res = method.invoke(realObj, objects);
        // print that the method is finished
        System.out.println("After");
        // Commit Transaction
        System.out.println("Transaction commit");
        // return the result
        return res;
    }
}

The last class is the main class. Here we actually create the proxy, so this is where we'll see some CGLIB-related code:


import net.sf.cglib.proxy.Enhancer;

public class Main {

    public static void main(String[] args) {
        // 1. create the 'real' object
        Executable exe = new Executable();
        // 2. create the proxy
        Executable proxifiedExecutable = createProxy(exe);
        // 3. execute the proxy - as we see, it has the same API as the real object
        proxifiedExecutable.executeMe();
    }

    // given the obj, creates its proxy
    // the method is generified - just to avoid downcasting...
    @SuppressWarnings("unchecked")
    public static <T> T createProxy(T obj) {
        // this is the main cglib api entry-point
        // this object will 'enhance' (in CGLIB terms) the real class with new capabilities
        // one can treat this class as a 'Builder' for the dynamic proxy
        Enhancer e = new Enhancer();
        // the proxy class will extend the real class
        e.setSuperclass(obj.getClass());
        // we have to declare the interceptor - the class whose 'intercept'
        // will be called when any method of the proxified object is called
        e.setCallback(new MyInterceptor(obj));
        // now the enhancer is configured and we'll create the proxified object
        T proxifiedObj = (T) e.create();
        // the object is ready to be used - return it
        return proxifiedObj;
    }
}
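If everything is wired correctly, running Main should print the interceptor's "Before" and "transaction begins" lines, then the Executable's own "execution begins" and "execution ends" output, and finally "After" and "Transaction commit", showing that every call made through the proxy passes through intercept().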