Wednesday, December 14, 2011

MongoDB : Remote Access - Part 2

As stated in the previous post on MongoDB, once installed, you can verify the installation by opening a shell and typing mongo. If everything went well during installation, it will by default connect to the database test (which is obviously not one you created), like below.


MongoDB shell version: 1.4.3
Wed Nov 23 14:31:29 ***
connecting to: test
>


Before moving on to remote access, it is better to know a few very basic commands. Type show dbs to list the existing databases. To switch to a database, existing or new, type use dbname; this does not create the database at that exact moment but on the fly, once you first store data in it. In MongoDB a table is called a collection. It is not exactly the same as a table in an RDBMS, but for clarity consider it so. To view all the collections in the current database type show collections, and to view all the data inside a collection type db.collectionname.find(). Note that once you have selected a database, every query on a collection has to go through the database reference, which is why queries always start with db. followed by the collection name. You can find the SQL to MongoDB mapping page here.
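To make these commands concrete, here is a rough sketch of a shell session (the database name mydb and collection name users are made-up examples, and the ObjectId value is a placeholder):

```
> show dbs
admin
local
test
> use mydb
switched to db mydb
> db.users.save({ "username" : "bob" })
> show collections
system.indexes
users
> db.users.find()
{ "_id" : ObjectId("..."), "username" : "bob" }
```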


Now let's move on to setting up a remote connection to a MongoDB server. It is better to have two terminals: one to start the server and listen for incoming requests, and a second to execute commands locally and verify that remote calls have worked (optional).

To start server

1. Create the data directory structure under the root
sudo mkdir -p /data/db
2. Grant your user permission to it
sudo chown `id -u` /data/db

3. Run the mongo server to listen for incoming connections
mongod
You will notice that the server starts and says it is listening on its port, as indicated in the image below.
 
But if the result is like below,

then do the following.

4. Find mongodb PID & kill it.
ps -eF | grep 'mongo\|PID'

 
5. As you can see in the first shell image, I executed this command and obtained the PID 1143. The next step is to kill that process.
sudo kill PID_VALUE
Re-run the mongodb server: mongod

This is necessary because, if MongoDB was installed using sudo apt-get install, it is registered as a service and runs automatically each time the machine reboots. Therefore, before starting the server yourself, the already running process has to be killed. The server will then start properly and keep listening for incoming connections. To connect to a remote server, simply type mongo remoteIPaddress after starting the mongodb server on both sides. To check MongoDB's execution status, use sudo status mongodb
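Putting steps 4 and 5 together, the whole recovery sequence looks roughly like this (the PID 1143 and the IP address are example values from my run; substitute your own):

```
$ ps -eF | grep 'mongo\|PID'      # find the PID of the running mongod
$ sudo kill 1143                  # kill the already running instance
$ mongod                          # re-run the server yourself
# ... waiting for connections on port 27017
$ mongo 192.168.1.10              # from the other machine: connect remotely
```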


From now on, there are plenty of resources available to continue with MongoDB. Go ahead and enjoy the power of Mongo.

Monday, December 12, 2011

MongoDB | Document Oriented, No SQL open source database - Part 1

OBJECTS. It is all about dealing with objects: storing them, retrieving them, updating them, plus encoding, efficient indexing, replica management, etc. A typical object can have its own as well as inherited feature sets, which can be represented as key-value pairs like below.

{
    "username" : "bob",
    "address" : {
        "street" : "123 Main Street",
        "city" : "Springfield",
        "state" : "NY"
    }
}
 
Above is an example of a simple nested JSON object. When remote method invocation or inter-process communication happens, sending data in this kind of format can be very convenient. And if a secure, storage-efficient, document-oriented query language exists, the receiver does not need to parse the data and store it back in a SQL database. That is where MongoDB fits in, with its BSON format. It is a cross-language database system, with more features yet to come. BSON stores JSON objects as binary objects, which reduces size and increases indexing and retrieval performance.
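As a small illustration of how well this format travels between processes, the document above round-trips through a plain string with no tag overhead (a Node.js sketch; MongoDB itself would store the binary BSON form rather than this text form):

```javascript
// The nested user document from the example above.
const doc = {
  username: "bob",
  address: { street: "123 Main Street", city: "Springfield", state: "NY" }
};

const wire = JSON.stringify(doc); // serialized form sent between processes
const back = JSON.parse(wire);    // the receiver reconstructs the same object

console.log(back.address.city);   // prints: Springfield
```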

In this post, a simple working scenario for successfully installing MongoDB on a Linux-based operating system (specifically Ubuntu 10.10) will be discussed.

Installation

1. Add the MongoDB repository to Ubuntu (assuming the installed Ubuntu version is 10.10)
Add the line below anywhere in /etc/apt/sources.list:
deb http://downloads.mongodb.org/distros/ubuntu 10.10 10gen


2. Import the PGP key and update the package lists
We need to add MongoDB's public key so that apt trusts the repository:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv 7F0CEB10
Then run sudo apt-get update to refresh your repository information.


3. Run sudo apt-get install mongodb-stable to install MongoDB on your Ubuntu.
If that didn't work, use just sudo apt-get install mongodb instead of the -stable package.
Test by running the command mongo; you should get the MongoDB shell version banner.

Now we have to configure MongoDB with the PHP driver so that you can interact with MongoDB programmatically via a server-side scripting language.


4. Configure the MongoDB PHP driver
Before configuring the MongoDB PHP driver, you first need build-essential, php5-dev and php-pear.
To install those:
sudo apt-get install build-essential php5-dev php-pear

**Remember to tick the first two update repositories under Software Updates/Updates in the Synaptic package manager. Otherwise the system will not download all the needed packages and the later steps will fail.


5. Then install the PECL driver for Mongo (for connecting from PHP)
sudo pecl install mongo


6. Add the Mongo extension to php.ini
At the end of the file, add the line extension=mongo.so to /etc/php5/apache2/php.ini.


7. Restart apache by
sudo service apache2 restart


8. Check with phpinfo() to see whether the MongoDB extension is already installed.

In the next post, getting remote accessibility by starting the MongoDB server and checking it out via a remote MongoDB terminal will be discussed.


Good Luck :)
 

Friday, October 28, 2011

J2ME, Symbian C++, QT or Android :: Why & Where?

"Once upon a time there was a language called J2ME". You would not be surprised to hear this a few years from now. But it should not be forgotten that each of these languages has its own context or domain, which can vary according to the needs of the customer.

Why & Where...

A well-known truth about J2ME is that it is a sandbox language. That is, it always needs to stand on top of another language stack (the OS), and hence J2ME is always restricted from accessing some kernel-level functionality of limited devices because of this third-party behavior. For instance, one cannot write an app that auto-starts at device boot-up without some other party's interaction, like the push registry. Of course, using the push registry one can write an app that will auto-start, but only via a timer, SMS or HTTP-like interaction signaled by an outsider. Therefore, auto-start at boot-up without any interaction is impossible in J2ME. Another scenario: accessing the device keypad for locking and unlocking purposes is also impossible, as J2ME is not allowed to access the locking API of the device. Actually, J2ME does not even have such a locking API. One more thing to notice is that every mobile device has a settings programme which is responsible for managing installed applications. In the J2ME context you can never write an app to take control of this settings app, which is silently handled as a device kernel-level programme.

As mentioned, the primary reason for this is that J2ME is a third-party language pre-configured and installed on top of another core language stack of the device. For example, simply consider a Nokia mobile phone which claims to support J2ME. The fact is that Nokia has its own language stack as the core of the device, called Symbian. Symbian OS has its own rich APIs to interact directly with the hardware level and other core functionality of the device. S60 and Symbian^3 are examples of such Symbian OS platforms, equipped with SDKs that let developers build apps. Both Carbide.c++ and the later added framework QT C++ can be used for writing applications for Symbian OS. The scenarios mentioned above as impossible in J2ME can be achieved via these two implementation frameworks.

But as J2ME hides internal complexities from the developer, for developing some typical enterprise-level apps J2ME is apparently more efficient and easier than using Carbide or QT. Hence the language depends on the requirements, as always. Android needs little introduction here, because it is the big buzzword everybody talks about these days. Its complete OS stack has played an incredible role in making it such a huge buzz. Because of this well-organized OS stack, from ground-level APIs to higher-level APIs, developers have been able to develop apps without worrying about dependencies, hardware abstractions, library couplings, etc. But there is one little problem with Android: its fast-moving release cycle. Unlike OS versions on other devices, Android does not provide long-term support for a released version. Versions tend to arrive fast, and if an earlier version cannot keep up with a later one, bad luck for the users who have a device on that earlier version. But generally people rarely open their mouths about this.

Therefore, the Why and the Where are decisions to be made by developers.


Sunday, October 9, 2011

Checking out and setting up QJSON for QT symbian

Generally, JSON is a simple data exchange format like XML, but simpler and more flexible. Especially when transferring data in XML format, the opening and closing tags add weight beyond the actual information we want to transmit. This can sometimes be useful and sometimes be an overhead. To avoid or reduce that overhead we can use JSON. It stands for JavaScript Object Notation, and JSON objects can be transmitted over HTTP. There are many JSON-related references available on the internet. What we are focusing on here is QJSON, a QT-based library that maps JSON data to QVariant/QMap objects.

We will consider building the qjson library and setting up the path properly via the QT Creator IDE. The qjson library is not among the default libraries that come with QT. Therefore you have to check out the qjson project source separately, build it, and generate a qjson lib file. One thing to note: do not try to download the qjson source from SourceForge, because for some reason it is not the complete project and does not let you build the lib file easily. That source is missing some essential files, like the .pro file, which are needed to generate the lib file directly using the QT Creator IDE.

To check out the latest version of qjson, you should first have a git client installed on your machine. Git is a FOSS distributed version control system that can be downloaded from the official git site. The qjson repository is hosted here. To check it out with your git client, use the command git clone git://gitorious.org/qjson/qjson.git to import the complete latest qjson version to your local disk.

Then open the qjson project in QT Creator and build it. It will create the qjson.dll.a lib file under the build/lib folder of the qjson source, i.e. something like your_disk_name:/qjson-0.7.1/qjson/build/lib/qjson.dll.a. Now all you have to do is tell qmake in your .pro file where your header files and lib file are located.

Ex: add the following anywhere in your .pro file:
INCLUDEPATH += "c:/qjson-0.7.1/include"
LIBS += "c:/qjson-0.7.1/qjson/build/lib/qjson.dll.a"
 
That's it. Enjoy qjson in QT. Some initial code snippets on qjson for QT can be found here.


 


Sunday, October 2, 2011

Distributed Resource Management System for Business Process Management Systems

I did my final year research on a Distributed Resource Management System for Business Process Management Systems. I have shared herewith my research abstract, which was published at the UCSC research symposium 2011. For the purposes of research evaluation and demonstration, a RESTful Distributed Resource Management System (DRMS) engine was developed and integrated with a FOSS BPMS.

Business Process Management (BPM) is a discipline which maps human and non-human tasks onto a predefined workflow in a way that technical and non-technical people in the organization can administrate, monitor and communicate effectively and efficiently. A considerable amount of BPM software is available to automate business processes. When a business becomes more complex and needs to be expanded or outsourced, a single BPM server may not be sufficient to handle all its processes. This may lead to managing several BPM systems within one organization. However, if those servers are not properly connected and configured, even allocating a single resource to a single BPM system can be difficult due to the lack of information available at a glance. Apparently, a mechanism should be established to enforce cooperation among homogeneous BPM systems.

DRMS for YAWL is a distributed resource management system which supports administrative activities like monitoring, decision making and scheduling for YAWL, an open source Business Process Management System (BPMS) developed by the YAWL Foundation. When a business needs to be expanded to cope with internal or external transactions, organizations may have to grow from one technical server to several. At this stage, instead of writing their own logic on each of these servers to handle the business processes, they can use existing BPM engines to get efficient and effective results. While using several BPM systems within the same organization, accurate coordination among them is highly required to make correct decisions and schedules. Placing a new business process tends to be difficult if the administrator cannot deterministically decide in which BPM engine the new process should be placed. That is exactly what DRMS is trying to solve.

DRMS uses existing resource patterns and introduces some new patterns to cope with distributed resource management. DRMS is equipped with two major perspectives, called snapshot view and rule execution. In the snapshot view, a globally synchronized view and a non-synchronized view are considered. The synchronized view is what the administrator gets as a real global view of all resources in the BPM cluster. The non-synchronized view lets the administrator explicitly decide which distributed BPM server's view to inspect at a given time, independently of the other servers. Before making decisions about resources distributed across a cluster of servers, it is important to analyze the current state of the cluster; thus, we came up with this idea of a global view. For this, we have extended a few resource patterns so that the more significant and important features of the state of the cluster can be extracted easily. By thorough analysis, we identified several key functions which must be available in any view to speed up the process of making effective decisions. Granting administrators both synchronized and non-synchronized views helps them identify different states of the cluster before executing the correct rule set. Administrators of the organization can analyze independent views by switching among the BPMSs from a single DRMS interface; this is facilitated by the non-synchronized view. As the cluster gradually expands, implicit information gathering from the cluster by the system becomes more efficient, and this is facilitated by the synchronized global view.

Views help in deciding which actions to take. To execute those decisions, there has to be a distributed rule execution mechanism in DRMS, which is called rule implementation. At the moment these rules can be evaluated and put into action in the synchronized view. Distributing resources according to capabilities, roles or positions, and assigning administrative access levels to employees, are two patterns which are implicitly available in DRMS. Distributing newly available work items (processes) among the BPM cluster, based on how similar kinds of processes were executed in the past, is an advanced solution. Mutual and union resource extraction, and updating the cluster through global flooding with or without a restriction, are some sub-patterns that are also available in DRMS for YAWL.

DRMS also supports load balancing to some extent. Workflows are deployed according to the selected load balancing algorithm. Four suggested workload routing algorithms, namely random, round robin, priority and dynamic, help to adjust the load of the BPM cluster as required. At the moment DRMS has been developed to work with the YAWL BPM engine, mainly communicating via its resource engine. However, one of the key features of DRMS is that its implementation is completely independent of YAWL's implementation details. That is, DRMS calls YAWL's remote functions via RESTful service calls over HTTP, and to facilitate this DRMS service it is more appropriate to have a homogeneous BPM cluster. That means each and every BPM engine should be from the same vendor, or otherwise must be the same open source product. In reality it is very rare that a particular organization uses different kinds of BPM engines within its domain, as that adds performance and scalability issues when integrating, controlling the flow of events, associating data and handling exceptions, due to the different architectural implementations of each BPM system.

The main objective of this project is to show that, when managing several BPM engines, both resource and work item scheduling can be orchestrated more easily and effectively by using a mechanism equipped with view analysis and rule execution, extending standard resource patterns and introducing some new resource patterns to fit a BPM cluster. It adds central accessibility for all the distributed engines. Hence, we suggest that by applying our study to existing BPM systems, both local and global contexts will be implicitly available for customers, unlike existing BPM engines which support resource management only within their local context. As distributed resource management among homogeneous BPM engines via workflow patterns has not yet been attempted, we hope our research findings will be useful in coping with similar situations in future extensible Service Oriented Architecture (SOA) based BPM engines.

Sunday, April 10, 2011

Inter-Servlet communication among different contexts

Suppose you have Servlet X in web app A and Servlet Y in web app B. You want to pass some parameters from web app A's JSP to your Servlet X, have that servlet pass those parameters on to another servlet in a different context (i.e., web app B's Servlet Y), and then redirect back to the calling JSP in web app A. Users may feel that they are dealing with a single context, but literally it will be two different contexts.

I will walk you through this in 5 steps.

1. Viewer JSP - index.jsp (User will simply click a button to pass parameters to a Servlet within the same context)

Ex: <input type="button" value="Fetch Info" name="Get_Info" onclick="location.href='ServletX?action=getParticipants&id=user&pwd=yawl'"/>

After you click the "Fetch Info" button, it will call the "ServletX" servlet in the same web app (same context), along with the "getParticipants" action and two other parameters.

The scenario discussed here is retrieving some info that resides in another context into a JSP by invoking two servlets.


2. Servlet X will collect parameters in its service method

String action = request.getParameter("action");
String id = request.getParameter("id");
String pwd = request.getParameter("pwd");

Then we can forward it to another servlet as below,

request.setAttribute("output", action );
ServletContext cont = getServletContext().getContext("/WebAppB");
RequestDispatcher dis = cont.getRequestDispatcher("/ServletY");
dis.forward( request, response );

// getServletContext().getContext("/WebAppB") is the important part, because this is how you can jump into other contexts; by invoking getRequestDispatcher() on the returned context you can forward the request to whatever JSP/servlet combination you wish. If you use getRequestDispatcher(String path) of the ServletRequest interface instead, it cannot extend outside the current servlet context.

For more info visit the api for ServletContext.

3. Then Servlet Y in web app B can access the parameters and apply whatever logic has to be performed.

String action = (String) request.getAttribute("output");

//Business logic goes here

request.setAttribute("output", action);
ServletContext cont = getServletContext().getContext("/WebAppA");
RequestDispatcher dis = cont.getRequestDispatcher("/index.jsp");
dis.forward(request, response);

Here, after applying whatever logic you want, you can switch back to web app A's context by invoking getServletContext().getContext("/WebAppA");. Then you can forward the output variable to index.jsp in web app A's context to view the results.

4. Therefore, to receive and view the results, you need code similar to this in your earlier index.jsp page.

<%
if (request.getAttribute("output") != null) {
    String suc = (String) request.getAttribute("output");
%>
<input type="text" name="output" value="<%=suc%>" readonly="readonly" disabled="disabled" />
<%
}
%>

5. The last step is very important, because without setting the crossContext parameter to true the server will not allow inter-servlet communication between two different contexts, due to security policies.

There are several ways to do this. The first is enabling it per web app, and the second is enabling it on the server side, which affects all hosted web apps globally.

1. Set the crossContext parameter to true in each and every web app's context.xml file, which resides in the META-INF folder of your web directory.

2. Without configuring each web app's context.xml, you can directly set the crossContext parameter to true in the context.xml file that resides in the server's (here, Apache Tomcat's) conf directory. At deployment time the server will then create a context.xml for each web app on its host, with the crossContext parameter enabled in each. These individual context files are located in the $CATALINA_HOME/conf/[enginename]/[hostname]/ folder.

For more info regarding Tomcat's context container visit this link.
This is just one scenario, but the technique can be applied in various ways to achieve communication among different contexts in the same web container.

Saturday, January 8, 2011

MCrazies : Keep in touch with your favourite movie theaters



MCrazies is the shortened name of Movie Crazies, a simple SMS mobile application that can be used to get movie details for your favorite movie theater(s), purposely developed for movie crazies :). It was developed as an appzone application of Etisalat. In detail, this application provides information about the movies currently showing in famous theaters. In some urgent situations, browsing the internet to find out what is showing in theaters becomes irritating, whereas getting that information by sending a simple SMS is easier and simpler. You don't have to spend time surfing the internet or calling friends to get this information anymore! Just type
mcraz SPACE <name of theater 1> SPACE <name of theater 2> and send to 4499 from your Etisalat phone.
Here, the name of theater 2 is optional. If the user wants details of the movies showing in more than one theater, it is possible to type several theater names, leaving a space between each name (case insensitive). The user will get an SMS in return with the corresponding results.

At the moment mcraz supports eleven movie theatres, including Majestic City - Bambalapitiya, Savoy1 & Savoy2 - Wellawatta, Liberty - Kollupitiya, Sigiri - Katugasthota, Ricky1 - Colombo, Eros - Wellawatta, Arena - Katugastota, Regal - Colombo, Concord - Dehiwala, Cinemax - Ja-Ela and Quinlon - Nugegoda. The user does not need to type the location of the theater, or even its exact name. The application can tolerate variations in the names to some extent; typing mc, majestic, majesticcity or even majesticsity will be recognized as Majestic City, and the reply will be sent accordingly. So do not hesitate to use MCrazies if you are a person who wants to absorb the real enthusiasm of watching a good movie in a quality theater as soon as you are asked to, or whenever you feel like it :).