Sunday, November 30, 2014

Dictionary: Remote Procedure Call

The concept of Remote Procedure Call (RPC) was developed by Bruce Jay Nelson, a Xerox PARC researcher, in 1981. In 1984, he and his fellow Xerox PARC researcher Andrew Birrell published a paper entitled "Implementing Remote Procedure Calls", which describes the implementation of RPC used in the Cedar project.

Remote Procedure Call (RPC) is an important concept in distributed systems. It grew from the observation that procedure calls are a well-known mechanism for transferring control and data within a program running on a single computer. As networked systems grew in popularity, there was a need to transfer control and data across the network in an equally simple way. When a remote procedure is called, the calling (caller) environment is suspended, the parameters are passed across the network to the environment where the procedure executes (the callee), the desired procedure is executed there, and the result is sent back to the caller.

So, put simply, RPC is a programming concept that lets a program execute a method or subroutine on a remote computer and get the result of that execution as easily, simply, and straightforwardly as calling a local function. In object-oriented terms, RPC is also known as remote invocation or remote method invocation.

What happens when a client calls an RPC method or function?

    1. Create a message buffer (a contiguous array of bytes of some size).
    2. Pack the needed information into the message buffer. This information includes an identifier of the called function and the function arguments. This process is often called message serialization, or marshaling the arguments.
    3. Send the message to the destination RPC server.
    4. Wait for the reply; because function calls are usually synchronous, the call waits for its completion.
    5. Unpack the return code and other arguments. This unpacking process is often called unmarshaling or deserialization.
    6. Return to the caller, which can then continue with more processing.
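The steps above can be sketched as a toy example in Python. This is an illustration only, not the Cedar implementation: the JSON encoding, the add function, and the direct in-process call standing in for the network are all assumptions made for the sketch.

```python
import json

# Hypothetical server-side function table (the "callee" environment).
FUNCTIONS = {"add": lambda a, b: a + b}

def server_dispatch(request_bytes):
    # Unpack (unmarshal) the request, run the procedure, pack the reply.
    request = json.loads(request_bytes.decode("utf-8"))
    result = FUNCTIONS[request["function"]](*request["args"])
    return json.dumps({"result": result}).encode("utf-8")

def rpc_call(function_name, *args):
    # Create a message buffer and pack (marshal) the function
    # identifier and arguments into it.
    message = json.dumps({"function": function_name, "args": args}).encode("utf-8")
    # Send the message and wait for the reply. Here the "network"
    # is simulated by a direct call to the dispatcher.
    reply_bytes = server_dispatch(message)
    # Unpack (unmarshal) the return value.
    reply = json.loads(reply_bytes.decode("utf-8"))
    # Return the result so the caller can continue.
    return reply["result"]

print(rpc_call("add", 2, 3))  # prints 5
```

A real RPC system would send the marshaled bytes over a socket instead of calling the dispatcher directly, but the caller's view is the same: the remote call looks just like a local one.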

  1. Birrell, A. D.; Nelson, B. J. (1984). "Implementing Remote Procedure Calls". ACM Transactions on Computer Systems 2 (1): 39-59. doi:10.1145/2080.357392
  2. Arpaci-Dusseau, Remzi H.; Arpaci-Dusseau, Andrea C. (2014). Introduction to Distributed Systems.

Tuesday, November 25, 2014

Ubuntu Troubleshoot: Ubuntu can't change display device brightness

After installing Ubuntu 14.04, every control worked fine except the brightness control. The brightness buttons on my keyboard worked and showed the brightness level changing, but the actual display brightness did not change. To fix this I had to create a new file named 20-intel.conf in the /usr/share/X11/xorg.conf.d/ directory with the configuration below as its content.

Section "Device"
    Identifier  "card0"
    Driver      "intel"
    Option      "Backlight"  "intel_backlight"
    BusID       "PCI:0:2:0"
EndSection

And now my brightness control works as I expected.

Sunday, November 23, 2014

Hadoop Tips: Change default namenode and datanode directories

When we start Hadoop in pseudo-distributed mode using the sbin/ command for the first time, the default directories will be created under /tmp/. The problem is that if you restart your machine, those directories will be deleted and you won't be able to start your Hadoop again.

To solve this problem you can change the default directories using the configuration file etc/hadoop/hdfs-site.xml in your Hadoop directory. Add these configuration properties:

    <!--for namenode-->
    <property>
        <name>dfs.namenode.name.dir</name>
        <!-- example path only; point this at an existing directory on your machine -->
        <value>file:///home/youruser/hadoopdata/namenode</value>
    </property>

    <!--for datanode-->
    <property>
        <name>dfs.datanode.data.dir</name>
        <!-- example path only; point this at an existing directory on your machine -->
        <value>file:///home/youruser/hadoopdata/datanode</value>
    </property>

But please make sure your namenode and datanode directories exist, and don't forget to set the property values using the correct URI format (starting with file://), just like in the example. After that you can format your namenode using this command:

$ bin/hdfs namenode -format

and start your hadoop again using:

$ sbin/

If the namenode or datanode is still not working, you can check the log files to see the problem.
Hope these tips help you. If you find other problems related to this, please leave a comment below. Cheers! :)

Tuesday, November 18, 2014

Hadoop Troubleshoot: Hadoop build error related to findbugs, Eclipse configuration, protobuf, and AvroRecord

Last week I was trying to build Hadoop 2.5.0 from source. I tried several ways to build it: the first using Maven in a terminal, and the second using Eclipse as my IDE.

1. Related to findbugs (using maven in terminal)

I read the BUILDING.txt file that you can find in the root of the Hadoop source code directory, and I ran this command to build a Hadoop package:

$ mvn package -Pdist,native,docs,src -DskipTests -Dtar

And somehow, in the middle of the long build, there was an error related to the FINDBUGS_HOME environment variable.

I already had findbugs installed, so I tried to set FINDBUGS_HOME to /usr/bin/findbugs using this command:

$ export FINDBUGS_HOME=/usr/bin/findbugs

But it was no use; the error was still there. So I downloaded the findbugs source code from SourceForge and set FINDBUGS_HOME once again, this time to the findbugs source code root directory:

$ export FINDBUGS_HOME=/path/to/your/<sourcecode>/findbugs

I ran the build command again, and it went well this time. :)

2. Build path error (Eclipse configuration)

When you try to import the Hadoop projects into your Eclipse workspace and build them all, you will probably get many kinds of errors, but you may see this specific error message too:

Unbound classpath variable: 'M2_REPO/asm/asm/3.2/asm-3.2.jar' in project 'hadoop-tools-dist' hadoop-tools-dist

This error is related to the M2_REPO classpath variable in Eclipse. To solve this problem, open the Classpath Variables configuration from the Eclipse menu:

"Window -> Preferences". This opens the Preferences dialog. In that dialog, go to:

"Java -> Build Path -> Classpath Variables". Add a new classpath variable, name it M2_REPO, and fill the path with your local Maven repository directory (by default ~/.m2/repository).
Try rebuilding all your projects. You won't see that kind of error again after that, but there will still be many errors in your project.

3. hadoop-streaming build path error

If you are lucky (or not), you will find this error message related to the hadoop-streaming build path:

Project 'hadoop-streaming' is missing required source folder: '/hadoop-2.5.0-working/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-resourcemanager/conf'

This error is quite strange, because the build path can't be changed even after you edit it (I tried many times and it was quite annoying). The solution is stranger still: just remove the build path entry and the error will disappear. You can do it this way:

To open the configuration, right-click your hadoop-streaming project to open the context menu, then choose "Build Path -> Configure Build Path". This opens the "Properties for hadoop-streaming" dialog. In that dialog, choose the "Source" tab, select the problematic source folder, and press the "Remove" button. After that, just rebuild the projects. The remaining errors you see are related to missing protobuf and Avro files.

4. Protobuf and Avro missing files

The remaining errors you might find are related to missing files in the hadoop-common project. The problematic packages in the project include org.apache.hadoop.ipc.protobuf (you can see that the protobuf directory is empty). I searched about the empty protobuf directory, but the advice only tells you to rebuild the project, which will supposedly generate the files automatically, using Maven with these commands:

$ mvn clean
$ mvn compile -Pnative
$ mvn package
$ mvn compile findbugs:findbugs
$ mvn install
$ mvn package -Pdist,docs,src,native -Dtar

I don't know if this works for you, but it didn't work for my project. These error messages still lingered:

AvroRecord cannot be resolved to a type
The import org.apache.hadoop.ipc.protobuf.TestRpcServiceProtos cannot be resolved
The import org.apache.hadoop.ipc.protobuf.TestProtos cannot be resolved
TestProtobufRpcProto cannot be resolved to a type
TestProtobufRpc2Proto cannot be resolved to a type
EmptyRequestProto cannot be resolved to a type
EchoResponseProto cannot be resolved to a type
EchoRequestProto cannot be resolved to a type

Afterwards I found this very good site called GrepCode, where I was able to find all the classes and files I needed (you can even get them from other versions of Hadoop!).
For example, you can download the file here and put it in the corresponding directory.

That's all. If you find this post useful, please leave a comment below. See you in my next post.

Thursday, November 13, 2014

MLj Tips: MLj set up and installation

After some trial and error I managed to run MLj on my machine. I made a lot of mistakes because I couldn't find any good tutorial out there, so I wrote one for myself. You can read about MLj here.

Here are some steps that I did when setting up MLj in my machine:

1. Get the MLj for Linux

2. Install the SMLNJ

You can install the SMLNJ using this command:
$ sudo apt-get install smlnj

3. Extract mlj0.2c-linux.tar.gz

You can extract your MLj package somewhere on your drive. It will contain mlj as the root directory.

4. Go to your mlj/bin directory

Open your mlj/bin directory. You'll find 4 files there: mlj, mlj.bat, mlj-jdk1.1.1.x86-linux, and run.x86-linux.

5. Try to run the mlj

Use this command to run the mlj:
$ ./mlj    

Probably it will raise this kind of error:
./mlj: 1: ./mlj: .arch-n-opsys: not found
 mlj: unable to determine architecture/operating system

To fix this, you can set your PATH environment variable using this command:
$ export PATH=$PATH:<your mlj/bin path>

Alternatively, you can open your mlj file and edit the part of its code that sets this path.

Try running ./mlj or mlj once again, and the error below is still there. That's because the .arch-n-opsys file is not compatible with your current desktop architecture. It's okay; we will get rid of it in the next step.

mlj: unable to determine architecture/operating system

6. Copy .arch-n-opsys file from your smlnj

You may create a backup of your current .arch-n-opsys file if you want, then copy the new one from the smlnj directory. Here are the commands:
$ mv .arch-n-opsys .arch-n-opsys.bak
$ cp /usr/lib/smlnj/bin/.arch-n-opsys .

7. Try to run mlj once again

Try to run ./mlj or mlj again and you'll get this view:

MLj 0.2c on x86 under Linux with basis library for JDK 1.1
Copyright (C) 1999 Persimmon IT Inc.

MLj comes with ABSOLUTELY NO WARRANTY. It is free software, and you are
welcome to redistribute it under certain conditions.
See COPYING for details.

Your installation is done!

If you have any questions or find my post useful, please leave a comment.

Tuesday, November 11, 2014

Dictionary: First Class Methods/Functions

In a programming language, first-class methods/functions means the language treats the method or function as a first-class citizen.

According to Structure and Interpretation of Computer Programs (2nd Edition), elements with the fewest restrictions are said to have first-class status. The rights and privileges of first-class elements are:
  • They may be named by variables.
  • They may be passed as arguments to procedures.
  • They may be returned as the results of procedures.
  • They may be included in data structures.
So if a language supports first-class functions, it lets functions be passed as parameters and returned as results of procedures. First-class functions are necessary for the functional programming style.

Examples of programming languages that support first-class functions include Scheme, ML, Haskell, F#, Perl, Scala, Python, PHP, Lua, JavaScript, C#, and C++.
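As a minimal Python sketch of the four rights and privileges listed above (the function names square, apply_twice, and make_adder are made up for illustration):

```python
def square(x):
    return x * x

f = square                      # named by a variable

def apply_twice(g, x):          # passed as an argument to a procedure
    return g(g(x))

def make_adder(n):              # returned as the result of a procedure
    def add(x):
        return x + n
    return add

ops = [square, make_adder(10)]  # included in a data structure

print(f(4))                     # prints 16
print(apply_twice(square, 3))   # prints 81
print(ops[1](5))                # prints 15
```

Note that make_adder also demonstrates a closure: the returned function remembers the value of n from its enclosing scope.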

  1. Abelson, Harold; Sussman, Gerald Jay (1996). Structure and Interpretation of Computer Programs, 2nd Edition. MIT Press.

Thursday, November 6, 2014

WALA Tips: Problems/errors when you are trying to run WALA examples

Today I tried to run the WALA examples, and I found and tackled some problems that you might run into as well. I made this post to help me remember how I solved them. :)

For a complete configuration manual you can refer to the WALA wiki. Here is the problem list:

1. The import org.eclipse.pde.internal.core.PDEStateHelper cannot be resolved

I used the latest Eclipse version (Luna) at the time this post was written. This problem happened when I tried to build the project: there is a class in one of the WALA packages that needs to import org.eclipse.pde.internal.core.PDEStateHelper.

So to resolve this problem I downloaded org.eclipse.pde.core_3.3.jar, which contains the org.eclipse.pde.internal.core.PDEStateHelper class, and added that file to my Java build path libraries. You can do that by right-clicking your project and choosing from the context menu:
"Build Path -> Configure Build Path...". This opens the project's Java Build Path configuration dialog. Open the "Libraries" tab and click "Add External JARs"; a file browser opens, where you select your org.eclipse.pde.core_3.3.jar. Close the Java Build Path configuration dialog by clicking "OK", then try to build your project once again. Voila! I hope your error vanishes just like mine did.

2. Problem when trying to run example 1 (the SWTTypeHierarchy) 

When you try to run example 1, SWTTypeHierarchy, you will probably get this error:

"{resource_loc:/} "
This problem happens because SWTTypeHierarchy expects JLex.jar to be found in a particular directory. So what you need to do is put your JLex.jar file in the project root directory. After this step you will be able to run example 1 correctly. If successful, you should see a new window pop up with a tree view of the class hierarchy of JLex.

3. Problem when trying to run example 2 (the PDFTypeHierarchy)

When you try to run example 2, you will probably have several problems. The first problem is the configuration file. This configuration file contains the path configuration of your Java runtime directory ("java_runtime_dir") and the "output" directory. If you haven't created the file yet, you'll get this message:

" Unable to set up wala properties "

To solve that, you need to create the file in the project's dat directory. You can copy or rename the sample file that exists in the dat directory. After that, try to run the example once more and you'll get another error (sorry, guys):

" property_file_unreadable"

To solve that, you need to configure another file. This file contains the executable path configuration for your PDF viewer and for Graphviz (on Linux you can install it using apt-get install graphviz). You need to create this file in the project's dat directory too; you can copy or rename the sample file that exists there. Then try running your project once more. If there is another error, please stay with me. :)

If you get these exceptions:

1. Exception in thread "main"

Probably you didn't set the Java runtime directory correctly in your properties file. You must point the path to the Java JRE library path; in my environment's case, it points to the JRE library directory.

2. spawning process [null, -Tpdf, -o,, -v,]Exception in thread "main" 

Probably you haven't set an output directory in your properties file. Make sure to create the directory, because WALA won't create it for you.

Please make sure you check both of your configuration files first. Make sure all properties are set correctly, such as the java, output, dot_exe, and pdfview_exe paths. If everything is configured correctly, you'll see a PDF file representing the type hierarchy.

Hope this post helps you. If you find another problem, or this post helped solve yours, please leave a comment below. See you in my next post.