Sunday, December 27, 2015

Elastic Search 2.x sample CRUD code

What is ElasticSearch?

Elasticsearch is an open-source, RESTful, distributed search engine built on top of Apache Lucene. Lucene is arguably the most advanced, high-performance, and fully featured search engine library in existence today, both open source and proprietary.
Elasticsearch is also written in Java and uses Lucene internally for all of its indexing and searching, but it aims to make full-text search easy by hiding the complexities of Lucene behind a simple, coherent, RESTful API.

Basic concepts and terminology:

1. Near Realtime (NRT): Elasticsearch is a near real-time search platform. What this means is that there is a slight latency (normally one second) from the time you index a document until the time it becomes searchable.
2. Cluster: A cluster is a collection of one or more nodes (servers) that together holds your entire data and provides federated indexing and search capabilities across all nodes. The default cluster name is "elasticsearch".
3. Node: A node is a single server that is part of your cluster, stores your data, and participates in the cluster's indexing and search capabilities.
4. Index: An index is a collection of documents that have somewhat similar characteristics, i.e. like a database.
5. Type: Within an index, you can define one or more types. A type is a logical category/partition of your index, defined for documents that have a set of common fields, i.e. like a table in a relational database.
6. Document: A document is a basic unit of information that can be indexed, for example a single user record expressed in JSON.

The mapping below shows how we can correlate a relational database with an Elasticsearch index, which makes the Elasticsearch terms and APIs easier to understand.
In Elasticsearch, a document belongs to a type, and those types live inside an index. You can draw some (rough) parallels to a traditional relational database:

Relational DB ⇒ Databases ⇒ Tables ⇒ Rows ⇒ Columns
Elasticsearch ⇒ Indices ⇒ Types ⇒ Documents ⇒ Fields

Development: Maven library dependency:
<dependency>
   <groupId>org.elasticsearch</groupId>
   <artifactId>elasticsearch</artifactId>
   <version>2.1.1</version>
</dependency>

Client: Using the Java client, we can perform operations on an Elasticsearch cluster/node:
1.Perform standard index, get, delete and search operations on an existing cluster
2.Perform administrative tasks on a running cluster
3.Start full nodes when you want to run Elasticsearch embedded in your own application or when you want to launch unit or integration tests

There are two types of client for connecting to a cluster to perform operations:
1. Node Client.
2. TransportClient.
Node Client: Instantiating a node-based client is the simplest way to get a Client that can execute operations against Elasticsearch.

TransportClient: The TransportClient connects remotely to an Elasticsearch cluster using the transport module. It does not join the cluster, but simply gets one or more initial transport addresses and communicates with them.

Sample Elasticsearch CRUD code:

Node Client:
Node node  = NodeBuilder.nodeBuilder().clusterName("yourclustername").node();
Client client = node.client();
TransportClient:
Settings settings = Settings.settingsBuilder()
                    .put(ElasticConstants.CLUSTER_NAME, cluster).build();
TransportClient transportClient = TransportClient.builder().settings(settings).build().
                    addTransportAddress(new InetSocketTransportAddress(InetAddress.getByName(host), port));
Create Index: We can build an IndexRequest, or use XContentBuilder to populate the request to be stored in the index.
 XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
 Map<String, Object> data = new HashMap<String, Object>();
 data.put("FirstName", "Uttesh");
 data.put("LastName", "Kumar T.H.");
 jsonBuilder.map(data);
public IndexResponse createIndex(String index, String type, String id, XContentBuilder jsonData) {
    IndexResponse response = null;
    try {
        response = ElasticSearchUtil.getClient().prepareIndex(index, type, id)
                .setSource(jsonData)
                .get();
        return response;
    } catch (Exception e) {
        logger.error("createIndex", e);
    }
    return null;
}
Find Document By Index:
public void findDocumentByIndex() {
        GetResponse response = findDocumentByIndex("users", "user", "1");
        Map<String, Object> source = response.getSource();
        System.out.println("------------------------------");
        System.out.println("Index: " + response.getIndex());
        System.out.println("Type: " + response.getType());
        System.out.println("Id: " + response.getId());
        System.out.println("Version: " + response.getVersion());
        System.out.println("getFields: " + response.getFields());
        System.out.println(source);
        System.out.println("------------------------------");
    }

public GetResponse findDocumentByIndex(String index, String type, String id) {
        try {
            GetResponse getResponse = ElasticSearchUtil.getClient().prepareGet(index, type, id).get();
            return getResponse;
        } catch (Exception e) {
            logger.error("", e);
        }
        return null;
    }

Find Document By Value
public void findDocumentByValue() {
        SearchResponse response = findDocument("users", "user", "LastName", "Kumar T.H.");
        SearchHit[] results = response.getHits().getHits();
        System.out.println("Current results: " + results.length);
        for (SearchHit hit : results) {
            System.out.println("--------------HIT----------------");
            System.out.println("Index: " + hit.getIndex());
            System.out.println("Type: " + hit.getType());
            System.out.println("Id: " + hit.getId());
            System.out.println("Version: " + hit.getVersion());
            Map<String, Object> result = hit.getSource();
            System.out.println(result);
        }
        Assert.assertTrue(response.getHits().totalHits() > 0);
    }

    public SearchResponse findDocument(String index, String type, String field, String value) {
        try {
            QueryBuilder queryBuilder = new MatchQueryBuilder(field, value);
            SearchResponse response = ElasticSearchUtil.getClient().prepareSearch(index)
                    .setTypes(type)
                    .setSearchType(SearchType.QUERY_AND_FETCH)
                    .setQuery(queryBuilder)
                    .setFrom(0).setSize(60).setExplain(true)
                    .execute()
                    .actionGet();
            return response;
        } catch (Exception e) {
            logger.error("", e);
        }
        return null;
    }
Update Index
public void UpdateDocument() throws IOException {
    XContentBuilder jsonBuilder = XContentFactory.jsonBuilder();
    Map<String, Object> data = new HashMap<String, Object>();
    data.put("FirstName", "Uttesh Kumar");
    data.put("LastName", "TEST");
    jsonBuilder.map(data);
    UpdateResponse updateResponse = updateIndex("users", "user", "1", jsonBuilder);

}
public UpdateResponse updateIndex(String index, String type, String id, XContentBuilder jsonData) {
    UpdateResponse response = null;
    try {
        System.out.println("updateIndex ");
        response = ElasticSearchUtil.getClient().prepareUpdate(index, type, id)
                .setDoc(jsonData)
                .execute().get();
        System.out.println("response " + response);
        return response;
    } catch (Exception e) {
        logger.error("UpdateIndex", e);
    }
    return null;
}
Remove Index:
public void RemoveDocument() throws IOException {
    DeleteResponse deleteResponse = elastiSearchService.removeDocument("users", "user", "1");
}

public DeleteResponse removeDocument(String index, String type, String id) {
        DeleteResponse response = null;
        try {
            response = ElasticSearchUtil.getClient().prepareDelete(index, type, id).execute().actionGet();
            return response;
        } catch (Exception e) {
            logger.error("RemoveIndex", e);
        }
        return null;
    }
Full sample code is available on GitHub: Download full code

Monday, May 11, 2015

ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is


15/05/08 01:26:12 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1 SECONDS)
...
Bad connection to FS. command aborted. exception: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused


Solution: run the "bin/hadoop namenode -format" command to format the HDFS filesystem, then start the Hadoop daemons again.

Hadoop Set Up on Ubuntu Linux (Single-Node Cluster)

Running Hadoop on Ubuntu Linux (Single-Node Cluster)

Hadoop is a framework written in Java that incorporates features similar to those of the Google File System (GFS) and the MapReduce computing paradigm.

Hadoop’s HDFS is a highly fault-tolerant distributed file system and, like Hadoop in general, designed to be deployed on low-cost hardware. It provides high throughput access to application data and is suitable for applications that have large data sets.

This post gets a simple Hadoop installation up and running so that you can play around with the software and learn more about it.

Windows users who want to learn Hadoop can install VirtualBox along with the Ubuntu OS.

Click here for the VirtualBox and Ubuntu set-up: http://uttesh.blogspot.in/2015/05/install-ubuntu-linux-on-virtual-box.html



After the VirtualBox and Ubuntu set-up is done, follow the steps below for the Hadoop set-up.

Step 1. Hadoop requires a working Java 1.5+ installation.
Step 2. Adding a dedicated Hadoop system user.
Step 3. Configuring SSH
Step 4. Disabling IPv6
Step 5. Hadoop Installation

Step 1. Hadoop requires a working Java 1.5+ installation:

Run the following commands to install the Sun JDK:

# Update the source list
$ sudo apt-get update

# Install Sun Java 7 JDK
$ sudo apt-get install sun-java7-jdk
We can also install the Oracle JDK manually, or by running the following commands:

$ sudo apt-add-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java7-installer
The full JDK will be placed in /usr/lib/jvm/java-7-oracle (well, this directory is actually a symlink on Ubuntu).

After installation, check whether JDK is correctly set up:
uttesh@uttesh-VirtualBox:~$ java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

Step 2. Adding a dedicated Hadoop system user: This step is optional and can be skipped; it simply helps to separate the Hadoop installation from other software applications and user accounts running on the same machine.

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser

Step 3. Configuring SSH

Hadoop requires SSH access to manage its nodes. For a single-node setup of Hadoop, we therefore need to configure SSH access to "localhost".

a. Install SSH: The ssh client is pre-packaged with Ubuntu, but we need to install the server package to get the sshd server. Use the following command to install ssh and sshd.

$ sudo apt-get install ssh


Verify installation using following commands.

$ which ssh
## Should print '/usr/bin/ssh'

$ which sshd
## Should print '/usr/bin/sshd'


b. Check if you can ssh to the localhost without a password.

$ ssh localhost

Note that if you try to ssh to localhost without installing ssh first, an error message will be printed saying 'ssh: connect to host localhost port 22: Connection refused'. So be sure to install ssh first.

c. If you cannot SSH to localhost without a password, create an SSH key pair using the following command.

$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa


d. Now that the key pair has been created, note that id_dsa is the private key and id_dsa.pub is the public key, both in the ~/.ssh directory. We need to append the new public key to the list of authorized keys using the following command.

$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
uttesh@uttesh-VirtualBox:~$ ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/home/uttesh/.ssh/id_rsa): 
Created directory '/home/uttesh/.ssh'.
Your identification has been saved in /home/uttesh/.ssh/id_rsa.
Your public key has been saved in /home/uttesh/.ssh/id_rsa.pub.
The key fingerprint is:
53:e9:c6:d8:0a:7f:3e:7b:b2:36:2d:6c:df:be:16:7c uttesh@uttesh-VirtualBox
The key's randomart image is:
+--[ RSA 2048]----+
|                 |
|           .     |
|          o      |
|         *       |
|      . S =  .   |
|       o +    o E|
|        o...   o |
|         oO o..  |
|         o+X.o+. |
+-----------------+
e. Try connecting to localhost again and check that you can now ssh without a password.

$ ssh localhost

If the SSH connection fails, these general tips might help:

Enable debugging with ssh -vvv localhost and investigate the error in detail.

Step 4. Disabling IPv6 :

One problem with IPv6 on Ubuntu is that using 0.0.0.0 for the various networking-related Hadoop configuration options will result in Hadoop binding to the IPv6 addresses of my Ubuntu box. There is no practical point in enabling IPv6 on a box when you are not connected to any IPv6 network. Hence, I simply disabled IPv6 on my Ubuntu machine.

To disable IPv6 on Ubuntu 10.04 LTS, open /etc/sysctl.conf in the editor of your choice and add the following lines to the end of the file:

# disable ipv6
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
You have to reboot your machine in order to make the changes take effect.

You can check whether IPv6 is enabled on your machine with the following command:

$ cat /proc/sys/net/ipv6/conf/all/disable_ipv6

A return value of 0 means IPv6 is enabled, a value of 1 means disabled.


Step 5. Hadoop Installation :

1. Download the latest stable Hadoop release (e.g. hadoop-2.5.1.tar.gz) from http://www.apache.org/dyn/closer.cgi/hadoop/common/.

2. Install Hadoop in /usr/local or any preferred directory. Decompress the downloaded file using the following command.

$ tar -xf hadoop-2.5.1.tar.gz -C /usr/local/

or right-click on the file and choose Extract from the UI.

3. Add the Hadoop bin directory to your PATH, to ensure Hadoop is available from the command line.

Add the following lines to the end of the $HOME/.bashrc file of the user. If you use a shell other than bash, you should of course update the appropriate configuration files instead of .bashrc.

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games"

# Set Hadoop-related environment variables
export HADOOP_HOME=/usr/local/hadoop

# Set JAVA_HOME (we will also configure JAVA_HOME directly for Hadoop later on)
export JAVA_HOME=/usr/lib/jvm/java-7-oracle

# Some convenient aliases and functions for running Hadoop-related commands
unalias fs &> /dev/null
alias fs="hadoop fs"
unalias hls &> /dev/null
alias hls="fs -ls"

# If you have LZO compression enabled in your Hadoop cluster and
# compress job outputs with LZOP (not covered in this tutorial):
# Conveniently inspect an LZOP compressed file from the command
# line; run via:
#
# $ lzohead /hdfs/path/to/lzop/compressed/file.lzo
#
# Requires installed 'lzop' command.
#
lzohead () {
    hadoop fs -cat $1 | lzop -dc | head -1000 | less
}

# Add Hadoop bin/ directory to PATH
export PATH=$PATH:$HADOOP_HOME/bin

Standalone Mode
Hadoop by default is configured to run as a single Java process, which runs in a non distributed mode. Standalone mode is usually useful in development phase since it is easy to test and debug. Also, Hadoop daemons are not started in this mode. Since Hadoop's default properties are set to standalone mode and there are no Hadoop daemons to run, there are no additional steps to carry out here.

Pseudo-Distributed Mode
This mode simulates a small scale cluster, with Hadoop daemons running on a local machine. Each Hadoop daemon is run on a separate Java process. Pseudo-Distributed Mode is a special case of Fully distributed mode.

To enable Pseudo-Distributed Mode, you should edit the following two XML files. These XML files contain multiple property elements within a single configuration element. Property elements contain name and value elements.

1. etc/hadoop/core-site.xml
2. etc/hadoop/hdfs-site.xml

Edit core-site.xml and modify the following properties. The fs.defaultFS property holds the location of the NameNode.

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

Edit hdfs-site.xml and modify the following properties. The dfs.replication property holds the number of times each HDFS block should be replicated.

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

Configuring the base HDFS directory:
The hadoop.tmp.dir property within the core-site.xml file holds the location of the base HDFS directory. Note that this property configuration doesn't depend on the mode Hadoop runs in. The default value of hadoop.tmp.dir is /tmp, and there is a risk that some Linux distributions discard the contents of the /tmp directory on each reboot, which would lead to data loss; hence, to be on the safer side, it makes sense to change the location of the base directory to a more reliable one.

Carry out following steps to change the location of the base HDFS directory.

1.Create a directory for Hadoop to store its data locally and change its permissions to be writable by any user.
$ mkdir /var/lib/hadoop
$ chmod 777 /var/lib/hadoop


2.Edit the core-site.xml and modify the following property.
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/var/lib/hadoop</value>
    </property>
</configuration>


Formatting the HDFS filesystem

We need to format the HDFS file system before starting the Hadoop cluster in Pseudo-Distributed Mode for the first time. Note that formatting the file system again later will delete all existing file system data.

Execute the following command on command line to format the HDFS file system.
$ hdfs namenode -format


Starting NameNode daemon and DataNode daemon

$ $HADOOP_HOME/sbin/start-dfs.sh


Now you can access the name node web interface at http://localhost:50070/.







Friday, May 8, 2015

Install Ubuntu Linux on Virtual Box


It is always good to have VirtualBox with our required OS installed. If you have a Windows box and want to learn Hadoop, it is good to have VirtualBox with Ubuntu to learn on.

Prerequisites :

1. Download and install Virtual box https://www.virtualbox.org/.
2. Download Ubuntu ISO from http://www.ubuntu.com/download/desktop.


Installation of VirtualBox is simple and easy. After installing VirtualBox, we will install Ubuntu Linux in a VM.



Create the VM instance for the Ubuntu OS.

Click on the "new" menu item from VM virtual box and it will pop-up the window as show below and choose the name for the VM alongwith system bit and OS Type.



Select the RAM for the system; it is always good to have more than 1GB of RAM.



Select the hard drive.









Select the disk size for the system; it is always good to have more than 15GB of disk space.



The VM is created; now we need to install Ubuntu Linux on it.

Run the created VM, or double-click on the created VM instance.




Select the downloaded Ubuntu ISO file.



After some time, the Ubuntu installation window will load.



Click on the Install button and follow the Ubuntu installation process; it will take 10-15 minutes.



After the successful installation of Ubuntu, install the Guest Additions for full-screen mode in the VM.










Tuesday, April 28, 2015

Analyzing the application code by using the sonarqube ANT/MAVEN


SonarQube™ software (previously known as "Sonar") is an open source project hosted at Codehaus. By using it we can analyze source code; it is very easy to configure and use.


1. Download and unzip the SonarQube distribution ("C:\sonarqube" or "/etc/sonarqube")

2. Start the SonarQube server: under the bin folder, run the executable file for your OS.

sonarqube/bin/[OS]

3. Browse the results at http://localhost:9000

We will use the embedded database for learning.

The sonarqube/conf/sonar.properties file holds the database configuration; by default SonarQube uses the embedded H2 database, which is written in Java.
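For reference, the database section of conf/sonar.properties looks roughly like this sketch (property names from the SonarQube 4.x/5.x era; the exact comments vary by version, so treat this as illustrative). Leaving sonar.jdbc.url commented out selects the embedded H2 database:

```properties
# conf/sonar.properties -- database section (illustrative)

# Credentials, used for any external database
#sonar.jdbc.username=sonar
#sonar.jdbc.password=sonar

#----- Embedded H2 database (default, recommended for evaluation only)
# Leave sonar.jdbc.url commented out to keep the embedded database.
#sonar.jdbc.url=jdbc:h2:tcp://localhost:9092/sonar

#----- Example: switching to MySQL instead of the embedded database
#sonar.jdbc.url=jdbc:mysql://localhost:3306/sonar?useUnicode=true&characterEncoding=utf8
```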

Application level ANT configuration :


Download the sonar-ant-task jar file: download

Copy the jar file to the lib folder.

Add the following to the existing build.xml file of the application:

<taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
    <classpath path="path/to/sonar-ant-task-*.jar" />
</taskdef>

If you don't want to modify the existing build.xml file, use a separate XML file and run "ant -f analyze-code.xml".
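A rough sketch of such a standalone file is shown below. The project key, name, source path, and jar version here are placeholder values (not from the original post); the sonar:sonar task picks up the sonar.* properties defined in the build:

```xml
<!-- analyze-code.xml: a minimal standalone Ant build for SonarQube analysis. -->
<project name="analyze-code" default="sonar" xmlns:sonar="antlib:org.sonar.ant">

    <!-- Placeholder project settings; adjust to your module layout -->
    <property name="sonar.projectKey" value="com.example:myapp"/>
    <property name="sonar.projectName" value="My App"/>
    <property name="sonar.projectVersion" value="1.0"/>
    <property name="sonar.sources" value="src"/>
    <property name="sonar.host.url" value="http://localhost:9000"/>

    <target name="sonar">
        <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
            <classpath path="lib/sonar-ant-task-2.3.jar"/>
        </taskdef>
        <!-- Runs the analysis using the sonar.* properties defined above -->
        <sonar:sonar/>
    </target>
</project>
```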



After successful execution, it will print the URL to access the result.


For a Maven application, simply run the following command:

mvn clean install sonar:sonar


sample result page :





web service client JAXWS by maven

Generate web service client stub classes by using the JAX-WS Maven plugin.

"jaxws-maven-plugin" will generate the web service stub classes by using that we can implement client or test the web service.


The generated stub classes will be stored under the src folder; using these service classes, we can communicate with the service and get the response.

For free web services for learning and client implementation, visit xmethod.com.

take any service and generate the client stub classes.


Add the WSDL URL in the pom.xml:

<wsdlUrls>
     <wsdlUrl>                            
    enter the wsdl URL here
     </wsdlUrl>
</wsdlUrls>


full sample :
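As a sketch of the full plugin configuration in pom.xml (the plugin version, WSDL URL, and package name below are illustrative placeholders, not values from the original post):

```xml
<plugin>
    <groupId>org.jvnet.jax-ws-commons</groupId>
    <artifactId>jaxws-maven-plugin</artifactId>
    <version>2.3</version>
    <executions>
        <execution>
            <goals>
                <!-- wsimport reads the WSDL and generates the client stub classes -->
                <goal>wsimport</goal>
            </goals>
        </execution>
    </executions>
    <configuration>
        <wsdlUrls>
            <wsdlUrl>http://www.example.com/service?wsdl</wsdlUrl>
        </wsdlUrls>
        <!-- keep generated sources under src so they are easy to browse -->
        <sourceDestDir>${basedir}/src/main/java</sourceDestDir>
        <packageName>com.example.client</packageName>
    </configuration>
</plugin>
```

The wsimport goal binds to the generate-sources phase, so the stubs are generated as part of a normal build.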



Tuesday, April 14, 2015

JMETER load testing by code/ JMETER API implementation sample by java code

This tutorial attempts to explain the basic design, functionality and usage of JMeter. JMeter is an excellent tool used to perform load testing on applications: using the JMeter GUI we can create test samples for requests according to our requirements and execute the samples under the load of a number of users.
As the JMeter tool is fully developed in Java, we can write Java code to do the same without using the JMeter GUI. It is not advisable to implement load testing in Java code; this is just a proof of concept for writing the samples in Java code using the JMeter libraries.
JMeter has very good documentation/APIs. After going through the JMeter source code and other reference resources, I wrote the following sample code.

Prerequisites:



Before reading the following code, we must have basic knowledge of how JMeter works.
Initially we need to load the JMeter properties, which will be used by the JMeter classes/libraries at a later stage:
//JMeter Engine
StandardJMeterEngine jmeter = new StandardJMeterEngine();
//JMeter initialization (properties, log levels, locale, etc)
JMeterUtils.setJMeterHome(jmeterHome.getPath());
JMeterUtils.loadJMeterProperties(jmeterProperties.getPath());
JMeterUtils.initLogging();// you can comment this line out to see extra log messages of i.e. DEBUG level
JMeterUtils.initLocale();

1. Create "Test Plan" Object and JOrphan HashTree

//JMeter Test Plan, basically JOrphan HashTree
HashTree testPlanTree = new HashTree();
// Test Plan
TestPlan testPlan = new TestPlan("Create JMeter Script From Java Code");
testPlan.setProperty(TestElement.TEST_CLASS, TestPlan.class.getName());
testPlan.setProperty(TestElement.GUI_CLASS, TestPlanGui.class.getName());
testPlan.setUserDefinedVariables((Arguments) new ArgumentsPanel().createTestElement());

2. Samplers : Add "Http Sample" Object

Samplers tell JMeter to send requests to a server and wait for a response. They are processed in the order they appear in the tree. Controllers can be used to modify the number of repetitions of a sampler.
// First HTTP Sampler - open uttesh.com
HTTPSamplerProxy examplecomSampler = new HTTPSamplerProxy();
examplecomSampler.setDomain("uttesh.com");
examplecomSampler.setPort(80);
examplecomSampler.setPath("/");
examplecomSampler.setMethod("GET");
examplecomSampler.setName("Open uttesh.com");
examplecomSampler.setProperty(TestElement.TEST_CLASS, HTTPSamplerProxy.class.getName());
examplecomSampler.setProperty(TestElement.GUI_CLASS, HttpTestSampleGui.class.getName());

3.Loop Controller

The Loop Controller will execute the samplers as many times as the loop iteration count is set to.
// Loop Controller
LoopController loopController = new LoopController();
loopController.setLoops(1);
loopController.setFirst(true);
loopController.setProperty(TestElement.TEST_CLASS, LoopController.class.getName());
loopController.setProperty(TestElement.GUI_CLASS, LoopControlPanel.class.getName());
loopController.initialize();

4.Thread Group

Thread group elements are the beginning points of any test plan. All controllers and samplers must be under a thread group. Other elements, e.g. Listeners, may be placed directly under the test plan, in which case they will apply to all the thread groups. As the name implies, the thread group element controls the number of threads JMeter will use to execute your test.

// Thread Group
ThreadGroup threadGroup = new ThreadGroup();
threadGroup.setName("Sample Thread Group");
threadGroup.setNumThreads(1);
threadGroup.setRampUp(1);
threadGroup.setSamplerController(loopController);
threadGroup.setProperty(TestElement.TEST_CLASS, ThreadGroup.class.getName());
threadGroup.setProperty(TestElement.GUI_CLASS, ThreadGroupGui.class.getName());

5. Add sampler, controller, etc. to the test plan

// Construct Test Plan from previously initialized elements
testPlanTree.add(testPlan);
HashTree threadGroupHashTree = testPlanTree.add(testPlan, threadGroup);
threadGroupHashTree.add(examplecomSampler);
// save generated test plan to JMeter's .jmx file format
SaveService.saveTree(testPlanTree, new FileOutputStream("report\\jmeter_api_sample.jmx"));
The above code will save the JMeter test plan that we built in code as a JMeter script.

6. Add Summary and reports

//add Summarizer output to get test progress in stdout like:
// summary =      2 in   1.3s =    1.5/s Avg:   631 Min:   290 Max:   973 Err:     0 (0.00%)
Summariser summer = null;
String summariserName = JMeterUtils.getPropDefault("summariser.name", "summary");
if (summariserName.length() > 0) {
    summer = new Summariser(summariserName);
}
// Store execution results into a .jtl file, we can save file as csv also
String reportFile = "report\\report.jtl";
String csvFile = "report\\report.csv";
ResultCollector logger = new ResultCollector(summer);
logger.setFilename(reportFile);
ResultCollector csvlogger = new ResultCollector(summer);
csvlogger.setFilename(csvFile);
testPlanTree.add(testPlanTree.getArray()[0], logger);
testPlanTree.add(testPlanTree.getArray()[0], csvlogger);

7. Finally, execute the test

// Run Test Plan
jmeter.configure(testPlanTree);
jmeter.run();

System.out.println("Test completed. See " + jmeterHome + slash + "report.jtl file for results");
System.out.println("JMeter .jmx script is available at " + jmeterHome + slash + "jmeter_api_sample.jmx");
System.exit(0);

Full source code of the POC is available on GitHub: click here
Simple source :

Generate JMX sample file by code and opened in jmeter UI.

Summary Report generated by code after test execution

Wednesday, April 8, 2015

get byte or memory size of array,list,collections in java

In Java, a lot of the time we come across scenarios in which we need to find how much memory a given list uses.

The ArrayList holds a pointer to a single Object array, which grows as the number of elements exceeds the size of the array. The ArrayList's underlying Object array grows by about 50% whenever we run out of space.

ArrayList also writes out the size of the underlying array, used to recreate an identical ArrayList to what was serialized.

Sample code to get the memory size of a collection in bytes:
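A minimal sketch of one common approach: serialize the collection and count the bytes. This measures the serialized form, which only approximates the heap footprint (the stream includes class metadata and the size field ArrayList writes out, but not object headers or padding). The class and method names here are my own:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class CollectionSize {

    // Serialize the object and return the number of bytes produced.
    public static int sizeInBytes(Serializable obj) {
        try {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bytes);
            out.writeObject(obj);
            out.close();
            return bytes.size();
        } catch (IOException e) {
            throw new RuntimeException("Failed to serialize object", e);
        }
    }

    public static void main(String[] args) {
        List<Integer> list = new ArrayList<Integer>();
        for (int i = 0; i < 1000; i++) {
            list.add(i);
        }
        System.out.println("Serialized size: " + sizeInBytes((Serializable) list) + " bytes");
    }
}
```

For a truer per-object heap size, java.lang.instrument's Instrumentation.getObjectSize() can be used instead, but it requires attaching a -javaagent.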



Monday, March 30, 2015

Generate the bar code image by itext

We can generate a barcode image by using the iText jar.

Itext jar : http://sourceforge.net/projects/itext/

Download jar : http://sourceforge.net/projects/itext/files/latest/download


sample code

import com.itextpdf.text.pdf.BarcodePDF417;
import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.ByteArrayOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import javax.imageio.ImageIO;

/**
 *
 * @author Uttesh Kumar T.H.
 */
public class GenerateBarCodeImage {
    public static void main(String[] args) throws IOException {
        BarcodePDF417 barcode = new BarcodePDF417();
        barcode.setText("Bla bla");
        java.awt.Image img = barcode.createAwtImage(Color.BLACK, Color.WHITE);
        BufferedImage outImage = new BufferedImage(img.getWidth(null), img.getHeight(null), BufferedImage.TYPE_INT_RGB);
        outImage.getGraphics().drawImage(img, 0, 0, null);
        ByteArrayOutputStream bytesOut = new ByteArrayOutputStream();
        ImageIO.write(outImage, "png", bytesOut);
        bytesOut.flush();
        byte[] pngImageData = bytesOut.toByteArray();
        FileOutputStream fos = new FileOutputStream("barcode.png");
        fos.write(pngImageData);
        fos.flush();
        fos.close();
    }
}

Sunday, March 29, 2015

Regular expression to extract the src tag details from given image tag or html source text


Using a regular expression, we can extract data from a given input; here we are trying to get the src attribute value from a given image tag or HTML text.

Image 'src' attribute extraction regex:

<img[^>]*src=[\\\"']([^\\\"^']*)


sample code:

import java.util.regex.Matcher;
import java.util.regex.Pattern;

/**
 *
 * @author Uttesh Kumar T.H.
 */
public class ImgTest {

    public static void main(String[] args) {

        String s = "<p><img src=\"38220.png\" alt=\"test\" title=\"test\" /> <img src=\"32222.png\" alt=\"test\" title=\"test\" /></p>";
        Pattern p = Pattern.compile("<img[^>]*src=[\\\"']([^\\\"^']*)");
        Matcher m = p.matcher(s);
        while (m.find()) {
            String src = m.group();
            int startIndex = src.indexOf("src=") + 5;
            String srcTag = src.substring(startIndex, src.length());
            System.out.println(srcTag);
        }
    }

}


Compare images are same by java

We can check whether two given images are the same by comparing the buffer data of the images.

1. Compare whether the image sizes are the same.
2. Compare whether the binary data of the two images is the same.

sample code :

import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;
import java.io.File;
import javax.imageio.ImageIO;

/**
 *
 * @author Uttesh Kumar T.H.
 */
public class CompareImage {

    public static boolean compareImage(File fileA, File fileB) {
        try {
            // take buffer data from both image files
            BufferedImage biA = ImageIO.read(fileA);
            DataBuffer dbA = biA.getData().getDataBuffer();
            int sizeA = dbA.getSize();
            BufferedImage biB = ImageIO.read(fileB);
            DataBuffer dbB = biB.getData().getDataBuffer();
            int sizeB = dbB.getSize();
            // compare data-buffer objects //
            if (sizeA == sizeB) {
                for (int i = 0; i < sizeA; i++) {
                    if (dbA.getElem(i) != dbB.getElem(i)) {
                        return false;
                    }
                }
                return true;
            } else {
                return false;
            }
        } catch (Exception e) {
            System.out.println("Failed to compare image files ...");
            return false;
        }
    }

    public static void main(String[] args) {
        File file1 = new File("path to image1");
        File file2 = new File("path to image2");
        System.out.println("result :" + compareImage(file1, file2));
    }
}

Friday, March 27, 2015

Reverse elements in Array

To reverse all elements in an array, the traditional way is to iterate through the array with a for loop and use swap logic with a temporary variable to exchange elements and get the result.

But java.util.Collections provides a reverse() method which reverses the order of the elements; all we have to do is convert the array to a list using Arrays.asList() and use Collections.reverse() to reverse the order of the elements.

import java.util.Arrays;
import java.util.Collections;
import java.util.List;

/**
 *
 * @author Uttesh Kumar T.H.
 */
public class ReverseArray {

    public static void main(String[] args) {
        Integer[] numbers = new Integer[]{1, 2, 3, 4, 5, 6};
        List<Integer> numberlist = Arrays.asList(numbers);
        Collections.reverse(numberlist);
        for (int i = 0; i < numberlist.size(); i++) {
            System.out.println(numberlist.get(i));
        }

        // logical way of doing , its always good to understand the logic 
        int[] _numbers = {1, 2, 3, 4, 5, 6};
        for (int i = 0; i < _numbers.length / 2; i++) {
            int temp = _numbers[i]; // swap numbers 
            _numbers[i] = _numbers[_numbers.length - 1 - i];
            _numbers[_numbers.length - 1 - i] = temp;
        }
        System.out.println("reversed array : " + Arrays.toString(_numbers));
    }
}