Saturday, 29 December 2018


  • An inner class is a class defined inside another class and acts like a member of the enclosing class.
  • There are two main types of inner classes
    • Static member class
    • Inner class
      • Member class
      • Anonymous class
      • Local class
  • Static member class
    • A static member class behaves much like an ordinary top-level class, except that it can access the static members of the outer class.
    • The outer class can access the nested class's members by referring to the nested class (e.g. InnerA.printAnimal()), while the nested class can access all static properties and methods of the outer class directly.
    • Both the nested and the outer class can access each other's private properties and methods.
    • The static nested class can be accessed as the other static members of the enclosing class without having an instance of the outer class.
    • A static nested class can contain both static and non-static members and methods.
    • InnerClassTest1.java
      • public class InnerClassTest1 { private static int out = 1; public static class innerA { private static int number = 10; static void printAnimal(String animal) { System.out.println(out); System.out.println(animal); test(); } } private static void test() { System.out.println(innerA.number); } }
    • App.java
      • public class App { public static void main(String[] args) { InnerClassTest1.innerA.printAnimal("Tiger"); } }
  • Member class
    • A member (non-static inner) class is declared inside the outer class but outside any method; it can directly access the outer class's properties, including private ones.
    • An instance of the outer class is needed to create an instance of the member class.
    • A member class can be declared public, private, protected, final or abstract.
    • InnerClassTest2.java
      • public class InnerClassTest2 { private int out = 1; public class MemberClass { private void showName() { System.out.println("Sonu"); System.out.println(out); } } public void test() { MemberClass m = new MemberClass(); m.showName(); } }
    • App.java
      • public class App { public static void main(String[] args) { InnerClassTest2 o = new InnerClassTest2(); InnerClassTest2.MemberClass m = o.new MemberClass(); } }
  • Local class
    • A local class can be used only inside the method in which it is declared.
    • Local classes cannot be public, private, protected, or static. (No access modifiers)
    • Local class can access all outer class properties and methods.
    • A local class can access local variables and parameters of the enclosing method only if they are effectively final.
    • InnerClassTest3.java
      • public class InnerClassTest3 {
            int outerNumber = 1;

            public void methodTest() {
                int methodNumber = 2;

                class MethodLocalClass {
                    private void showName() {
                        System.out.println("SONU");
                        System.out.println(outerNumber);
                        System.out.println(methodNumber); // captured, so methodNumber must stay effectively final
                        test();
                    }
                }

                // methodNumber = 2;  // compile error: reassignment would break the effectively-final requirement
                outerNumber = 3;      // works: outer class fields are not restricted

                MethodLocalClass m = new MethodLocalClass();
                m.showName();
            }

            public void test() {
                System.out.println("TEST");
            }
        }
  • Anonymous class
    • These are local classes that are declared and instantiated automatically in the middle of an expression (see the sketch after this list).
    • Also, like local classes, anonymous classes cannot be public, private, protected, or static. 
    • They are defined inside an expression (for example, as an argument to a method or constructor call) and cannot declare a constructor of their own.
    • They can either implement one interface or extend one class, but not both.
    • Anonymous class cannot define any static fields, methods, or classes, except for static final constants.
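    • A minimal sketch of an anonymous class (hypothetical Greeting interface), declared and instantiated in a single expression:

        public class AnonymousDemo {
            interface Greeting {
                void sayHello(String name);
            }

            public static void main(String[] args) {
                // The class body after "new Greeting()" is declared and instantiated in one expression
                Greeting greeting = new Greeting() {
                    @Override
                    public void sayHello(String name) {
                        System.out.println("Hello " + name);
                    }
                };
                greeting.sayHello("Sonu");
            }
        }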

Friday, 21 December 2018


  1. ACID Properties in DBMS?
    1. A transaction is a single logical unit of work which accesses and possibly modifies the contents of a database. Transactions access data using read and write operations. In order to keep a database consistent before and after a transaction, certain properties must hold; these are called the ACID properties (see the JDBC sketch at the end of this list).
    2. Atomicity
      1. By this, we mean that either the entire transaction takes place at once or doesn’t happen at all. There is no midway i.e. transactions do not occur partially. Each transaction is considered as one unit and either runs to completion or is not executed at all. 
    3. Consistency
      1. This means that integrity constraints must be maintained so that the database is consistent before and after the transaction. It refers to correctness of a database. 
    4. Isolation
      1. This property ensures that multiple transactions can occur concurrently without leading to an inconsistent database state. Transactions occur independently, without interference. Changes made by a particular transaction are not visible to any other transaction until that change has been written or committed. Concurrent execution of transactions therefore results in a state equivalent to one achieved had they been executed serially in some order.
    5. Durability
      1. This property ensures that once a transaction has completed execution, its updates and modifications are written to disk and persist even if a system failure occurs. These updates become permanent and are stored in non-volatile memory.
    6. The effects of the transaction, thus, are never lost.
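    7. A minimal JDBC sketch of atomicity and durability (hypothetical ACCOUNT table and column names): both updates either commit together or roll back together.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        public class TransferDemo {
            public static void transfer(Connection con, long from, long to, double amount) throws SQLException {
                con.setAutoCommit(false); // group both updates into one transaction
                try (PreparedStatement debit = con.prepareStatement(
                         "UPDATE ACCOUNT SET BALANCE = BALANCE - ? WHERE ID = ?");
                     PreparedStatement credit = con.prepareStatement(
                         "UPDATE ACCOUNT SET BALANCE = BALANCE + ? WHERE ID = ?")) {
                    debit.setDouble(1, amount);
                    debit.setLong(2, from);
                    debit.executeUpdate();

                    credit.setDouble(1, amount);
                    credit.setLong(2, to);
                    credit.executeUpdate();

                    con.commit();   // durability: once committed, the transfer survives a crash
                } catch (SQLException e) {
                    con.rollback(); // atomicity: no partial transfer is left behind
                    throw e;
                }
            }
        }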


    1. Filter salaries shared by more than five employees (group by salary) using Hibernate Criteria? (An HQL alternative is sketched after the example below.)
      1. Session session = getCurrentSession(); ProjectionList projectionList = Projections.projectionList(); projectionList.add(Projections.groupProperty("totalCode")) .add(Projections.groupProperty("activityCode")) .add(Projections.sum("amount")) .add(Projections.rowCount()); Criteria criteria = session.createCriteria(Payment.class); criteria.setProjection(projectionList); List payments = criteria.list(); for (Object[] payment : payments) { System.out.println("totalCode: " + payment[0]); System.out.println("activityCode: " + payment[1]); System.out.println("amountSum: " + payment[2]); System.out.println("rowCount: " + payment[3]); }
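      2. The legacy Criteria API has no direct support for a HAVING clause, so the asked restriction (count greater than five) is usually expressed in HQL instead. A minimal sketch mirroring the snippet above, assuming an Employee entity with a salary property:

          Session session = getCurrentSession();
          List<Object[]> rows = session.createQuery(
                  "select e.salary, count(e) from Employee e"
                + " group by e.salary having count(e) > 5")
                .list();
          for (Object[] row : rows) {
              System.out.println("salary: " + row[0] + ", employees: " + row[1]);
          }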
    2. What is Hibernate N+1 Problems and its Solution?
      1. The Hibernate N+1 problem typically shows up with lazily loaded associations, most commonly one-to-many relationships.
      2. Let us see this problem by example – We have Department table with a one-to-many relationship with Employee. One Department may have many Employees.
      3. We have written the Hibernate Department Entity as below.
      4. @Entity public class Department { @Id private Long id; @OneToMany private List<Employee> employees; }
      5. So now you want to print out all the details of the Employee models. A naive O/R implementation would SELECT all Departments and then do N additional SELECTs to get the Employee information for each department.
      6. -- To Get all Departments SELECT * FROM Department; -- To get each Employee, get Employee details SELECT * FROM Employee WHERE Employee.departmentId = ?
      7. As you see, the N+1 problem can happen if the first query populates the primary object and the second query populates all the child objects for each of the unique primary objects returned.
      8. Solution for Hibernate N+1 Problem
        1. Using HQL fetch join
          1. You can use the fetch while using the HQL as below example.
          1. from Department d join fetch d.employees e
          2. The Hibernate-generated SQL would be similar to:
          3. SELECT * FROM Department d LEFT OUTER JOIN Employee e ON d.id = e.department_id
        1. Using Criteria query
          1. Criteria criteria = session.createCriteria(Department.class); criteria.setFetchMode("employees", FetchMode.JOIN);
        2. In both of the above cases, the query returns a list of Department objects with their Employees initialized, and only one query needs to be run to return all the Department and Employee information required. (A batching alternative is sketched below.)
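        3. Another common mitigation, when a join fetch is not desirable, is Hibernate's @BatchSize annotation on the lazy collection: the collections of up to size departments are then initialized with a single IN query, reducing N extra selects to roughly N/size. A sketch assuming a bidirectional mapping with an Employee.department field:

            import java.util.ArrayList;
            import java.util.List;
            import javax.persistence.Entity;
            import javax.persistence.Id;
            import javax.persistence.OneToMany;
            import org.hibernate.annotations.BatchSize;

            @Entity
            public class Department {
                @Id
                private Long id;

                // Lazy collections of up to 10 departments are loaded together with one IN query
                @OneToMany(mappedBy = "department")
                @BatchSize(size = 10)
                private List<Employee> employees = new ArrayList<>();
            }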

Friday, 7 December 2018

public class FirstWordUpperCase { public static void main(String[] args) { System.out.println(changeString("how are you man")); } private static String changeString(String str) { StringBuffer sb = new StringBuffer(); String[] arr = str.split("\\s"); for (int i = 0; i < arr.length; i++) { sb.append(Character.toUpperCase(arr[i].charAt(0))). append(arr[i].substring(1)).append(" "); } return sb.toString(); } }


  1. How to disable caching on back button of the browser?
    1. <% response.setHeader("Cache-Control", "no-store"); response.setHeader("Pragma", "no-cache"); response.setHeader("Expires", "0"); /* prevents caching at the proxy server */ %>

Wednesday, 5 December 2018


  1. In this kind of association, a foreign key column is created in the owner entity. For example, if we make EmployeeEntity the owner, then an extra column "ACCOUNT_ID" will be created in the Employee table. This column will store the foreign key for the Account table.
  2. To make such an association, reference the Account entity in the EmployeeEntity class as follows:
  3. @OneToOne @JoinColumn(name="ACCOUNT_ID") private AccountEntity account;
  4. @JoinColumn
    1. The join column is declared with the @JoinColumn annotation, which looks like the @Column annotation. It is used on the owning side of the association, while mappedBy is used on the other side. It has one more parameter named referencedColumnName, which declares the column in the targeted entity that will be used for the join.
  5. mappedBy
    1. On the non-owning side, mappedBy is used. 'mappedBy' refers to the property name of the association on the owner side. AccountEntity.java:
    2. @OneToOne(mappedBy="account") private EmployeeEntity employee;


  1. One table has a foreign key column that references the primary key of the associated table. In a bidirectional relationship, navigation is possible from both sides.
  2. We are discussing an example of a Student and University relationship. Many students can enroll at one University, and one University can have many students.
  3. @Entity @Table(name = "STUDENT") public class Student { @Id @GeneratedValue @Column(name = "STUDENT_ID") private long id; @Column(name = "FIRST_NAME") private String firstName; @ManyToOne(optional = false) @JoinColumn(name = "UNIVERSITY_ID") private University university; ------------------------ ------------------------ } @Entity @Table(name = "UNIVERSITY") public class University { @Id @GeneratedValue @Column(name = "UNIVERSITY_ID") private long id; @Column(name = "NAME") private String name; @Column(name = "COUNTRY") private String country; @OneToMany(mappedBy = "university", cascade = CascadeType.ALL) private List<Student> students; }
  4. @JoinColumn says that the Student table will contain a separate column UNIVERSITY_ID, which acts as a foreign key reference to the primary key of the University table.


  1. In this scenario, any given employee can be assigned to multiple projects and a project may have multiple employees working for it, leading to a many-to-many association between the two.
  2. We have an employee table with employee_id as its primary key and a project table with project_id as its primary key. A join table employee_project is required here to connect both sides.
  3. The model classes Employee and Project need to be created with JPA annotations:
    1. @Entity @Table(name = "Employee") public class Employee { /* ... */ @ManyToMany(cascade = { CascadeType.ALL }) @JoinTable( name = "Employee_Project", joinColumns = { @JoinColumn(name = "employee_id") }, inverseJoinColumns = { @JoinColumn(name = "project_id") } ) Set<Project> projects = new HashSet<>(); /* standard constructor/getters/setters */ } @Entity @Table(name = "Project") public class Project { /* ... */ @ManyToMany(mappedBy = "projects") private Set<Employee> employees = new HashSet<>(); /* standard constructors/getters/setters */ }
  4. Both the Employee class and Project classes refer to one another, which means that the association between them is bidirectional.
  5. In order to map a many-to-many association, we use the @ManyToMany, @JoinTable and @JoinColumn annotations. 
  6. @ManyToMany
    1. The @ManyToMany annotation is used in both classes to create the many-to-many relationship between the entities.
  7. @JoinTable
    1.  In our example, the owning side is Employee so the join table is specified on the owning side by using the @JoinTable annotation in Employee class
  8. @JoinColumn
    1. The @JoinColumn annotation is used to specify the join/linking column with the main table. Here, the join column is employee_id and project_id is the inverse join column since Project is on the inverse side of the relationship
  9. mappedBy
    1. In the Project class, the mappedBy attribute is used in the @ManyToMany annotation to indicate that the employees collection is mapped by the projects collection of the owner side


Thursday, 29 November 2018


  1. Linked list nodes have a logical index - effectively the number of times you need to iterate, starting from the head, before getting to that node.
  2. A linked list cannot directly access an element by its index.
  3. Typically O(1) access by index is performed by using an array lookup, and in the case of a linked list there isn't an array - there's just a chain of nodes. To access a node with index N, you need to start at the head and walk along the chain N times... which is an O(N) operation.
  4. import java.util.LinkedList; import java.util.function.Supplier; import java.util.stream.Collectors; import java.util.stream.IntStream; public class LinkedListNth { public static void main(String[] args) { LinkedListNth app = new LinkedListNth(); LinkedList<Integer> list = app.createList(); app.nthElement(list, 2); app.secondLastElement(list); } private void nthElement(LinkedList<Integer> list, int n) { System.out.println(n + "th Element is " + list.get(n)); } private void secondLastElement(LinkedList<Integer> list) { System.out.println("SecondLastElement = " + list.get(list.size() - 2)); } private LinkedList<Integer> createList() { Supplier<LinkedList<Integer>> supplier = LinkedList::new; return IntStream.range(2, 102).boxed().collect(Collectors.toCollection(supplier)); } }
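  5. A small sketch illustrating the cost: indexed access in a loop walks from the head on every call (O(n) per get, O(n²) overall), while an iterator keeps its position (O(n) overall).

      import java.util.LinkedList;
      import java.util.List;

      public class LinkedListAccessDemo {
          public static void main(String[] args) {
              List<Integer> list = new LinkedList<>();
              for (int i = 0; i < 100_000; i++) {
                  list.add(i);
              }

              long t0 = System.nanoTime();
              long sum1 = 0;
              for (int i = 0; i < list.size(); i++) {
                  sum1 += list.get(i);   // each get(i) walks the chain from the head
              }
              long t1 = System.nanoTime();

              long sum2 = 0;
              for (int value : list) {
                  sum2 += value;         // the iterator simply follows the next pointer
              }
              long t2 = System.nanoTime();

              System.out.println("indexed: " + (t1 - t0) / 1_000_000 + " ms, iterator: " + (t2 - t1) / 1_000_000 + " ms");
          }
      }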


  1. Function Interface
    1. The java.util.function package has special functional interfaces, described below; each contains a generic method used as the target type for lambda expressions with a matching signature.
    2. An interface with exactly one abstract method is called Functional Interface.
    3. @FunctionalInterface annotation is added so that we can mark an interface as functional interface.
    4. If we try to declare more than one abstract method, the compiler reports an error.
    5. The major benefit of java 8 functional interfaces is that we can use lambda expressions to instantiate them and avoid using bulky anonymous class implementation.
    6. Java 8 has defined a lot of functional interfaces in the java.util.function package. Some of the useful functional interfaces are Runnable, Consumer, Supplier, Function and Predicate (Consumer, Supplier and Function are sketched after the Predicate examples below).
      1. Custom
        1. @FunctionalInterface public interface Square { int calculate(int number); } /* usage */ Square sq = (i) -> (i * i); int result = sq.calculate(2); System.out.println(result);
      2. Runnable
        1. Thread t = new Thread(() -> { System.out.println("Thread Running"); }); t.start();
      3. Predicate Interface
        1. The functional interface Predicate is defined in the java.util.function package. It improves manageability of code, helps in unit-testing predicates separately, and contains methods such as:
          1. isEqual(Object targetRef) : Returns a predicate that tests if two arguments are equal according to Objects.equals(Object, Object).
          2. and(Predicate other) : Returns a composed predicate that represents a short-circuiting logical AND of this predicate and another.
          3. negate() : Returns a predicate that represents the logical negation of this predicate.
          4. or(Predicate other) : Returns a composed predicate that represents a short-circuiting logical OR of this predicate and another.
          5. test(T t) : Evaluates this predicate on the given argument. Signature: boolean test(T t)
        2. Examples:

          // Simple predicate
          Predicate<Integer> greaterThanTen = (i) -> (i > 10);
          System.out.println(greaterThanTen.test(11));                          // true

          // Predicate chaining
          Predicate<Integer> lessThanTen = (i) -> (i < 10);
          Predicate<Integer> greaterThanFive = (i) -> (i > 5);
          boolean result = lessThanTen.and(greaterThanFive).test(6);            // true: 6 < 10 && 6 > 5
          System.out.println(result);

          // Negation
          boolean result2 = lessThanTen.and(greaterThanFive).negate().test(6);  // false
          System.out.println(result2);

          Predicate<Integer> equalsTwo = (i) -> (i == 2);
          boolean result3 = equalsTwo.test(2);                                  // true
          System.out.println(result3);
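      4. Consumer, Supplier and Function
        1. A minimal sketch (assumed values) of the remaining interfaces mentioned above: Supplier produces a value, Consumer accepts one, and Function maps one value to another.

          import java.util.function.Consumer;
          import java.util.function.Function;
          import java.util.function.Supplier;

          public class FunctionalInterfaceDemo {
              public static void main(String[] args) {
                  Supplier<String> supplier = () -> "Sonu";                           // takes no input, supplies a value
                  Consumer<String> consumer = s -> System.out.println("Hello " + s);  // consumes a value, returns nothing
                  Function<String, Integer> length = s -> s.length();                 // maps a String to its length

                  consumer.accept(supplier.get());                  // Hello Sonu
                  System.out.println(length.apply(supplier.get())); // 4
              }
          }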
  2. Static methods and Default methods
    1. Default methods
      1. Java 8 interface changes include static methods and default methods in interfaces. Prior to Java 8, we could have only method declarations in the interfaces. But from Java 8, we can have default methods and static methods in the interfaces.
      2. For creating a default method in java interface, we need to use “default” keyword with the method signature. For example,
      3. public interface Interface2 { void method2(); default void log(String str){ System.out.println("I2 logging::"+str); } }
    2. Static methods 
      1. Java interface static method is similar to default method except that we can’t override them in the implementation classes.
      2. public interface MyData { default void print(String str) { if (!isNull(str)) System.out.println("MyData Print::" + str); } static boolean isNull(String str) { System.out.println("Interface Null Check"); return str == null ? true : "".equals(str) ? true : false; } }
      3. Java interface static methods are good for providing utility methods, for example null check, collection sorting etc.
  3. Abstract Class Needs
      1. An abstract class can define constructors. Abstract classes are more structured and can have state associated with them.
      2. The constraint on a default method is that it can be implemented only in terms of calls to other interface methods, with no reference to a particular implementation's state. So the main use case is higher-level and convenience methods (see the sketch below).
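      3. A minimal sketch (hypothetical Counter and Describable types) contrasting the two: the abstract class holds state and a constructor, while the default method is written purely in terms of the interface's own abstract method.

          // Abstract class: can hold state and define a constructor
          abstract class Counter {
              private int count;

              protected Counter(int start) {
                  this.count = start;
              }

              public int increment() {
                  return ++count;
              }

              public abstract String name();
          }

          // Interface: the default method can only call other interface methods
          interface Describable {
              String name();

              default String describe() {
                  return "This is " + name();
              }
          }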

Wednesday, 28 November 2018

  1. Angular CLI Useful Commands
    1. ng g component my-new-component
    2. ng g directive my-new-directive
    3. ng g pipe my-new-pipe
    4. ng g service my-new-service
    5. ng g class my-new-class
    6. ng g guard my-new-guard
    7. ng g interface my-new-interface
    8. ng g enum my-new-enum
    9. ng g module my-module
  2. Create components. The template (.html), component class (.ts) and stylesheet (.css) files will be generated for each component, and each component will be added to the declarations part of the @NgModule decorator in app.module.ts.
    1. ng g component login ng g component add-user ng g component edit-user ng g component list-user
    2. Following is our routing configuration. We have configured LoginComponent as the default component. Also, do not forget to include this routing module in the main module - app.module.ts.
    3. app-routing.module.ts
    4. import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; import { LoginComponent } from "./login/login.component"; import { AddUserComponent } from "./add-user/add-user.component"; import { EditUserComponent } from "./edit-user/edit-user.component"; import { ListUserComponent } from "./list-user/list-user.component"; const routes: Routes = [ { path: 'login', component: LoginComponent }, { path: 'add-user', component: AddUserComponent }, { path: 'edit-user', component: EditUserComponent }, { path: 'list-user', component: ListUserComponent }, { path: '', component: LoginComponent } ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }
  3. Create model user.model.ts
    1. ng g class model/User
    2.  user.model.ts
    3. export class User { id: number; firstName: string; lastName: string; email: string; }
  4. Create Service service/user.service.ts
    1. ng g service service/UserService
    2. We can make web API calls using the HttpClient package in the service class, with the help of the User model.
    3. user.service.ts 
    4. import { Injectable } from '@angular/core'; import { HttpClient } from '@angular/common/http'; import { User } from '../model/user'; @Injectable({ providedIn: 'root' }) export class UserServiceService { constructor(private http: HttpClient) { } baseUrl: string = 'http://localhost:8080/user-portal/users'; getUsers() { return this.http.get(this.baseUrl); } getUserById(id: number) { return this.http.get(this.baseUrl + '/' + id); } createUser(user: User) { return this.http.post(this.baseUrl, user); } updateUser(user: User) { return this.http.put(this.baseUrl + '/' + user.id, user); } deleteUser(id: number) { return this.http.delete(this.baseUrl + '/' + id); } }
  5. Edit Html and component class to make the application
    1. add-user.component.html
  6. add-user.component.ts
    1. import { Component, OnInit } from '@angular/core'; import {FormBuilder, FormGroup, Validators} from "@angular/forms"; import {UserService} from "../service/user.service"; import {first} from "rxjs/operators"; import {Router} from "@angular/router"; @Component({ selector: 'app-add-user', templateUrl: './add-user.component.html', styleUrls: ['./add-user.component.css'] }) export class AddUserComponent implements OnInit { constructor(private formBuilder: FormBuilder,private router: Router, private userService: UserService) { } addForm: FormGroup; ngOnInit() { this.addForm = this.formBuilder.group({ id: [], email: ['', Validators.required], firstName: ['', Validators.required], lastName: ['', Validators.required] }); } onSubmit() { this.userService.createUser(this.addForm.value) .subscribe( data => { this.router.navigate(['list-user']); }); } }
  7. Adding Materials
    1. With the release of Angular 6, we can directly run the ng add @angular/material command to add material design capabilities to an existing Angular application. Executing this command installs Angular Material and the corresponding theming into the project.


  • To generate the new employee components, go to the app root in the CLI and run the commands below.
    • ng g c employee/create-employee --spec=false --flat=true ng g c employee/list-employees --spec=false --flat=true
  • component.ts, html and css files will be generated in the employee folder under app, and the Create Employee and List Employees components will be added to app.module.ts as below.
    • app.module.ts
    • import { BrowserModule } from '@angular/platform-browser'; import { NgModule } from '@angular/core'; import { AppRoutingModule } from './app-routing.module'; import { AppComponent } from './app.component'; import { CreateEmployeeComponent } from './employee/create-employee.component'; import { ListEmployeesComponent } from './employee/list-employees.component'; @NgModule({ declarations: [ AppComponent, CreateEmployeeComponent, ListEmployeesComponent ], imports: [ BrowserModule, AppRoutingModule ], providers: [], bootstrap: [AppComponent] }) export class AppModule { }
  • Generate a routing module for the app. It will create the app-routing.module.ts file and add the routing module to app.module.ts.
    • ng g m app-routing --flat=true --module=app
  • Inside app-routing.module.ts, add  RouterModule object and call forRoot Method as below.
    • import { NgModule } from '@angular/core'; import { Routes, RouterModule } from '@angular/router'; import { ListEmployeesComponent } from './employee/list-employees.component'; import { CreateEmployeeComponent } from './employee/create-employee.component'; const routes: Routes = [ {path: 'list', component: ListEmployeesComponent}, {path: 'create', component: CreateEmployeeComponent} ]; @NgModule({ imports: [RouterModule.forRoot(routes)], exports: [RouterModule] }) export class AppRoutingModule { }
  • Go to app.component.html and add new routes using routerLink tag

Tuesday, 27 November 2018


  1. Define AngularJS?
    1. AngularJS is an open-source JavaScript framework designed for creating dynamic single web page applications with fewer lines of code. 
  2. What is Angular 2?
    1. Angular is a framework to build large scale and high performance web application while keeping them as easy-to-maintain.
    2. Components − The earlier version of Angular had a focus of Controllers but now has changed the focus to having components over controllers. Components help to build the applications into many modules. This helps in better maintaining the application over a period of time.
    3. TypeScript − The newer version of Angular is based on TypeScript. This is a superset of JavaScript and is maintained by Microsoft.
    4. Services − Services are a set of code that can be shared by different components of an application. So for example if you had a data component that picked data from a database, you could have it as a shared service that could be used across multiple applications.
  3. What are the key components of Angular 2?
    1. Modules − This is used to break up the application into logical pieces of code. Each piece of code or module is designed to perform a single task.
    2. Component − This can be used to bring the modules together.
    3. Templates − This is used to define the views of an Angular JS application.
    4. Metadata − This can be used to add more data to an Angular JS class.
    5. Service − This is used to create components which can be shared across the entire application.
  4. Explain Modules in Angular 2.
    1. Modules are used in Angular JS to put logical boundaries in your application. Hence, instead of coding everything into one application, you can instead build everything into separate modules to separate the functionality of your application. A module is made up of the following parts −
      1. Bootstrap array − This is used to tell Angular JS which components need to be loaded so that its functionality can be accessed in the application. Once you include the component in the bootstrap array, you need to declare them so that they can be used across other components in the Angular JS application.
      2. Export array − This is used to export components, directives, and pipes which can then be used in other modules.
      3. Import array − Just like the export array, the import array can be used to import the functionality from other Angular JS modules.
  5. Explain Components in Angular 2.
    1. Each application consists of Components. Each component is a logical boundary of functionality for the application. You need to have layered services, which are used to share the functionality across components.Following is the anatomy of a Component. A component consists of −
      1. Class − This is like a C or Java class which consists of properties and methods.
      2. Metadata − This is used to decorate the class and extend the functionality of the class.
      3. Template − This is used to define the HTML view which is displayed in the application.
  6. What are Angular 2 directives? Explain with examples?
    1. A directive is a custom HTML element that is used to extend the power of HTML. Angular 2 has the following directives that get called as part of the BrowserModule module.
      1. ngIf
        1. The ngif element is used to add elements to the HTML code if it evaluates to true, else it will not add the elements to the HTML code.
        2. *ngIf = 'expression'
      2. ngFor
        1. The ngFor directive is used to repeat an element for each item in a list, like a for loop.
        2. *ngFor = 'let variable of variablelist'
  7. How will you handle errors in Angular 2 applications?
    1. Angular 2 applications have the option of error handling. This is done by including the RxJS catch library and then using the catch function.
    2. The catch function contains a link to the Error Handler function.
    3. In the error handler function, we send the error to the console. We also throw the error back to the main program so that the execution can continue.
    4. Now, whenever you get an error it will be redirected to the error console of the browser.
  8. What is routing?
    1. Routing helps in directing users to different pages based on the option they choose on the main page. Hence, based on the option they choose, the required Angular Component will be rendered to the user.
    2. product.component.ts
    3. import { Component } from '@angular/core'; @Component ({ selector: 'my-app', template: 'Products', }) export class Appproduct { }
    4. app.module.ts
    5. import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; import { Appproduct } from './product.component'; import { AppInventory } from './Inventory.component'; import { RouterModule, Routes } from '@angular/router'; const appRoutes: Routes = [ { path: 'Product', component: Appproduct }, { path: 'Inventory', component: AppInventory }, ]; @NgModule ({ imports: [ BrowserModule, RouterModule.forRoot(appRoutes)], declarations: [ AppComponent,Appproduct,AppInventory], bootstrap: [ AppComponent ] }) export class AppModule { }
    6. app.component.ts
    7. [routerLink] = "['/Product']
  9. What is Dependency Injection? Explain with example?
    1. Dependency injection is the ability to add the functionality of components at runtime. Lets take a look at an example and the steps used to implement dependency injection.
    2. Create a separate class which has the injectable decorator. The injectable decorator allows the functionality of this class to be injected and used in any Angular JS module.
      1. @Injectable() export class classname { }
    3.  Next in your appComponent module or the module in which you want to use the service, you need to define it as a provider in the @Component decorator.
      1. @Component ({ providers : [classname] })
  10. Explain tsconfig.json file?
    1. This file is used to give the options about TypeScript used for the Angular JS project.
    2. { "compilerOptions": { "target": "es5", "module": "commonjs", "moduleResolution": "node", "sourceMap": true, "emitDecoratorMetadata": true, "experimentalDecorators": true, "lib": [ "es2015", "dom" ], "noImplicitAny": true, "suppressImplicitAnyIndexErrors": true } }
      1. The target for the compilation is es5 and that is because most browsers can only understand ES5 typescript.
      2. The sourceMap option is used to generate Map files, which are useful when debugging. Hence, during development it is good to keep this option as true.
      3. The "emitDecoratorMetadata": true and "experimentalDecorators": true is required for Angular JS decorators. If not in place, Angular JS application will not compile.
  11.  package.json file?
      1. This file contains information about the Angular 2 project. Following are the typical settings in the file.
      2. { "name": "angular-quickstart", "version": "1.0.0", "description": "QuickStart package.json from the documentation, supplemented with testing support", "scripts": { "build": "tsc -p src/", "build:watch": "tsc -p src/ -w", "build:e2e": "tsc -p e2e/", "serve": "lite-server -c=bs-config.json", "serve:e2e": "lite-server -c=bs-config.e2e.json", "prestart": "npm run build", "start": "concurrently \"npm run build:watch\" \"npm run serve\"", "pree2e": "npm run build:e2e", "e2e": "concurrently \"npm run serve:e2e\" \"npm run protractor\" --kill-others --success first", "preprotractor": "webdriver-manager update", "protractor": "protractor protractor.config.js", "pretest": "npm run build", "test": "concurrently \"npm run build:watch\" \"karma start karma.conf.js\"", "pretest:once": "npm run build", "test:once": "karma start karma.conf.js --single-run", "lint": "tslint ./src/**/*.ts -t verbose" }, "keywords": [], "author": "", "license": "MIT", "dependencies": { ... }, "devDependencies": { ... } }
      3. There are two types of dependencies, first is the dependencies and then there are dev dependencies. The dev ones are required during the development process and the others are needed to run the application.
      4. The "build:watch": "tsc -p src/ -w" command is used to compile the typescript in the background by looking for changes in the typescript files.
  12. Explain the systemjs.config.js file.
    1. This file contains the SystemJS configuration required for the Angular application. It loads all the necessary script files without the need to add a script tag to the HTML pages. A typical file will have the following code.
      1. /** * System configuration for Angular samples * Adjust as necessary for your application needs. */ (function (global) { System.config({ paths: { /* paths serve as alias */ 'npm:': 'node_modules/' }, /* map tells the System loader where to look for things */ map: { /* our app is within the app folder */ app: 'app', /* angular bundles */ '@angular/core': 'npm:@angular/core/bundles/core.umd.js', '@angular/common': 'npm:@angular/common/bundles/common.umd.js', '@angular/compiler': 'npm:@angular/compiler/bundles/compiler.umd.js', '@angular/platform-browser': 'npm:@angular/platform-browser/bundles/platform-browser.umd.js', '@angular/platform-browser-dynamic': 'npm:@angular/platform-browser-dynamic/bundles/platform-browser-dynamic.umd.js', '@angular/http': 'npm:@angular/http/bundles/http.umd.js', '@angular/router': 'npm:@angular/router/bundles/router.umd.js', '@angular/forms': 'npm:@angular/forms/bundles/forms.umd.js', /* other libraries */ 'rxjs': 'npm:rxjs', 'angular-in-memory-web-api': 'npm:angular-in-memory-web-api/bundles/in-memory-web-api.umd.js' }, /* packages tells the System loader how to load when no filename and/or no extension */ packages: { app: { defaultExtension: 'js' }, rxjs: { defaultExtension: 'js' } } }); })(this);
      2. 'npm:': 'node_modules/' tells the location in our project where all the npm modules are located.
      3. The mapping of app: 'app' tells the folder where all our applications files are loaded.
  13. app.module.ts file.
    1. import { NgModule } from '@angular/core'; import { BrowserModule } from '@angular/platform-browser'; import { AppComponent } from './app.component'; @NgModule({ imports: [ BrowserModule ], declarations: [ AppComponent ], bootstrap: [ AppComponent ] }) export class AppModule { }
    2. The import statement is used to import functionality from the existing modules. Thus, the first 3 statements are used to import the NgModule, BrowserModule and AppComponent modules into this module.
    3. The NgModule decorator is used to later on define the imports, declarations, and bootstrapping options.
    4. The BrowserModule is required by default for any web based angular application.
    5. The bootstrap option tells Angular which Component to bootstrap in the application.
  14. Angular Life Cycle
    1. ngOnChanges − When the value of a data bound property changes, then this method is called.
    2. ngOnInit − This is called once, after Angular first displays the data-bound properties, to initialize the directive/component.
    3. ngDoCheck − This is for the detection and to act on changes that Angular can't or won't detect on its own.
    4. ngAfterContentInit − This is called in response after Angular projects external content into the component's view.
    5. ngAfterContentChecked − This is called in response after Angular checks the content projected into the component.
    6. ngAfterViewInit − This is called in response after Angular initializes the component's views and child views.
    7. ngAfterViewChecked − This is called in response after Angular checks the component's views and child views.
    8. ngOnDestroy − This is the cleanup phase just before Angular destroys the directive/component.

Friday, 23 November 2018

  • Need to remove common number 4 from both arrays
  • Input
    • int[] arr1 = { 1, 4, 6, 7, 8 };
    • int[] arr2 = { 2, 4, 5, 9, 0 };
  • Output
    • int[] arr1 = { 1, 6, 7, 8 };
    • int[] arr2 = { 2, 5, 9, 0 };
  • RemoveCommonElements.java
  • import java.util.Arrays; public class RemoveCommonElements { public static void main(String[] args) { RemoveCommonElements app = new RemoveCommonElements(); app.commonRemove(); } private void commonRemove() { int[] arr1 = { 1, 4, 6, 7, 8 }; int[] arr2 = { 2, 4, 5, 9, 0 }; for (int e : arr1) { if (contains(arr2, e)) { remove(arr1, e); remove(arr2, e); } } System.out.println(Arrays.toString(arr1)); System.out.println(Arrays.toString(arr2)); } private boolean contains(int[] arr, int e) { for (int i : arr) { if (i == e) { return true; } } return false; } private int[] remove(int[] arr, int e) { for (int i = 0; i < arr.length; i++) { if (arr[i] == e) { for (int j = i; j < arr.length - 1; j++) { arr[j] = arr[j + 1]; } } } return arr; } } ######### RESULT ########## [1, 6, 7, 8, 8] [2, 5, 9, 0, 0] (note: shifting inside a fixed-size array leaves the last element duplicated)
  • In the example below the common elements are removed using List operations: build the union, compute the intersection with retainAll(), and remove the intersection from the union. (If the lists are modified concurrently, a CopyOnWriteArrayList can be used in place of ArrayList to prevent ConcurrentModificationException.)
  • /** * Get UNION list * Get INTERSECTION list by retainAll (common elements) * Remove intersection from union */ private static void removeCommonElements() { List<Integer> list1 = Arrays.asList(1, 2, 3, 4, 5, 6); List<Integer> list2 = Arrays.asList(10, 2, 3, 40, 50, 60); List<Integer> union = new ArrayList<>(list1); union.addAll(list2); List<Integer> intersection = new ArrayList<>(list1); /* only common elements */ intersection.retainAll(list2); union.removeAll(intersection); System.out.println(union); }


  1. When Parent throws Generic Exception and Child throws specific exception
      1. Compiles and runs successfully
    2. public class Animal { public void eat() throws Exception { System.out.println("Animal Eating..."); } } public class Cat extends Animal { @Override public void eat() throws FileNotFoundException { System.out.println("Cat Eating ....."); } } public class App { public static void main(String[] args) throws Exception { Animal cat = new Cat(); cat.eat(); } } ######### RESULT ########## Cat Eating .....
  2. When the parent throws a specific exception and the child throws a generic exception (see the sketch after this list)
    1. Compile time error
  3. When Parent modifier is protected and child modifier is public
      1. Compiles and runs successfully
  4. When Parent modifier is private and child modifier is public
    1. Compile time error
  5. When Parent modifier is public and child modifier is private
    1. Compile time error
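  6. A minimal sketch (hypothetical Parent/Child classes) of the rules above: an override may widen access and narrow or drop checked exceptions, while the commented-out variants would not compile.

      import java.io.FileNotFoundException;

      class Parent {
          protected void load() throws FileNotFoundException { }
      }

      class Child extends Parent {
          @Override
          public void load() { }   // OK: wider access (protected -> public), no checked exception

          // The following variants would NOT compile:
          // public void load() throws Exception { }   // broader checked exception than the parent declares
          // private void load() { }                   // reduced visibility (protected -> private)
      }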

Thursday, 22 November 2018


  • The Java Object class comes with a native clone() method that returns a copy of the existing instance.
  • To use java object clone method, we have to implement the marker interface java.lang.Cloneable so that it won’t throw CloneNotSupportedException at runtime.
  • Also, Object.clone() is a protected method, so we have to override it to use it from other classes.
  • Student.java
    • public class Student implements Cloneable { private int id; private String name; public int getId() { return id; } public void setId(int id) { this.id = id; } public String getName() { return name; } public void setName(String name) { this.name = name; } @Override protected Object clone() throws CloneNotSupportedException { return super.clone(); } }
  • App.java
    • public class App { public static void main(String[] args) throws CloneNotSupportedException { Student student1 = new Student(); student1.setId(1); student1.setName("Sonu"); Student student2 = (Student) student1.clone(); System.out.println(student2.getName()); if (student1.equals(student2)) { System.out.println("EQUAL"); } } }
  • student1 == student2 : false
    • So student1 and student2 are two different objects, not references to the same object. This is in agreement with the Java clone contract.
  • student1.getName() == student2.getName() : true
    • Both name fields refer to the same String object, because Object.clone() makes a shallow copy.
  • Shallow copy
    • Shallow copy is a bit-wise copy of an object. A new object is created that has an exact copy of the values in the original object. If any of the fields of the object are references to other objects, just the reference addresses are copied i.e., only the memory address is copied.
  • Deep copy
    • A deep copy copies all fields, and makes copies of dynamically allocated memory pointed to by the fields. A deep copy occurs when an object is copied along with the objects to which it refers.
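  • A minimal deep-copy sketch (hypothetical Employee/Address types): clone() also clones the referenced object, so the copy does not share mutable state with the original.

      // The referenced type must also support cloning for a deep copy
      class Address implements Cloneable {
          String city;

          @Override
          protected Address clone() throws CloneNotSupportedException {
              return (Address) super.clone();
          }
      }

      class Employee implements Cloneable {
          String name;
          Address address;

          @Override
          protected Employee clone() throws CloneNotSupportedException {
              Employee copy = (Employee) super.clone(); // shallow copy of primitives and references
              copy.address = this.address.clone();      // deep copy: replace the shared Address reference
              return copy;
          }
      }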

Thursday, 15 November 2018

Structural design patterns are concerned with how classes and objects can be composed to form larger structures. Structural design patterns simplify the structure by identifying the relationships. These patterns focus on how the classes inherit from each other and how they are composed from other classes.

  • Adapter Pattern 
    • The Adapter Pattern "converts the interface of a class into another interface that a client wants" (see the sketch below).
    • In other words, it provides the interface the client requires while reusing the services of a class with a different interface.
    • The Adapter Pattern is also known as Wrapper.
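    • A minimal sketch (hypothetical MediaPlayer / LegacyAudioLibrary names): the client expects MediaPlayer, the existing class exposes a different interface, and the adapter wraps it.

        // Target interface the client expects
        interface MediaPlayer {
            void play(String fileName);
        }

        // Existing class with an incompatible interface
        class LegacyAudioLibrary {
            void openFile(String path) { System.out.println("Opening " + path); }
            void startPlayback() { System.out.println("Playing"); }
        }

        // Adapter ("wrapper"): converts MediaPlayer calls into calls on the legacy class
        class AudioAdapter implements MediaPlayer {
            private final LegacyAudioLibrary library = new LegacyAudioLibrary();

            @Override
            public void play(String fileName) {
                library.openFile(fileName);
                library.startPlayback();
            }
        }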

Wednesday, 24 October 2018

  • Declarative Transaction Management

    • Declarative transaction management is the most common Spring implementation as it has the least impact on application code.
    • The XML declarative approach configures the transaction attributes in a Spring bean configuration file. Declarative transaction management in Spring has the advantage of being less invasive. 
    • There is no need to change application code when using declarative transactions; all you have to do is modify the application context.
  • Methods
    •  XML based
    • Annotations based
  • Steps
    • Step 1: Define a transaction manager in spring.xml
      • <bean class="org.springframework.jdbc.datasource.DataSourceTransactionManager" id="txManager"/>
    • Step 2: Turn on support for transaction annotations in spring.xml
      • <tx:annotation-driven transaction-manager="txManager"/>
    • Step 3: Add the @Transactional annotation to the createEmployee method
      • @Transactional public int createEmployee(Employee employee) { .. }

  • Annotation attributes
    • Transaction readOnly
      • If you don't explicitly set the readOnly attribute to true, you will have read/write transactions. It's always better to specify the readOnly attribute explicitly, as it can bring significant performance improvements with Hibernate.
    • Transaction propagation
      • Transaction propagation is REQUIRED by default, which means that the same transaction will propagate from a transactional caller to transactional callee. It will create a new transaction or reuse the one if available. For example, if a read-only transaction calls a read-write transaction method, the whole transaction will be read-only.
      • Depending on the transaction propagation attribute (like for REQUIRES_NEW), sometimes the existing transaction is suspended/paused at some point, a new one is always started and eventually committed, and after that the first transaction is resumed.
    • Isolation Level
      • Isolation level defines a contract between transactions.
        • Read Uncommitted – Allows dirty reads, when a transaction is not yet committed by a thread and another thread is reading the dirty data.
        • Read Committed – Does not allow dirty reads. Only lets a transaction read values which have already been committed by other transactions running in other threads.
        • Repeatable Read – If the same data is read twice in the same transaction, it will always be the same. This level guarantees that any data once read cannot change.
        • Serializable – Transactions occur with locking at all levels (read, range and write locking), because of which they are executed in a fixed sequence. It doesn’t allow concurrent transactions and leads to a performance hit.
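    • A usage sketch combining the attributes above (assumed EmployeeService and Employee types):

        import java.util.List;
        import org.springframework.transaction.annotation.Isolation;
        import org.springframework.transaction.annotation.Propagation;
        import org.springframework.transaction.annotation.Transactional;

        public class EmployeeService {

            // Read-only transaction for pure queries
            @Transactional(readOnly = true)
            public List<Employee> findEmployees() {
                // ... query code ...
                return List.of();
            }

            // Suspends any caller transaction and starts a new one with READ_COMMITTED isolation
            @Transactional(propagation = Propagation.REQUIRES_NEW, isolation = Isolation.READ_COMMITTED)
            public int createEmployee(Employee employee) {
                // ... persistence code ...
                return 1;
            }
        }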

  •  Programmatic Transaction Management
    • Spring provides support for programmatic transaction management by using the TransactionTemplate or a PlatformTransactionManager implementation directly (see the sketch below).
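    • A minimal TransactionTemplate sketch (assumed EmployeeDao and Employee types): execute() starts the transaction, commits on normal return and rolls back when a runtime exception is thrown from the callback.

        import org.springframework.transaction.PlatformTransactionManager;
        import org.springframework.transaction.support.TransactionTemplate;

        public class EmployeeDao {

            private final TransactionTemplate transactionTemplate;

            public EmployeeDao(PlatformTransactionManager txManager) {
                this.transactionTemplate = new TransactionTemplate(txManager);
            }

            public int createEmployee(Employee employee) {
                return transactionTemplate.execute(status -> {
                    // ... JDBC/ORM work runs inside the transaction here ...
                    return 1;
                });
            }
        }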



  • An object is a chunk of memory, and a reference to the object is the way to reach that object in memory.
  • An object reference variable contains the address of the object, which is allocated on the heap.
  • class Box { double height; double width; double depth; } Box b1; /* declare a reference; it is null until assigned */ b1 = new Box(); /* create an instance and assign it to reference variable b1 */ Box b2 = b1; /* only creates another reference to the same object; no new object memory is allocated */
  • Now I  am going to set some property in b1
  • b1.height = 10; b1.width = 20; b1.depth = 30;
  • Then I printed b2 properties
  • System.out.println(b2.height); System.out.println(b2.width); System.out.println(b2.depth); //output 10 20 30
  • I get the same values through b2 because both references use the same object memory; they are just different references to it.


  • This algorithm uses the idea of divide and conquer.
  • Find an element, called the pivot, which divides the array into two halves:
    • Left-side elements should be smaller than the pivot
    • Right-side elements are greater than the pivot
  • Steps
    • Bring the pivot to its appropriate position such that left of the pivot is smaller and right is greater.
    • Quick sort left part
    • Quick sort right part
  • There are many different versions of quickSort that pick pivot in different ways.
    • Always pick first element as pivot.
    • Always pick last element as pivot (implemented below)
    • Pick a random element as pivot.
    • Pick median as pivot
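  • A minimal sketch of the version referred to above (last element as pivot, Lomuto partition):

      import java.util.Arrays;

      public class QuickSort {

          public static void main(String[] args) {
              int[] arr = { 10, 7, 8, 9, 1, 5 };
              quickSort(arr, 0, arr.length - 1);
              System.out.println(Arrays.toString(arr)); // [1, 5, 7, 8, 9, 10]
          }

          private static void quickSort(int[] arr, int low, int high) {
              if (low < high) {
                  int p = partition(arr, low, high); // pivot lands at its final position
                  quickSort(arr, low, p - 1);        // quick sort the left part
                  quickSort(arr, p + 1, high);       // quick sort the right part
              }
          }

          // Lomuto partition: the last element is the pivot; smaller elements are moved to its left
          private static int partition(int[] arr, int low, int high) {
              int pivot = arr[high];
              int i = low - 1;
              for (int j = low; j < high; j++) {
                  if (arr[j] < pivot) {
                      i++;
                      swap(arr, i, j);
                  }
              }
              swap(arr, i + 1, high);
              return i + 1;
          }

          private static void swap(int[] arr, int i, int j) {
              int tmp = arr[i];
              arr[i] = arr[j];
              arr[j] = tmp;
          }
      }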

  • Reference:

Monday, 22 October 2018


  • Create a shared Semaphore object and acquire/release it in the consumer and producer blocks, limiting the number of threads that can access the shared list to 1.
  • Output will be like produce 1 and consume 1 , produce 2 and consume 2 ...
  • Producer.java
    • public class Producer implements Runnable { private static List LIST; private static Semaphore semaphore; public Producer(List LISTv, Semaphore semaphoreV) { LIST = LISTv; semaphore = semaphoreV; } public void run() { produce(); } private static void produce() { try { int i = 1; while (true) { semaphore.acquire(); LIST.add(i); System.out.println(i + " Produced"); i++; semaphore.release(); if (i > 100) { break; } } } catch (Exception e) { e.printStackTrace(); } } }
  • Consumer.java
    • public class Consumer implements Runnable { private static List LIST; private static Semaphore semaphore; public Consumer(List LISTv, Semaphore semaphoreV) { LIST = LISTv; semaphore = semaphoreV; } public void run() { consume(); } private static void consume() { int index = 0; try { while (true) { semaphore.acquire(); index = LIST.size() - 1; System.out.println(LIST.get(index) + " Removed"); LIST.remove(index); semaphore.release(); } } catch (Exception e) { e.printStackTrace(); } } }
  • App.java
    • public class App { private static List LIST = new ArrayList(); private static Semaphore SEMAPHORE = new Semaphore(1, true); public static void main(String[] args) { Thread producer = new Thread(new Producer(LIST, SEMAPHORE)); Thread consumer = new Thread(new Consumer(LIST, SEMAPHORE)); producer.start(); consumer.start(); } }
  • Output
    • 1 Produced 1 Removed 2 Produced 2 Removed 3 Produced 3 Removed 4 Produced 4 Removed 5 Produced 5 Removed 6 Produced 6 Removed 7 Produced 7 Removed 8 Produced 8 Removed 9 Produced 9 Removed 10 Produced 10 Removed


  • BlockingQueue greatly simplifies the implementation of the Producer-Consumer design pattern by providing out-of-the-box blocking support in put() and take().
  • No need of manual empty or full check, Blocking Queue handle it internally.
  • Only put and take operation required
  • The output shows items being produced and then consumed in FIFO order.
  • Producer.java
    • public class Producer implements Runnable { private static BlockingQueue QUEUE; public Producer(BlockingQueue QUEUE_V) { QUEUE = QUEUE_V; } public void run() { try { int i = 1; while (true) { QUEUE.put(i); System.out.println(i + " Produced"); i++; } } catch (Exception e) { e.printStackTrace(); } } }

  • Consumer.java

    • public class Consumer implements Runnable { private static BlockingQueue QUEUE; public Consumer(BlockingQueue QUEUE_V) { QUEUE = QUEUE_V; } public void run() { consumer(); } private static void consumer() { int item; try { Thread.sleep(1000); while (true) { item = QUEUE.take(); System.out.println(item + " Consumed"); } } catch (Exception e) { e.printStackTrace(); } } }

  • App.java

    • private static BlockingQueue QUEUE = new ArrayBlockingQueue(10); public static void main(String[] args) { Thread producer = new Thread(new Producer(QUEUE)); Thread consumer = new Thread(new Consumer(QUEUE)); producer.start(); consumer.start(); }

  • Output

    • 1 Produced 2 Produced 3 Produced 4 Produced 5 Produced 6 Produced 7 Produced 8 Produced 9 Produced 10 Produced 1 Consumed 11 Produced 2 Consumed 3 Consumed 4 Consumed 5 Consumed 12 Produced 6 Consumed 13 Produced 7 Consumed 14 Produced 8 Consumed 15 Produced 9 Consumed 16 Produced 10 Consumed 17 Produced 11 Consumed 18 Produced


    • If the list is full, the PRODUCER thread waits until the CONSUMER thread consumes one item, makes space in the queue and calls notify() to inform the PRODUCER thread. Both wait() and notify() are called on the shared object, which is the List in our case.
    • Need synchronised block
    • Check for List size manually and wait for consumer and producer threads
    • Since the whole block is synchronised on the shared list,
      • the consumer may need to wait until up to N items have been produced, and
      • the producer may need to wait until those N items have been consumed.
    • Producer.java
      • public class Producer implements Runnable { private static List LIST; private static int SIZE; public Producer(List LIST_V, int SIZE_V) { LIST = LIST_V; SIZE = SIZE_V; } public void run() { producer(); } private static void producer() { try { int i = 1; while (true) { synchronized (LIST) { if (LIST.size() == SIZE) { System.out.println("Producer Waiting for consumer to consume object"); LIST.wait(); } LIST.add(i); System.out.println(i + " Produced"); LIST.notify(); } i++; if (i > 100) { break; } } } catch (Exception e) { e.printStackTrace(); } } }
    • Consumer.java
      • public class Consumer implements Runnable { private static List LIST; public Consumer(List LIST_V) { LIST = LIST_V; } public void run() { consumer(); } private static void consumer() { int index = 0; try { Thread.sleep(2000); while (true) { synchronized (LIST) { if (LIST.size() == 0) { System.out.println("Consumer is waiting for producer to produce"); LIST.wait(); } System.out.println(LIST.get(index) + " Consumed"); LIST.remove(index); LIST.notify(); } } } catch (Exception e) { e.printStackTrace(); } } }
    • App.java
      • public class App { private static List LIST = new ArrayList(); private static int SIZE = 10; public static void main(String[] args) { Thread producer = new Thread(new Producer(LIST, SIZE)); Thread consumer = new Thread(new Consumer(LIST)); producer.start(); consumer.start(); } }
    • Output
      • 1 Produced 2 Produced 3 Produced 4 Produced 5 Produced 6 Produced 7 Produced 8 Produced 9 Produced 10 Produced Producer Waiting for consumer to consume object 1 Consumed 2 Consumed 3 Consumed 4 Consumed 5 Consumed 6 Consumed 7 Consumed 8 Consumed 9 Consumed 10 Consumed Consumer is waiting for producer to produce 11 Produced 12 Produced 13 Produced

Saturday, 13 October 2018


    • LRU Cache (Java) Design and implement a data structure for Least Recently Used (LRU) cache. 
    • The LRU caching scheme is to remove the least recently used frame when the cache is full and a new page is referenced which is not there in cache.
    • Properties are,
      • Fixed size: the cache needs to have some bound to limit memory usage.
      • Fast access: cache insert and lookup operations should be fast, preferably O(1) time.
      • Replacement of entries when the memory limit is reached: the cache should have an efficient algorithm to evict an entry when it is full.
    • When we think about O(1) lookup, the obvious data structure that comes to mind is a HashMap. HashMap provides O(1) insertion and lookup.
    • However, a HashMap has no mechanism for tracking which entry has been queried recently and which has not. To track this we need another data structure that provides fast insertion, deletion and update; for LRU we use a doubly linked list.
    • The reason for choosing a doubly linked list is O(1) deletion, update and insertion, provided we have a reference to the node on which the operation has to be performed.
    • The HashMap holds the keys and the addresses of the nodes of the doubly linked list, and the doubly linked list holds the values of the keys.
    • As we need to keep track of recently used entries, we remove elements from the bottom and add elements at the start of the linked list; whenever an entry is accessed, it is moved to the top. That way recently used entries are at the top and the least used are at the bottom. (A LinkedHashMap-based alternative is sketched after the code below.)

    • LRUcache.java

        package com.iwq.LRU.withhashmap;

        import java.util.HashMap;
        import java.util.Map;

        public class LRUcache {

            // Doubly-linked-list node: the HashMap gives O(1) lookup, the list tracks usage order
            static class Entry {
                int key;
                int value;
                Entry left;
                Entry right;
            }

            Map<Integer, Entry> hashmap = new HashMap<>();
            Entry start; // most recently used
            Entry end;   // least recently used
            int LRU_SIZE = 4;

            public void put(int key, int value) {
                if (hashmap.containsKey(key)) {
                    // Key already exists: just update the value and move the node to the top
                    Entry entry = hashmap.get(key);
                    entry.value = value;
                    removeNode(entry);
                    addAtTop(entry);
                } else {
                    Entry newnode = new Entry();
                    newnode.left = null;
                    newnode.right = null;
                    newnode.value = value;
                    newnode.key = key;
                    if (hashmap.size() > LRU_SIZE) {
                        // Maximum size reached: evict the least recently used entry (the end node)
                        hashmap.remove(end.key);
                        removeNode(end);
                        addAtTop(newnode);
                    } else {
                        addAtTop(newnode);
                    }
                    hashmap.put(key, newnode);
                }
            }

            public void removeNode(Entry node) {
                // Unlink the node by connecting its left and right neighbours
                if (node.left != null) {
                    node.left.right = node.right;
                } else {
                    start = node.right;
                }
                if (node.right != null) {
                    node.right.left = node.left;
                } else {
                    end = node.left;
                }
            }

            public void addAtTop(Entry node) {
                // Insert the node at the head of the list (most recently used position)
                node.right = start;
                node.left = null;
                if (start != null)
                    start.left = node;
                start = node;
                if (end == null)
                    end = start;
            }
        }
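    • An alternative sketch: java.util.LinkedHashMap already maintains access order and can evict the eldest entry automatically, so a small LRU cache can be built without a hand-rolled linked list.

        import java.util.LinkedHashMap;
        import java.util.Map;

        public class LruCache<K, V> extends LinkedHashMap<K, V> {

            private final int capacity;

            public LruCache(int capacity) {
                super(16, 0.75f, true);   // accessOrder = true: iteration order is least to most recently used
                this.capacity = capacity;
            }

            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity; // evict the least recently used entry once capacity is exceeded
            }
        }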

Friday, 12 October 2018


    1. What is a Query Plan?
      1. A query plan is a set of steps that the database management system executes in order to complete the query. 
      2. The reason we have query plans is that the SQL you write may declare your intentions, but it does not tell SQL the exact logic flow to use.  The query optimizer determines that.  The result of that is the query plan
      3. The lesson to learn from this: when in doubt, check the execution plan. If you feel a query is running slowly and an equivalent query, such as a join, may be faster, write one up and check its plan. Check to see which uses more efficient steps. This is much better than guessing. As you get better at reading plans, you'll start to notice things about your databases, such as whether you need to add an index.
    2. Database Indexes
      1. Indexes are special lookup tables that the database search engine can use to speed up data retrieval.
      2. An index helps to speed up SELECT queries and WHERE clauses, but it slows down data input, with the UPDATE and the INSERT statements. Indexes can be created or dropped with no effect on the data.
      3. Single-Column Indexes
        1. A single-column index is created based on only one table column.
      4. Unique Indexes
        1. Unique indexes are used not only for performance, but also for data integrity. A unique index does not allow any duplicate values to be inserted into the table. The basic syntax is as follows
      5. Composite Indexes
        1. A composite index is an index on two or more columns of a table. Its basic syntax is as follows.
      6. CREATE INDEX id_index ON students (stdId) USING BTREE;
        1. Creates a B-tree index on the stdId column, so lookups can do a binary-search-like traversal instead of scanning the whole table, which improves performance.
      7. A database index is somewhat similar to this table of contents in a book. Indexing will help a data base query to be retrieved fast (Because the query does not require to go through the entire table to get the data, but it will find the data blocks from the index.).
    3. What is the difference between UNION and UNION ALL?
      1. UNION removes duplicate records (where all columns in the results are the same), UNION ALL does not.
      2. There is a performance hit when using UNION instead of UNION ALL, since the database server must do additional work to remove the duplicate rows, but usually you do not want the duplicates.
    4.  How to retrieve data from json string saved in a column?
      1. JSON
      2. {"male" : 2000, "female" : 3000, "other" : 600}
      3. Query
      4. SELECT ID, CITY, json_extract(POPULATION_JSON_DATA, '$.male') AS POPULATION_MALE, json_extract(POPULATION_JSON_DATA, '$.female') AS POPULATION_FEMALE, json_extract(POPULATION_JSON_DATA, '$.other') AS POPULATION_OTHER FROM JSON_TABLE;
    5. Oracle bind variables?
      1. Using placeholders such as :variable in the SQL statement instead of direct (hard-coded) values.
      2. Before Oracle runs a SQL statement it checks that the statement is valid and determines how to access the tables and join them together. This is called parsing. The optimizer has the task of figuring out which table access and join methods to use, and this produces an execution plan. When Oracle sends a statement to the optimizer to do this it is called a hard parse.
      3. If a plan already exists for a query, Oracle doesn't need to go through the optimization process again. It can reuse the existing plan. This is referred to as soft parsing.
      4. select * from orders where order_id = :ord;
      5. If we run this query twice in succession with different values for :ord there is one hard parse (the first execution) and one soft parse (the second), which improves performance because the execution plan is reused. A JDBC sketch follows.
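      6. From Java, the usual way to get bind variables is a PreparedStatement: the ? placeholders are sent to the database as bind variables, so the parsed statement can be reused. A minimal sketch, assuming an Oracle JDBC driver on the classpath; the connection URL, credentials and id values are placeholders, and only the orders/order_id names come from the query above.

         import java.sql.*;

         public class BindVariableDemo {
             public static void main(String[] args) throws SQLException {
                 String url = "jdbc:oracle:thin:@//dbhost:1521/service"; // placeholder connection details
                 try (Connection con = DriverManager.getConnection(url, "user", "password");
                      PreparedStatement ps = con.prepareStatement("select * from orders where order_id = ?")) {
                     for (int id : new int[] { 101, 102 }) {
                         ps.setInt(1, id);                      // bind a new value for each execution
                         try (ResultSet rs = ps.executeQuery()) {
                             while (rs.next()) {
                                 System.out.println(rs.getInt("order_id"));
                             }
                         }
                     }
                 }
             }
         }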


    • Java Memory Management, with its built-in garbage collection, is one of the language's finest achievements
    •  It allows developers to create new objects without worrying explicitly about memory allocation and deallocation, because the garbage collector automatically reclaims memory for reuse
    • How Garbage Collection Really Works
      • Many people think garbage collection collects and discards dead objects. In reality, Java garbage collection does the opposite: live objects are tracked and everything else is designated as garbage.
      • Object creation is faster because global synchronization with the operating system is not needed for every single object. An allocation simply claims some portion of a memory array and moves the offset pointer forward . The next allocation starts at this offset and claims the next portion of the array.
      • When an object is no longer used, the garbage collector reclaims the underlying memory and reuses it for future object allocation. This means there is no explicit deletion and no memory is given back to the operating system.
    • All objects are allocated on the heap area managed by the JVM. Every item that the developer uses is treated this way, including class objects, static variables, and even the code itself. As long as an object is being referenced, the JVM considers it alive. Once an object is no longer referenced and therefore is not reachable by the application code, the garbage collector removes it and reclaims the unused memory.
    • There are four kinds of GC roots in Java:
      • Local variables are kept alive by the stack of a thread. This is not a real object virtual reference and thus is not visible. For all intents and purposes, local variables are GC roots
      • Active Java threads are always considered live objects and are therefore GC roots. This is especially important for thread local variables. 
      • Static variables are referenced by their classes. This fact makes them de facto GC roots. Classes themselves can be garbage-collected, which would remove all referenced static variables. This is of special importance when we use application servers, OSGi containers or class loaders in general. We will discuss the related problems in the Problem Patterns section.
      • JNI references are Java objects that native code has created as part of a JNI call. Objects created this way are treated specially because the JVM does not know whether they are still being referenced by the native code. Such objects represent a very special form of GC root, which we will examine in more detail in the Problem Patterns section below.
    • Marking and Sweeping Away Garbage
      • To determine which objects are no longer in use, the JVM intermittently runs what is very aptly called a mark-and-sweep algorithm
      • it's a straightforward, two-step process:
        • The algorithm traverses all object references, starting with the GC roots, and marks every object found as alive.
        • All of the heap memory that is not occupied by marked objects is reclaimed. It is simply marked as free, essentially swept free of unused objects.
        • It's possible to have unused objects that are still reachable by an application because the developer simply forgot to dereference them. Such objects cannot be garbage-collected.
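      • A minimal sketch of the "reachable but unused" case described above (the class name, cache field and sizes are made up for illustration): the byte arrays are never used again, but because a static field is a GC root they stay reachable, so the mark phase keeps marking them as alive and they are never collected.

        import java.util.ArrayList;
        import java.util.List;

        public class LeakDemo {
            // a static field is a GC root: everything added here stays reachable
            private static final List<byte[]> CACHE = new ArrayList<>();

            public static void main(String[] args) {
                for (int i = 0; i < 1_000; i++) {
                    byte[] block = new byte[1024 * 1024];
                    CACHE.add(block);   // never removed, so never eligible for collection
                }
            }
        }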




    1. Java 8  : How to Sort a List using lambdas?
      1. Integer[] arr = { 1, 7, 3, 9, 4, 67, 100, 23, 26, 76, 8 }; List<Integer> list = Arrays.asList(arr); list.sort((a1, a2) -> a1.compareTo(a2)); System.out.print(list);
    2. Is a singleton lazily initialised?
      1. It can be. In the code below the singleton is lazily initialised.
      2. We do not initialise the static instance field at its declaration; the instance is created inside getInstance(), so it is only created when it is first needed (lazy loading). If we wanted eager initialisation instead, we would initialise the instance on the static field, and it would then be created at class-loading time.
      3. public class Singleton {
             private static volatile Singleton instance;

             private Singleton() { }

             public static Singleton getInstance() {
                 if (instance == null) {
                     synchronized (Singleton.class) {
                         if (instance == null) {   // second check, inside the lock
                             instance = new Singleton();
                         }
                     }
                 }
                 return instance;
             }
         }
    3. How to find the nth element In QUEUE?
      1. Accessing elements by index is not part of the concept of a queue. If you need to access elements by index, you want a list, not a queue; if you really must read the nth element of a queue, copy it into a list first, as in the sketch below.
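      2. A minimal sketch of that workaround (the element values are made up for illustration): copy the queue into a list and use index-based access on the copy.

         import java.util.*;

         public class NthQueueElement {
             public static void main(String[] args) {
                 Queue<String> queue = new LinkedList<>(Arrays.asList("a", "b", "c", "d"));
                 List<String> copy = new ArrayList<>(queue);   // the copy gives index-based access
                 System.out.println(copy.get(2));              // prints "c" (0-based index)
             }
         }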
    4. What is a partially checked exception in Java?
      1. A checked exception is said to be a partially checked exception if and only if some of its child classes are unchecked, as illustrated in the sketch after this list.
        1. Ex: Exception
      2. The only partially checked exceptions in Java are
        1. Exception 
        2. Throwable 
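      3. A minimal sketch of why Exception is partially checked (the method names are made up for illustration): throwing Exception forces handling, while throwing its unchecked child RuntimeException does not.

         public class PartiallyCheckedDemo {
             static void checkedThrower() throws Exception {
                 throw new Exception("checked: the caller must catch or declare this");
             }

             static void uncheckedThrower() {
                 throw new RuntimeException("unchecked: no throws clause or try/catch required");
             }

             public static void main(String[] args) {
                 try {
                     checkedThrower();      // will not compile without this try/catch (or a throws clause)
                 } catch (Exception e) {
                     System.out.println(e.getMessage());
                 }
                 uncheckedThrower();        // compiles without any handling
             }
         }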
    5. String intern()?
      1. The java string intern() method returns the interned string. It returns the canonical representation of string.
      2. It can be used to obtain the pooled copy of a string that was created with the new keyword. If the string is not already present, intern() adds it to the string constant pool and returns the pooled reference.
      3. public class InternExample {
             public static void main(String args[]) {
                 String s1 = new String("hello");
                 String s2 = "hello";
                 String s3 = s1.intern(); // returns the string from the pool, now it is the same instance as s2
                 System.out.println(s1 == s2); // false, because the reference variables point to different instances
                 System.out.println(s2 == s3); // true, because the reference variables point to the same instance
             }
         }
    6. Can an abstract class have main method and run it?
      1. Yes. An abstract class can have a main method and can be run, but you cannot create an object of the abstract class itself.
      2. public abstract class AbstractMainEx { public static void main(String[] args) { System.out.println("Hi"); } public abstract boolean test(); }
    7. How to create a custom Exception?
      1. To create your own exception, extend the Exception class or any of its subclasses.
      2. class New1Exception extends Exception { }              // creates a checked exception
         class NewException extends IOException { }             // creates a checked exception
         class New3Exception extends NullPointerException { }   // creates an unchecked exception
    8. Can the parent class constructor be private when a child class extends it?
      1. No
      2. If the parent constructor is not visible, the compiler reports: "Implicit super constructor Animal() is not visible for default constructor. Must define an explicit constructor". A sketch reproducing this follows.
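      3. A minimal sketch that reproduces this error (Dog is a hypothetical subclass name; Animal matches the message above). The subclass is intentionally left without a constructor, so its implicit super() call cannot see the private parent constructor and compilation fails.

         class Animal {
             private Animal() { }   // only visible inside Animal
         }

         // does not compile: "Implicit super constructor Animal() is not visible
         // for default constructor. Must define an explicit constructor"
         class Dog extends Animal {
         }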



    • The Executor framework is Java's concurrency utility framework for standardising the invocation, scheduling, execution and control of asynchronous tasks in parallel threads.
    • The Executor implementation in Java uses thread pools, which consist of worker threads. The entire management of the worker threads is handled by the framework, so the overhead of creating and managing threads is much reduced compared to earlier multithreading approaches.
    • The Java Executor framework creates tasks as instances of Runnable or Callable. In the case of Runnable, the run() method does not return a value or throw any checked exception. Callable is the more capable variant: it defines a call() method that returns a computed value, which can be used in further processing, and it can also throw an exception if necessary.
    • The FutureTask class is another important component, used to get information about the result of the processing in the future. An instance of this class can wrap either a Callable or a Runnable. You can get such a future as the return value of the submit() method of an ExecutorService (that path is shown in the sketch below), or you can manually wrap your task in a FutureTask before calling the execute() method.
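    • A minimal sketch of the Runnable/Callable/Future flow described above (the pool size and task bodies are made up for illustration):

      import java.util.concurrent.*;

      public class CallableVsRunnable {
          public static void main(String[] args) throws Exception {
              ExecutorService pool = Executors.newFixedThreadPool(2);

              Runnable runnable = () -> System.out.println("no return value");   // cannot throw checked exceptions
              Callable<Integer> callable = () -> 2 + 3;                          // returns a value, may throw

              pool.execute(runnable);
              Future<Integer> future = pool.submit(callable);   // the Future wraps the pending result
              System.out.println(future.get());                 // blocks until the result is ready, prints 5

              pool.shutdown();
          }
      }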
    • Following are the functional steps to implement the Java ThreadPoolExecutor.
      • Create an executor
        • The Executors class has a number of static factory methods to create an ExecutorService, depending upon the requirements of the application.
          • newFixedThreadPool() returns a ThreadPoolExecutor instance with an unbounded work queue and a fixed number of threads.
          • newCachedThreadPool() returns a ThreadPoolExecutor instance with no upper bound on the number of threads; tasks are handed off directly to an available (or newly created) thread rather than being queued.
        • newFixedThreadPool()
          • No extra thread is created during execution
          • If there is no free thread available the task has to wait and then execute when one thread is free
        • newCachedThreadPool()
          • Existing threads are reused if available. But if no free thread is available, a new one is created and added to the pool to complete the new task. Threads that have been idle for longer than a timeout period will be removed automatically from the pool.
        • This is a fixed pool of 10 threads.
        private static final Executor executor = Executors.newFixedThreadPool(10);
        • This is a cached thread pool
        private static ExecutorService exec = Executors.newCachedThreadPool();
        • Following is an example of a customised thread pool executor. The parameter values depend upon the application's needs. Here the core pool has 5 threads which can run concurrently and the maximum pool size is 12. The queue can hold 250 tasks; one point to remember is that the queue capacity should be kept high enough to accommodate the waiting tasks. The idle-thread keep-alive time is 50000 ms (50 seconds).
        private static final Executor executor = new ThreadPoolExecutor(5, 12, 50000L, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>(250));
      • Create one or more tasks and put in the queue
        • After creating the executor now it’s time for creating tasks. Create one or more tasks to be performed as instances of either Runnable or Callable. In this framework, all the tasks are created and populated in a queue. After the task creation is complete the populated queue is submitted for concurrent execution.
      • Submit the task to the Executor
        • After creating the ExecutorService and the tasks, you need to submit the tasks to the executor by using either the submit() or the execute() method. Then, as per your configuration, tasks are picked up from the queue and run concurrently. For example, if you have configured 5 concurrent executions, then 5 tasks are picked up from the queue and run in parallel. This process continues until all the tasks in the queue are finished.
      • Execute the task
        • Next, the actual execution of the tasks is managed by the framework. The Executor is responsible for managing the tasks' execution, the thread pool, synchronization and the queue. If the pool has fewer threads than its configured core size, new threads are created as required to handle incoming tasks until that limit is reached. Once the core size is reached, the pool does not start more threads; instead, the task is queued until a thread is freed up to process it. If the queue is full, a new thread is started, up to the maximum pool size. The exact behaviour depends on the constructor parameters used during executor creation.
      • Shutdown the Executor
        • The executor is terminated by invoking its shutdown() method, for a graceful shutdown that lets already submitted tasks finish, or shutdownNow(), which attempts to stop the running tasks abruptly. An end-to-end sketch follows.
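    • A minimal end-to-end sketch of the five steps above (the pool size, number of tasks and timeout are made up for illustration):

      import java.util.*;
      import java.util.concurrent.*;

      public class ExecutorLifecycleDemo {
          public static void main(String[] args) throws Exception {
              // 1. create an executor
              ExecutorService executor = Executors.newFixedThreadPool(3);

              // 2. create the tasks
              List<Callable<String>> tasks = new ArrayList<>();
              for (int i = 1; i <= 5; i++) {
                  final int taskId = i;
                  tasks.add(() -> "task " + taskId + " ran on " + Thread.currentThread().getName());
              }

              // 3. submit the tasks; 4. the framework executes them on the pool threads
              List<Future<String>> results = executor.invokeAll(tasks);
              for (Future<String> f : results) {
                  System.out.println(f.get());
              }

              // 5. shut the executor down gracefully and wait for it to finish
              executor.shutdown();
              executor.awaitTermination(10, TimeUnit.SECONDS);
          }
      }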


    • A Semaphore is a thread synchronization construct that can be used either to send signals between threads to avoid missed signals, or to guard a critical section like you would with a lock
    • Simple Semaphore implementation:
      • The take() method sends a signal, which is stored internally in the Semaphore. The release() method waits for a signal; when one is received the signal flag is cleared again and release() exits.
      • public class Semaphore {
            private boolean signal = false;

            public synchronized void take() {
                this.signal = true;
                this.notify();               // send the signal
            }

            public synchronized void release() throws InterruptedException {
                while (!this.signal) wait(); // wait for the signal
                this.signal = false;         // clear the flag again
            }
        }
      • Using a semaphore like this you can avoid missed signals. You will call take() instead of notify() and release() instead of wait().
    • Using Semaphores for Signaling
      • Here is a simplified example of two threads signaling each other using a Semaphore:
      • MainApp.java
      • Semaphore semaphore = new Semaphore(); SendingThread sender = new SendingThread(semaphore); ReceivingThread receiver = new ReceivingThread(semaphore); receiver.start(); sender.start();
      • SendingThread.java
      • public class SendingThread extends Thread {
            Semaphore semaphore = null;

            public SendingThread(Semaphore semaphore) {
                this.semaphore = semaphore;
            }

            public void run() {
                while (true) {
                    // do something, then signal
                    this.semaphore.take();
                }
            }
        }
      • ReceivingThread.java
      • public class ReceivingThread extends Thread {
            Semaphore semaphore = null;

            public ReceivingThread(Semaphore semaphore) {
                this.semaphore = semaphore;
            }

            public void run() {
                try {
                    while (true) {
                        this.semaphore.release(); // receive the signal, then do something...
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        }
    • Semaphore to block threads
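      • A minimal sketch of using a semaphore to block threads around a bounded resource. This one uses the standard java.util.concurrent.Semaphore rather than the simple implementation above; the permit count and the simulated work are made up for illustration.

        import java.util.concurrent.Semaphore;

        public class BoundedResourceDemo {
            // only 3 threads may be inside the guarded section at the same time
            private static final Semaphore PERMITS = new Semaphore(3);

            public static void main(String[] args) {
                for (int i = 0; i < 10; i++) {
                    new Thread(() -> {
                        try {
                            PERMITS.acquire();   // blocks while no permit is available
                            try {
                                System.out.println(Thread.currentThread().getName() + " is using the resource");
                                Thread.sleep(100);   // simulate some work
                            } finally {
                                PERMITS.release();   // hand the permit back
                            }
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                        }
                    }).start();
                }
            }
        }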


    1. How Volatile in Java works?
      1. The Java volatile keyword cannot be used with a method or a class; it can only be applied to a variable.
      2. The volatile keyword also guarantees visibility and ordering: a write to a volatile variable happens-before every subsequent read of that variable.
      3. Example: Singleton Class 
        1. public class Singleton {
               private static volatile Singleton _instance; // volatile variable

               public static Singleton getInstance() {
                   if (_instance == null) {
                       synchronized (Singleton.class) {
                           if (_instance == null)
                               _instance = new Singleton();
                       }
                   }
                   return _instance;
               }
           }
        2. Without volatile, when the writer thread comes out of the synchronized block there is no guarantee that the new value of _instance has been written back to main memory, so reader threads may see a stale value. With the volatile keyword, Java handles this itself and such updates are visible to all reader threads.
      4. If a variable is not shared between multiple threads, you don't need to use volatile keyword with that variable.
      5. Both T1 and T2 can refer to a class containing this variable. You can then make this variable volatile, and this means that changes to that variable are immediately visible to both threads.
      6. public class App { public static volatile boolean isEven = true; public static void main(String[] args) { Object mutex = new Object(); Thread odd = new Thread(new Runnable() { @Override public void run() { try { int i = 0; while (i < 20) { synchronized (mutex) { if (isEven) { mutex.wait(); } System.out.println("Odd"); isEven = true; mutex.notify(); } i++; } } catch (Exception e) { e.printStackTrace(); } } }); Thread even = new Thread(new Runnable() { @Override public void run() { try { int i = 0; while (i < 20) { synchronized (mutex) { if (!isEven) { mutex.wait(); } System.out.println("Even"); isEven = false; mutex.notify(); } i++; } } catch (Exception e) { e.printStackTrace(); } } }); odd.start(); even.start(); } }
      7. Volatile keyword in Java guarantees that value of the volatile variable will always be read from main memory and not from Thread's local cache.
      8. In Java reads and writes are atomic for all variables declared using Java volatile keyword (including long and double variables).
      9. Using the volatile keyword in Java on variables reduces the risk of memory consistency errors because any write to a volatile variable in Java establishes a happens-before relationship with subsequent reads of that same variable.
      10. The volatile keyword does not mean atomic. It is a common misconception that after declaring a variable volatile, ++ on it will be atomic; to make the operation atomic you still need to ensure exclusive access using a synchronized method or block, or an atomic class, as in the sketch below.
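      11. A minimal sketch of that point (the counter names and iteration counts are made up for illustration): the volatile counter can lose updates under contention, while AtomicInteger does the read-modify-write atomically.

          import java.util.concurrent.atomic.AtomicInteger;

          public class CounterDemo {
              private static volatile int volatileCounter = 0;               // volatile, but ++ is NOT atomic
              private static final AtomicInteger atomicCounter = new AtomicInteger();

              public static void main(String[] args) throws InterruptedException {
                  Runnable work = () -> {
                      for (int i = 0; i < 10_000; i++) {
                          volatileCounter++;                // read-modify-write: updates can be lost
                          atomicCounter.incrementAndGet();  // atomic read-modify-write
                      }
                  };
                  Thread t1 = new Thread(work);
                  Thread t2 = new Thread(work);
                  t1.start(); t2.start();
                  t1.join(); t2.join();
                  System.out.println("volatile counter: " + volatileCounter);     // often less than 20000
                  System.out.println("atomic counter: " + atomicCounter.get());   // always 20000
              }
          }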
    2. How is CountDownLatch used in Java Multithreading?
      1. CountDownLatch works on the latch principle: the main thread will wait until the gate is open. One thread waits for n threads, where n is specified while creating the CountDownLatch.
      2. Any thread, usually the main thread of the application, which calls CountDownLatch.await() will wait until count reaches zero or it's interrupted by another thread. 
      3. All other threads are required to count down by calling CountDownLatch.countDown() once they are completed or ready.
      4. As soon as count reaches zero, the waiting thread continues. One of the disadvantages/advantages of CountDownLatch is that it's not reusable: once count reaches zero you cannot use CountDownLatch any more.
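      5. A minimal usage sketch of the flow described above (the number of workers and their work are made up for illustration):

         import java.util.concurrent.CountDownLatch;

         public class LatchDemo {
             public static void main(String[] args) throws InterruptedException {
                 int workers = 3;
                 CountDownLatch latch = new CountDownLatch(workers);

                 for (int i = 1; i <= workers; i++) {
                     final int id = i;
                     new Thread(() -> {
                         System.out.println("worker " + id + " finished");
                         latch.countDown();     // each worker counts down once
                     }).start();
                 }

                 latch.await();                 // the main thread blocks until the count reaches zero
                 System.out.println("all workers done, main thread continues");
             }
         }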
    3. can we make array volatile in java?
      1. Yes, you can make an array (both primitive and reference type array e.g. an int array and String array) volatile in Java
      2. But declaring an array volatile does NOT give volatile semantics to its elements; you are declaring the reference itself volatile, not its elements.
        1. protected volatile int[] primes = new int[10];
          1. If you assign a new array to the primes variable the change will be visible to all threads, but changes to individual indices are not covered by the volatile guarantee, i.e.
        2. primes = new int[20];
          1. follows the happens-before rule, so the new array reference is visible to all threads
        3. primes[0] = 10;
          1. is not guaranteed to be visible to other threads
      3. The same applies to collections.
      4. In other words, you are declaring a volatile reference to an array, not an array of volatile elements. The solution here is to use AtomicIntegerArray if you need per-element visibility for integers, as in the sketch below.
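      5. A minimal AtomicIntegerArray sketch (the array size and values are made up for illustration): every element is read and written with volatile/atomic semantics.

         import java.util.concurrent.atomic.AtomicIntegerArray;

         public class AtomicArrayDemo {
             private static final AtomicIntegerArray primes = new AtomicIntegerArray(10);

             public static void main(String[] args) {
                 primes.set(0, 10);                 // this per-element write is visible to all threads
                 primes.incrementAndGet(0);         // atomic per-element update
                 System.out.println(primes.get(0)); // prints 11
             }
         }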
    4.  Thread Local?
      1. The ThreadLocal class in Java enables you to create variables that can only be read and written by the same thread.
      2. private ThreadLocal<String> myThreadLocal = new ThreadLocal<String>();
      3. Now you can only store strings in the ThreadLocal instance.
      4. myThreadLocal.set("Hello ThreadLocal"); String threadLocalValue = myThreadLocal.get();
    5. How immutable objects manage memory ?
      1. The advantage we get with String is that a common pool of string literals is kept by the virtual machine, which stops the heap filling up with duplicate strings. The reasoning behind this is that much of a program's memory can be taken up by storing commonly used strings.
    6. How to throw exceptions from Runnable.run?
      1. Do not use the Runnable interface directly; instead create your own interface with a modified signature that allows a checked exception to be thrown.
      2. public interface MyRunnable { void myRun() throws MyException; }
      3. You can even create an adapter that converts this interface to a real Runnable (by handling the checked exception), so it is suitable for use with the Thread framework, as in the sketch below.
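      4. A minimal sketch of such an adapter (MyException is assumed here to be a custom checked exception; only the MyRunnable interface comes from the note above). The adapter catches the checked exception so the wrapped task can run on a plain Thread.

         class MyException extends Exception {
             MyException(String message) { super(message); }
         }

         interface MyRunnable {
             void myRun() throws MyException;
         }

         public class MyRunnableAdapter implements Runnable {
             private final MyRunnable delegate;

             public MyRunnableAdapter(MyRunnable delegate) {
                 this.delegate = delegate;
             }

             @Override
             public void run() {
                 try {
                     delegate.myRun();
                 } catch (MyException e) {
                     throw new RuntimeException(e);   // handle or wrap the checked exception here
                 }
             }

             public static void main(String[] args) {
                 new Thread(new MyRunnableAdapter(() -> {
                     throw new MyException("something went wrong");
                 })).start();
             }
         }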
    7. Difference Between Daemon and User Threads?
      1. Java offers two types of threads: user threads and daemon threads.
      2. The JVM will wait for all active user threads to finish their execution before it shuts itself down.
      3. Daemon threads do not get that preference: the JVM will exit and close the Java program even if a daemon thread is still running in the background.
      4. Daemon threads are low-priority threads whose only role is to provide services to user threads.
      5. A daemon thread does not prevent the JVM from exiting once all user threads have finished, even if the daemon thread itself is still running. An example of a daemon thread is the garbage collector.
      6. That’s why infinite loops, which typically exist in daemon threads, will not cause problems, because any code, including the finally blocks, won’t be executed once all user threads have finished their execution. For this reason, daemon threads are not recommended for I/O tasks.
      7. // Java program to demonstrate the usage of setDaemon() and isDaemon()
         public class DaemonThread extends Thread {
             public DaemonThread(String name) {
                 super(name);
             }

             public void run() {
                 // check whether the current thread is a daemon thread or not
                 if (Thread.currentThread().isDaemon()) {
                     System.out.println(getName() + " is Daemon thread");
                 } else {
                     System.out.println(getName() + " is User thread");
                 }
             }

             public static void main(String[] args) {
                 DaemonThread t1 = new DaemonThread("t1");
                 DaemonThread t2 = new DaemonThread("t2");
                 DaemonThread t3 = new DaemonThread("t3");

                 t1.setDaemon(true); // must be called before start()
                 t1.start();
                 t2.start();

                 t3.setDaemon(true);
                 t3.start();
             }
         }
         // Output (order may vary):
         // t1 is Daemon thread
         // t2 is User thread
         // t3 is Daemon thread
