Microservices with Quarkus – GraphQL API + Reactive MySQL

Quarkus (https://quarkus.io) is a Supersonic Subatomic Java framework. It is specially designed for microservices and is similar to Micronaut. Spring Boot usually comes with a lot of features built in and is a fairly heavy solution if you are focusing on microservice patterns such as reactive streams, messaging and low-latency API calls. Quarkus has a much quicker boot time than Spring Boot and is designed with a "Container First" strategy.

In this article I am going to illustrate how to combine super cool features such as GraphQL and Reactive SQL to build a high-performing microservice component.

Please use this Git repo as a reference:

https://github.com/dumindarw/graphql-quarkus

Step 1: Creating Quarkus Project

The Quarkus ecosystem contains a long list of cool extensions; in this project I am going to use the extensions below.

quarkus-vertx-graphql – Eclipse Vert.x GraphQL integration with Quarkus
graphql-java-extended-scalars – Support for scalar types such as Date, Time, etc.
quarkus-resteasy – The REST library that comes with Quarkus
quarkus-reactive-mysql-client – MySQL reactive client support
quarkus-config-yaml – Enables YAML configuration instead of the legacy properties file

mvn io.quarkus:quarkus-maven-plugin:1.2.1.Final:create \
    -DprojectGroupId=com.duminda \
    -DprojectArtifactId=quarkus-graphql-project \
    -Dextensions="reactive-mysql-client,config-yaml,vertx-graphql,resteasy"

After creating the project, go to the "quarkus-graphql-project" directory and add the dependency below to the pom.xml manually.

<dependency>
  <groupId>com.graphql-java</groupId>
  <artifactId>graphql-java-extended-scalars</artifactId>
  <version>1.0</version>
</dependency>

Rename application.properties to application.yml

Step 2: Create GraphQL types and scalars

Create a group.graphqls file inside your resources folder and add the types, queries and mutations.

scalar Date

type Group {
  id: String
  name: String
  createdBy: String
  createdDate: Date
  blacklisted: Boolean
}

type Query {
  allGroups(isBlackListed: Boolean = false): [Group]
}

type Mutation {
  addGroup( name: String, createdBy: String): Boolean
}

Here I have one model called Group, one Query function, and one Mutation function.

Create the same model as a POJO class (Group.java).
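A minimal sketch of what Group.java could look like, mirroring the fields of the GraphQL type (the Java types, in particular LocalDate for the Date scalar, are my assumption; add getters and setters as needed):

import java.time.LocalDate;

public class Group {

	private String id;
	private String name;
	private String createdBy;
	private LocalDate createdDate; // serialized through the custom Date scalar
	private boolean blacklisted;

	// getters and setters omitted for brevity
}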

Step 3: Service layer implementation

MySQL table model (tbl_mygroups) – its columns mirror the fields of the Group type above.

The group.graphqls file declares two operations, allGroups and addGroup, so we have to implement corresponding methods in our service layer.

public CompletionStage<List<Group>> allAvailableGroups(DataFetchingEnvironment env) {
	// defaults to false, as declared in the schema (isBlackListed: Boolean = false)
	boolean isBlackListed = env.getArgument("isBlackListed");

	return client.query("SELECT id, name, createdBy, createdDate, blacklisted FROM tbl_mygroups")
			.thenApply(rowSet -> {
				List<Group> list = new ArrayList<>(rowSet.size());
				for (Row row : rowSet) {
					// keep only the rows whose blacklisted flag matches the requested filter
					boolean rowBlacklisted = Boolean.TRUE.equals(row.getBoolean("blacklisted"));
					if (rowBlacklisted == isBlackListed) {
						list.add(from(row)); // from(row) maps a Row to a Group POJO
					}
				}
				return list;
			});
}
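The addGroup mutation can be written in the same style. The snippet below is only a sketch: I am assuming the same CompletionStage-based MySQL client as above, and the prepared-query API differs slightly between Vert.x client versions, so check the linked repo for the exact working code.

public CompletionStage<Boolean> addGroup(DataFetchingEnvironment env) {
	String name = env.getArgument("name");
	String createdBy = env.getArgument("createdBy");

	// Tuple comes from the Vert.x SQL client; newer client versions use
	// preparedQuery(sql).execute(tuple) instead of preparedQuery(sql, tuple)
	return client.preparedQuery(
			"INSERT INTO tbl_mygroups (name, createdBy, createdDate, blacklisted) VALUES (?, ?, NOW(), false)",
			Tuple.of(name, createdBy))
		.thenApply(rows -> rows.rowCount() == 1); // true when exactly one row was inserted
}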

Both of these methods in our service class return asynchronous computations (CompletionStage).
Since the MySQL client returns its results reactively, we pass the data up to the API layer by registering a GraphQL handler on the Vert.x router.

public void init(@Observes Router router) {

	GraphQL graphQL = setupGraphQL();
	GraphQLHandler graphQLHandler = GraphQLHandler.create(graphQL);

	router.route("/graphql").handler(graphQLHandler);
}

Another important part is the RuntimeWiring, which binds the schema operations to data fetchers:

RuntimeWiring runtimeWiring = RuntimeWiring.newRuntimeWiring()
		.type("Query", builder -> builder.dataFetcher("allGroups", taskRepo::allAvailableGroups))
		.type("Mutation", builder -> builder.dataFetcher("addGroup", taskRepo::addGroup))
		.scalar(ExtendedScalars.Date)
		.build();

This wires our repository functions (allAvailableGroups, addGroup) to our GraphQL operations (allGroups, addGroup) at runtime.
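For completeness, here is a sketch of what setupGraphQL() can look like. It loads group.graphqls from the classpath, applies the RuntimeWiring above and builds the GraphQL instance (SchemaParser, SchemaGenerator and friends come from graphql-java; the resource-loading details here are my own assumption, so adapt them to your project):

private GraphQL setupGraphQL() {
	try (InputStream schemaStream = getClass().getResourceAsStream("/group.graphqls")) {
		// parse the schema file into a type definition registry
		TypeDefinitionRegistry typeRegistry = new SchemaParser()
				.parse(new InputStreamReader(schemaStream, StandardCharsets.UTF_8));

		RuntimeWiring runtimeWiring = RuntimeWiring.newRuntimeWiring()
				.type("Query", builder -> builder.dataFetcher("allGroups", taskRepo::allAvailableGroups))
				.type("Mutation", builder -> builder.dataFetcher("addGroup", taskRepo::addGroup))
				.scalar(ExtendedScalars.Date)
				.build();

		// combine the parsed schema and the wiring into an executable schema
		GraphQLSchema schema = new SchemaGenerator().makeExecutableSchema(typeRegistry, runtimeWiring);
		return GraphQL.newGraphQL(schema).build();
	} catch (IOException e) {
		throw new UncheckedIOException(e);
	}
}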

Step 4: Database configurations (application.yml)

quarkus:
  http:
    port: 8088
  datasource:
    url: vertx-reactive:mysql://x.x.x.x:3306/test_db
    username: user
    password: pass

Step 5: Run project in development mode

./mvnw compile quarkus:dev

Step 6: Access GraphQL browser and call APIs

http://localhost:8088/graphql-ui/

#Query API to get all groups

query {
  allGroups {
    id,
    name
  }
}

#Mutation to add new group

mutation {
  addGroup(name: "TestGroup", createdBy: "Rajitha")
}

Run ApacheDS on Docker

ApacheDS is a free, open-source and simple directory management (LDAP) server. If you need to quickly set up LDAP for any of your projects, ApacheDS is the easiest and most convenient solution. There are more advanced and feature-rich solutions such as FreeIPA, but their setup can be a real pain for developers (if you are not from a DevOps background).

Unfortunately there is no official ApacheDS image on Docker Hub. Here are some simple steps to get your own Docker-based LDAP server up and running!

Build the ApacheDS image (run this from the directory containing your ApacheDS Dockerfile):

docker build -t apacheds .

Run the container

docker run -dt --name apacheds_container -p 389:10389 -p 636:10636 apacheds:latest

Here we expose 389 as our LDAP port (and 636 for LDAPS) to the outside world, mapped to ApacheDS's default container ports 10389 and 10636.

LDAP Clients

There are various LDAP clients available to examine and view directory structures.

Apache Directory Studio is the recommended client, but for a quick demo I will use JXplorer (the default admin user is uid=admin,ou=system with password 'secret').

Now you have full access to your LDAP. You can create new partitions and add entries as you wish.
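If you prefer to verify access programmatically instead of through a GUI client, a quick JNDI bind from plain Java works too. This is just a hypothetical smoke test, assuming the default uid=admin,ou=system account and the 389 port mapping from above:

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class LdapSmokeTest {

	public static void main(String[] args) throws NamingException {
		Hashtable<String, String> env = new Hashtable<>();
		env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
		env.put(Context.PROVIDER_URL, "ldap://localhost:389");      // host port mapped to container port 10389
		env.put(Context.SECURITY_AUTHENTICATION, "simple");
		env.put(Context.SECURITY_PRINCIPAL, "uid=admin,ou=system"); // ApacheDS default admin DN
		env.put(Context.SECURITY_CREDENTIALS, "secret");            // default password

		DirContext ctx = new InitialDirContext(env);                // throws if the bind fails
		System.out.println("Bind successful");
		ctx.close();
	}
}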

Command Query Responsibility Segregation Pattern Implementation – Spring Boot

CQRS (Command Query Responsibility Segregation) is an architectural pattern that tames complexity in larger projects. It simplifies your service layer by dividing it into two main blocks. Many people find this pattern confusing, but it is simple.

  1. Commands – a Command is basically CRUD without the R, i.e. CUD: CREATE, UPDATE, DELETE. Simple, right? Commands are database transaction operations: you change the state of your database by inserting, editing or deleting a record or a set of records.
  2. Queries – a Query is the R in CRUD: RETRIEVE. Queries fetch database records and do not deal with transactional work.

Is this pattern something you should try?

Yes, but "use responsibly". It may increase the complexity of small projects, but it is ideal if you need a clean separation between commands and queries.

What technologies should we use for this?

In the Spring ecosystem, an ORM (i.e. any JPA implementation) is the natural fit for the command side, since ORMs are optimized to persist complex objects. I recommend Spring Data JPA's CrudRepository over plain Hibernate, because Spring Data's CRUD support is well optimized and covers almost all CRUD operations. If you have a really complex DB architecture, use Hibernate through Spring Data.

For queries we can use MyBatis. Hibernate would attempt to load the entire object graph, and you would need to start tuning queries with lazy-loading tricks to keep it working on a large domain. This matters when running complex analytic queries that don't even return entity objects: Hibernate only offers SqlQuery and bean transformers in that case, with clunky default types like BigDecimal, whereas MyBatis can easily map the result to a simple non-entity POJO (https://softwareengineering.stackexchange.com/questions/158109/what-are-the-advantages-of-mybatis-over-hibernate). If you just want to fetch information, use MyBatis.

Technically, that is how you separate the two components. You can find a simple implementation in the repository below.

https://github.com/dumindarw/CQRS

In the above example there are two endpoints, one for the Command side and one for the Query side:

Command – http://localhost:8081/doc/upload/Salary [POST]

Query – http://localhost:8081/doc/download/Salary/1 [GET]

The first endpoint is a multipart file upload endpoint; it persists the CSV content to the table, so it is a CREATE operation. I have used JPA for this operation.

The second endpoint retrieves data from the DB by document ID. I have used MyBatis for this operation.
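To make the separation concrete, here is a rough sketch of the two sides. The class, table and column names are illustrative (not necessarily what the repo uses): the command path goes through a Spring Data CrudRepository, while the query path goes through a MyBatis mapper that maps straight into a plain DTO.

import org.apache.ibatis.annotations.Mapper;
import org.apache.ibatis.annotations.Param;
import org.apache.ibatis.annotations.Select;
import org.springframework.data.repository.CrudRepository;

// Command side: Spring Data JPA repository used by the upload (CREATE) endpoint.
// SalaryDocument is the JPA entity. (Each interface lives in its own file.)
public interface SalaryDocumentRepository extends CrudRepository<SalaryDocument, Long> {
}

// Query side: MyBatis mapper used by the download endpoint.
// SalaryDocumentDto is a plain POJO, not a JPA entity.
@Mapper
public interface SalaryDocumentQueryMapper {

	@Select("SELECT id, name, content FROM salary_document WHERE id = #{documentId}")
	SalaryDocumentDto findById(@Param("documentId") long documentId);
}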

GraphQL – Taking Your APIs to the Next Level

Have you ever tried GraphQL for your APIs? Still stuck with traditional REST/SOAP? Give GraphQL a try!

GraphQL, the Graph Query Language, was originally developed by Facebook for their own product lines and was made available to the public a few years ago. If you intend to design and develop a lightweight but powerful API that is dynamic and responds with exactly the data requested (no more, no less), this is an ideal choice for you.

Most of the time, the APIs we write day to day request a resource and respond with a bunch of data. But think about it: different components in your system need different combinations of data. For example, one component of your application needs the full user profile from the server, while another component only needs the birth date. How would you design such a scenario? With REST you would have to design two APIs: get user profile (/user/profile) and get user birthday (/user/birthday). Or you might think, why can't I use the profile API (/user/profile) for both? But reusing such a heavy, time-consuming API again and again slows your system down, especially if you care about mobile-friendly design. Using two APIs solves the problem, but it means more code to write and increases your app's complexity and maintenance cost.

Here GraphQL comes to the rescue.

A sample express-graphql implementation with unit tests is available here for your reference:

https://github.com/dumindarw/graphql-mongo-test.git

GraphQL is a kind of query language: it gives you exactly what you want without bouncing between endpoints to request resources 🙂

There are two main categories of GraphQL operations:

  • Query
  • Mutation

Query is used when you want to 'GET' data – the whole data set or just part of it.
Mutation is used when you want to 'POST' or 'PUT' – i.e. insert or update records. Simple as that!

How to write queries

Get all users

query a {
  users {
    id,
    nic,
    username,
    currentaddr {
      district
    }
  }
}

Get a user by username

query b {
  user(username: "dum2") {
    nic
    firstname
    currentaddr {
      district
    }
    location {
      coordinates
    }
  }
}

The awesome part here is that you can add or remove fields as you wish.

How to write mutations

Save User

mutation c {
  adduser(user: {
    username: "dumindaw", firstname: "xxxx",
    lastname: "xxxx", password: "abc123", nic: "870750986V",
    deviceid: "1234567", email: "deemind@gmail.com", tp: "0779906999",
    location: {type: "Point", coordinates: [7.88, 81.44]},
    currentaddr: {district: "Kurunegala", dsdivision: "YMP"},
    verified: false, blackListed: false
  }) {
    insertedId
  }
}

Spring Boot Microservices with Keycloak

Microservice architecture is the new revolution of the software industry; it has been a popular topic since late 2015 and has become the go-to architecture for big and complex systems. It is a nice, maintainable replacement for the traditional monolithic (aka all-in-one) system. As the name says, you develop and deploy services as small business components, and not only your application layer but also your database layer should be componentized. Functionality and behaviour should be unique to each individual service, and dependencies should be injected via service calls, not via tightly coupled database relationships.

Here I am going to explain the basic components of a microservice system and how to develop a basic, functioning microservice with proper security. In my sample application I have separated the system into a few layers:

  1. Simple API (Micro-service; you can have multiple MSs)
  2. Service Registry
  3. API Gateway
  4. Identity and Access Management

Later you can introduce more layers to the system, such as a config service (Spring Cloud Config), log tracing (Sleuth), latency and fault tolerance (Hystrix), client-side load balancing (Ribbon), etc., but as a first step I am going to prepare the ground layer for a basic starter system.

You can refer to the Git repository below for the code setup.

https://github.com/dumindarw/springboot-microservices-starter.git

Step 1 : Configure your KeyCloak Server

Keycloak is an open-source Identity and Access Management solution developed by JBoss (running on top of WildFly). Download and extract the zip/tar file from https://www.keycloak.org/ and start up the server using:

C:\keycloak-4.5.0.Final\bin
λ .\standalone.bat -Djboss.socket.binding.port-offset=100

The server will start up on port 8180 (8080 plus the offset of 100). Visit http://localhost:8180/auth/ and create a new admin login first.

Visit http://localhost:8180/auth/admin/master/console/#/create/realm and import the JSON file found in the above Git repo (zuul-server/config). It will create the realm, clients, roles and users for you (for demo purposes). You can also create them manually if you wish.

Step 2 : Start your Eureka Server

Netflix Eureka is a service registry and service discovery component. It eases our work by registering each microservice in a centralized location and provides an easier way to do inter-process communication.
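A Eureka server is essentially a standard Spring Boot application annotated with @EnableEurekaServer. A minimal sketch (the class name is illustrative, and it needs the spring-cloud-starter-netflix-eureka-server dependency):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer // turns this Spring Boot app into the Eureka registry
public class EurekaServerApplication {

	public static void main(String[] args) {
		SpringApplication.run(EurekaServerApplication.class, args);
	}
}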

Build and run it using Maven; execute the command below inside the project directory.

C:\Users\Duminda\spring-workspace\microservices-project\eureka-server
λ mvn install && mvn spring-boot:run

This will start your Eureka server on port 8761

Step 3 : Start your Micro-service

C:\Users\Duminda\spring-workspace\microservices-project\api-service
λ mvn install && mvn spring-boot:run

This will start your API service on port 8080

Step 4 : Start your Zuul API Gateway

Netflix Zuul is an API gateway. It provides easy routing towards your microservices.
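The gateway itself is little more than a Spring Boot application with the Zuul starter; the actual route mappings live in its configuration. Again, a minimal sketch with an illustrative class name:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.EnableEurekaClient;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy     // enables Zuul's reverse-proxy routing filters
@EnableEurekaClient  // lets the gateway resolve services registered in Eureka
public class ZuulServerApplication {

	public static void main(String[] args) {
		SpringApplication.run(ZuulServerApplication.class, args);
	}
}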

C:\Users\Duminda\spring-workspace\microservices-project\zuul-server
λ mvn install && mvn spring-boot:run

This will start your ZUUL server on port 8762

Step 5 : Get your access token first

We have created a user duminda inside Keycloak (via the admin console) with the password 12345 (or you can create your own). We can now get a token using the POST request below. You can use Postman, SoapUI or cURL for that.

curl -k -u app-authz-rest-springboot:secret -d "grant_type=password&username=duminda&password=12345" -H "Content-Type: application/x-www-form-urlencoded" http://localhost:8180/auth/realms/spring-boot-quickstart/protocol/openid-connect/token

The above request returns a JWT access token together with some additional data such as the token expiry time, refresh token, etc.

{
  "access_token": "eyJhbGci......",
  "expires_in": 1800,
  "refresh_expires_in": 1800,
  "refresh_token": "eyJhbG .....",
  "token_type": "bearer",
  "not-before-policy": 0,
  "session_state": "1947e319-0793-4cb4-99a6-624d4961a209",
  "scope": "profile email"
}

Step 6 : Use the above JSON Web Token to call your web service through the API Gateway

curl -H "Authorization: Bearer access_token" http://localhost:8762/api/v1/resourceb

Replace access_token with the JWT and you can access the API resource. Without the Authorization token, the Zuul proxy will reject your request; it will never reach the destination (Microservice1 or Microservice2) and an unauthorized response will be returned.

The architecture diagram below shows how the system behaves from top to bottom.

Architecture Diagram

Note that this is a basic architecture; you can add more components and refresh-token handling to the system as well.

Set up Jasper Reporting Server behind an NGINX Virtual Host with SSL on Windows

JasperReports Server runs on top of a Tomcat instance and by default uses port 8080.
All Jasper-related files are located inside the webapps folder.

Ex- C:\Jaspersoft\jasperreports-server-cp-6.4.2\apache-tomcat\webapps\jasperserver

First we will set Jasper to run on http://localhost:8080 instead of http://localhost:8080/jasperserver.
To do that, we create a folder called webapps2 in the apache-tomcat folder and, inside it, a ROOT directory.
Then we copy the content of apache-tomcat/webapps/jasperserver into the newly created apache-tomcat/webapps2/ROOT/ folder.
After that, we have to tell Tomcat to look into our new webapps2 folder instead of the webapps folder. To do that:

Open apache-tomcat/conf/server.xml and change the Host element's appBase attribute to webapps2.

Restart the Jasper service (or Tomcat) and you will now see your Jasper server running at http://localhost:8080.

Configuring the Virtual Host

Open the Windows hosts file (C:\Windows\System32\drivers\etc\hosts) and add the line below:


127.0.0.1 reporting.duminda.com

Installing NGINX and setting it up as a Windows service

Download NGINX for Windows and extract it into your C:\ drive.
Your NGINX home would then be C:\nginx-1.15.2.

Open the nginx.conf file located in the conf directory and add the section below:


server {

    listen 443 ssl;
    server_name reporting.duminda.com;

    access_log logs/reporting.duminda.com.access.log;

    ssl_certificate     C:/Users/Duminda/certificate.crt;
    ssl_certificate_key C:/Users/Duminda/private-key.key;
    ssl_protocols       TLSv1 TLSv1.1 TLSv1.2;

    ssl_session_cache   shared:SSL:1m;
    ssl_session_timeout 5m;

    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://127.0.0.1:8080/;
    }
}

Create the .crt and .key files on your machine and replace ssl_certificate (your self-signed certificate) and ssl_certificate_key with the appropriate paths.

The above configuration creates a virtual host pointing to our Jasper server.

So all incoming requests to https://reporting.duminda.com will be passed to http://127.0.0.1:8080.

Set up NGINX as a Windows service

Download NSSM (https://nssm.cc/download) and install the service by executing the command below (as admin):


nssm.exe install nginx


In the NSSM dialog, set the arguments to -p C:\nginx-1.15.2

Now that the nginx service is installed, go to Windows Services and start it.

Visit https://reporting.duminda.com and enjoy jaspering…

“Luhn” Algorithm using ECMAScript 6

The Luhn Algorithm (https://en.wikipedia.org/wiki/Luhn_algorithm) is a very popular algorithm among developers. It is mostly used for credit card number validation.

The basic idea: you have a sequence of 15 digits, and from those 15 digits you calculate a check digit for the 16th position (as in a typical 16-digit card number).

The simplified algorithm works as follows:

  • Let's take a sample 15-digit number (123456789123456); we need to compute the check digit and complete the sequence.
  • First, reverse the digit sequence – 654321987654321
  • Take the digits in odd positions and multiply them by 2 – 12, 8, 4, 18, 14, 10, 6, 2
  • If a multiplied value is greater than 9, subtract 9 from it – 3, 8, 4, 9, 5, 1, 6, 2
  • The digits in even positions are left unchanged – 5, 3, 1, 8, 6, 4, 2
  • Add all of those numbers together to get a total

(3+8+4+9+5+1+6+2)+(5+3+1+8+6+4+2) = 67

  • To get the check digit, subtract the total from the next multiple of 10, which in our case is 70.

70 – 67 = 3
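Before switching to ES6, here is the same calculation as a compact Java sketch, just to double-check the arithmetic above:

public class LuhnCheckDigit {

	// Computes the Luhn check digit for a numeric payload, e.g. "123456789123456" -> 3
	static int checkDigit(String payload) {
		int sum = 0;
		// walk the payload from right to left; double every digit that lands in an
		// odd (1-based) position of the reversed sequence
		for (int i = 0; i < payload.length(); i++) {
			int digit = payload.charAt(payload.length() - 1 - i) - '0';
			if (i % 2 == 0) {          // odd position after reversing (1st, 3rd, ...)
				digit *= 2;
				if (digit > 9) {
					digit -= 9;        // same as summing the two digits of the product
				}
			}
			sum += digit;
		}
		return (10 - (sum % 10)) % 10; // distance to the next multiple of 10
	}

	public static void main(String[] args) {
		System.out.println(checkDigit("123456789123456")); // prints 3
	}
}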

Lets do this in ECMA6Script way…