Microservices with Quarkus – GraphQL API + Reactive MySQL

Quarkus (https://quarkus.io) is a supersonic, subatomic Java framework. It is specially designed for microservices and is similar to Micronaut. Spring Boot ships with a lot of features built in and is a fairly heavy solution if you are focusing on microservice patterns such as reactive streams, messaging, low-latency API calls, etc. Quarkus has a quick boot time compared to Spring Boot and is designed with a “container first” strategy.

In this article I am going to illustrate how to combine super cool features such as GraphQL and Reactive SQL to build a high-performing microservice component.

Please use this Git repo as a reference:


Step 1: Creating Quarkus Project

The Quarkus ecosystem contains a long list of cool and great extensions; in this project I am going to use the extensions below.

quarkus-vertx-graphql – Eclipse Vert.x GraphQL integration with Quarkus
graphql-java-extended-scalars – Support for scalar types such as Date, Time etc.
quarkus-resteasy – REST library comes with Quarkus
quarkus-reactive-mysql-client – MySQL reactive support library
quarkus-config-yaml – Enable YAML extension instead of legacy property files

mvn io.quarkus:quarkus-maven-plugin:1.2.1.Final:create \
    -DprojectGroupId=com.duminda \
    -DprojectArtifactId=quarkus-graphql-project

After creating the project, go to the “quarkus-graphql-project” directory and add the dependencies below manually.
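The extensions listed above map to the following Maven dependencies. This is a sketch: the Quarkus artifact versions are managed by the Quarkus BOM, and the extended-scalars version shown is only an example.

```xml
<!-- Quarkus extensions (versions managed by the Quarkus BOM) -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-vertx-graphql</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-reactive-mysql-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-config-yaml</artifactId>
</dependency>
<!-- extended scalar types (Date, Time, ...); version is an example -->
<dependency>
    <groupId>com.graphql-java</groupId>
    <artifactId>graphql-java-extended-scalars</artifactId>
    <version>1.0</version>
</dependency>
```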


Rename application.properties to application.yml

Step 2: Create GraphQL types and scalars

Create a group.graphqls file inside your resources folder and add your types, queries, and mutations.

scalar Date

type Group {
  id: String
  name: String
  createdBy: String
  createdDate: Date
  blacklisted: Boolean
}

type Query {
  allGroups(isBlackListed: Boolean = false): [Group]
}

type Mutation {
  addGroup(name: String, createdBy: String): Boolean
}

Here I have one model called Group, one query, and one mutation.

Create same model as a POJO class (Group.java)
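A minimal sketch of that POJO, mirroring the schema fields. The all-args constructor and the use of LocalDateTime for the Date scalar are assumptions; adapt them to the actual repo.

```java
import java.time.LocalDateTime;

public class Group {
    private String id;
    private String name;
    private String createdBy;
    private LocalDateTime createdDate; // maps to the custom Date scalar
    private boolean blacklisted;

    public Group() { }

    // all-args constructor (an assumption, convenient for row mapping)
    public Group(String id, String name, String createdBy,
                 LocalDateTime createdDate, boolean blacklisted) {
        this.id = id;
        this.name = name;
        this.createdBy = createdBy;
        this.createdDate = createdDate;
        this.blacklisted = blacklisted;
    }

    // getters/setters: GraphQL data fetchers resolve fields through these
    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getCreatedBy() { return createdBy; }
    public void setCreatedBy(String createdBy) { this.createdBy = createdBy; }
    public LocalDateTime getCreatedDate() { return createdDate; }
    public void setCreatedDate(LocalDateTime createdDate) { this.createdDate = createdDate; }
    public boolean isBlacklisted() { return blacklisted; }
    public void setBlacklisted(boolean blacklisted) { this.blacklisted = blacklisted; }
}
```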

Step 3: Service layer implementation

MySQL table model (tbl_mygroups)
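A possible DDL for that table, matching the columns selected in the service layer below; the column types and lengths are assumptions.

```sql
CREATE TABLE tbl_mygroups (
    id          VARCHAR(36)  NOT NULL PRIMARY KEY,
    name        VARCHAR(255) NOT NULL,
    createdBy   VARCHAR(255),
    createdDate DATETIME,
    blacklisted BOOLEAN DEFAULT FALSE
);
```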

In the group.graphqls file I have two operations, allGroups and addGroup. We have to implement both inside our service layer.

public CompletionStage<List<Group>> allAvailableGroups(DataFetchingEnvironment env) {
	boolean isBlackListed = env.getArgument("isBlackListed");

	return client.query("SELECT id, name, createdBy, createdDate, blacklisted FROM tbl_mygroups")
			.thenApply(rowSet -> {
				List<Group> list = new ArrayList<>(rowSet.size());
				for (Row row : rowSet) {
					// keep only rows matching the requested blacklist flag
					if (row.getBoolean("blacklisted") == isBlackListed) {
						list.add(new Group(row.getString("id"), row.getString("name"),
								row.getString("createdBy"), row.getLocalDateTime("createdDate"),
								row.getBoolean("blacklisted")));
					}
				}
				return list;
			});
}

The method above, inside our service class, returns an asynchronous computation.
Since our MySQL client returns its results reactively, we pass the returned data on to our API layer, the GraphQL handler registered on the Vert.x Router.

public void init(@Observes Router router) {

	GraphQL graphQL = setupGraphQL();
	GraphQLHandler graphQLHandler = GraphQLHandler.create(graphQL);

	router.route("/graphql").handler(graphQLHandler);
}


Another important part is the RuntimeWiring:

RuntimeWiring runtimeWiring = RuntimeWiring.newRuntimeWiring()
			.type("Query", builder -> builder.dataFetcher("allGroups", taskRepo::allAvailableGroups))
			.type("Mutation", builder -> builder.dataFetcher("addGroup", taskRepo::addGroup))
			.build();

Here we wire our repository functions (allAvailableGroups, addGroup) to our GraphQL operations (allGroups, addGroup) at runtime.

Step 4: Database configurations (application.yml)

quarkus:
  http:
    port: 8088
  datasource:
    url: vertx-reactive:mysql://x.x.x.x:3306/test_db
    username: user
    password: pass

Step 5: Run project in development mode

./mvnw compile quarkus:dev

Step 6: Access the GraphQL browser and call the APIs


#Query API to get all groups

query {
  allGroups {
    id
    name
    createdBy
    createdDate
    blacklisted
  }
}

#Mutation to add a new group

mutation {
  addGroup(name: "my-group", createdBy: "duminda")
}

Let's Encrypt – HTTPS is the better way

Let's Encrypt is a free SSL Certificate Authority (CA). It provides free, renewable certificates so you can secure connections to your resources from the internet (https://letsencrypt.org/). Here I am going to explain how to secure a web app (in my case, Jenkins running on port 8080) using Let's Encrypt and Nginx.

First, you have to have your app running in a cloud environment like Azure or AWS. Make sure you get a domain name for your VM instance (e.g. blog.southeastasia.cloudapp.azure.com). You need a proper domain name configured for your cloud instance; otherwise Let's Encrypt will reject your certificate request.

Things to pre-configure in your cloud before you begin

  1. Open both ports 80 and 443 to the public
  2. Make sure you have a domain name for your resource

In my case, I have Jenkins up and running in a Docker container with internal port 8080 and external port 8081, attached to an external network called ‘nginx-network’.

version: '3.7'

services:
  jenkins:
    image: jenkins/jenkins:latest
    container_name: jenkins
    user: root
    environment:
      - JENKINS_ARGS="--prefix=/jenkins"
    volumes:
      - ./jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
      - /usr/local/bin/docker-compose:/usr/local/bin/docker-compose
    ports:
      - 8081:8080
      - 50000:50000
    networks:
      - nginx-network

networks:
  nginx-network:
    external: true

Configuring NginX with Letsencrypt + Certbot

Download the Let's Encrypt automation script from here


Replace the domain name with yours

Put ‘init-letsencrypt.sh’ in the same folder as your ‘docker-compose.yml’. The docker-compose file that spins up Nginx and obtains certificates from Let's Encrypt using Certbot is as follows.

version: '3.7'

services:
  nginx:
    image: nginx:1.17.6-alpine
    container_name: nginx
    volumes:
      - ./nginx_data:/etc/nginx/conf.d
      - ./certbot_data/conf:/etc/letsencrypt
      - ./certbot_data/www:/var/www/certbot
    ports:
      - 80:80
      - 443:443
    networks:
      - nginx-network

  certbot:
    image: certbot/certbot
    volumes:
      - ./certbot_data/conf:/etc/letsencrypt
      - ./certbot_data/www:/var/www/certbot
    networks:
      - nginx-network

networks:
  nginx-network:
    external: true

Create all the empty directories and put your ‘app.conf’ file inside the ‘nginx_data’ directory.

upstream jk {
    server jenkins:8080;
    keepalive 256;
}

server {

        listen 80 default_server;
        listen [::]:80 default_server;

        server_name xxx.southeastasia.cloudapp.azure.com;

        location /.well-known/acme-challenge/ {
                root /var/www/certbot;
        }

        location / {
                return 301 https://$host$request_uri;
        }
}

server {
        listen 443 ssl;
        server_name xxx.southeastasia.cloudapp.azure.com;

        ssl_certificate /etc/letsencrypt/live/xxx.xx.cloudapp.azure.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/xxx.xx.cloudapp.azure.com/privkey.pem;
        include /etc/letsencrypt/options-ssl-nginx.conf;
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

        location / {

                proxy_set_header        Host $host:$server_port;
                proxy_set_header        X-Real-IP $remote_addr;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Forwarded-Proto $scheme;
                proxy_redirect http:// https://;
                proxy_pass http://jk;

                proxy_http_version 1.1;
                proxy_request_buffering off;
                proxy_buffering off;
        }
}

Now the configuration is done. Let's get the certificates.

Execute the ‘init-letsencrypt.sh’ script and see the magic!

Then visit your web app in the cloud. You can now see your free SSL certificate, issued by Let's Encrypt, in action.

Website SSL
Certificate Details

Let's Encrypt, coz self-signed is for kids 🙂 …..

SpringBoot CI/CD with Azure Container Registry and Jenkins (Part 1)

Create Git Clone, ACR Push/Pull, Docker Run Pipeline Stages

Jenkins is the most widely used pipelining tool for continuous deployment. Here I am going to explain how to push a Spring Boot application image to Azure Container Registry (ACR) and reuse it inside your docker-compose file when you run your deployment. There are some basic steps to complete before the Jenkins pipeline part.

  1. From your Azure console, create a container registry and get your login password from Settings -> Access keys (we are going to create a Jenkins credential ID using this)
  2. You have a Spring Boot app with a Dockerfile, a docker-compose file, and a Jenkinsfile located in the same directory structure
SpringBoot Application Directory Structure
Azure Container Registry

Here is sample content for the above files

docker-compose file

If you are running Jenkins itself from the Jenkins Docker image, you need the configuration below.

First, execute the following on your Jenkins host machine:

sudo adduser jenkins
sudo usermod -a -G docker jenkins

Here I map the docker and docker-compose executables from the host machine into the Jenkins container, so Jenkins can execute Docker commands.

Jenkins docker-compose file

Jenkins – Create pipeline using existing Jenkinsfile

Log into your Jenkins and select “New Item” -> “Pipeline”, then put a suitable name for your build

You can add triggers later; for now, go to the “Pipeline” section and select “Pipeline script from SCM”

Put in your Git repo; if your repo is private, add credentials by clicking the Add button

Mention your branch; if you are using a branch other than master, put that name

The script path is “Jenkinsfile”; it is a path relative to your project root

Adding Azure Container registry credentials to Jenkins

Go to Credentials -> System -> Global credentials (unrestricted) -> Add Credentials

Username/password Entry

In your pipeline script, use the same credential ID as above

Now you can click “Build Now”

Build Now

Summary of the flow (pipeline)

  1. Compile your java code using maven
  2. Package it to jar
  3. Optimize the jar (break up the fat jar)
  4. Build docker image
  5. Push the image to ACR with tag
  6. Pull back the image
  7. Remove all previous docker containers
  8. Spin up all the containers (including dependencies) using docker-compose
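The stages above can be sketched as a declarative Jenkinsfile. This is only a sketch: the registry name, image name, and the ‘acr-credentials’ credential ID are placeholders, not values from the repo.

```groovy
pipeline {
    agent any
    environment {
        ACR_REGISTRY = 'myregistry.azurecr.io'           // placeholder registry
        IMAGE        = "${ACR_REGISTRY}/myapp:${BUILD_NUMBER}" // placeholder image + tag
    }
    stages {
        stage('Build & Package') {
            steps {
                sh 'mvn clean package'
            }
        }
        stage('Docker Build') {
            steps {
                sh "docker build -t ${IMAGE} ."
            }
        }
        stage('Push to ACR') {
            steps {
                // 'acr-credentials' must match the Jenkins credential ID created below
                withCredentials([usernamePassword(credentialsId: 'acr-credentials',
                        usernameVariable: 'ACR_USER', passwordVariable: 'ACR_PASS')]) {
                    sh "docker login ${ACR_REGISTRY} -u ${ACR_USER} -p ${ACR_PASS}"
                    sh "docker push ${IMAGE}"
                }
            }
        }
        stage('Deploy') {
            steps {
                // remove previous containers, then spin everything up via compose
                sh 'docker-compose down && docker-compose up -d'
            }
        }
    }
}
```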

You can check the containers after the deployment

Running Containers



To be continued ….

Run ApacheDS on Docker

ApacheDS is free and open-source directory management software. If you need to quickly set up LDAP for any of your projects, ApacheDS is the easiest and most convenient solution. There are more advanced and feature-rich solutions such as FreeIPA, but their setup will be a pain for developers (if you are not from a DevOps background).

Unfortunately, there is no official image on Docker Hub for ApacheDS. Here are some simple steps to get your own Docker-based LDAP server up and running!

Build the ApacheDS Image

docker build -t apacheds .

Run the container

docker run -dt --name apacheds_container -p 389:10389 -p 636:10636 apacheds:latest

Here we expose 389 as our LDAP port to the outside world (container port 10389 mapped to host port 389, and 10636 to 636 for LDAPS)

LDAP Clients

There are various LDAP clients available to examine and view directory structures.

It is recommended to use Apache Directory Studio, but for a quick demo I will use JXplorer (the default admin password is ‘secret’)

Now you have full access to your LDAP. You can create new partitions and add entries as you wish.

Command Query Responsibility Segregation Pattern Implementation – Spring Boot

CQRS is an architectural pattern which solves complexity issues in larger projects. CQRS simplifies your service layer by dividing it into two main blocks. Many people get confused about this pattern, but it is simple.

  1. Commands
  2. Queries
  1. What is a Command – a Command is basically CRUD without the R, i.e. CUD (CREATE, UPDATE, DELETE). Simple, right? Commands are database transaction operations: you change the state of your database by inserting, editing, or deleting a record or set of records.
  2. What is a Query – a Query is the R in CRUD, i.e. RETRIEVE. Queries fetch database records and do not deal with transactional state changes.
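To make the separation concrete, here is a minimal, framework-free sketch. The salary-related names and the in-memory store are illustrative assumptions; in the Spring setup described below, the command side would be a Spring Data JPA repository and the query side a MyBatis mapper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;

// Command side: state-changing operations only (Create, Update, Delete)
interface SalaryCommandService {
    void create(String employee, double amount);
}

// Query side: read-only operations, free to return lean DTOs
interface SalaryQueryService {
    Optional<Double> findByEmployee(String employee);
}

// One in-memory store implementing both sides, purely for illustration
class InMemorySalaryStore implements SalaryCommandService, SalaryQueryService {
    private final List<String[]> rows = new ArrayList<>();

    @Override
    public void create(String employee, double amount) {
        // a Command mutates state and returns nothing
        rows.add(new String[] { employee, String.valueOf(amount) });
    }

    @Override
    public Optional<Double> findByEmployee(String employee) {
        // a Query only reads, never mutates
        return rows.stream()
                .filter(r -> r[0].equals(employee))
                .map(r -> Double.valueOf(r[1]))
                .findFirst();
    }
}
```

Callers that only need to read depend on SalaryQueryService alone, which keeps the two responsibilities cleanly separated.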

Is this pattern something you should try?

Yes, but “use responsibly”. It may increase the complexity of small projects, but it is ideal if you need a clean separation between commands and queries.

What technologies should we use for this?

In the Spring ecosystem, an ORM (i.e. any JPA implementation) is ideal for the command side, since ORMs are optimized to persist complex objects. I recommend a Spring Data JPA CrudRepository over plain Hibernate, since Spring Data's CRUD support is well optimized and can handle almost all CRUD operations. If you have a really complex DB architecture, use Hibernate together with Spring Data.

For queries we can use MyBatis. Hibernate would attempt to load the entire object graph, and you would need to start tuning queries with lazy-loading tricks to keep it working on a large domain. This matters when running complex analytic queries that do not even return entity objects: Hibernate only offers SqlQuery and bean transformers in this case, with clumsy default types like BigDecimal, while MyBatis can easily map to a simple non-entity POJO (https://softwareengineering.stackexchange.com/questions/158109/what-are-the-advantages-of-mybatis-over-hibernate). If you just want to fetch information, use MyBatis.

Technically, that is how you separate the two components. You can find a simple implementation in the repository below.


In above example there are two end-points for Query and Command

Command – http://localhost:8081/doc/upload/Salary [POST]

Query – http://localhost:8081/doc/download/Salary/1 [GET]

The first end-point is a multipart file-upload end-point; it persists the CSV content to the table, which is a CREATE operation. I have used JPA for this operation.

The second end-point retrieves data from the DB by documentID. I have used MyBatis for this operation.

GraphQL – APIs to Next Level

Have you ever tried GraphQL for your APIs? Still stuck with traditional REST/SOAP? Give GraphQL a try!

GraphQL, aka Graph Query Language, was originally developed by Facebook for their own product lines and became publicly available a few years ago. If you intend to design and develop a lightweight but powerful API that is dynamic and responds with neither more nor less data than requested, this is the ideal selection for you.

Most of the time, the APIs we write day to day request a resource and respond with a bunch of data. But think about it: different components of your system require different combinations of data. For example, one component of your application needs the full profile details from the server, while another component needs the birth date only. How are you going to design such a scenario? With REST you would have to design two APIs: one to get the user profile (/user/profile) and one to get the user's birthday (/user/birthday). Or you might think: why can't I use the profile API (/user/profile) for both? But reusing such a time-consuming, bulky API again and again slows down your system, especially if you care about mobile-friendly design. Using two APIs solves the problem, but it means a lot more code to write and increases your app's complexity and maintenance burden.
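With GraphQL, by contrast, the birthday component simply narrows its selection set against the same endpoint. The field names here are illustrative, not taken from the sample repo:

```graphql
# one endpoint, two shapes: this component asks only for the birthday
query {
  user(username: "duminda") {
    birthday
  }
}
```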

Here is the rescuer: GraphQL

A sample express-graphql implementation with unit tests is available here for your reference.


GraphQL is a kind of query language: it gives you exactly what you ask for, without going here and there requesting resources 🙂

There are two main categories in GraphQL

  • Query
  • Mutation

Query is used when you want to ‘GET’ data: the whole data set or part of it
Mutation is used when you want to ‘POST’ or ‘PUT’, i.e. when you need to insert or update records. Simple as that!

How to write queries

Get all users

query a{
  users {
    username
    firstname
    email
  }
}

Get user by id

query b{
  user(username: "dum2"){
    username
    firstname
    currentaddr {
      type
      coordinates
    }
  }
}

The awesomeness here is that you can add or remove fields as you wish.

How to write mutations

Save User

mutation c{
  adduser(user: {username:"dumindaw", firstname:"xxxx",
    deviceid:"1234567", email:"deemind@gmail.com", tp:"0779906999",
    location:{type:"Point", coordinates: [7.88,81.44]}
  })
}

Spring Boot Microservices with Keycloak

Microservice architecture is the new revolution of the software industry. It has been a popular topic since late 2015 and has now become the best-suited architecture for bigger and more complex systems. It is a nice, maintainable replacement for the traditional monolithic (aka all-in-one) system. As the name says, you have to develop and deploy services as small business components; not only your application layer but also your DB layer needs to be componentized. Functionality and behaviour should be unique to the individual service, and dependencies should be injected via service calls, not via tightly bound DB relationships.

Here I am going to explain the basic components of a microservice system and how to develop a basic, functioning microservice with proper security. In my sample application I have separated the system into a few layers:

  1. Simple API (Micro-service; you can have multiple MSs)
  2. Service Registry
  3. API Gateway
  4. Identity and Access Management

Later you can introduce more layers to the system, such as a config service (Spring Cloud Config), log tracing (Sleuth), latency and fault tolerance (Hystrix), client-side load balancing (Ribbon), etc., but as a first step I am going to prepare the ground layer for a basic start-up system.

You can refer to the GIT repository below for the code setup.


Step 1 : Configure your KeyCloak Server

Keycloak is an open-source Identity and Access Management solution developed by JBoss (on top of WildFly). Download and extract the zip/tar file from “https://www.keycloak.org/” and start the server using:

λ .\standalone.bat -Djboss.socket.binding.port-offset=100

The server will start on port 8180. Visit http://localhost:8180/auth/ and create a new admin login first.

Visit http://localhost:8180/auth/admin/master/console/#/create/realm and import the JSON file found in the above GIT repo (zuul-server/config). It will create the realm, clients, roles, and users for you (for demo purposes). You can also create them manually if you wish.

Step 2 : Start your Eureka Server

Netflix Eureka is a service registry and service discovery component. It eases our work by registering each microservice in a centralized location, and it provides an easier way to do inter-process communication.

Build and run using Maven; execute the command below within the project location.

λ mvn install && mvn spring-boot:run

This will start your Eureka server on port 8761
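For reference, a standalone Eureka server is typically configured like this. This is a sketch, not taken from the repo; a standalone registry does not register with, or fetch from, itself.

```yaml
server:
  port: 8761

eureka:
  client:
    # standalone registry: don't register with or fetch from itself
    register-with-eureka: false
    fetch-registry: false
```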

Step 3 : Start your Micro-service

λ mvn install && mvn spring-boot:run

This will start your API service on port 8080

Step 4 : Start your Zuul API Gateway

Netflix Zuul is an API gateway. It provides easy routing towards your microservices.

λ mvn install && mvn spring-boot:run

This will start your ZUUL server on port 8762
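A minimal Zuul routing configuration could look like the following sketch. The service name is a placeholder; the path matches the resource URL used in step 6.

```yaml
server:
  port: 8762

zuul:
  routes:
    api:
      path: /api/**
      # placeholder: the microservice's name as registered in Eureka
      serviceId: microservice1

eureka:
  client:
    service-url:
      defaultZone: http://localhost:8761/eureka/
```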

Step 5 : Get your access token first

We have created a user duminda inside Keycloak (via the admin console) with password 12345 (or you can create your own). We can now get a token using the POST request below. You can use Postman, SoapUI, or curl for that.

curl -k -u app-authz-rest-springboot:secret -d "grant_type=password&username=duminda&password=12345" -H "Content-Type: application/x-www-form-urlencoded" http://localhost:8180/auth/realms/spring-boot-quickstart/protocol/openid-connect/token

The above request will return a JWT along with some additional data, like the token expiry time, refresh token, etc.

{
  "access_token": "eyJhbGci......",
  "expires_in": 1800,
  "refresh_expires_in": 1800,
  "refresh_token": "eyJhbG .....",
  "token_type": "bearer",
  "not-before-policy": 0,
  "session_state": "1947e319-0793-4cb4-99a6-624d4961a209",
  "scope": "profile email"
}

Step 6 : Use the above JSON Web Token to call your web service through the API Gateway

curl -H "Authorization: Bearer access_token" http://localhost:8762/api/v1/resourceb

Replace access_token with the JWT and you can access the API resource. Without an Authorization token, the ZUUL proxy will reject your request; it will not reach the destination (Microservice1 or Microservice2) and will return an unauthorized response.

This architecture diagram shows how the system behaves top to bottom.

Architecture Diagram

Note that this is a basic architecture. You can add more components to the architecture and add refresh token capability to the system as well.