
Ashnik Consulting Services

Strategic Consulting, System Integration and Implementation


  • Digitally transform your business
  • Scale out and automate your IT
  • Make your IT agile and efficient
  • Help you adopt an open source culture


Database Platform

We design the database platform that meets your requirements and digital goals. We help you install, configure and migrate to Postgres.

Postgres Backup and Replication


  • Ensuring high availability of data to the business
  • Minimizing loss of data and transactions during failures
  • Minimizing the time spent recovering the database after a failure
  • Setting up a Disaster Recovery site for instances of major failure
  • Making an appropriate choice from the various High Availability methods and tools
  • Identifying the appropriate replication technology and mode
  • Setting up load balancing to keep the database available for concurrent requests


  • We help you identify and implement the right replication technology based on your setup and key requirements, e.g. synchronous or asynchronous replication, snapshot or change-data replication, continuous or scheduled replication, single-master or multi-master replication, heterogeneous replication to/from Oracle or SQL Server
  • We implement a High Availability setup suited to your needs from the varied choices of HA methodologies, e.g. EDB Postgres Failover Manager, Red Hat Cluster Suite, Pgpool-II, Streaming Replication etc.
  • We set up your Disaster Recovery site and support your team in carrying out a mock disaster response activity
  • We identify your backup needs based on your recovery criteria and align your backup strategy with your business needs
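As an illustrative sketch of one common choice – asynchronous streaming replication with a hot standby – the core configuration looks roughly like this (PostgreSQL 12 or later assumed; hostnames, users and paths are placeholders):

```
# postgresql.conf on the primary
wal_level = replica          # generate enough WAL for a standby
max_wal_senders = 5          # allow replication connections
# synchronous_standby_names = 'standby1'   # uncomment for synchronous mode

# pg_hba.conf on the primary: allow the standby to connect for replication
# host  replication  replicator  192.0.2.10/32  scram-sha-256

# On the standby host: take a base backup and start in standby mode
# (-R writes the connection settings and standby.signal automatically)
pg_basebackup -h primary.example.com -U replicator \
    -D /var/lib/pgsql/data -R -X stream -P
```

Which mode is appropriate (and whether a tool such as Pgpool-II or EDB Failover Manager sits on top) depends on the recovery objectives identified during the engagement.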


  • Expertise across the numerous open source and commercial tools available
  • Our team of experts has successfully implemented these setups for business-critical workloads. For example, we helped an enterprise set up asynchronous replication scaling up to 1000 transactions per second with continuous synchronization
  • We empower your team to ensure the HA, DR and replication setups can co-exist
  • We help you set up a backup policy which aligns with your High Availability and Disaster Recovery strategies
  • We work on success-based pricing – only if we implement successfully do we charge you

Postgres Monitoring and Health Check


  • Minimizing business downtime
  • Avoiding failures and performance degradation
  • Keeping an up-to-date hardware and software configuration
  • Monitoring database regularly for critical issues and abrupt behaviour
  • Optimum utilization of resources


  • Selecting the right tools (such as Postgres Enterprise Manager) and training your team to analyze the results and information
  • Configuring the database to gather statistics on various performance-related parameters during peak hours
  • Checking whether the OS parameters are tuned to provide what the database needs
  • Providing tuning parameters for the database to make the most of what is available
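As one hedged example of such statistics gathering, the pg_stat_statements contrib extension can be preloaded to capture per-query timings (column names shown are for PostgreSQL 13 and later; earlier versions expose mean_time instead):

```
# postgresql.conf: preload the extension and track I/O timings
shared_preload_libraries = 'pg_stat_statements'
track_io_timing = on
```

```sql
-- After a restart, enable the extension and look at the slowest statements:
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 5;   -- the five slowest statements on average
```

The same data feeds tools such as Postgres Enterprise Manager, which visualize these statistics over time.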


  • Ensuring your database does not slow down over time
  • Addressing database growth well in time
  • Optimizing system utilization to maximize database throughput
  • Getting proper diagnostics and resolution of the root cause of existing issues, and identifying problems before they become critical
  • Maximizing the usage of available resources
  • Getting an up-to-date health check report of your database
  • Our experts will train your team to conduct daily monitoring of the database server

Oracle to Postgres Migration




  • Assessment report: a detailed object-level analysis conveying the compatibility quotient of your database, including analysis of your stored procedures, functions, packages and views for any incompatible programming constructs or keywords
  • Effort estimation: based on the above report, we give you a detailed effort estimate with responsibilities outlined clearly
  • Actual Migration services, inclusive of:
    • Migration of Schema and Structure
    • Data migration
  • Setup of replication for smoother transition and reduced downtime
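As a sketch of how such an engagement can be tooled, the open source ora2pg utility is one option (the actual toolset depends on the engagement) that covers both the assessment and the migration steps; config file, database names and output paths below are placeholders:

```
# Assessment report with migration cost estimate
ora2pg -c ora2pg.conf -t SHOW_REPORT --estimate_cost --dump_as_html > report.html

# Migrate schema and structure, then load into Postgres
ora2pg -c ora2pg.conf -t TABLE -o schema.sql
psql -d target_db -f schema.sql

# Data migration (COPY-based load into the target database)
ora2pg -c ora2pg.conf -t COPY
```

The assessment report is what yields the compatibility quotient mentioned above, before any migration work begins.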


  • Only a highly skilled team experienced in complex migration projects takes responsibility for your Migration/Upgrade
  • A compatibility score of your database to assess the risks and effort is known upfront
  • Automated tools and experts to take care of Migration and Upgrade
  • Support during testing
  • Minimum downtime

Postgres Remote and Annual Maintenance


  • Recruiting and retaining full-time resources to support the database effectively
  • Diagnosing and taking necessary measures to maintain data on a regular basis


  • Regular health check-ups of your database
  • DBA support during OS or hardware maintenance activities
  • Restoring/recovering the database after a failure
  • Upgrading from one version to the next
  • Assisting during critical issues with the database server, or resolving critical application-related issues in the database, for example the need to tune a poorly performing stored procedure


  • Save cost of developing skills or hiring new resources
  • A ‘trusted resource on demand’ facility for all your PostgreSQL needs

  • Expertise of certified professionals in maintaining and keeping your database up-to-date
  • Tuning the database parameters on a continual basis
  • Timely upgradation of database


Adopting a Microservices Architecture but not sure how? Work with us to build a sustainable Microservices Architecture pattern.

Building a scalable Microservices Architecture – Consulting services for designing a Microservices platform


  • A clear analysis of the decision boundaries based on standard enterprise patterns of Microservices
  • Design of a Microservices approach for one application. This design is provided as a series of documents and diagrams:
    • Context View – the overall understanding of the system at the enterprise level. This macro view lets one see the various functions and practices across the whole initiative; it is a bird's-eye view of the system design.
    • Container View – describes the 'containers' the final system is deployed as. It maps the specific technologies implemented in a given context and how they relate to each other. When the system is built as a series of Microservices, this is the level that describes the microservices and how they interact. This view covers the more detailed aspects of application design.
  • Documentation around the non-functional requirements that emerge out of the decision boundary discovery.

  • Technology recommendations around each of the decision areas

Out of Scope

  • Hands-on decomposition of monolithic applications – the implementation
  • Documentation of any feature sets or user stories – these must be provided by the product management or design teams
  • The component and code views are ‘implementation’ views and are considered out-of-scope for the engagement
  • Documentation of any form for the existing application
  • Exchange or review of any dataset or information deemed confidential or sensitive by the engaging party


  • Scheduled meetings with the relevant points of contact as may be needed for appropriate discovery of needs and decision boundaries (per the details in the approach section above)
  • Documentation around the decision boundaries, where available

  • Existing monolithic application along with any available documentation
  • Any potential user stories, especially business rules that must be kept in consideration when designing the new solution
  • A documentation of the database (or databases) involved along with working sample data
  • The sample data must be sanitized and any sensitive information must be appropriately masked or obfuscated while still maintaining the integrity of the schema
  • Familiarity with Linux and open source tools such as NGINX, Docker, Mesos, etc.


  • All documentation converted to PDF and delivered as a ZIP file
  • Half-day session sharing observations, best practices and recommendations for Microservices


Looking for an agile and continuous model of software development and delivery? Explore our offering here:

Setting up Container Platform – Designing a highly-available, scalable and secure platform using Docker


  • Docker platform architecture designed for high-availability and scalability
  • Advisory on container runtime security
  • Image security with Docker Content Trust to verify both contents and publisher of images
  • Insights into exposure and vulnerabilities in software/libraries through image scanning
  • Enforcement of image signing for image validation
  • Access control to improve DevOps collaboration while maintaining clear boundaries
  • Recommendations on container orchestration and best practices
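As an illustration of the image-signing enforcement mentioned above, Docker Content Trust can be switched on per shell session; the registry and image names below are placeholders:

```
# Refuse to pull or run unsigned images for this session
export DOCKER_CONTENT_TRUST=1

# Sign and push an image, then verify its signatures
docker trust sign registry.example.com/myapp:1.0
docker trust inspect --pretty registry.example.com/myapp:1.0
```

With the variable set, `docker pull` of an unsigned tag fails, which is what enforces image validation across the team.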

Out of Scope

  • Configuring any Docker environment
  • Writing Dockerfile and building Docker images
  • Integrating Docker with other tools
  • Fixing any existing Docker issue which is not related to Docker Architecture as scoped


  • Fair knowledge of Docker, Swarm or Kubernetes
  • Existing usage of Docker in Dev/Test/Prod
  • Experience building Docker images for the case study
  • Documentation of the current Docker architecture and process


  • Documentation in PDF with recommendations and best practices for the areas covered in the scope
  • Half-day session sharing observations, best practices and recommendations for your existing Docker setup

Optimizing CI/CD Pipeline – Consulting Services for DevOps best practices


  • Review the current CI/CD pipeline across the build, test, deploy and monitor stages
  • Assess current capabilities and level of maturity of automation
  • Analyse areas for improvement to expedite the software development lifecycle
  • Recommend enhancements to infrastructure monitoring for better visibility into microservices
  • Review security and compliance across the container development and deployment lifecycle

Out of scope

  • Configuration of current CI/CD tools sets
  • Integration of CI/CD toolsets with other components not related to DevOps
  • Investigation of CI/CD pipeline breakdown
  • Performance testing of toolsets


  • Documentation of current CI/CD processes and architecture
  • Details on the current CI/CD toolsets and versions used
  • Details on observations/challenges in the current CI/CD pipeline
  • A sample application with a full CI/CD pipeline


  • Documentation in PDF with recommendations and best practices for the areas covered in the scope
  • Half-day session sharing observations, best practices and recommendations for DevOps

Data Pipeline

Building Data Pipeline and Log Aggregation using ELK


  • Discussion to understand the requirements for different data sources in your organization
  • Design and architect an appropriate ELK cluster based on data size and data store requirements
  • Install and configure the ELK cluster, with Elasticsearch as the data store, Logstash for data modelling and Kibana for data visualization:
    • Configure all the Elasticsearch nodes in high-availability and replication mode
    • Configure Elasticsearch indexes with appropriate shards
    • Install and configure Logstash to take data inputs from Beats and output to Elasticsearch for data ingestion
    • Build grok patterns for the given data sources to extract appropriate fields in the index for data enrichment
    • Install and configure Kibana for data search and visualization in Active-Passive mode
    • Install and configure keepalived or similar software for Kibana HA
    • Install and configure X-Pack for the Logstash, Elasticsearch and Kibana nodes
    • Configure SSL/TLS security for the Elastic cluster
    • Activate the subscription model for support
    • Install and configure appropriate Beats as per the agreed sources
  • Build email and SMS alerts using the Watcher feature of Elastic X-Pack
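To make the Beats → Logstash → Elasticsearch flow concrete, a minimal pipeline configuration might look like the sketch below; hosts, index names and the grok pattern are placeholders to be adapted per data source:

```
# logstash.conf – Beats input, grok enrichment, Elasticsearch output
input {
  beats { port => 5044 }
}
filter {
  grok {
    # split a simple "LEVEL message" log line into structured fields
    match => { "message" => "%{LOGLEVEL:level} %{GREEDYDATA:msg}" }
  }
}
output {
  elasticsearch {
    hosts => ["https://es-node1:9200"]
    index => "app-logs-%{+YYYY.MM.dd}"
  }
}
```

One such grok pattern is built per agreed data source, so that fields land in the index already enriched for search and dashboards.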

Out of Scope

  • New integrations with 3rd party software
  • Hardware, OS and Network related issues
  • Installation / Implementation of hardware – Server and Network
  • Review/Changes in application code
  • Changes in server platform or OS
  • Changes in application design
  • Functional testing for application and any code written in database
  • Report creation, unless otherwise specified

Prerequisites include, but are not limited to:

  • User access (root or similar) to perform the required installation on source and sandbox machines (RPM preferred)
  • All nodes should be installed with the latest OS and kernel patches
  • All software and hardware components should be under valid subscription
  • For details of supported platforms and compatibility, please refer to the online documentation:
  • The ports 9200 (Elasticsearch), 5601 (Kibana) and 5044 (Logstash) should be open between the subnets
  • All nodes should be in the same subnet
  • If there is a custom user installation with custom folders for logs, data and binaries, please provide the pseudo user along with the mount points/folders for data, logs and binaries for Elasticsearch, Logstash, Kibana and Filebeat
  • The latest software for Elasticsearch, Logstash and Kibana should be downloaded (RPM or tar)
  • A single point of contact from the client side who understands both the infrastructure and the project use cases


  • Sizing of the Elasticsearch cluster based on the inputs provided
  • Installation and configuration of the Elasticsearch cluster
  • Installation and configuration of Logstash and Kibana with HA
  • Data ingestion from different sources into the Elasticsearch cluster
  • Data enrichment using Logstash
  • Build of the entire data pipeline per the scope above
  • Sample dashboards created on a best-effort basis
  • Sample alerts created on a best-effort basis
  • Documentation in PDF and knowledge transfer on the setup

Want to know more?

Get in touch with queries for your specific use case, pricing of our consulting services, technology product information, solution offerings or to know how open source can upgrade your enterprise platform.