#35 Technology Trends – What 2021 Gave Us

Below is a list of the technology advancements that happened in 2021. The year was tough from a COVID perspective and impacted our lives in every way, but there is a silver lining in what happened in the technology world.

Have a look at the Technology trends 2021 list:

1. mRNA Vaccine

The two most important vaccines against the coronavirus are based on messenger RNA, a technology that has been in the labs for 20 years. When the COVID-19 pandemic started last year, researchers at several biotech companies were quick to turn to mRNA as a way to create effective vaccines; in late 2020, at a time when more than 1.5 million people had died from COVID-19 worldwide, the vaccines were approved in the US, marking the beginning of the end of the pandemic.

How do mRNA vaccines work?

The new COVID vaccines are based on a technology that has never been used in this way before, and it could transform medicine, leading to vaccines against various infectious diseases, including malaria. And if this coronavirus keeps mutating, mRNA vaccines can be easily and quickly modified. Messenger RNA also holds great promise as the basis for cheap gene fixes for sickle-cell disease and HIV. Also in the works: using mRNA to help the body fight off cancers.

Source credit: MIT mRNA Research

For more detail on how mRNA vaccines work, please check this – How mRNA Vaccines Work?

2. Lithium Metal Batteries

Electric vehicles come with a tough sales pitch; they’re relatively expensive, and you can drive them only a few hundred miles before they need to recharge—which takes far longer than stopping for gas. All these drawbacks have to do with the limitations of lithium-ion batteries. A well-funded Silicon Valley startup now says it has a battery that will make electric vehicles far more palatable for the mass consumer.

Generations of Batteries

It’s called a lithium-metal battery and is being developed by QuantumScape. According to early test results, the battery could boost the range of an EV by 80% and can be rapidly recharged. The startup has a deal with VW, which says it will be selling EVs with the new type of battery by 2025.

The battery is still just a prototype that’s much smaller than one needed for a car. But if QuantumScape and others working on lithium-metal batteries succeed, it could finally make EVs attractive to millions of consumers.

Source credit: How lithium-metal batteries work?

3. Multi-Skilled AI

Despite the immense progress in artificial intelligence in recent years, AI and robots are still dumb in many ways, especially when it comes to solving new problems or navigating unfamiliar environments. They lack the human ability, found even in young children, to learn how the world works and apply that general knowledge to new situations.

One promising approach to improving the skills of AI is to expand its senses; currently, AI with computer vision or audio recognition can sense things but cannot “talk” about what it sees and hears using natural-language algorithms. But what if you combined these abilities in a single AI system? Might these systems begin to gain human-like intelligence? Might a robot that can see, feel, hear, and communicate be a more productive human assistant?

Source Credit: AI Armed with Multiple Senses.

4. Digital contact tracing

As the coronavirus began to spread around the world, it was believed that digital contact tracing might help us. Smartphones could use GPS or Bluetooth to create a database of people who had recently interacted with someone. If one of them tested positive later on, they could inform a central repository, which in turn could raise an alarm for all the people who had been in contact with them.

Digital Contact Tracing: Advantages and Disadvantages

But digital contact tracing largely failed to make much impact on the virus’s spread. Apple and Google quickly pushed out features like exposure notifications to many smartphones, but public health officials struggled to persuade residents to use them. The lessons we learn from this pandemic could not only help us prepare for the next pandemic but also carry over to other areas of health care.

Source Credit: Digital contact tracing

5. Tik-Tok Recommendation Engine

The recommendation engine is not new to the data science community. In fact, some consider it an older generation of AI system because it lacks the dazzling effects of image recognition or language generation.

Nevertheless, recommendation is still one of the predominant AI systems, with the most extensive implementation across almost all online services and platforms. Examples include YouTube video suggestions, the campaign emails you receive from Amazon, and the books you might also like when browsing the Kindle bookshop.

Beyond the basics, an industrialized recommendation engine needs a robust backend and architecture design for integration. Below is a primary example.

Recommendation Engine — PC by Catherine Wang, All Rights Reserved

A real-time system should have a solid data foundation (for collection and storage) to support the multiple abstraction layers on top (algorithm layer, serving layer, and application layer) that address different business problems.
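To make the layering concrete, here is a minimal, illustrative Java sketch (not TikTok’s actual system): the data layer holds interaction events, the algorithm layer scores candidate items, and the serving layer returns the top-N items for the application layer to render. The item names and the popularity score are assumptions made purely for illustration.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class RecommendationSketch {
    // Data layer: raw interaction events collected from the app (item -> interaction count)
    static Map<String, Integer> interactions = Map.of("videoA", 42, "videoB", 7, "videoC", 19);

    // Algorithm layer: score each candidate item (here, a trivial popularity score)
    static double score(String item) {
        return Math.log(1 + interactions.getOrDefault(item, 0));
    }

    // Serving layer: return the top-N items for the application layer to render
    static List<String> topN(int n) {
        return interactions.keySet().stream()
                .sorted(Comparator.comparingDouble(RecommendationSketch::score).reversed())
                .limit(n)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(topN(2)); // e.g. [videoA, videoC]
    }
}
```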

Source Credit: Tik-Tok Engine

6. Green Hydrogen

Hydrogen has always been an intriguing possible replacement for fossil fuels. It burns cleanly, emitting no carbon dioxide; it’s energy dense, so it’s a good way to store power from on-and-off renewable sources; and you can make liquid synthetic fuels that are drop-in replacements for gasoline or diesel. But most hydrogen up to now has been made from natural gas; the process is dirty and energy intensive.

Green Hydrogen

The rapidly dropping cost of solar and wind power means green hydrogen is now cheap enough to be practical. Simply zap water with electricity, and presto, you’ve got hydrogen. Europe is leading the way, beginning to build the needed infrastructure.

Source Credit: Green Hydrogen

7. Data Trust

Technology companies have proven to be poor stewards of our personal data. Our information has been leaked, hacked, and sold and resold more times than most of us can count. Maybe the problem isn’t with us, but with the model of privacy to which we’ve long adhered—one in which we, as individuals, are primarily responsible for managing and protecting our own privacy.

Data trusts offer one alternative approach that some governments are starting to explore. A data trust is a legal entity that collects and manages people’s personal data on their behalf. Though the structure and function of these trusts are still being defined, and many questions remain, data trusts are notable for offering a potential solution to long-standing problems in privacy and security.

Source Credit: Data Trust

Do check this article for more knowledge on data trusts.

8. Hyper Accurate Positioning

We all use GPS every day; it has transformed our lives and many of our businesses. But while today’s GPS is accurate to within 5 to 10 meters, new hyper-accurate positioning technologies have accuracies within a few centimeters or millimeters. That’s opening up new possibilities, from landslide warnings to delivery robots and self-driving cars that can safely navigate streets.


China’s BeiDou (Big Dipper) global navigation system was completed in June 2020 and is part of what’s making all this possible. It provides positioning accuracy of 1.5 to two meters to anyone in the world. Using ground-based augmentation, it can get down to millimeter-level accuracy. Meanwhile, GPS, which has been around since the early 1990s, is getting an upgrade: four new satellites for GPS III launched in November and more are expected in orbit by 2023.

Source Credit: Hyper Accurate Positioning

9. Remote everything

The pandemic fundamentally changed how the world responds to remote work. Education and health care are two areas that changed radically, as remote learning and telehealth use grew exponentially. Even as the pandemic winds down, we will continue to see significant use of these technologies.


Source Credit: New Remote World

10. GPT-3

This 175-billion-parameter language model has the remarkable ability to write fluent text, complete imaginative story lines and hold its own in free-form conversations. However, the enormous environmental footprint of training the model and the biases it perpetuates by using a large corpus of internet content are sobering counterweights to its use. Nevertheless, it is a big technology-trends breakthrough that clearly needs to be tamed to be useful.

How Crowdbotics is Using GPT-3

Source Credit: GPT-3

Conclusion – technology trends

Though 2021 was as tough as 2020, it gave us a new ray of hope and a new way of living. Here's to a whole new world of technology trends being discovered in 2022. Happy New Year to all the teknonauts out there.

Please explore more at Teknonauts.com

#23 Seven Best Practices for Enterprise Application Development

Developing a new application is the art of your thinking and creativity. You not only focus on your business but also research how your customers will respond. That is the reason we have seen such a strong focus on customer experience these days. Here are some of the best practices that will help you develop an application for an enterprise.

UI best practices

  1.  Provide a familiar look and feel:  Standardizing the look and feel of software allows users to transfer the skills they learn on one piece of software to another. Training costs are minimized
  2. Provide consistency: Standardization may occur in varying scopes. Examples include the various components of an application, applications that will be used together, and an operating system and all the software that runs on it. The more broadly a standard can be applied, the greater the benefits
  3. Use human factors findings: Standards take advantage of the large body of human factors research and accepted practice. The authors of the standards assimilate and interpret the research, turning it into guidelines (“best practice”) for designers to follow
  4. Streamline development: Standards make many design decisions routine. This frees designers to spend time on decisions that are more difficult or critical
  5. Evaluate usability:  Standards provide one basis for judging the usability of products. All else being equal, a product that meets an HCI standard should be more usable than one that does not
  6. Comply with requirements: Standards compliance for the software may be required by the buyer (such as country-specific rules and laws)

Follow User Interface Design Guidelines: 10 Rules of Thumb for more details

Single Sign-On best practices

All web portals and mobile apps require secured access through sign-in. The user profile is maintained in the user session and used by all other applications. Here are some best practices to follow (a minimal sketch of an SSO session check appears at the end of this section):

  • Choose the right product as an Identity Provider (IDP)
  • Verify that the identity directory is accurate
  • Secure all the components of the SSO system
  • Consider user privileges: Provide authorized access to each module

Learn more about how to implement security and single sign-on with your application – https://www.onelogin.com/learn/how-single-sign-on-works
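As a small illustration of the "authorized access to each module" point, here is a minimal sketch assuming a Jakarta Servlet-based web app. The `userProfile` session attribute and the `/sso/login` redirect URL are hypothetical names, not part of any specific SSO product.

```java
import jakarta.servlet.Filter;
import jakarta.servlet.FilterChain;
import jakarta.servlet.ServletException;
import jakarta.servlet.ServletRequest;
import jakarta.servlet.ServletResponse;
import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import jakarta.servlet.http.HttpSession;
import java.io.IOException;

// Minimal sketch: reject requests that do not carry an authenticated SSO session.
// The "userProfile" attribute and the redirect URL are illustrative assumptions.
public class SsoSessionFilter implements Filter {
    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        HttpSession session = request.getSession(false); // do not create a new session
        Object profile = (session != null) ? session.getAttribute("userProfile") : null;

        if (profile == null) {
            // Not signed in: send the user to the identity provider (IDP) login flow
            response.sendRedirect("/sso/login");
            return;
        }
        chain.doFilter(req, res); // signed in: continue to the requested module
    }
}
```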

Omni-Channel best practices

All web portals and mobile apps should be made compatible with the defined service delivery channels. In a multi-channel environment, the user has access to a variety of communication options that aren’t necessarily synchronized or connected. In an omni-channel experience, however, there aren’t only multiple channels; the channels are connected so you can move between them seamlessly. Best practices to follow are:

  • Provide Real-Time Updates across all channels
  • Digitally Supplement In-Person Experiences: Not only should online interactions inform in-person experiences, but the two should blend for new, unique experiences
  • Leverage Messaging to Meet Audience Needs: More people than ever are using messaging to communicate with businesses, such as Facebook Messenger, so build your integration across these messaging platforms

Loosely Coupled and Bounded Context Applications and modules

All IT applications and their modules should be developed such that features and functionality are made available as loosely coupled, self-contained, standards-based and configurable services.

The application functional dispositions are designed in alignment with the loosely coupled architecture principle, with business services supported by ministry-specific applications connected to the integration platform. Common shared or supporting services follow the common application principle. The following practices may be followed to achieve decoupling (see the sketch after this list).

  • Decouple at the module level
  • Decouple at the object level
  • Decouple at the processing level
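As one minimal sketch of decoupling at the object level: a module depends only on an interface, so concrete implementations can be swapped without touching the caller. The `PaymentService` and `BillingModule` names below are illustrative assumptions, not a prescribed design.

```java
// Decoupling at the object level: the billing module depends only on this contract,
// not on any concrete payment implementation (names are illustrative).
interface PaymentService {
    void charge(String accountId, double amount);
}

class CardPaymentService implements PaymentService {
    public void charge(String accountId, double amount) {
        System.out.println("Charging card for " + accountId + ": " + amount);
    }
}

class BillingModule {
    private final PaymentService payments;

    // The concrete service is injected, so BillingModule never references CardPaymentService
    BillingModule(PaymentService payments) {
        this.payments = payments;
    }

    void settleInvoice(String accountId, double total) {
        payments.charge(accountId, total);
    }
}

public class DecouplingSketch {
    public static void main(String[] args) {
        BillingModule billing = new BillingModule(new CardPaymentService());
        billing.settleInvoice("ACC-1", 99.0);
    }
}
```

The same idea scales up: at the module level the contract becomes a published API, and at the processing level it becomes an asynchronous message exchanged through the integration platform.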

Common Applications For Common/Shared Business Services

Construct and agree on common business supporting services. Develop applications to support those services, or enhance existing IT systems to support them. The best practice for developing a shared application may be decided based on the needs and functions of the enterprise.

Low Code/No Code Service Platform

A low-code/no-code development platform provides an environment to create services through graphical user interfaces and configuration instead of traditional computer programming. Promoting such platforms for application development can significantly reduce cost and time to market.

 Integrated Service Delivery Platform

An integrated service delivery platform forms the basic building blocks of domain-specific services that are built and re-used across the enterprise to develop all services.

Conclusion

There are many best practices for designing the architecture for your application, better implementation, coding, and testing. There are best practices to automate your DevOps and other manual tasks during development and application rollout. But when you start thinking about an enterprise as a whole, remember these 7 best practices to drive your application development and provide a unique experience to your customers.

Explore more at Teknonauts.com

#19 Microservices Data Management: 7 Important Design Patterns

What are microservices

Microservices – also known as the microservice architecture – is an architectural style that structures an application as a collection of loosely coupled services, which implement business capabilities. The microservice architecture enables the continuous delivery/deployment of large, complex applications. It also enables an organization to evolve its technology stack.

Need of design patterns for Microservices Data Management

We have decomposed the application into microservices, but the business demands integrated data, and that is where we miss the monolithic architecture. Having addressed some of the major challenges with microservices, we now introduce some powerful design patterns to address the data management needs of a microservices architecture.

Design Patterns for Data management

Database Per Service

Having segregated responsibilities at the application layer, we want to maintain this decoupled nature down to the database layer. So, what database architecture should we follow?

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Some business use cases are not independent; they must interact with multiple services either to fetch or to update data owned by other services
  • Some business use cases need to join data owned by multiple services
  • Databases must sometimes be replicated and sharded in order to scale
  • Every microservice has different data storage requirements: for some services an RDBMS is the best option, and for others NoSQL is the right fit

Solution

“Keep each microservice’s persistent data private to that service and accessible only via its API. A service’s transactions only involve its database”

Figure 1 Microservices – Database per Service

In the above diagram, Data A is private to Service A and is never accessed directly by any other service. There are many ways to keep a service’s persistent data private; you do not necessarily need to set up a separate database server for every microservice. For instance, if you are using an RDBMS you can use one of the following (a minimal sketch follows the list):

  • Private-tables-per-service – each service owns a set of tables that must only be accessed by that service
  • Schema-per-service – each service has a database schema that’s private to that service
  • Database-server-per-service – each service has its own database server.
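As a minimal illustration of the schema-per-service option, each service builds its own connection from its own configuration and only ever touches its own schema. The JDBC URL, schema and environment variable names below are placeholders, not a recommendation for any particular database.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Schema-per-service sketch: the Order service sees only the ORDERS schema.
// URLs and credentials are illustrative placeholders.
public class OrderServiceDataAccess {

    private static final String ORDERS_DB_URL =
            System.getenv().getOrDefault("ORDERS_DB_URL",
                    "jdbc:postgresql://db:5432/app?currentSchema=orders");

    public Connection openConnection() throws SQLException {
        return DriverManager.getConnection(ORDERS_DB_URL,
                System.getenv("ORDERS_DB_USER"),
                System.getenv("ORDERS_DB_PASSWORD"));
    }

    // Other services (e.g. a Customer service) would have their own class, their own
    // environment variables and their own schema; they never query the ORDERS tables
    // directly, only the Order service's API.
}
```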

Benefits   

  • Services are loosely coupled
  • Each service is free to use the database best suited to its needs

Drawbacks

  • Implementing business transactions that span multiple services is not straightforward
  • Implementing queries that join data that is now in multiple databases is challenging
  • Complexity of managing multiple SQL and NoSQL databases

Shared Database

There are use cases where the business cases are so interlinked that you end up touching most of the microservices in a single call. For such systems, we must use a shared database.

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Many use cases require data owned by multiple services
  • Use cases need to update data owned by multiple services

Solution

Use a (single) database that is shared by multiple services. Each service freely accesses data owned by other services using local ACID transactions.

Figure 2 Microservices – Shared database

Benefits   

  • Easy for developers to create queries for the required data
  • Easy to operate on a single set of data for ETL and other data management activities

Drawbacks

  • The database becomes monolithic, and responsibilities are coupled at the data layer
  • Reduced freedom to use the best-fit database for each microservice
  • Managing schema changes at run time will slow down the system

Saga

We have talked about business cases that require reading data owned by multiple services; there are also transactions that need to update data owned by multiple microservices. This pattern addresses the problems related to transactions that span multiple microservices.

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Use cases need to update data owned by multiple services

Solution

Each business transaction that spans multiple services should be implemented as a saga. A saga is basically a sequence of local transactions where each transaction updates the database and triggers an event or message to execute the next transaction. We also implement a fallback mechanism linked to each transaction, so that if a local transaction fails the saga can execute a sequence of compensating transactions to roll back the changes made by the earlier transactions.

Figure 3 Microservices – Saga

There are two ways of coordinating sagas:

  • Choreography – each local transaction publishes domain events that trigger local transactions in other services
  • Orchestration – an orchestrator (object) tells the participants what local transactions to execute

Example – Choreography-based saga

Figure 4: Microservices – Saga – Choreography

Example – Orchestration-based saga

Figure 5 Microservices – Saga – Orchestration
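The following is a minimal, framework-free sketch of the choreography style: each local transaction publishes an event that triggers the next one, and a failure event triggers a compensating transaction. The in-memory "event bus", the Order and Payment services, and the event names are all illustrative assumptions standing in for a real message broker.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Choreography-based saga sketch: an in-memory "event bus" stands in for a broker.
public class SagaSketch {
    static List<Consumer<String>> subscribers = new ArrayList<>();

    static void publish(String event) {
        System.out.println("event: " + event);
        subscribers.forEach(s -> s.accept(event));
    }

    public static void main(String[] args) {
        // Payment service: reacts to OrderCreated, publishes success or failure
        subscribers.add(event -> {
            if (event.equals("OrderCreated")) {
                boolean paymentOk = false;            // simulate a failed local transaction
                publish(paymentOk ? "PaymentApproved" : "PaymentFailed");
            }
        });

        // Order service: compensating transaction rolls the order back on failure
        subscribers.add(event -> {
            if (event.equals("PaymentFailed")) {
                System.out.println("compensating: cancel order");
            }
        });

        // Local transaction 1: the Order service creates the order and publishes the event
        System.out.println("order created (pending)");
        publish("OrderCreated");
    }
}
```

In the orchestration style, the same compensation logic would instead be driven by a central orchestrator object that tells each participant which local transaction to execute next.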

Benefits   

  • It enables an application to maintain data consistency across multiple services without using distributed transactions
  • Resilient Architecture with high performance

Drawbacks

  • The programming model is more complex
  • In order to be reliable, a service must atomically update its database and publish a message/event

API Composition

API composition is designed to address the problem that comes as an outcome of implementing a Database per service pattern for Microservices. When we implement a Database per service pattern, we can no longer write queries that join data from multiple services. API composition will provide ways to implement queries in Microservices.

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Address the use cases requesting data from multiple data sources

Solution

Implement a query by defining an API Composer, which invokes the services that own the data and performs an in-memory join of the results.

Figure 6 Microservices – API Composition
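Here is a minimal sketch of a composer using the JDK's built-in HttpClient. The service URLs are placeholders, and real code would parse the JSON and join on a key rather than concatenating strings.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// API Composition sketch: the composer calls the services that own the data
// and joins the results in memory. URLs are illustrative placeholders.
public class OrderDetailsComposer {
    private final HttpClient client = HttpClient.newHttpClient();

    String fetch(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return client.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }

    // A single query on the composer fans out to the owning services
    String getOrderDetails(String orderId) throws Exception {
        String order    = fetch("http://order-service/orders/" + orderId);
        String customer = fetch("http://customer-service/customers/for-order/" + orderId);
        // "In-memory join" of the two results
        return "{ \"order\": " + order + ", \"customer\": " + customer + " }";
    }
}
```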

Benefits   

  • It is a simple way to query data in a microservice architecture

Drawbacks

  • Some queries would result in inefficient, in-memory joins of large datasets.

Command Query Responsibility Segregation (CQRS)

The Command Query Responsibility Segregation pattern is also designed to address issues that arise after implementing the Database per Service pattern: it is no longer straightforward to implement queries that join data from multiple services. Also, if you have applied the Event Sourcing pattern, the data is no longer easily queried. CQRS provides another mechanism to implement queries that retrieve data from multiple services in a microservice architecture.

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Address the use cases requesting data from multiple data sources

Solution

In this approach, we define a view database which is a read-only replica of the actual data, designed to support queries that require data from multiple sources. CQRS makes it the application’s responsibility to keep this view up to date by subscribing to the events published by the services that own the data.

Figure 7 Microservices – CQRS
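A minimal sketch of the read side: a denormalized view is kept up to date by handling events published by the owning services. The event types and the in-memory map (standing in for a real view database) are illustrative assumptions.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// CQRS read-side sketch: the query service maintains a denormalized view that is
// updated from events published by the services that own the data.
public class OrderHistoryView {
    // orderId -> denormalized summary shown to users
    private final Map<String, String> view = new ConcurrentHashMap<>();

    // Called by the messaging infrastructure whenever an owning service publishes an event
    public void onEvent(String type, String orderId, String payload) {
        switch (type) {
            case "OrderCreated"    -> view.put(orderId, "CREATED: " + payload);
            case "PaymentApproved" -> view.computeIfPresent(orderId, (id, row) -> row + " | PAID");
            case "OrderShipped"    -> view.computeIfPresent(orderId, (id, row) -> row + " | SHIPPED");
            default -> { /* ignore events this view does not care about */ }
        }
    }

    // Query side: reads never touch the owning services' databases
    public String getOrderHistory(String orderId) {
        return view.getOrDefault(orderId, "unknown order");
    }
}
```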

Benefits   

  • Supports multiple denormalized views that are scalable and performant
  • Improved separation of concerns = simpler command and query models
  • Necessary in an event sourced architecture

Drawbacks

  • Increased complexity
  • Potential code duplication
  • Replication lag/eventually consistent views

Domain Event

A service often needs to publish data/events when it updates its database. These events might be needed, for example, to update a CQRS view database. Alternatively, the service might participate in a choreography-based saga, which uses events for coordination. The Domain Event pattern provides a method for a service to publish such events.

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Address the use cases requesting data from multiple data sources

Solution

Organize the business logic of a service as a collection of DDD aggregates that emit domain events when they are created or updated. The service publishes these domain events so that they can be consumed by other services.

Figure 8 Microservices – Domain event
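A minimal sketch of an aggregate that records domain events as it changes state; the service layer then publishes them after the local transaction commits. The `Ticket` aggregate and its event names are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// Domain Event sketch: the aggregate records events; the application layer publishes them.
public class DomainEventSketch {

    record DomainEvent(String type, String aggregateId) {}

    static class Ticket {
        private final String id;
        private String state = "NEW";
        private final List<DomainEvent> pendingEvents = new ArrayList<>();

        Ticket(String id) {
            this.id = id;
            pendingEvents.add(new DomainEvent("TicketCreated", id));
        }

        void accept() {
            state = "ACCEPTED";
            pendingEvents.add(new DomainEvent("TicketAccepted", id));
        }

        // The service layer drains these and hands them to the message broker
        List<DomainEvent> pullPendingEvents() {
            List<DomainEvent> events = new ArrayList<>(pendingEvents);
            pendingEvents.clear();
            return events;
        }

        String state() { return state; }
    }

    public static void main(String[] args) {
        Ticket ticket = new Ticket("T-1");
        ticket.accept();
        // After the local transaction commits, publish the recorded events
        ticket.pullPendingEvents().forEach(e -> System.out.println("publish: " + e));
    }
}
```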

Event Sourcing

We have already covered CQRS and Saga. In CQRS, a command needs to update the database and publish a message so that other services can perform the required actions. The database update and the sending of the message must therefore be atomic in order to avoid data inconsistency. Event sourcing describes how to reliably and atomically update the database and publish the message/event.

Forces

  • Services must be loosely coupled so that they can be developed, deployed and scaled independently
  • Updating the database and sending the message must be atomic to avoid data inconsistency

Solution

Use event sourcing when you need to update the database and publish a message for that event. Applications persist events in an event store, which is a database of events. The store has an API for adding and retrieving an entity’s events. The event store also behaves like a message broker. It provides an API that enables services to subscribe to events. When a service saves an event in the event store, it is delivered to all interested subscribers.

Some entities, such as a Customer, can have many events. In order to optimize loading, an application can periodically save a snapshot of an entity’s current state. To reconstruct the current state, the application finds the most recent snapshot and the events that have occurred since that snapshot. As a result, there are fewer events to replay.

Example

Take the example of a Booking service and a Passenger service, and let’s develop it using event sourcing and CQRS.

Figure 9 Microservices – Event Sourcing
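Below is a minimal in-memory sketch of the idea: events are appended per entity, and the current state is rebuilt by replaying them. The in-memory map stands in for a real event store, and the Booking event names are illustrative assumptions tied to the example above.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Event sourcing sketch: store the events, not the current Booking row,
// and rebuild state by replaying them.
public class EventStoreSketch {
    static Map<String, List<String>> eventsByEntity = new HashMap<>();

    static void append(String bookingId, String event) {
        eventsByEntity.computeIfAbsent(bookingId, id -> new ArrayList<>()).add(event);
        // A real event store would also deliver the event to subscribers (e.g. a CQRS view)
    }

    // Rebuild current state by replaying the entity's events
    static String currentState(String bookingId) {
        String state = "NONE";
        for (String event : eventsByEntity.getOrDefault(bookingId, List.of())) {
            switch (event) {
                case "BookingRequested" -> state = "REQUESTED";
                case "PassengerAdded"   -> state = "REQUESTED_WITH_PASSENGER";
                case "BookingConfirmed" -> state = "CONFIRMED";
            }
        }
        return state;
    }

    public static void main(String[] args) {
        append("B-1", "BookingRequested");
        append("B-1", "PassengerAdded");
        append("B-1", "BookingConfirmed");
        System.out.println(currentState("B-1")); // CONFIRMED
    }
}
```

A snapshot, as described above, would simply be a stored `(state, lastEventIndex)` pair so that only the events after that index need to be replayed.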

Benefits   

  • It solves one of the key problems in implementing an event-driven architecture and makes it possible to reliably publish events whenever state changes.
  • Because it persists events rather than domain objects, it mostly avoids the object relational impedance mismatch problem.
  • It provides a 100% reliable audit log of the changes made to a business entity
  • It makes it possible to implement temporal queries that determine the state of an entity at any point in time.
  • Event sourcing-based business logic consists of loosely coupled business entities that exchange events. This makes it a lot easier to migrate from a monolithic application to a microservice architecture.

Drawbacks

  • It is a different and unfamiliar style of programming and so there is a learning curve.
  • The event store is difficult to query since it requires typical queries to reconstruct the state of the business entities. That is likely to be complex and inefficient. As a result, the application must use Command Query Responsibility Segregation (CQRS) to implement queries. This in turn means that applications must handle eventually consistent data.

Conclusion

When we move from a monolithic to a microservices architecture, we address many challenges such as scalability, agility, and flexibility. But when using this architecture there are numerous issues that we must address. When we work on business cases, we need to work with the data; if our solution architecture is not planned with these design patterns, instead of getting the benefits of microservices we will run into various issues. So understand these design patterns and take maximum benefit of the microservices architecture.

References

Microservice Architecture – https://microservices.io/

Explore more at Teknonauts.com

#11 Industry 4.0 Revolution – Positive Impact on the World

What is Industry 4.0? Here’s Everything You Need to Know

The manufacturing industry is home to many different sorts of technical terms and jargon, but none are as all-encompassing as Industry 4.0. In fact, the term itself was coined due to a massive shift in the way we produce products.


WHAT IS INDUSTRY 4.0?

The term Industry 4.0 sounds like the name of an obscure sci-fi movie, which may appear confusing to those who are new to the manufacturing sector. However, the term Industry 4.0 actually refers to the 4th Industrial Revolution, a phase in the evolution of mankind’s manufacturing processes.

We have had three Industrial Revolutions in the past. The first took place in Britain during the 18th century, with mechanisation.

The Second Industrial Revolution took place around the early 20th century with improved manufacturing processes and assembly lines.

The Third Industrial Revolution took place in the 1960s with the implementation of digital technology.

Industry 4.0 really started to take shape in the 2010s, as computers became more powerful and the internet became more interconnected than ever before.

For a deeper insight, please check our video:

Explore more at Teknonauts.com

#10 A Quick Guide to Becoming an Oracle Cloud Infrastructure Architect Associate

I have recently cleared the Oracle Cloud Infrastructure Architect Associate (1Z0-1072-20) exam and wanted to share my experience. Here is the step-by-step guide that I followed to clear this exam.

Oracle Cloud Infrastructure Certifications

1. Go through the complete course given on page ( Oracle Cloud Infrastructure Architect Associate )

Oracle Cloud Infrastructure Architect Associate by Rohit Rahi

2. Do some hands-on labs as suggested by Rohit Rahi (focused on Oracle Cloud Infrastructure Architect Associate)

3. Take practice exam from Udemy ( Oracle Cloud Infrastructure Architect Associate ) –

Practice Test for Oracle Cloud Infrastructure Architect Associate on Udemy

4. Do the practice test available in the first link and score more than 80%

You are now ready for the exam and to become an Oracle Cloud Infrastructure Architect Associate. Good luck!

Here are some notes, I took during my preparation for Oracle Cloud Infrastructure Architect Associate

  • A DATA disk group is for storage of oracle database data files 
  • RECO disk group is primarily used for storing the Fast Recovery Area (FRA), where the Oracle database can create and manage various files related to backup and recovery (RMAN backups, archived redo logs)
  • A bucket can be associated with Single compartment
  • 2-node RAC DB systems – Enterprise Edition – Extreme Performance 
  • Bare metal DB systems allow scale without downtime
  • DRG used for ipsec VPN, Fast connect & remote peering
  • FastConnect is used for both private and public peering. Private peering uses a DRG
  • DenseIO shapes Designed for large databases, big data workloads, and applications that require high-performance local storage
  • You can scale your Autonomous Database up/down in terms of both compute (CPU) and storage
  • RAID 1: An exact copy (or mirror) of a set of data on two or more disks
  • RAID 10: Stripes data across multiple mirrored pairs
  • RAID 6: Block-level striping with two parity blocks distributed across all member disks
  • The bronze policy includes monthly incremental backups, the silver policy includes weekly incremental backups, and the gold policy includes daily incremental backups
  • Overwrite destination object  used for any copy operation, default  no etag limit, override destination
  • 2-Node VM DB system and Exadata DB system support Real Application Cluster (RAC)
  • BM (Dense IO) shapes provide local NVMe drives, whereas BM Standard shapes provide block storage only
  • instances to meet compliance and regulatory requirements for isolation that prevent you from using shared infrastructure – Dedicated VM hosts
  • In Virtual machine DB systems, you can scale up the storage as needed at any time
  • tpurgent:high priority time critical, tp: For typical transaction processing, high: For high priority reporting and batch operations, medium: For typical reporting and batch operations
  • low: For low priority reporting and batch operations
  • Autonomous Databases have the Dedicated and Shared Exadata infrastructure options 
  • Automatic backups are scheduled daily
  • Provide IAM a name that is unique across all users in the tenancy
  • CUSTOM RESOLVER- let instances resolve the hostnames of hosts in your on-premises network connected to your VCN by IPSec VPN
  • Oracle recommends configuring instances to use the OCI NTP service, used to set the date and time of your Compute and Database instances from within the VCN
  • Oracle recommends configuring both tunnels to use BGP dynamic routing.
  • The allowable VCN size range is /16 to /30
  • Dynamic Routing Gateway ( IPSec VPN & Fast Connect Private peering)
  • Configure two or more CPEs (Customer Premises Equipment) to leverage IPSec tunnels
  • Dedicated Exadata Infrastructure offer Multitenant DB Arch, allow over-subscription of CPU
  • NFS export options are a set of parameters within the export that specify the level of access granted to NFS clients when they connect to a mount target
  • Load balancer, File Storage and Database are supported by the OCI CLI, whereas block volumes are not
  • Compute Images and block volume backup are regional resources. Compartment is not a regional resource
  • DWROLE is a predefined database role to connect ADW database
  • Default security List and Default Route table components cannot be deleted in OCI
  • Customer provided encryption key always stored in OCI Vault service
  • OCI OKE Replica Set – maintains a stable set of replica pods running at any given time
  • By default, object versioning is disabled on a bucket. And when you enable its NOT enabled at namespace level.
  • Oracle Data Guard implementation: both databases should be in the same compartment, the DB systems must be the same shape type, and the database versions and editions must be identical
  • If your primary and standby databases are in different regions, then you must peer the virtual cloud networks. 
  • Primary is 1-Node RAC and secondary can be 1-Node or 2-Node
  • You can move object storage bucket, Block volumes and file storage mount target between the compartment
  • File systems use Oracle-managed key by default
  • Higher Performance elastic performance option is recommended for workloads with the highest I/O requirements, requiring the best possible performance, such as large databases
  • You can only create a clone for a volume within the same region, availability domain and tenant.
  • You can create a clone for a volume between compartments as long as you have the required access permissions for the operation.
#1 Enterprise Service Bus or API Gateway: What Is Best for a Microservices Architecture?

Let me start with the background of integration and the need for an enterprise service bus, which emerged from a concept called interoperability.

What is interoperability?

Interoperability is the ability of an enterprise and its architecture domains (i.e., business, data, applications and technology) to share information and services, and seamlessly communicate with other architecture domains. We can achieve this through the standardization and adoption of common systems, standards and data exchange protocols.

Why interoperate?

A connected business requires strategic collaboration and coordination around data and services. The exchange of data and services enables many new capabilities and delivers significant business value to customers.

Interoperability architecture viewpoints

Interoperability architecture is established by identifying design architectures that are used as the foundational blueprint for collaboration between organizations, or as a reference guideline for solution architects to implement solutions. Interoperability architecture viewpoints are established depending on:

Architecture viewpoint for an enterprise architect

As part of the enterprise architecture design and the development of an organizational blueprint, it is necessary that the blueprint considers inter- and intra-organization transactions and establishes a mechanism to facilitate a smooth and seamless exchange of information between the two entities. Establishing interoperability demands focus on identifying the transactions or business processes that are inter-linked with or dependent on other organizations, reviewing the information systems required or available around the data exchange requirements, and the supporting technology for the same.

The architecture domain viewpoints entail focus on the core architecture dimensions of enterprise architecture including:

  • Business
  • Information systems (data and applications)
  • Technology

Implementation viewpoint for a solution architect

In this context, it is essential to first understand the two terminologies: interoperability and integration. While interoperability looks at a broader spectrum of compatibility between two disparate and distinct systems, integration is a sub-component of interoperability, as explained below.


Enterprise Service Bus

In simple terms, an Enterprise Service Bus (ESB) is a flexible connectivity infrastructure for integrating applications and services.

An Enterprise Service Bus performs the following:

  • Matches and routes communication between services (Routing, Mediation & Transformation)
  • Converts between different transport protocols
  • Transforms message formats between requestor and service
  • Identifies and distributes business events from disparate sources
  • An Enterprise Service Bus is based on open standards (working on the principle of connecting anything to anything)

At the heart of the Enterprise Service Bus architecture is the bus itself: a collection of middleware services that provides integration capabilities. The bus provides the medium for messages (in a message broker architecture) to reach their destinations.

The services provided by an Enterprise Service Bus are themselves distributed, in the sense that different components come into play to provide the infrastructure services the ESB promises.

In short, an ESB is centralized middleware that replaces complex point-to-point communication and provides a flexible and highly scalable solution for integrating multiple heterogeneous systems/applications.

API Gateway

An API Manager is a solution for designing and publishing APIs and for creating and managing API documentation. The API Manager is also known as an API Gateway, which is used as an entry point for all your enterprise APIs so you can monitor, secure and transform them as needed. This use case is especially relevant for a microservices architecture.

An API Manager performs the following (a minimal routing sketch appears after this list):

  • Design and Prototype APIs
  • Publish and Govern API Use
  • Control Access and Enforce Security
  • Create a Store of all Available APIs
  • Manage Developer Community
  • Manage API Traffic
  • Monitor API Usage and Performance
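As one concrete illustration of the entry-point role, here is a minimal routing sketch assuming Spring Cloud Gateway (the post does not prescribe any specific product). The service names, ports and paths are placeholders; cross-cutting concerns such as authentication and rate limiting would be attached at this same layer.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Illustrative only: a single entry point that routes requests to the owning microservices.
@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                // Everything under /api/orders/** is forwarded to the order service
                .route("orders", r -> r.path("/api/orders/**")
                                       .uri("http://order-service:8080"))
                // Customer traffic goes to the customer service
                .route("customers", r -> r.path("/api/customers/**")
                                          .uri("http://customer-service:8080"))
                .build();
    }
}
```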

Fundamentals of Microservices Architecture

As service-oriented architecture and REST proved themselves and became widely accepted for system-to-system communication compared with legacy technologies like EJB, the distributed computing concept was extended to service-to-service communication, which emerged as the microservice architecture.

Microservices is an architectural style that designs an application as a collection of services based on functions or any other logical unit. The aim is to break down services not only from a functional/logical perspective but also considering maintainability and deployment strategy, which can benefit from the OSGi concept.

Let me share a microservices architecture I implemented recently. We separated core data and business functions into their respective microservices.

Microservice Architecture

While developing microservices has become very easy with frameworks like Spring Boot, cross-cutting concerns like security, data validation, logging, caching and monitoring are also made available as pluggable components. So ideally, you do not want to bother with these cross-cutting concerns while developing your services; rather, you want to focus on your business logic.

Now, if we look at the needs of microservices, we do not see the scope of any-to-any connectivity (which is the concept of an ESB). All cross-cutting concerns are fulfilled by the API Manager/Gateway, if you refer to the features described before. However, another driving requirement that is not present in the architecture above is third-party integrations. Therefore, if you look at the features of an ESB and understand the need for third-party integrations, it may be worth implementing one alongside microservices.

The mobile-first approach (driven by the explosion in mobile device adoption) has forced most applications to expose their services over HTTP as REST by default, irrespective of the technology used for development, which has limited the scope for an ESB.

Most ESB products come with a built-in API gateway feature, and I have seen examples where an ESB was implemented but only the API gateway features were actually used. We should assess this well before buying an ESB, and pay only for what we use 🙂

Conclusion

Recent market demands and trends have unified the approach to service interoperability, which has limited the need for any-to-any connectivity (an Enterprise Service Bus). An API Gateway makes more sense to implement with a microservice architecture. However, the API Manager is not a replacement for an ESB.

For more information refer – API friends and explore more on Teknonauts

#8 Demand for IoT-Based Predictive Maintenance in Future Times

What is Predictive Maintenance ?

In very basic terms, predictive maintenance is the application of machine learning algorithms to industrial machines so that they have predictive capability. Predictive maintenance has always focused on predicting when certain conditions are going to occur and when machines will fail.

With the advancement of machine learning and the ability to apply it at large scale, we now have many use cases for it. It is no longer reserved for just a few organizations; it is now available to all industries that make heavy use of assets or machines.

What is the Need for Predictive Maintenance?

Interest in having predictive capabilities is growing in organizations day by day because:

  1. Manufacturers need to know when a machine is about to fail so they can better plan for maintenance. For example, as a manufacturer, you might have a machine that is sensitive to various temperature, velocity, or pressure changes. When these changes occur, they might indicate a failure.
  2. With predictive maintenance in place you can load-balance your machinery. For example, you may have hundreds of motors installed in your plant, with only 20% of them running at peak times; with the right statistical model you can plan so that all motors run an equal amount.
  3. Cost reduction: since you can plan in advance, you do not have to pay yearly or reactive maintenance costs. Some machines or assets do not need maintenance every year but only based on the amount of usage, yet industries still pay an annual AMC to the vendors. This cost could be reduced.

Evolution of Maintenance


Reactive maintenance

In this approach, people fixed any trouble in a machine only when it broke down.

Preventative maintenance

In this approach, maintenance is done on a fixed schedule, similar to servicing your car based on distance run (say every 10,000 km) or every 12 months. The key here is that you define a certain threshold.

The problems here were:

  1. You often end up maintaining machines which don’t require any maintenance, or you find things break down before the scheduled threshold.
  2. Also, manufacturers used this type of maintenance to their advantage: they set limits so as to reduce their risk and increase their profit via the warranty clauses.
  3. Quite often assets are maintained at a higher frequency than what is required, and this can create maintenance-induced failures in machinery.

Manual predictive maintenance

As technology advanced, engineers started to take measurements of the parameters affecting the condition of an asset, using vibration monitors, ultrasonic devices or other means. This created a predictive maintenance capability, but it was still manual. It has been around for some time, but it often required an operator going out and taking measurements: they had to physically take readings, capture that data and plot it out.

It took hours of effort to create a spreadsheet, and then, on the basis of their analysis, they would come up with a predictive maintenance schedule. It was a great idea, but execution was done on an ad-hoc basis.

IoT-based predictive maintenance (where we are right now)

With IoT in place, the way we look at predictive maintenance has changed: you can now monitor your assets in real time at a very low cost, sending data to an algorithm on a continuous basis. The algorithm can then decide whether something is going wrong with the machine, and also predict when maintenance needs to be done.

That is where predictive maintenance is going now with what we call IoT-based predictive maintenance.

You can then schedule maintenance based on that data.

How to get started ?

To get started, follow these key points:

  1. Start analyzing your assets from a parameter point of view, i.e. which parameters affect the health of your machine. For example, identifying the key variables for a battery, we get temperature and voltage.
  2. Identify the sensors which are capable of monitoring those variables. In our case we take a temperature sensor and a voltage sensor.
  3. Identify your gateway to aggregate the sensor data.
  4. Select your IoT platform to collect the data and analyze it.
  5. Choose your machine learning algorithm as per your use case.

Prediction, sometimes referred to as inference, requires machine-learning (ML) models based on large amounts of data for each component of the system. The model is based on a specified algorithm that represents the relationships between the values in the training data. You use these ML models to evaluate new data from the manufacturing system in near real-time. A predicted failure exists when the evaluation of the new data with the ML model indicates there is a statistical match with a piece of equipment in the system.
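As a minimal illustration of that inference step, the sketch below compares a new sensor reading against a baseline learned from historical data and flags a statistical mismatch as a predicted failure. A production system would use a trained ML model; here a simple z-score on temperature stands in for it, and all numbers are illustrative.

```java
// Minimal sketch of the prediction step: new readings vs. a learned baseline.
public class FailurePredictionSketch {

    public static void main(String[] args) {
        // "Training" data: historical temperature readings from a healthy motor
        double[] history = {61.2, 60.8, 62.1, 61.7, 60.9, 61.5, 62.0, 61.1};
        double mean = 0;
        for (double t : history) mean += t;
        mean /= history.length;

        double variance = 0;
        for (double t : history) variance += (t - mean) * (t - mean);
        double stdDev = Math.sqrt(variance / history.length);

        // New near-real-time reading arriving from the IoT pipeline
        double newReading = 68.4;
        double zScore = (newReading - mean) / stdDev;

        // Statistical mismatch with normal behaviour -> predicted failure, trigger maintenance
        if (Math.abs(zScore) > 3.0) {
            System.out.println("Predicted failure: schedule maintenance (z=" + zScore + ")");
        } else {
            System.out.println("Reading within normal range (z=" + zScore + ")");
        }
    }
}
```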

Let’s go deeper into the technology by understanding the reference architecture of an IoT solution.

IoT-Based Reference Architecture

  1. Sensors – they capture data from all the machines or assets for further processing.
  2. Field Gateway – it acts as an aggregator for all the sensor data and deposits it into the cloud environment.
  3. Cloud Gateway – it processes that data and passes it to streaming data processors; as the data velocity is high, you need specialized streamers.
  4. Data Lakes – streaming data is stored in lakes as transactional data.
  5. Big Data Warehouses – the data stores where all the machine learning operations will be executed.
  6. Machine Learning Algorithms – they run continuously on the big data warehouses; if the desired conditions are not met, failure models are invoked.
  7. Control Application – it does two things if the failure model is activated: first, it triggers a command to the sensor to stop or hold working; second, it notifies the maintenance system about the need for maintenance or the failure.
  8. Data Analytics – graphical or tabular dashboards providing insights into the big data warehouses.
  9. User Applications – frontend applications for users or operators.

Implementing reference architecture using AWS Technology Stack


Application in varied industries

Predictive maintenance by industries

Engineers across industries are now considering applications of predictive maintenance. Teknonauts has tried to list some possible applications from a manufacturer’s point of view.

Discrete manufacturing

Major discrete manufacturers are using predictive maintenance based on IoT to monitor, for example, the health of spindles in milling machines. They are prone to breaking, while their repair is expensive. An IoT-based predictive maintenance solution can help to predict potential damage by collecting data from ultrasonic and vibration sensors attached to the spindle.

Process manufacturing

In process manufacturing, pulp processing and paper manufacturing companies leverage IIoT to monitor the state of paper-making machines. For example, Maastricht Mill equipped their press rolls with temperature and vibration sensors and rolled out a cloud-based predictive maintenance solution to predict damages of bearings and gears.

Scheduling maintenance for a press roll based on a cover failure prediction

Another example is the steel industry. Steel plants have multiple furnaces that use water cooling panels to control temperature. Leakages in the panels may cause safety issues and production losses. An IoT-based predictive maintenance solution can help detect anomalies and carry out a root cause analysis, preventing production delays and equipment failures.

Oil and gas

Oil & gas companies particularly benefit from applying predictive maintenance solutions. Physical inspection of oil & gas production equipment requires personnel to go into hazardous environment to examine the equipment, which in some cases is not feasible. IoT-based predictive maintenance allows oil & gas companies to identify potential failures and increase the production of highly critical assets.

Electric power industry

Electric power plants have to ensure reliable power supply, particularly, during the periods of peak demand. An IoT-based maintenance solution can help to ensure uninterrupted power generation and detect evolving flaws in a gas/wind/steam turbine’s rotating components. For that, a turbine gets equipped with vibration sensors. The data collected by sensors is relayed to the cloud and run through ML algorithms to determine how each turbine performs.

Scheduling maintenance for a wind turbine based on a main bearing failure prediction

Railways

Railway companies apply IoT-based predictive maintenance to ensure the rails and the rolling stock are in proper condition. The solution helps to improve safety, reliability and velocity of the rolling stock, as well as reduce train delays caused by equipment malfunctions.

Construction

In construction, predictive maintenance is applied to monitor the state of heavy machinery, e.g. excavators, bulldozers, loaders, lifts, etc. Sensors can be attached to a machine to monitor transmission and brake temperature, engine RPM, tire pressure, fuel consumption and other values. The cloud identifies potential problems with exhaust after-treatment systems, as well as rotating and static components damages.

Conclusion

IoT-based predictive maintenance improves equipment life, helps to eliminate as much as 30 percent of the time-based maintenance routine, and reduces equipment downtime by 50 percent. For a mature and reliable predictive maintenance solution, it is better to start thinking about it now; otherwise you will fall behind.

Do follow our Youtube channel for latest videos.

Explore more at Teknonauts

#5 The Twelve-Factor App Methodology – A Blessing for Architects

Introduction

In the modern era, software is commonly delivered as a service, called web apps or software-as-a-service. The twelve-factor app is a methodology for building software-as-a-service apps that:


•Use declarative formats for setup automation, to minimize time and cost for new developers joining the project;

•Have a clean contract with the underlying operating system, offering maximum portability between execution environments;

•Are suitable for deployment on modern cloud platforms, obviating the need for servers and systems administration;

•Minimize divergence between development and production, enabling continuous deployment for maximum agility;

•And can scale up without significant changes to tooling, architecture, or development practices.

The twelve-factor methodology can be applied to apps written in any programming language, and which use any combination of backing services (database, queue, memory cache, etc).

Who should read this document?

Any developer building applications which run as a service. Ops engineers who deploy or manage such applications.

High Level Snapshot of 12 Factors


The Twelve Factors Applied to Microservices

Codebase

One codebase per service, tracked in revision control; many deploys

The Twelve Factor App recommends one codebase per app. In a microservices architecture, the correct approach is actually one codebase per service. Additionally, we strongly recommend the use of Git as a repository, because of its rich feature set and enormous ecosystem. GitHub has become the default Git hosting platform in the open source community, but there are many other excellent Git hosting options, depending on the needs of your organization.

Dependencies

Explicitly declare and isolate dependencies

As suggested in The Twelve Factor App, regardless of what platform your application is running on, use the dependency manager included with your language or framework. How you install operating system or platform dependencies depends on the platform:

•In noncontainerized environments, use a configuration management tool (Chef, Puppet, Ansible) to install system dependencies.

•In a containerized environment, do this in the Docker file.

Config

Store configuration in the environment

Anything that varies between deployments can be considered configuration. The Twelve Factor App guidelines recommend storing all configuration in the environment, rather than committing it to the repository. We recommend the following specific practices:

•Use non‑version controlled .env files for local development. Docker supports the loading of these files at runtime.

•Keep all .env files in a secure storage system, such as Vault, to keep the files available to the development teams, but not committed to Git.

•Use an environment variable for anything that can change at runtime, and for any secrets that should not be committed to the shared repository.

•Once you have deployed your application to a delivery platform, use the delivery platform’s mechanism for managing environment variables.
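A minimal sketch of the Config factor in Java follows: everything that varies between deployments is read from environment variables, with a harmless local-development fallback for non-secrets and a hard failure for missing secrets. The variable names are illustrative assumptions.

```java
// Config factor sketch: deployment-specific values come from the environment,
// never from code or the repository. Variable names and defaults are illustrative;
// secrets should have no default at all.
public class AppConfig {
    public final String databaseUrl;
    public final String paymentApiKey;

    public AppConfig() {
        // Local development may fall back to a harmless default...
        this.databaseUrl = getenvOrDefault("DATABASE_URL", "jdbc:postgresql://localhost:5432/dev");
        // ...but secrets must be provided by the environment (e.g. injected by the platform)
        this.paymentApiKey = require("PAYMENT_API_KEY");
    }

    private static String getenvOrDefault(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value != null && !value.isBlank()) ? value : defaultValue;
    }

    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}
```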

Backing Services

Treat backing services as attached resources

The Twelve Factor App guidelines define a backing service as “any service the app consumes over the network as part of its normal operation.” The implication for microservices is that anything external to a service is treated as an attached resource, including other services. This ensures that every service is completely portable and loosely coupled to the other resources in the system. Additionally, the strict separation increases flexibility during development – developers only need to run the service(s) they are modifying, not others.

Build, Release, Run

Strictly separate build and run stages

To support strict separation of build, release, and run stages, as recommended by The Twelve Factor App, we recommend the use of a continuous integration/continuous delivery (CI/CD) tool to automate builds. Docker images make it easy to separate the build and run stages. Ideally, images are created from every commit and treated as deployment artifacts.

Processes

Execute the app in one or more stateless processes

For microservices, the important point in the Processes factor is that your application needs to be stateless. This makes it easy to scale a service horizontally by simply adding more instances of that service. Store any stateful data, or data that needs to be shared between instances, in a backing service.
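The sketch below illustrates the idea, assuming a Redis backing service and the redis-py client; the cart/session naming is illustrative. No request-scoped state is held in the process itself, so any instance can handle any request.

```python
import os
import json
import redis  # assumes the redis-py client and a Redis backing service are available

# All shared state lives in the backing store, never in process memory.
store = redis.Redis.from_url(os.environ.get("CACHE_URL", "redis://localhost:6379/0"))

def handle_add_to_cart(session_id: str, item: str) -> list:
    """Append an item to a session's cart kept entirely in the backing store."""
    key = f"cart:{session_id}"
    cart = json.loads(store.get(key) or "[]")
    cart.append(item)
    store.set(key, json.dumps(cart), ex=3600)  # expire the session data after an hour
    return cart
```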

Data Isolation

Each service manages its own data

As a modification to make the Port binding factor more useful for microservices, we recommend that you allow access to the persistent data owned by a service only via the service’s API. This prevents implicit service contracts between microservices and ensures that microservices can’t become tightly coupled. Data isolation also allows the developer to choose, for each service, the type of data store that best suits its needs.

Concurrency

Scale out via the process model

The Unix process model is largely a predecessor to a true microservices architecture, insofar as it allows specialization and resource sharing for different tasks within a monolithic application. In a microservices architecture, you can horizontally scale each service independently, to the extent supported by the underlying infrastructure. With containerized services, you further get the concurrency recommended in the Twelve‑Factor App, for free.
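A minimal sketch of the process model in Python follows; the WEB_CONCURRENCY variable is a common convention rather than a requirement, and a real worker would serve requests or consume a queue instead of just printing.

```python
import os
from multiprocessing import Process

def worker(worker_id: int) -> None:
    # In a real service this process would accept requests or consume a queue.
    print(f"worker {worker_id} (pid {os.getpid()}) started")

if __name__ == "__main__":
    # Scale out by running more identical, stateless processes; the count
    # itself is configuration supplied by the environment.
    count = int(os.environ.get("WEB_CONCURRENCY", "4"))
    procs = [Process(target=worker, args=(i,)) for i in range(count)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```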

Disposability

Maximize robustness with fast startup and graceful shutdown

Instances of a service need to be disposable so they can be started, stopped, and redeployed quickly, and with no loss of data. Services deployed in Docker containers satisfy this requirement automatically, as it’s an inherent feature of containers that they can be stopped and started instantly. Storing state or session data in queues or other backing services ensures that a request is handled seamlessly in the event of a container crash. We are also proponents of using a backing store to support crash‑only design.
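As a hedged illustration, the loop below shows a worker that shuts down gracefully when the container runtime sends SIGTERM; the half-second poll interval and the queue-based retry comment are assumptions about the surrounding system.

```python
import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame):
    # Stop accepting new work; finish what is in flight, then exit.
    global shutting_down
    shutting_down = True

signal.signal(signal.SIGTERM, handle_sigterm)  # sent by Docker/Kubernetes on stop
signal.signal(signal.SIGINT, handle_sigterm)   # Ctrl-C during local development

while not shutting_down:
    # Pull work from a queue or socket here; with crash-only design, an
    # unfinished item stays in the backing queue and is retried elsewhere.
    time.sleep(0.5)

sys.exit(0)  # a clean, fast exit lets the orchestrator reschedule instantly
```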

Dev/Prod Parity

Keep development, staging, and production as similar as possible

Keep all of your environments – development, staging, production, and so on – as identical as possible, to reduce the risk that bugs show up only in some environments. To support this principle, we recommend, again, the use of containers – a very powerful tool here, as they enable you to run exactly the same execution environment all the way from local development through production. Keep in mind, however, that differences in the underlying data can still cause differences at runtime.

Logs

Treat logs as event streams

Instead of including code in a microservice for routing or storing logs, use one of the many good log‑management solutions on the market, several of which are listed in the Twelve‑Factor App. Further, deciding how you work with logs needs to be part of a larger APM and/or PaaS strategy.
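A minimal sketch of treating logs as an event stream: the process writes structured lines to stdout and leaves collection, routing, and storage to the platform or a log shipper. The JSON-ish format and logger name are illustrative.

```python
import logging
import sys

# The service never opens log files or routes logs itself; it only emits events.
logging.basicConfig(
    stream=sys.stdout,
    level=logging.INFO,
    format='{"time":"%(asctime)s","level":"%(levelname)s","msg":"%(message)s"}',
)
log = logging.getLogger("orders")  # logger name is a hypothetical service name

log.info("order received")
log.warning("payment retry scheduled")
```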

Admin Processes

Run admin and management tasks as one‑off processes

In a production environment, run administrative and maintenance tasks separately from the app. Containers make this very easy, as you can spin up a container just to run a task and then shut it down.
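For illustration, a one-off admin task might look like the sketch below, launched in its own short-lived container against the same code and configuration as the service; the migration script, image tag, and docker command shown in the docstring are assumptions.

```python
"""A one-off admin task, run in a disposable container, for example:

    docker run --rm --env-file .env myservice:abc123 python migrate.py

The image tag and script name above are illustrative.
"""
import os

def migrate() -> None:
    # Uses the same codebase and environment-based config as the service,
    # but runs in a separate process that exits when the task is done.
    target = os.environ.get("DATABASE_URL", "<not set>")
    print(f"applying pending migrations to {target} ... done")

if __name__ == "__main__":
    migrate()
```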

Conclusion

Use the Twelve‑Factor App and these additional principles to help you create robust microservices‑based apps that are optimized for continuous delivery.

Explore more at Teknonauts.

#3 Best Data Architecture need for modern applications https://teknonauts.com/data-architecture/ https://teknonauts.com/data-architecture/#comments Tue, 30 Mar 2021 09:00:07 +0000 https://teknonauts.com/?p=2882

Basics of Data Architecture

Data architecture is a critical component of enterprise applications; data does not merely lead to information and knowledge, it also defines the business strategy of an organization. It is the basis of providing any service in the digital world. The indicative characteristics of an information system, viewed around the data it holds, include:

  • Personal and sensitive data
  • Large size and large number of datasets
  • Interdependent, complex, and diverse data
  • Dynamic data, e.g. stock market data
  • Unstructured data, e.g. social media data
  • Short- and long-lived data

Because of these diverse characteristics, designing the data architecture is an important and necessary step in designing an enterprise application.

However, before we get into the architecture, let us look at what is happening with data today. Why is the RDBMS limited in its use cases? Why is everyone talking about NoSQL? Why is everyone talking about big data and data analytics? Why has data science become a research stream in its own right?

Over the past few decades we have collected huge amounts of data, but only a small share of it was considered meaningful and fitted into an RDBMS. Once the potential of that ignored, unstructured mass of data was realized, the world started shifting towards NoSQL, big data, and data analytics. In addition, with advances in processing power, speed became one of the key drivers for accessing huge volumes of unstructured data.

So the enterprise data architecture of modern applications is a break from traditional data applications, in which data was disconnected from other applications and from analytics.

The enterprise data architecture supports fast data created at a multitude of new endpoints, operationalizes the use of that data in applications, and moves data to a “data lake” where services are available for the deep, long-term storage and analytics needs of the enterprise. The enterprise data architecture can be represented as a data pipeline that unifies applications, analytics, and application interaction across multiple functions, products, and disciplines.

Modern Data and databases

Key to understanding the need for an enterprise data architecture is an examination of the “database universe” concept, which illustrates the tight link between the age of data and its value.

Most technologists think of data as existing on a time continuum. In almost every business, data moves from function to function to inform business decisions at all levels of the organization. While data silos still exist, many organizations are moving away from the practice of dumping data in a database—e.g., Oracle, Postgres, DB2, MSSQL—and holding it statically for long periods of time before taking action.

Why Architecture Matters

Interacting with fast data is a fundamentally different process from interacting with big data at rest, and it requires systems that are architected differently. With the correct assembly of components that reflect the reality that applications and analytics are merging, an enterprise data architecture can be built that meets the needs of both data in motion (fast) and data at rest (big).

Building high-performance applications that can take advantage of fast data is a new challenge. Combining these capabilities with big data analytics into an enterprise data architecture is increasingly becoming table stakes.

Objective of designing Enterprise Data Architecture

The objective of designing the enterprise data architecture is to deliver key service value to the enterprise. The value delivered by investing in the architecture can be evaluated against the following criteria:

  • Cost savings
  • Efficiency
  • Service quality
  • Strategic control
  • Availability

Two needs emerge for today’s applications: FAST and BIG. An enterprise application should be able to process a big amount of data and serve it as fast as possible in order to get the highest output from it.

Here is a reference architecture for a modern application that takes the above facts into account.

Reference Architecture for a Modern Enterprise Application

The first thing to notice is the tight coupling of fast and big, although they are separate systems; they have to be, at least at scale. The database system designed to work with millions of event decisions per second is wholly different from the system designed to hold petabytes of data and generate extensive historical reports.

Big Data, the Enterprise Data Architecture, and the Data Lake

The big data portion of the architecture is centered around a data lake, the storage location in which the enterprise dumps all of its data. This component is a critical attribute for a data pipeline that must  capture all information. The data lake is not necessarily unique because of its design or functionality; rather, its importance comes from the fact that it can present an enormously cost-effective system to store everything. Essentially, it is a distributed file system on cheap commodity hardware.

Today, the Hadoop Distributed File System (HDFS) looks like a suitable alternative for this data lake, but it is by no means the only answer. There might be multiple winning technologies that provide solutions to the need.

The big data platform’s core requirements are to store historical data that will be sent or shared with other data management products, and also to support frameworks for executing jobs directly against the data in the data lake.

Necessary components for Enterprise Architecture

  1. Business intelligence (BI) – reporting

Data warehouses do an excellent job of reporting and will continue to offer this capability. Some data will be exported to those systems and temporarily stored there, while other data will be accessed directly from the data lake in a hybrid fashion. These data warehouse systems were specifically designed to run complex report analytics, and do this well.

  2. SQL on Hadoop

Much innovation is happening in this space. The goal of many of these products is to displace the data warehouse. These systems have a long way to go to get near the speed and efficiency of data warehouses, especially those with columnar designs. SQL-on-Hadoop systems exist for a couple of important reasons:

  • SQL is still the best way to query data
  • Processing can occur without moving big chunks of data around

  3. Exploratory analytics

This is the realm of the data scientist. These tools offer the ability to “find” things in data: patterns, obscure relationships, statistical rules, etc.

  4. Job scheduling

This is a loosely named group of job scheduling and management tasks that often occur in Hadoop. Many Hadoop use cases today involve pre-processing or cleaning data prior to the use of the analytics tools described above. These tools and interfaces allow that to happen.

The big data side of the enterprise data architecture has gained huge attention in modern enterprise applications. Few would debate the fact that Hadoop has sparked the imagination of what is possible when data is fully utilized. However, the reality of how this data will be leveraged is still largely unknown.

Integrating Traditional Enterprise Applications into the Enterprise Data Architecture

The new enterprise data architecture can coexist with traditional applications until those applications require the capabilities of the enterprise data architecture; they will then be merged into the data pipeline. The predominant way in which this integration occurs today, and will continue for the foreseeable future, is through an extract, transform, and load (ETL) process that extracts legacy data, transforms it as required, and loads it into the data lake where everything is stored. These applications will migrate to full-fledged fast + big data modern applications.
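As a rough, assumption-laden sketch of such an ETL step, the snippet below reads a legacy CSV export, applies simple transformations, and appends the records to a date-partitioned area of the data lake; the file paths, field names, and JSON-lines format are illustrative, and a production pipeline would more likely use tools such as Spark or a commercial ETL product.

```python
import csv
import json
import pathlib
from datetime import datetime, timezone

LEGACY_EXPORT = pathlib.Path("legacy_orders.csv")   # hypothetical legacy export
LAKE_ROOT = pathlib.Path("/data/lake/orders")       # hypothetical data-lake location

def run_etl() -> pathlib.Path:
    """Extract from the legacy export, transform each row, load into the lake."""
    day = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    out_dir = LAKE_ROOT / f"ingest_date={day}"       # date-partitioned layout
    out_dir.mkdir(parents=True, exist_ok=True)
    out_file = out_dir / "legacy_orders.jsonl"

    with LEGACY_EXPORT.open(newline="") as src, out_file.open("w") as dst:
        for row in csv.DictReader(src):
            record = {
                "order_id": row["id"],               # transform: rename field
                "amount": float(row["amount"]),      # transform: cast to number
                "source": "legacy",                  # lineage marker
            }
            dst.write(json.dumps(record) + "\n")     # load: append to the lake
    return out_file

if __name__ == "__main__":
    print("loaded", run_etl())
```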

Conclusion

It is absolutely necessary to understand the promise and value of fast data, but that alone is not sufficient to guarantee success for an enterprise implementing big data initiatives. The technologies and skill sets needed to take advantage of fast data are, however, critical for businesses and enterprises across the globe.

Fast data is an outgrowth of big data. While mining data to derive business insights has unleashed opportunities for growth, much still remains to be accomplished. Simply collecting vast amounts of data for exploration and analysis will not prepare a business to act in real time, as data flows into the organization from millions of endpoints: sensors, mobile devices, connected systems, and the Internet of Things.

We have to understand the architectural requirements of fast data and big data separately and address their challenges with the right tools and technologies. But to gain the business advantage, we have to integrate the two architecturally and serve applications with fast data processed from big data.

For more information, refer to the wiki and explore more at Teknonauts.
