The Next wave – Digital workforce, Software 2.0 and Infra 3.0 – Part 1

The Next wave is coming – “You either embrace it or get uprooted”

Digital transformation is no longer something discussed only in strategy meetings or behind the closed boardroom doors of companies; it has, in fact, become a necessity for firms to survive in this era of disruptive business models and rapidly changing markets.

Undoubtedly, Service Delivery Automation (#SDA) solutions such as Robotic Process Automation (#RPA) and #ArtificialIntelligence (AI) can play a significant role in enabling this transformation and creating value for enterprises.

Before we start talking about RPA/AI/cognitive ML, enterprises need to think through some key questions:

  • What are the key business problems in the traditional workforce model?
  • What are the key automation technologies that enable a digital workforce?
  • How can these automation technologies and solutions be used to deliver the best possible ROI for the enterprise? This is the most critical question: “Think about ROI before choosing tools.”
  • How can enterprises transform their front- and back-office operations through a smart digital workforce to become future-ready?
  • How can enterprises achieve strategic business impact by leveraging digital capabilities?


When would it be useful to bring #RPA/AI into the enterprise and start thinking about the digital workforce? As per the

Gartner report – “By 2020, robotic process automation will eliminate 20% of non-value-added tasks within the office”


The biggest challenge is bringing the #Data into the systems. To create a framework, the organization needs to think about:

  • What are the different kinds of data?
  • How is the data generated within the organization, and how does it come into the organization?
  • How to handle data coming from people
  • How to handle data from structured sources
  • Data from social media (Facebook, Twitter, blogs, etc.)
  • Data from Business Process as a Service (#BPaaS)
  • Data from electronic #data exchanges


There are three entities that create the data – customers, suppliers, and the employees of the organisation.


Business processes in an organization can be classified as

  • Strategic
  • Knowledge-based
  • Transactional processes

And the data can be classified by:

  • Type – (referential, transactional, or factual data)
  • Nature – (slowly changing or rapidly changing data)
  • Complexity – (depends on the amount of processing required for consumption)

Bots that can handle most of the transactional processes and some knowledge-based processes can significantly free up human agents’ time for high-value work, while strategic work remains with humans. As the systems and algorithms learn more and move from supervised learning toward more cognitive learning, the human agents, along with the cognitive agents, capture and process information into a structured format. This structured data is then passed to the robots, which handle most of the transactional processing, and only complex queries or exceptions are passed on to the human agents.
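
As a rough illustration of this routing, here is a minimal sketch in Python. The class, function, and classification rules are hypothetical placeholders, not part of any specific RPA product:

```python
# Minimal sketch of the triage idea above: structured, transactional items go to bots,
# exceptions and knowledge/strategic work stay with people. All names here
# (WorkItem, route_item, the rules) are illustrative only.
from dataclasses import dataclass

@dataclass
class WorkItem:
    process_type: str      # "transactional", "knowledge", or "strategic"
    is_structured: bool    # True if the data is already captured in a structured format
    is_exception: bool     # True if validation failed or the case is unusual

def route_item(item: WorkItem) -> str:
    """Return the queue that should handle this work item."""
    if item.process_type == "strategic":
        return "human-queue"                  # strategic work stays with people
    if item.is_exception or not item.is_structured:
        return "human-queue"                  # exceptions and unstructured input need a person
    if item.process_type == "transactional":
        return "bot-queue"                    # routine, structured work goes to the robots
    return "cognitive-agent-queue"            # knowledge work goes to an ML-assisted agent

if __name__ == "__main__":
    items = [
        WorkItem("transactional", True, False),
        WorkItem("transactional", True, True),
        WorkItem("knowledge", True, False),
        WorkItem("strategic", False, False),
    ]
    for it in items:
        print(it.process_type, "->", route_item(it))
```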


#Software2.0

Intelligence is not about moving from one technology to another; it is about adapting to change quickly, reusing the same resources, being easy to optimize for performance, and being portable from one technology to another, based on the problem statement, with minimal effort.

Example – #Google Translate v1.0 had over 1 million lines of code, while Google Translate v2.0 has fewer than 1,000 lines of code; it is much more about the data.

Data is the driver of any Software 2.0 system, and data management principles are going to be the core fundamentals of Software 2.0 and of how organizations breach the wall of complexity to keep themselves ahead of the competition. But this is not going to be simple: current models have data dependencies and feedback loops, both direct and hidden, and teams still need to identify the anti-patterns and remove dead experimental code.
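
To make the Software 1.0 versus 2.0 contrast concrete, here is a toy sketch, assuming scikit-learn and invented data: an explicit hand-written rule is replaced by a small model whose behaviour is defined by the training data rather than by code.

```python
# Toy illustration of the Software 1.0 -> 2.0 shift: a hand-written rule is replaced
# by a small learned model. The data, threshold, and function names are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Software 1.0: behaviour encoded as an explicit rule
def is_priority_invoice_v1(amount: float, days_overdue: int) -> bool:
    return amount > 10_000 or days_overdue > 30

# Software 2.0: behaviour learned from labelled examples
rng = np.random.default_rng(0)
X = rng.uniform([0, 0], [50_000, 90], size=(500, 2))   # columns: [amount, days_overdue]
y = (X[:, 0] > 10_000) | (X[:, 1] > 30)                 # labels (here generated from the old rule)

model = LogisticRegression().fit(X, y)

def is_priority_invoice_v2(amount: float, days_overdue: int) -> bool:
    return bool(model.predict([[amount, days_overdue]])[0])

print(is_priority_invoice_v1(12_000, 5), is_priority_invoice_v2(12_000, 5))
```

Changing the behaviour of v2 means changing the training data, which is the point the example above makes about Google Translate.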


Part 2 – Infra 3.0 and a deep dive into a use case

Leave your feedback and Comments

Abhinav Gupta

Product Manager

 

 

 

UX Research Cheat Sheet

One of the questions we get the most is, “When should I do user research on my project?” There are three different answers:

  • Do user research at whatever stage you’re in right now. The earlier the research, the more impact the findings will have on your product, and by definition, the earliest you can do something on your current project (absent a time machine) is today.
  • Do user research at all the stages. As we show below, there’s something useful to learn in every single stage of any reasonable project plan, and each research step will increase the value of your product by more than the cost of the research.
  • Do most user research early in the project (when it’ll have the most impact), but conserve some budget for a smaller amount of supplementary research later in the project. This advice applies in the common case that you can’t get budget for all the research steps that would be useful.

The chart below describes UX methods and activities available in various project stages.

[Figure: Most frequent UX research methods – Nielsen Norman Group]

 

Principles behind the Agile Manifesto

The 12 Principles of Agile

  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity–the art of maximizing the amount of work not done–is essential.
  • The best architectures, requirements, and designs emerge from self-organizing teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

How to get the end dates in an #Agile project – Practical approach to story point estimation!

Traditionally, estimates are given in days, months, or person-hours, along with a probable end date for the project. These estimates don’t include time spent in meetings, on email, and on other non-project activities; all of that is treated as “PROJECT BUFFER-TIME.”

Traditionally how it happens

Estimates for our project

  • One month for design and architecture
  • Four months for development
  • One month for testing
  • 15 days of buffer time

Scenario 1 – Your design estimate is off by two weeks. What do you do?

You have already given a date for the final deliverable. Dates usually carry an emotional attachment, and relative estimation removes that emotional attachment from the end date.

Scenario 2 – At the start of the project you don’t know many of the requirements, so the possibility of wrong estimates is higher.


The cone of uncertainty shows that at the start of a project the probability of estimates going wrong is very high, due to the high degree of uncertainty.

“Story point represents the amount of effort required to implement a user story in a product backlog”

So, while assigning a story point to a particular user story in the product backlog, one should consider the following factors:

  1. Risk
  2. Complexity (which governs the effort)
  3. Uncertainty of the requirement (which involves dependencies and doubts)


So, if story points are meant to estimate how long it will take to develop a user story, all the factors such as risk and complexity should be folded into the effort involved in implementing it. Story points ultimately represent time. This has to be so because time is what our bosses, clients, and customers care about; they only care about complexity to the extent that it influences how long something will take.

What the agile approach advocates is: “Estimates may not be accurate, but they need to be consistent; the sprint velocity will correct the inaccuracy of the estimates.” The idea is to compare your previous estimates with your next estimates, time and again, in order to reach consistency in your estimates.
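
As a rough illustration of how consistent estimates plus velocity turn into an end date, here is a minimal sketch; the function name, sprint length, and sample numbers are assumptions for illustration only:

```python
# A minimal sketch (not a formula from any specific tool): project an end date
# from the remaining story points and the average velocity of past sprints.
import math
from datetime import date, timedelta

def forecast_end_date(remaining_points, past_velocities, sprint_length_days=14, start=None):
    """Project a likely end date from remaining backlog points and past velocity."""
    avg_velocity = sum(past_velocities) / len(past_velocities)
    sprints_needed = math.ceil(remaining_points / avg_velocity)
    start = start or date.today()
    return start + timedelta(days=sprints_needed * sprint_length_days)

# Example: 120 points left in the backlog; the last three sprints delivered 18, 22, and 20 points.
print(forecast_end_date(120, [18, 22, 20]))
```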

Assumptions to be made

  • One of the most important assumptions to make while defining story points is the ideal day, where everything works as per the plan: no sick days, no interruptions, etc.
  • No individual task should be more than 16 hours (you can decide the number based on your project).
  • Assume that your team size is constant and that the team understands their roles.

Identification of the base/reference story

Break the user stories down until you reach a small story that cannot be broken down further. Make this the base story, or reference story, against which you will relatively estimate the rest of the product backlog. This is important for creating the initial product backlog.

Tuning the base/reference story

After the team has created the initial product backlog, pick a story that is a good representation of an “average” story from that list, relatively map a few user stories against this average story, assign a story point to this reference story, and realign the product backlog with the new numbers. Product backlog grooming is a continuous process that happens throughout the entire project.

Techniques of estimating

There are various techniques for assigning story points to a user story. Common estimating methods include t-shirt sizes (S, M, L, and too big), powers of 2 (1, 2, 4, 8), and the Fibonacci sequence (1, 2, 3, 5, 8, etc.).


Agile emphasises that story point estimation should happen at the story level, not at the task level.


The estimation should not happen at the task level but at the user story level, because some of the tasks might not be required as we come closer to the final delivery of the user story, or the user story itself may get dropped during the sprint planning meeting. One key thing to remember during estimation is to work with the product owner to identify the user story as the most granular part of the product backlog, which means it is either small enough or needs to be split. As part of the sprint planning meeting, the team needs to decide two key points: whether the team is able to take this user story into the current sprint, or whether it needs to be broken into smaller deliverable user stories.

Learn from past estimates

Retrospectives are a time for the team to incorporate insights from past iterations, including the accuracy of their estimates. Many agile tools track story points, which makes reflecting on and re-calibrating estimates a lot easier. For example, pull up the last five user stories the team delivered with a story point value of 8 and discuss whether each of those work items involved a similar level of effort. Use that insight in future estimation discussions.
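
A hedged sketch of that retrospective check is shown below, assuming completed stories have been exported to a CSV; the file name and column names are assumptions, not any specific tool’s export format:

```python
# Pull the last five 8-point stories from a hypothetical export and compare their
# actual cycle times. Columns 'key', 'summary', 'story_points', 'cycle_time_days'
# are assumed for illustration.
import pandas as pd

stories = pd.read_csv("completed_stories.csv")              # hypothetical export
eights = stories[stories["story_points"] == 8].tail(5)      # last five 8-point stories

print(eights[["key", "summary", "cycle_time_days"]])
print("Spread in cycle time (days):",
      eights["cycle_time_days"].min(), "to", eights["cycle_time_days"].max())
```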

Like everything else in agile, estimation is a practice. You’ll get better and better with time.

Global Regulatory and Tax Accounting- Point Solution

A point solution has a short adoption time and can be integrated with the downstream and upstream applications in the business. Instead of creating a platform or a product for all the global regulatory changes made by various regulators such as the IRS, FINRA, SEC, FCA, Basel, etc., create a point solution that can then be customized based on the customer’s technology, business requirements, user sophistication, and the technical capabilities within the organisation.

End to End Solution


Advantages of having point solution over platform/products

  1. One size does not fit all
  2. Customer technology culture – some companies can handle outsourced services and are not interested in customizing platforms. Some IT departments, even in the largest of companies, are more focused on user experience than on control, and support business departments in their efforts to match their innovative processes.
  3. Easy to get going and straightforward – the client can decide whether to outsource the entire element to a service provider. It also reduces technology complexity; when multiple technologies are used for the same line of business, diagnosing a problem is always challenging.
  4. Lower cost compared to big enterprise platforms; it may also minimize data complexity, since each client tool would have its own database with its own structure, and integrating those databases is hard.
  5. The point solutions can be converted to enterprise platforms through M&A.

Key Success Indicators

  1. Regulatory risk reduction
  2. Cost Optimization through automated workflows
  3. Business Change adaptability with minimum business impact
  4. Robust control framework

 

Artificial Intelligence and Risk Management

How Can Artificial Intelligence help Investment Banking Risk Management?

Beyond Just Calculations….

Risk management algorithms have always been mostly about complex calculations. There are various models such as the binomial model, VaR (Value at Risk), and the Black-Scholes-Merton model. These models are fed into different simulation and modeling algorithms, such as Monte Carlo simulation, GARCH(1,1) (Generalized Autoregressive Conditional Heteroskedasticity), and EWMA, which are currently used by banks. CME (Chicago Mercantile Exchange) developed the SPAN risk management algorithm and the PC-SPAN software for portfolio margining:

  • SPAN has been reviewed and approved by market regulators and participants worldwide.
  • SPAN is the official Performance Bond mechanism of over 50 exchanges and clearing organizations worldwide, making it the global standard for portfolio margining
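
To make the Monte Carlo approach mentioned above concrete, here is a minimal one-day VaR sketch; it is not any exchange’s or bank’s production model, and the portfolio value, mean, and volatility are invented for illustration:

```python
# Minimal Monte Carlo VaR sketch: simulate one-day portfolio returns under an
# assumed normal distribution and read off the loss quantile.
import numpy as np

rng = np.random.default_rng(42)

portfolio_value = 1_000_000          # assumed portfolio value
mu, sigma = 0.0005, 0.02             # assumed daily mean return and volatility
n_scenarios = 100_000

simulated_returns = rng.normal(mu, sigma, n_scenarios)
simulated_pnl = portfolio_value * simulated_returns

confidence = 0.99
var_99 = -np.percentile(simulated_pnl, (1 - confidence) * 100)
print(f"1-day 99% VaR: {var_99:,.0f}")
```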

Risk management is the application of the risk management process, which consists of:

Capture

The future challenge is to integrate risk management into every area of a company, meaning the operational, economic, and strategic levels; Enterprise Risk Management (ERM) will be a necessity for future management processes.

One of the latest developments shaping up in fintech with a focus on risk management is the application of Artificial Intelligence to financial risk management.

How can the different methodologies of quantitative risk analysis used to develop formal risk management leverage Artificial Neural Networks?

What is similar between an engineer controlling an industrial facility and a bank operations manager controlling payment processing? Both deal with operational risks that require immediate action at the earliest sign of trouble. The process for both is the same, as described in the diagram below.


Financial institutions are primarily risk managers; they manage a variety of financial risks: market, credit, operational, currency, liquidity, and others. To have robust risk management, a bank needs to ensure that it embraces data-informed technologies that are being applied to the following areas:

 


Why do we need the Black Box?

Data-Informed Operations

Data-informed operations are the basis for day-to-day operations. No matter how sophisticated the data collection and processing systems are, a trained human is ultimately responsible for making critical decisions. Bank operations managers can perform exception processing and error recovery based on system-generated communications.

Any analytics application guiding these decisions will submit its findings to a human, who then effects any changes to operations. That is why operations departments are run by large teams, which are challenged to keep improving their effectiveness through improved processes and the use of the latest cutting-edge technology.

Chasing False Alarms

Risk management teams use expert systems, primarily rules-based, to monitor operations and generate alerts. Expert risk managers set alerts based on rules, historical thresholds, and specific KPIs; tuning each of these over time takes up a lot of the experts’ time to maintain the balance between risk exposure and team efficiency, and that has its consequences.

One of the biggest challenges in improving effectiveness is the false alarms raised by the analytics technologies currently used. Not being sensitive enough means that risk exposure is higher; being too sensitive means that an operations team already under pressure is chasing false alarms.
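
The trade-off can be seen in a toy sketch like the one below; the synthetic “risk scores” and thresholds are invented purely to illustrate how lowering the alert threshold catches more genuine exceptions while the false-alarm count grows:

```python
# Toy illustration of the sensitivity trade-off: synthetic scores for legitimate
# transactions and for genuine exceptions, swept over a few alert thresholds.
import numpy as np

rng = np.random.default_rng(7)
normal_scores = rng.normal(0.2, 0.1, 10_000)   # scores of legitimate transactions
bad_scores = rng.normal(0.7, 0.15, 50)         # scores of genuine exceptions

for threshold in (0.9, 0.7, 0.5, 0.3):
    false_alarms = int((normal_scores > threshold).sum())
    caught = int((bad_scores > threshold).sum())
    print(f"threshold={threshold:.1f}  caught {caught}/50 exceptions, "
          f"{false_alarms} false alarms")
```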

No algorithm is useful in isolation; it has to be judged by how it interacts with its environment (data sampling, filtering, and reduction) and by how it manipulates or alters that environment. Therefore, the algorithm depends on an understanding of the environment and also on a way to manage that environment.


Rules-based systems pose a major dilemma for an operations person: because of the cost of missing an actual exception, models may be tuned extremely conservatively. This significantly increases operational cost, and it also creates “alarm fatigue,” in which operators come to expect false alarms to such an extent that they miss a genuine positive and allow an improper transaction.

Harry Henderson proposed an AI model that has both the old rules memory and a working memory, where the model intelligently learns from the current environment and the rule-matching system quickly re-tweaks the rules so as to avoid false alarms without losing the original exceptions.

 


 

Trends and Human Processing (What is in the BLACK BOX?)

Robust risk management is about dealing with real-time transactional data as well as historical trends and learnings; there is an important aspect of time that affects how decisions are made. In general, humans are good at interpreting simple trends by looking at slopes and levels, but humans have limitations when describing complex patterns. A solution to this problem is for expert AI systems to encode these trends.


The problem for any AI system is that when different pieces of information don’t arrive at the same time or at the same rate, incorporating the trends in such data tends to be difficult.

For example, in the case of a Monte Carlo simulation the inputs may be fixed, but the frequency of the inputs A, B, C, and D may vary, so the random numbers generated by the Monte Carlo model (MCM) may not follow the intended probability distribution function (PDF). To ensure that the various risks have been factored in, the outputs of the MCM should be evaluated and tested against the hypothesis using multiple regression on the inputs, to validate the confidence level of the model.

A critical task for the risk modeler is to select the appropriate distribution function according to the data available; it can follow a log-normal, normal, chi-squared, or other distribution. The modeler also needs to understand the behavior of the data in practice, which is typically based on an available historical database.
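
A hedged sketch of that selection step is shown below, assuming SciPy is available; the synthetic losses stand in for the historical database, and the candidate distributions are compared by their fitted log-likelihood:

```python
# Fit a few candidate distributions to (synthetic) historical loss data and
# compare them by log-likelihood. Data and candidates are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
losses = rng.lognormal(mean=0.0, sigma=0.5, size=1_000)   # stand-in historical data

candidates = {
    "normal": stats.norm,
    "lognormal": stats.lognorm,
    "chi-squared": stats.chi2,
}

for name, dist in candidates.items():
    params = dist.fit(losses)                       # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(losses, *params))
    print(f"{name:12s} log-likelihood = {loglik:.1f}")
```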


Monte Carlo Simulation Model for risk assessment

This problem is exacerbated in financial services applications, where trends form (and change) over periods of days, weeks, or even years. Operations users cannot be expected to recognize long-term trends in customer behavior without expert system assistance. The result of such difficulties is an increase in the effort people have to put into confirming alarms by interpreting patterns.

 

Expert instead of Learning Systems (Mind in the Box)

The next issue is that expert systems do not change by themselves; they have to be programmed by experts. The main advantage of an AI system is that the whole process (training and testing) mimics human reasoning: learning occurs in the minds of experts, who then apply the lessons of their learning to the next version of the rule engine base.


With the rapidly changing financial business and data landscape, the operational systems have not evolved quickly enough. This leads to more risk exposure and less optimized use of the margin money.

For example – the game of chess


But then the question comes: can machines ever have a mechanism to reach a conclusion based on common sense alone, beyond logical reasoning? Can machines present facts that go beyond mathematical formulas? Can machines make logical deductions in cases with the rarest of rare possibilities of occurrence, to ensure optimal use of time and resources?

Whether AI systems get a thumbs up or a thumbs down in the future is something only time will tell, as the commercial application of AI has to withstand the challenge of diversity: different banks, insurance companies, funds, and financial firms don’t speak the same risk language. Nevertheless, each of them performs cost-benefit analysis, sensitivity analysis, and scenario analysis, which permits both quantitative and qualitative analysis.

 

Happy Reading!!

Abhinav Gupta

Reference data Systems – Legacy Modernization and Transformation

What is reference data in the financial industry?

The industry definition of reference data is the foundational data that provides the basis to generate, structure, categorize, or describe business transactions. Reference data is the basis for viewing, monitoring, analyzing, and reporting on transactions. The diagram below shows the five main elements of a financial transaction.

Drivers for Legacy modernization of reference data systems

Reference data systems are critical for a bank or financial institution and are a core asset of the bank. These systems should be adequately managed, governed, and enhanced in a systematic fashion. The reference data system impacts all the operational functions of a bank. However, when it comes to managing these reference data systems to drive the business, most banks and financial institutions are running on an old technology stack.

With continuously changing regulatory requirements and increasing physical and information security challenges, it has become imperative for banks to use twenty-first-century technology: to simplify the management of financial instruments, client and counterparty accounts, market data, and historical transactional information such as settlement instructions, and to minimize risk by reducing the overall complexity.

Key Points considered for defining the modernization initiative at any financial institution


  • Decompose monolithic applications into discrete services and process flows by creating componentized applications and services that are agnostic in nature
  • Architect for real-time straight-through processing; prefer eliminating batch cycles, which helps move the system from an end-of-day (EOD) process to a near-real-time process
  • Segregate business services and common services, decoupling the core business service systems from the common services. For example, create a common module for tax calculation across different types of trades and asset servicing transactions (a minimal sketch follows this list)
  • Provide centralized user access/experience via a Securities Workstation, which reduces complex configuration management; users should be able to log in to multiple systems using single sign-on
  • Business exceptions should be handled by a standard work-item management layer: a proper workflow management system with a four-eyes (maker-checker) control reduces operational inefficiencies
  • Store data in a golden master database; creating a golden copy helps reduce the overall risk associated with the operational workflow, and running predictive analytics on top of the golden copy would yield better results with higher confidence levels
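
As referenced in the list above, here is a minimal sketch of a shared tax-calculation service decoupled from the business services that call it; the class names, trade types, and flat rates are hypothetical, purely for illustration:

```python
# A common tax-calculation service shared by different business services, so that
# tax rules live in one place instead of being duplicated per trade type.
from dataclasses import dataclass

@dataclass
class Transaction:
    trade_type: str        # e.g. "equity_trade", "bond_trade", "dividend"
    gross_amount: float
    country: str

class TaxCalculationService:
    """Common service shared by all business services."""
    RATES = {("US", "dividend"): 0.30,
             ("US", "equity_trade"): 0.0,
             ("IN", "equity_trade"): 0.001}      # illustrative rates only

    def tax_for(self, txn: Transaction) -> float:
        rate = self.RATES.get((txn.country, txn.trade_type), 0.0)
        return round(txn.gross_amount * rate, 2)

# Business services stay decoupled from the tax rules themselves.
tax_service = TaxCalculationService()
print(tax_service.tax_for(Transaction("dividend", 10_000, "US")))
print(tax_service.tax_for(Transaction("equity_trade", 10_000, "IN")))
```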

 

Abhinav Gupta

Alternative Medicine and Anti-Cancer Diet


I was researching for someone who suffers from Multiple Myeloma (blood cancer), stage III(a). He has been undergoing chemotherapy, and some of the treatments below have really helped him withstand the side effects of chemo and steroids.

Some of the anti-cancer diets that were helpful

The Budwig diet is rich in vegetables, fibre, and fruit. You also need to avoid meat, sugar, and fats such as margarine, salad oil, and butter.

The Budwig protocol is reported to have an 80 to 93% success rate in healing patients with cancer; this is based on Dr. Budwig’s report and on other organisations that support his method of treating cancer.

Normally, flax seed is taken orally, but for severe cases it is used in enema form. The other part of the Budwig Protocol is a Specialised Diet. The results are usually noticed within 90 days, and in some cases after a week. Patients with cancer should continue with the protocol for a minimum of 6 months, regardless of whether the symptoms disappear or not.

In a bowl, add 1 tsp of honey and 2 tbsp of freshly ground flax seeds. It is important that you use only freshly ground flax seeds.

Add fresh organic fruits mixture (fruits like peaches, apples, berries, grapes and others). Do not use bananas because, according to Dr. Budwig, this can quickly increase the blood sugar levels in cancer patients.

Mix 3 tbsps of flax seed oil with 100g of quark/ cottage cheese. Add 3 tbsps of unhomogenized milk to produce a smooth mixture. Blend it to get it mixed thoroughly.

You can add more flax seed oil depending on your personal taste, but in case the quark is not able to absorb the oil fully, add more as needed. Add the mixture to the bowl. You can add cinnamon or vanilla to improve the flavor, but this is optional.

The cinnamon will help in regulating blood sugar. You can add organic nuts on top.

Vitamin C injections and oxygen therapies should not be combined with the protocol.

All viruses, parasites, and pathogens are, by nature, anaerobic, meaning that they thrive in the absence of oxygen but cannot survive in an oxygen-rich environment.

Hyperbaric Oxygen Therapy

The same thing goes for cancer cells: they cannot exist in an oxygen-rich environment. HBOT, or Hyperbaric Oxygen Therapy, involves breathing pure oxygen while in a closed chamber that has been pressurized to 1.5 to 3 times normal atmospheric pressure.

The Scientific Evidence Showing HBOT is Effective

Scientific evidence shows HBOT is very effective in providing a remedy for a number of conditions. The Committee on HBOT of the Undersea and Hyperbaric Medical Society suggests it for the treatment of:

  • Abscess in the brain or head
  • Anemia due to severe blood loss
  • Arterial gas embolism
  • Blockage of the retinal artery
  • Carbon monoxide poisoning
  • Certain wounds that are not healing with standard treatment
  • Crushing injuries in which there is not enough oxygen to the tissues
  • Decompression sickness

Fighting Cancer with Vitamins and Supplements

There are many vitamins and minerals that people claim are very effective in treating cancer; however, only a few have actually been shown to be effective. Turmeric and curcumin happen to be among them.

One of the most powerful and under-recognized of these is curcumin, which is extracted from the Indian spice turmeric. Curcumin has 240 published studies available in the global scientific literature and is known as the most powerful cancer-preventing agent, thanks to its chemopreventive and anti-inflammatory power.

Actually, curcumin targets 10 causative factors involved in the development of cancer. Interrupting any of these factors will protect you from developing cancer. Disrupting more than one factor will provide you better protection, including the prevention of DNA damage.

By blocking NF-kB, an inflammatory master molecule, curcumin reduces cancer-causing inflammation, lowering the levels of inflammatory cytokines all over the body. It also interferes with the formation of dangerous advanced glycation end products that promote inflammation, which may cause cancerous mutations.

Curcumin has the ability to alter cellular signaling to have healthy control over cellular replication, which controls the cellular reproductive cycle, and helps in stopping uncontrolled propagation of new tissue in tumors. It stimulates apoptosis in reproducing cancer cells without affecting the healthy tissue and makes a tumor more susceptible to cell-killing cures.

Vitamin D is said to be cancer’s worst enemy.

Vitamin D can help you prevent more than 16 various types of cancer, which include breast, lung, ovarian, prostate, pancreatic and skin cancers.

Actually, if you have a good amount of vitamin D in your body, your risk of having cancer is reduced by 77%. Vitamin D has shown preventative benefits for many diseases, which include diabetes, cancer and heart disease, and can even reduce chronic pain.

Theories associating vitamin D deficiency to cancer have been confirmed and tested in over 200 epidemiological studies, and understanding of its physiological basis stems from over 2,500 laboratory studies.
The most important factor is keeping your vitamin D serum level between 50 and 70 ng/ml. Vitamin D from sun exposure or a safe tanning bed is the best way to optimize your vitamin D levels. If you take oral vitamin D and have cancer, you should monitor your vitamin D serum levels regularly and also supplement with vitamin K2, since K2 deficiency is actually what produces the symptoms of vitamin D toxicity.

Gerson Therapy

How Gerson Therapy is Administered

The Gerson Therapy treatment plan should be followed properly. Some of the important parts of the regimen include the following:

Take 13 glasses of juice a day. The juice must be made from freshly squeezed organic veggies and fruits and should be taken once every hour.

Consuming vegetarian meals of organically grown veggies and fruits.

Taking supplements such as potassium, coenzyme Q10 injected with vitamin B12, vitamins B3, C, and A, pancreatic enzymes, pepsin, and flax seed oil.

Taking coffee or chamomile enemas regularly to eliminate toxins from the body.

Preparing food without spices, oils, or salt and without using aluminium utensils or cookware.

Hope that it helps

Abhinav Gupta