UX Research Cheat Sheet

One of the questions we get the most is, “When should I do user research on my project?” There are three different answers:

  • Do user research at whatever stage you’re in right now. The earlier the research, the more impact the findings will have on your product, and by definition, the earliest you can do something on your current project (absent a time machine) is today.
  • Do user research at all the stages. As we show below, there’s something useful to learn in every single stage of any reasonable project plan, and each research step will increase the value of your product by more than the cost of the research.
  • Do most user research early in the project (when it’ll have the most impact), but conserve some budget for a smaller amount of supplementary research later in the project. This advice applies in the common case that you can’t get budget for all the research steps that would be useful.

The chart below describes UX methods and activities available in various project stages.

[Chart: Most frequent UX research methods – Nielsen Norman Group]

 


Principles behind the Agile Manifesto

12 Principles of Agile

  • Welcome changing requirements, even late in development. Agile processes harness change for the customer’s competitive advantage.
  • Deliver working software frequently, from a couple of weeks to a couple of months, with a preference to the shorter timescale.
  • Business people and developers must work together daily throughout the project.
  • Build projects around motivated individuals. Give them the environment and support they need, and trust them to get the job done.
  • The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.
  • Working software is the primary measure of progress.
  • Agile processes promote sustainable development. The sponsors, developers, and users should be able to maintain a constant pace indefinitely.
  • Continuous attention to technical excellence and good design enhances agility.
  • Simplicity–the art of maximizing the amount of work not done–is essential.
  • The best architectures, requirements, and designs emerge from self-organizing teams.
  • At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behavior accordingly.
  • Our highest priority is to satisfy the customer through early and continuous delivery of valuable software.

How to get the end dates in an #Agile project – Practical approach to story point estimation!

Traditionally, estimates are given in days, months, or person-hours, with a probable end date for the project. These estimates don’t include time spent in meetings, email, and other non-project activities; all of that is treated as “project buffer time.”

How it traditionally happens

Estimates for our project

  • One month for design and architecture
  • Four months for development
  • One month for testing
  • 15 days of buffer time

Scenario 1 – Your design estimate is off by two weeks. What do you do?

You have already committed to the date of the final deliverable. Dates usually carry an emotional attachment, and relative estimation removes that emotional attachment to the end date.

Scenario 2 – At the start of the project you don’t know many of the requirements, so the possibility of wrong estimates is higher.


The cone of uncertainty shows that at the start of a project the probability of estimates being wrong is very high, because of the high degree of uncertainty.

“Story point represents the amount of effort required to implement a user story in a product backlog”

So while assigning a story point to a particular user story in the product backlog, one would consider the following factors:

  1. Risk
  2. Complexity (which governs the effort)
  3. Uncertainty of the requirement (which involves dependencies and doubts)


So, if story points are to estimate how long it will take to develop a user story, all the factors such as risk and complexity should be folded into the effort involved in implementing it. Story points represent time. This has to be so because time is what our bosses, clients, and customers care about; they care about complexity only to the extent that it influences how long something will take.

What agile advocates is: “Estimates may not be accurate, but they need to be consistent; sprint velocity will correct the inaccuracy of the estimates.” The idea is to compare your previous estimates with your next ones, time and again, in order to reach consistency in your estimates.
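As a sketch of how this plays out (all numbers below are hypothetical), consistent relative estimates plus observed sprint velocity are enough to project an end date:

```python
# A minimal sketch: converting backlog points plus observed velocity into a
# projected end date. All numbers are hypothetical.
from datetime import date, timedelta
import math

backlog_points = 240              # story points remaining in the backlog
recent_velocities = [28, 31, 26]  # points completed in the last three sprints
sprint_length_days = 14

avg_velocity = sum(recent_velocities) / len(recent_velocities)
sprints_left = math.ceil(backlog_points / avg_velocity)

projected_end = date.today() + timedelta(days=sprints_left * sprint_length_days)
print(f"~{sprints_left} sprints remaining; projected end date: {projected_end}")
```

As velocity settles over a few sprints, the projection self-corrects, which is exactly how consistent-but-inaccurate estimates still yield a usable end date.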

Assumptions to be made

  • One of the most important considerations when defining story points is to assume an ideal day, where everything goes according to plan: no sick days, no interruptions, etc.
  • No individual task should be more than 16 hours (you can pick a number appropriate for your project)
  • Assume that your team size is constant and that the team understands their roles

Identification of the base/reference story

Break down the user stories until you reach a small story that cannot be broken down further. Make this the base story, or reference story, against which you relatively estimate the rest of the product backlog. This is important for creating the initial product backlog.

Tuning the base/reference story

After the team has created the initial product backlog, pick a story that is a good representation of an “average” story from that list, map the other user stories relative to it, assign a story point value to this reference story, and realign the product backlog with the new numbers. Product backlog grooming is a continuous process that happens throughout the entire project.

Techniques of estimating

There are various techniques for assigning story points to a user story. Common estimating methods include t-shirt sizes (S, M, L, and too big), powers of 2 (1, 2, 4, 8), and the Fibonacci sequence (1, 2, 3, 5, 8, etc.).
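As a minimal sketch of how such a scale is used (the scale and the helper function are illustrative, not part of any tool), a raw relative-effort guess is simply snapped to the nearest value on the chosen scale:

```python
# Illustrative planning-poker helper: snap a raw relative-effort guess
# to the nearest value on a Fibonacci scale.
FIB_SCALE = [1, 2, 3, 5, 8, 13, 21]

def to_story_points(raw_estimate: float) -> int:
    """Return the scale value closest to the raw guess."""
    return min(FIB_SCALE, key=lambda p: abs(p - raw_estimate))

print(to_story_points(6))   # -> 5
print(to_story_points(18))  # -> 21
```

The widening gaps in the scale deliberately mirror the cone of uncertainty: the bigger the story, the less precision the estimate pretends to have.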


Agile emphasises that story point estimation should happen at the story level, not at the task level.


Estimation should happen at the user story level, not the task level, because some tasks might turn out to be unnecessary as we come closer to the final delivery of the user story, or the story itself may get dropped during the sprint planning meeting. One key practice during estimation is working with the product owner to identify the user story as the most granular part of the product backlog: it is either simply small, or it needs to be split. As part of the sprint planning meeting, the team needs to decide two key points: can the team take this user story into the current sprint, or does it need to be broken into smaller deliverable user stories?

Learn from past estimates

Retrospectives are a time for the team to incorporate insights from past iterations, including the accuracy of their estimates. Many agile tools track story points, which makes reflecting on and re-calibrating estimates a lot easier. For example, pull up the last five user stories the team delivered with a story point value of 8, discuss whether each of those work items involved a similar level of effort, and use that insight in future estimation discussions.
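A sketch of that retrospective check, with hypothetical field names and data (real agile tools expose similar information through their reporting features):

```python
# Compare stories that carried the same estimate and flag outliers to discuss.
delivered = [
    {"id": "US-101", "points": 8, "days_in_progress": 6},
    {"id": "US-114", "points": 8, "days_in_progress": 5},
    {"id": "US-120", "points": 8, "days_in_progress": 13},  # outlier to discuss
]

eights = [s for s in delivered if s["points"] == 8]
avg_days = sum(s["days_in_progress"] for s in eights) / len(eights)

for story in eights:
    flag = "  <-- revisit in retro" if story["days_in_progress"] > 1.5 * avg_days else ""
    print(f"{story['id']}: {story['days_in_progress']} days{flag}")
```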

Like everything else in agile, estimation is a practice. You’ll get better and better with time.

Global Regulatory and Tax Accounting – Point Solution

A point solution has a short adoption time and can be integrated with the downstream and upstream applications in the business. Instead of creating one platform or product for all the global regulatory changes made by regulators such as the IRS, FINRA, SEC, FCA, and BASEL, create a point solution that can then be customized to the customer’s technology, business requirements, sophistication of the users, and the technical capabilities within the organisation.

End to End Solution


Advantages of having point solution over platform/products

  1. One size does not fit all
  2. Customer technology culture – Some companies can handle outsourced services and are not interested in customizing platforms. Some IT departments, even in the largest of companies, are more focused on user experience than on control and support in their efforts to match their innovative processes
  3. Easy to get going and straightforward – the client can decide whether to outsource the entire element to a service provider. It also reduces technology complexity: when multiple technologies are used for the same line of business, diagnosing a problem is always challenging
  4. Lower cost compared to the big enterprise platforms, and it may reduce data complexity, as each client tool has its own database with its own structure, and integrating those databases is hard
  5. The point solutions can be converted to enterprise platforms through M&A.

Key Success Indicators

  1. Regulatory risk reduction
  2. Cost Optimization through automated workflows
  3. Business Change adaptability with minimum business impact
  4. Robust control framework

 

Artificial Intelligence and Risk Management

How Can Artificial Intelligence help Investment Banking Risk Management?

Beyond Just Calculations….

Risk management has always been about more than complex calculations. There are various models, such as the binomial model, VaR (Value at Risk), and the Black-Scholes-Merton model. These models feed different simulation algorithms currently used by banks, such as Monte Carlo simulation, GARCH(1,1) (generalized autoregressive conditional heteroskedasticity), and EWMA (exponentially weighted moving average). CME (Chicago Mercantile Exchange) developed the PC-SPAN risk management algorithm for portfolio margining.

  • SPAN has been reviewed and approved by market regulators and participants worldwide.
  • SPAN is the official Performance Bond mechanism of over 50 exchanges and clearing organizations worldwide, making it the global standard for portfolio margining.
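To make the flavor of these calculations concrete, here is a minimal sketch of an EWMA volatility estimate feeding a one-day parametric VaR. The return series and portfolio value are made up, and λ = 0.94 with a 2.326 z-score follows the common RiskMetrics-style convention; this is not SPAN itself:

```python
# EWMA volatility (lambda = 0.94) feeding a one-day 99% parametric VaR.
returns = [0.002, -0.011, 0.005, -0.020, 0.007, -0.003]  # daily returns (made up)
lam = 0.94

variance = returns[0] ** 2
for r in returns[1:]:
    variance = lam * variance + (1 - lam) * r ** 2  # EWMA recursion

sigma = variance ** 0.5
portfolio_value = 10_000_000
var_99 = 2.326 * sigma * portfolio_value  # 99% one-day parametric VaR
print(f"EWMA vol: {sigma:.4%}; 1-day 99% VaR: ${var_99:,.0f}")
```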

Risk management is the application of the risk management process, which starts with risk capture.

The future challenge is to integrate risk management into every area of a company (operational, economic, and strategic), which means Enterprise Risk Management (ERM) will be a necessity for future management processes.

The latest development shaping up in fintech-focused risk management is the application of Artificial Intelligence to financial risk management.

How can the methodologies of quantitative risk analysis, used to develop formal risk management, leverage artificial neural networks?

What is similar between an engineer controlling an industrial facility and a bank operations manager controlling payment processing? Both deal with operational risks that require immediate action at the earliest sign of trouble, and the monitoring process is essentially the same for both.


Financial institutions are primarily risk managers, managing a variety of financial risks: market, credit, operational, currency, liquidity, and others. For robust risk management, a bank needs to embrace data-informed technologies across these functions.

 


Why Do We Need the Black Box?

Data-Informed Operations

Data-informed operations are the basis of day-to-day work. No matter how sophisticated the data collection and processing systems are, a trained human is ultimately responsible for making critical decisions. Bank operations managers can perform exception processing and error recovery based on system-generated communications.

Any analytics application guiding their decisions submits its findings to a human, who then effects any changes to operations. This is why operations departments are run by large teams that are constantly challenged to improve their effectiveness through better processes and the latest cutting-edge technology.

Chasing False Alarms

Risk management teams use expert systems, primarily rules-based, to monitor operations and generate alerts. Expert risk managers set alerts based on rules, historical thresholds, and specific KPIs. Tuning each of these over time to maintain the balance between risk exposure and team efficiency takes up a lot of the experts’ time, and that has its consequences.

One of the biggest challenges in improving effectiveness is the false alarms raised by the analytics technologies currently in use. Not being sensitive enough means that risk exposure is higher; being too sensitive means the operations team, already under pressure, is chasing false alarms.
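A toy sketch of that trade-off, with entirely fabricated scores and labels: sweeping a rule threshold across labeled historical transactions shows missed exceptions and false alarms moving in opposite directions.

```python
# (anomaly_score, was_a_real_exception) pairs - fabricated for illustration.
history = [(0.91, True), (0.35, False), (0.72, True), (0.66, False),
           (0.20, False), (0.88, True), (0.55, False), (0.47, True)]

for threshold in (0.3, 0.5, 0.7, 0.9):
    missed = sum(1 for score, real in history if real and score < threshold)
    false_alarms = sum(1 for score, real in history if not real and score >= threshold)
    print(f"threshold {threshold}: missed={missed}, false_alarms={false_alarms}")
```

Every choice of threshold buys fewer false alarms at the price of more missed exceptions, which is exactly the balance the expert tuners maintain by hand.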

No algorithm is useful in isolation; what matters is how it interacts with its environment (data sampling, filtering, and reduction) and how it manipulates or alters that environment. Therefore, the algorithm depends on an understanding of the environment and a way to manage it.


Rules-based systems pose a major dilemma for an operations person: because of the cost of missing an actual exception, models may be tuned extremely conservatively. This significantly increases operational cost, but it also creates “alarm fatigue,” in which operators come to expect false alarms to such an extent that they miss a genuine positive and allow an improper transaction.

Harry Henderson proposed an AI model that has both the old rules memory and a working memory, where the model intelligently learns from the current environment and the rule-matching system quickly re-tweaks the rules so as to avoid false alarms without losing the original exceptions.

 


 

Trends and Human Processing (What is in the BLACK BOX?)

Robust risk management means dealing with real-time transactional data as well as historical trends and learnings; there is an important aspect of time that affects how decisions are made. In general, humans are good at interpreting simple trends by looking at slopes and levels, but they have limitations when describing complex patterns. A solution to this problem is for expert AI systems to encode these trends.


The problem for an AI system arises when different pieces of information don’t arrive at the same time or rate; incorporating the trends in such data into any AI system tends to be difficult.

For example, in a Monte Carlo simulation the inputs may be fixed, but the frequencies of inputs A, B, C, and D may vary, so the random numbers generated by the Monte Carlo model (MCM) may not follow the intended probability distribution function (PDF). To ensure that the various risks have been factored in, the outputs of the MCM are evaluated and hypothesis-tested using multiple regression against the inputs to validate the confidence level of the model.

A critical aspect for the risk modeler is selecting the appropriate distribution function for the data available; it can be a log function, a normal distribution, a chi-squared distribution, etc. The modeler also needs to understand the behavior of the data in practice; typically this is based on the available historical database.
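As an illustrative sketch of that workflow, assuming a normal distribution with made-up parameters (a real modeler would fit the distribution to the historical database first):

```python
# Monte Carlo sketch: sample an assumed loss distribution, read off risk measures.
import random

N = 100_000
mu, sigma = 0.0005, 0.012        # assumed daily return distribution (illustrative)
portfolio_value = 5_000_000

losses = sorted(-portfolio_value * random.gauss(mu, sigma) for _ in range(N))
var_95 = losses[int(0.95 * N)]              # 95th percentile loss
tail = losses[int(0.95 * N):]
expected_shortfall = sum(tail) / len(tail)  # mean loss beyond VaR
print(f"95% VaR: ${var_95:,.0f}; expected shortfall: ${expected_shortfall:,.0f}")
```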


Monte Carlo Simulation Model for risk assessment

This problem is exacerbated in financial services applications, where trends form (and change) over periods of days, weeks, or even years. Operations users cannot be expected to recognize long-term trends in customer behavior without expert system assistance. The result of these difficulties is an increase in the effort people have to put into confirming alarms by interpreting patterns.

 

Expert instead of Learning Systems (Mind in the Box)

The next issue is that expert systems do not change by themselves; they have to be programmed by experts. The main advantage of an AI system is that the whole process (training and testing) mimics human reasoning: learning occurs in the minds of experts, who then apply the lessons of their learning to the next version of the rule engine.


With the rapidly changing financial business and data landscape, the operational systems have not evolved quickly enough. This leads to more risk exposure and less optimized use of the margin money.

For example, consider the game of chess.


But then the question comes: can machines ever have a mechanism to reach a conclusion based on common sense that goes beyond logical reasoning? Can machines present facts that are beyond mathematical formulas? Can machines make logical deductions about cases with the rarest of rare possibilities of occurrence, to ensure optimal use of time and resources?

Whether AI systems get a thumbs up or thumbs down in the future is something only time will tell, as the commercial application of AI has to withstand the challenge of diversity: different banks, insurance companies, funds, and financial firms don’t speak the same risk language. Nevertheless, each of them performs cost-benefit analysis, sensitivity analysis, and scenario analysis, which permits both quantitative and qualitative analysis.

 

Happy Reading!!

Abhinav Gupta

Reference data Systems – Legacy Modernization and Transformation

What is reference data in the financial industry?

The industry definition of reference data is the foundational data that provides the basis to generate, structure, categorize, or describe business transactions. Reference data is the basis for viewing, monitoring, analyzing, and reporting on transactions. There are five main elements of a financial transaction.

Drivers for Legacy modernization of reference data systems

Reference data systems are critical for a bank or financial institution and are a core asset of the bank. These systems should be adequately managed, governed, and enhanced in a systematic fashion. Reference data impacts all the operational functions of a bank. However, when it comes to managing these reference data systems to drive the business, most banks and financial institutions are running on an old technology stack.

With continuously changing regulatory requirements and increasing physical and information security challenges, it has become imperative for banks to use twenty-first-century technology: to simplify the management of financial instruments, client and counterparty accounts, market data, and historical transactional information such as settlement instructions, and to minimize risk by reducing overall complexity.

Key Points considered for defining the modernization initiative at any financial institution


  • Decompose monolithic applications into discrete services and process flows by creating componentized applications and services which are agnostic in nature
  • Architect for real-time straight-through processing: prefer to eliminate batch cycles, which helps move the system from an end-of-day (EOD) process to a near-real-time process
  • Segregation into business services and common services, which would decouple the core business service system with the common services. For example – creating a common module for tax calculation for different types of trades and asset service transactions
  • Provide centralized user access/experience via Securities Workstation, which reduces the complex configuration management systems. Users should be allowed to login into multiple systems using single sign-on
  • Business exceptions should be handled by a standard work-item management layer: a proper workflow management system with four-eyes (maker-checker) review reduces operational inefficiencies
  • Data stored in a golden master database, creating a golden copy helps reduce the overall risk associated with the operational workflow and running predictive analytics on top of golden copy database would yield a better result with higher confidence levels

 

Abhinav Gupta

Alternative Medicine and Anti-Cancer Diet


I was researching for someone who suffers from multiple myeloma (blood cancer), stage III(a). He has been undergoing chemotherapy, and some of the treatments below have really helped him withstand the side effects of chemo and steroids.

Some of the anti-cancer diets that were helpful:

The Budwig diet is rich in vegetables, fibre, and fruits. You also need to avoid meat, sugar, and fats like margarine, salad oil, and butter.

The Budwig protocol is reported to have an 80 to 93% success rate in healing patients with cancer, based on Dr. Budwig’s report and other organisations that support his method of treating cancer.

Normally, flax seed is taken orally, but in severe cases it is used in enema form. The other part of the Budwig protocol is a specialised diet. Results are usually noticed within 90 days, and in some cases after a week. Patients with cancer should continue the protocol for a minimum of six months, regardless of whether the symptoms disappear.

In a bowl, add 1 tsp of honey and 2 tbsps of freshly ground flax seeds. It is important that you use only freshly ground flax seeds.

Add fresh organic fruits mixture (fruits like peaches, apples, berries, grapes and others). Do not use bananas because, according to Dr. Budwig, this can quickly increase the blood sugar levels in cancer patients.

Mix 3 tbsps of flax seed oil with 100g of quark/ cottage cheese. Add 3 tbsps of unhomogenized milk to produce a smooth mixture. Blend it to get it mixed thoroughly.

You can add more flax seed oil depending on your personal taste; if the quark cannot absorb the oil fully, add more quark as needed. Add the mixture to the bowl. You can add cinnamon or vanilla to improve the flavor, but this is optional.

The cinnamon will help in regulating blood sugar. You can add organic nuts on top.

Vitamin C injections and oxygen therapies should not be combined with the protocol.

All viruses, parasites, and pathogens are, by nature, anaerobic, meaning that they thrive in the absence of oxygen but cannot survive in an oxygen-rich environment.

Hyperbaric Oxygen Therapy

The same goes for cancer cells: they cannot exist in an oxygen-rich environment. HBOT, or Hyperbaric Oxygen Therapy, involves breathing pure oxygen in a closed chamber that has been pressurized at 1.5 to 3 times normal atmospheric pressure.

The Scientific Evidence Showing HBOT is Effective

Scientific evidence shows HBOT is very effective in providing a remedy for a number of diseases. The Committee on HBOT of the Undersea and Hyperbaric Medical Society suggests it for the treatment of:

  • Abscess in the brain or head
  • Anemia due to severe blood loss
  • Arterial gas embolism
  • Blockage of the retinal artery
  • Carbon monoxide poisoning
  • Certain wounds that are not healing with standard treatment
  • Crushing injuries in which there is not enough oxygen to the tissues
  • Decompression sickness

Fighting Cancer with Vitamins and Supplements

There are many vitamins and minerals that people claim are very effective in treating cancer; however, only a few have actually been shown to be effective. Turmeric and its extract curcumin happen to be among them.

One of the most powerful and under-recognized of these is curcumin, which is extracted from the Indian spice turmeric and known for its chemopreventive and anti-inflammatory power. Curcumin has 240 published studies available in the global scientific literature and is described as one of the most powerful cancer-preventing agents.

Curcumin targets 10 causative factors involved in the development of cancer. Interrupting any of these factors will protect you from developing cancer; disrupting more than one will provide better protection, including the prevention of DNA damage.

By blocking NF-kB, an inflammatory master molecule, curcumin reduces cancer-causing inflammation, lowering the levels of inflammatory cytokines throughout the body. It also interferes with the formation of dangerous advanced glycation end products that promote inflammation, which may cause cancerous mutations.

Curcumin has the ability to alter cellular signaling to maintain healthy control over cellular replication, which governs the cellular reproductive cycle and helps stop the uncontrolled propagation of new tissue in tumors. It stimulates apoptosis in reproducing cancer cells without affecting healthy tissue and makes tumors more susceptible to cell-killing treatments.

Vitamin D is said to be cancer’s worst enemy.

Vitamin D can help you prevent more than 16 various types of cancer, which include breast, lung, ovarian, prostate, pancreatic and skin cancers.

Actually, if you have a good amount of vitamin D in your body, your risk of having cancer is reduced by 77%. Vitamin D has shown preventative benefits for many diseases, which include diabetes, cancer and heart disease, and can even reduce chronic pain.

Theories associating vitamin D deficiency to cancer have been confirmed and tested in over 200 epidemiological studies, and understanding of its physiological basis stems from over 2,500 laboratory studies.

The most important factor is keeping your vitamin D serum level between 50 and 70 ng/ml. Vitamin D from sun exposure or a safe tanning bed is the best way to optimize your vitamin D levels. If you take oral vitamin D and have cancer, you should monitor your vitamin D serum levels regularly and also supplement vitamin K2, since K2 deficiency is actually what produces the symptoms of vitamin D toxicity.

Gerson Therapy

How Gerson Therapy is Administered

The Gerson Therapy treatment plan should be followed properly. Some of the important parts of the regimen include the following:

  • Drinking 13 glasses of juice a day. The juice must be made from freshly squeezed organic vegetables and fruits and should be taken once every hour.
  • Consuming vegetarian meals of organically grown vegetables and fruits.
  • Taking supplements such as potassium, coenzyme Q10 injected with vitamin B12, vitamins B3, C, and A, pancreatic enzymes, pepsin, and flax seed oil.
  • Taking coffee or chamomile enemas regularly to eliminate toxins from the body.
  • Preparing food without spices, oils, or salt, and without using aluminium utensils or cookware.

Hope that it helps

Abhinav Gupta

The Business Analyst Career Roadmap

A BA career is a journey with many entry and exit points. The current BA position has many role families, and a BA can build expertise in more than one role. For example, a functional analyst can also have process analysis experience. Therefore your options for career growth have multiple entry and exit points.

Role Families 

1. Business Focused Role Families 

  • Business Requirements Analyst: The business requirements analyst is tasked with helping the business to meet its objectives and goals. He/she will understand how work is being conducted, and through analysis, determine solutions to the issues. He/she will have in-depth business knowledge typically related to a department (e.g. customer service, manufacturing). This role may conduct a feasibility study or justify the investment in change through a business case.
  • Business Process Analyst: A business process analyst specializes in bringing change to organizations through the analysis, design, and implementation of the business processes that keep teams running and the management of changes to those processes. Business process analysts have broad competencies in identifying the current state of processes, eliciting useful and harmful attributes of them, documenting models of the processes and facilitating stakeholder groups to a consensus regarding new business process designs.
  • Decision Analyst – In Demand: The decision analyst (often referred to as a business intelligence analyst) utilizes technologies, methods, and practices for continuous iterative exploration and investigation of past business performance to gain insight and drive business planning. The decision analyst helps the business develop new ideas and understand business performance based on data and statistical methods.

 

2. IT Analyst Role Families 

  • Business Systems Analyst: The business systems analyst utilizes broad IT and in-depth industry knowledge to identify, develop, and implement effective technology solutions that address business needs.

 

  • Systems Analyst:  A systems analyst performs business analysis tasks through specialization in understanding the business usage of information technology (IT) and helping technology add value to the business. He or she knows and is comfortable with a variety of technical architectures and platforms, and understands IT capabilities and which applications in an organization deliver various capabilities.

 

  • Functional Analyst: The functional business analyst performs business analysis tasks by specializing in a particular technology product and its features, functions, and capabilities. The functional business analyst has deep knowledge of the technology product and has experience in a variety of implementation contexts across organizations, and sometimes industries. He or she helps organizations and stakeholders define usage and integration with other systems and implements the features and functions of the technology product to meet business requirements.

 

  • Service Request Analyst: A service request Analyst performs business analysis tasks by specializing in supporting stakeholders of a particular system application, maintaining the system, and handling user inquiries, user issues, and enhancements to the system. This Analyst has a deep understanding of a specific application or set of applications he or she supports, how users use the application, and what other systems integrate with the application.

 

  • Agile Analyst: In the agile world, software requirements are developed through continual exploration of the business need. Requirements are elicited and refined through an iterative process of planning, defining acceptance criteria, prioritizing, developing, and reviewing the results. Throughout the iterative planning and analysis of requirements, business analysis practitioners must constantly ensure that the features requested by the users align with the product’s business goals, especially as the business goals evolve and change over time. The agile analysis is a specialty often held by Business Systems and IT Analysts.

BA Leadership 

The following is a list of roles within this family:

  • BA Project Lead
  • BA Program Lead
  • BA Practice Lead
  • Relationship Manager
  • BA Manager

Enterprise Level Roles 

  • Enterprise Architect:  The enterprise architect aligns IT infrastructure with IT and business strategy supporting the goals and objectives, and the successful implementation of change. He/she develops formal standards, manages the enterprise architecture processes and guides the architectural team, CIO, CEO, and Business Architect.
  • Business Architect: This role works to create and maintain the business architecture. He or she leverages enterprise capabilities and efficient usage of process, technology, data and people, and aligns these capabilities to the business strategy.

Blockchain 1.0 – How Bitcoin works

The blockchain revolution is broken down into three categories: Blockchain 1.0, 2.0, and 3.0.

  • Blockchain 1.0 is currency, the deployment of cryptocurrencies in applications related to cash, such as money transfer, remittance, and digital payment systems.
  • Blockchain 2.0 is contracts: the entire slate of economic, market, and financial applications using the blockchain that are more extensive than simple cash transactions: stocks, bonds, futures, loans, mortgages, titles, smart property, and smart contracts.
  • Blockchain 3.0 is blockchain applications beyond currency, finance, and markets—particularly in the areas of government, health, science, literacy, culture, and art.


Bitcoin

  • Bitcoin is digital cash. It is a digital currency and online payment system in which encryption techniques are used to regulate the generation of units of currency and verify the transfer of funds, operating independently of a central bank.
  • Bitcoin is pseudonymous (not anonymous) in the sense that public key addresses (27–32 alphanumeric character strings; similar in function to an email address) are used to send and receive Bitcoins and record transactions, as opposed to personally identifying information.
  • Bitcoins are created as a reward for computational processing work, known as mining, in which users offer their computing power to verify and record payments into the public ledger. Individuals or companies engage in mining in exchange for transaction fees and newly created Bitcoins. Besides mining, Bitcoins can, like any currency, be obtained in exchange for fiat money, products, and services. Users can send and receive Bitcoins electronically for an optional transaction fee using wallet software on a personal computer, mobile device, or web application.
  • Bitcoin: A Peer-to-Peer Electronic Cash System

Bitcoin Landscape


How Bitcoin Works

Suppose Alice wants to buy a coffee at Bob’s Cafe, which accepts payment in bitcoin.

The payment request QR code encodes the following URL, defined in BIP0021:

  bitcoin:1GdK9UzpHBzqzX2A9JFP3Di4weBwqgmoQA?amount=0.015&label=Bob%27s%20Cafe&message=Purchase%20at%20Bob%27s%20Cafe

The bitcoin network can transact in fractional values, e.g., from millibitcoins (1/1000th of a bitcoin) down to 1/100,000,000th of a bitcoin, which is known as a satoshi.

In simple terms, a transaction tells the network that the owner of some bitcoins has authorized the transfer of some of those bitcoins to another owner. The new owner can now spend these bitcoins by creating another transaction that transfers them to yet another owner, and so on, in a chain of ownership.
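As a small illustration, the Python standard library is enough to parse the payment URL above back into its parts (a sketch, not a complete BIP0021 parser):

```python
from urllib.parse import urlsplit, parse_qs

uri = ("bitcoin:1GdK9UzpHBzqzX2A9JFP3Di4weBwqgmoQA"
       "?amount=0.015&label=Bob%27s%20Cafe&message=Purchase%20at%20Bob%27s%20Cafe")

parts = urlsplit(uri)           # scheme='bitcoin', path=<address>
params = parse_qs(parts.query)  # percent-decodes the parameter values
print("address:", parts.path)                  # 1GdK9UzpHBzqzX2A9JFP3Di4weBwqgmoQA
print("amount :", float(params["amount"][0]))  # 0.015 (BTC)
print("label  :", params["label"][0])          # Bob's Cafe
```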

Bitcoin Transactions

Transactions move value from transaction inputs to transaction outputs. An input is where the coin value is coming from, usually a previous transaction’s output. A transaction output assigns a new owner to the value by associating it with a key. The destination key is called an encumbrance.
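A simplified, purely illustrative model of that structure (real transactions use locking scripts and binary serialization, not dictionaries):

```python
# Illustrative shape of a transaction: value flows from inputs to outputs.
tx = {
    "inputs": [
        # where the value comes from: a previous transaction's unspent output
        {"prev_txid": "<txid of the output being spent>", "vout": 0},
    ],
    "outputs": [
        # where the value goes: each output is locked to a key (the encumbrance)
        {"value_btc": 0.015, "encumbrance": "<script payable to Bob's key>"},
    ],
}
print(sum(o["value_btc"] for o in tx["outputs"]), "BTC assigned to new owners")
```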

Constructing a Bitcoin Transaction 

  • Step 1 – Alice’s wallet application contains all the logic for selecting appropriate inputs and outputs to build a transaction to Alice’s specification. Alice only needs to specify a destination and an amount; the rest happens in the wallet application without her seeing the details.
  • Step 2 (getting the wallet balance in sync with the network) – If the wallet application does not maintain a copy of unspent transaction outputs, it can query the bitcoin network to retrieve this information, using a variety of APIs available from different providers or by asking a full-index node via the bitcoin JSON-RPC API.
  • Step 3 – With this information, Alice’s wallet application can construct a transaction to transfer that value to new owner addresses.
  • Step 4 (creating the output) – In simpler terms, Alice’s transaction output will contain a script that says something like, “This output is payable to whoever can present a signature from the key corresponding to Bob’s public address.”
  • Step 5 (getting the change back into Alice’s wallet) – This transaction will also include a second output, because Alice’s funds are in the form of a 0.10 BTC output, too much money for the 0.015 BTC cup of coffee. Alice will need 0.085 BTC in change. Alice’s change payment is created by Alice’s wallet in the very same transaction as the payment to Bob.
  • Step 6 (adding the transaction fee) – For the transaction to be processed by the network in a timely fashion, Alice’s wallet application will add a small fee. Alice creates only 0.0845 BTC as the second output, so there is 0.0005 BTC left over. The resulting difference is the transaction fee collected by the miner for including the transaction in a block and putting it on the blockchain ledger (see the sketch below).
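The arithmetic behind steps 5 and 6, as a tiny sketch using the amounts from the example:

```python
# Change and fee from Alice's coffee purchase (amounts from the example above).
input_btc = 0.10     # Alice's single unspent output
payment_btc = 0.015  # the coffee, paid to Bob's Cafe
fee_btc = 0.0005     # left unassigned; collected by the miner

change_btc = input_btc - payment_btc - fee_btc   # 0.0845 back to Alice
print(f"outputs: {payment_btc} BTC to Bob, {change_btc:.4f} BTC change; "
      f"fee = inputs - outputs = {fee_btc} BTC")
```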

Adding the Transaction to the Ledger

  • The transaction created by Alice’s wallet application is 258 bytes long and contains everything necessary to confirm ownership of the funds and assign new owners. Now, the transaction must be transmitted to the bitcoin network where it will become part of the distributed ledger (the blockchain).
  • If Bob’s bitcoin wallet application is directly connected to Alice’s wallet application, Bob’s wallet application might be the first node to receive the transaction. However, even if Alice’s wallet sends the transaction through other nodes, it will reach Bob’s wallet within a few seconds. Bob’s wallet will immediately identify Alice’s transaction as an incoming payment because it contains outputs redeemable by Bob’s keys. Bob’s wallet application can also independently verify that the transaction is well formed, uses previously unspent inputs, and contains sufficient transaction fees to be included in the next block.


Reference – Minimum Viable Blockchain

Thanks

Abhinav Gupta

Next Topic – Bitcoin Mining for Dummies