Friday, July 31, 2015

Research about talking to strangers

Knowledge Sharing:
1. People who are optimistic, confident, and demonstrably competent generate trust. Are you one of these?
2. However, talking to people at this point is pretty second nature to me. I guess it also helps that I have an hour-long conversation with a complete stranger 3 times a week for the podcast. I will start conversations with anybody who is standing next to me if I’m out at a bar or even if I’m sitting on my surfboard between waves.
3.  Walking up to people you don't know and striking up a conversation is the social equivalent of skydiving. It's fun and interesting, but risky. It might also change your life. If you make the effort despite your fears about talking to strangers, you might accidentally have the time of your life. So read on, aspiring social skydiver.
4.  Practice until talking to strangers is second nature.
5.  The best way to practice is to set weekly goals.
6.  Attend social events by yourself.
7.  The more you think about it, the more anxious you'll get. When you see someone you want to talk to, break the ice immediately, before you have a chance to talk yourself out of it. The adrenaline of the moment will carry you past your nerves.
8.  But no one knows how nervous you are but you! Just pretend you're more confident than you actually feel, and the person you're talking to will see what you want them to see.
9.  Fake it till you make it
10.  Remember, the more you practice talking to strangers, the less you'll have to fake your confidence.
11.  But as a shy person, you know perfectly well that sometimes, people just don't feel like talking. If someone rejects your approach, don't take it personally!
12.  Don’t let rejection get you down
14.  Try to see failure as exciting — it’s a chance to learn and improve
15.  People don’t bite. The worst thing that can happen is that someone will say they're busy or want to be left alone. That's not the end of the world!
16.  Nobody's watching or thinking about you but you. Don't worry about people laughing at you — they're all busy thinking about themselves.
17.  If you look anxious or grim when you open up a conversation, you’re going to put the other person on edge immediately.
18.  Even if you feel like a mess inside, try to look relaxed and friendly to put other people at ease. This will result in better, longer conversations.
19.  Make eye contact. Instead of fiddling nervously with your phone, look around the room and observe the people. Make eye contact with people to see who else is looking for conversation.
20.  Smile whenever you make eye contact with people, even if you don't plan to talk to them. It both gives you practice in non-verbal communication and raises the odds of someone being receptive to a conversation.
21.  Open up your body language. Throw your shoulders back, stick your chest out, and raise your chin. The more confident you look, the more people will want to talk to you.
22.  Don't cross your arms over your chest. People might interpret crossed arms to mean that you're closed off or uninterested in conversation.
23.  Open nonverbally before you start talking to someone.
24.  Others might find it strange if you start talking to them without giving any hints that you were going to approach them. Instead of walking up and starting a surprise conversation with the side of someone's head, ease into it nonverbally. Make eye contact and give a smile to establish a connection before trying to start a conversation.
25.  If you're doing a cold approach (not reacting to something you've both observed), start small. Instead of opening with a question about life goals, just make an observation or ask for a favor.
26.  Open with a small interaction.
27.  Once you've opened with your small interaction, you want to find out the other person's name. The best way to do that is simply to offer your own name. Etiquette will basically force the other person to introduce themselves in kind. If he ignores your introduction, he's either in a very bad mood or simply rude — either way, it's best you don't try to pursue this conversation.
28.  After you've finished your opening interaction, just say "I'm [your name], by the way." Offer a firm handshake as you're introducing yourself.
29.  "What have you been up to today?" instead of "Are you having a good day?"
"I've seen you here a lot. What keeps you coming back? What's so great about this place?" instead of "Do you come here often?"
30.  People enjoy conversations more when they feel like they have something to teach.
31.  Finding common ground in a conversation is very important. As strange as it might seem, though, a good disagreement can be a great way to form a new relationship. Show the person you're trying to talk to that hanging out with you won't be boring.
32.  Keep the debates light-hearted. If you see the other person getting worked up, back off immediately.
33.  Make sure to smile and laugh often while debating to let everyone know you're having a good time, not getting upset.
34.  A debate about religion or politics might result in hurt feelings, but one about the best travel spots or football team will stay light-hearted and fun. Other safe topics might include movies, music, books, or food.
35.  You might be tempted to stick to a prepared list of conversation topics. Doing that would limit the conversation's potential, though! Let the conversation grow organically. You can try to steer it gently toward topics you're more comfortable with, but don't manhandle it awkwardly. If your partner wants to talk about something you don't know much about, just admit it. Ask them to explain it to you and enjoy learning something!
36.  Keep it light during a fleeting interaction.
37.  Have fun during a longer interaction.
38.   You might be at a professional conference. In any networking interaction, you want people to get the impression that you're confident and capable. Even if you feel anxious about talking to a stranger, fake it till you make it.
39.  Stick to talking about the industry you work in. Show people that you know your stuff and are good at your job.
40.  Never ask closed (yes/no) questions. Always ask open-ended questions.
41.  When you ask an open question, people are going to give you a long and detailed answer. All you have to do then is ACTUALLY LISTEN to what the other person is saying. Then, when you hear something interesting, make a comment about your own experience or ask a more detailed question about that topic.
42.  I guess we started sharing more personal details (which tends to happen if you are being a good listener) during the conversation. I had mentioned that there was a brief period in my entrepreneurial career where I was selling a product I wasn’t proud of and was actually embarrassed to be associated with it, and it really sucked my soul dry.
43.  You see, when you ask an open question, the other person will tell you all kinds of stuff. Then all you have to do is listen deeply, look for something interesting and either comment on that or ask a clarifying question.
44.  If you show more interest in someone, they will often take a deeper interest in you. It’s a natural human instinct to reciprocate, so the more you hear them out, the more they want to hear you out. The “secret” is to hear them out FIRST.
45.  When I was younger, when I was having conversations with other people, I would simultaneously have a conversation with my inner voice. The problem with this approach is I wasn’t able to listen deeply to the other person because I was distracted by also listening to myself.
46.  The solution is to turn off (or at least turn down) the volume on your inner voice, especially when you’re in the middle of a conversation with another person. Just pay attention to them.
47.  ASSUME THE OTHER PERSON IS MORE NERVOUS THAN YOU. The majority of people I know feel at least a little awkward meeting new people (though some people are better at hiding it than others). Most people also tend to assume that everyone else is more socially comfortable than they are.
48.  The way to break this cycle is to assume everyone else is more nervous or feels more awkward than you. Then make it your job to help others feel comfortable by reaching out and engaging them. Even if they are not great conversationalists, that’s okay. Just ask them about themselves!
49.  The main reason my people skills were functional when I was recruiting was that I happened to do a number of extracurricular activities in high school and college that involved working with other people.
50.  Every little bit of interpersonal interaction helps improve those skills. If you can, join groups, clubs, meetups, Toastmasters or industry associations so you have plenty of opportunities to practice.
51.  The key is to use the skills in an environment where there’s no downside to doing it poorly.
52.  The root cause of these three dynamics is low or diminished self-esteem. One trademark of low self-esteem is the presumption that one is somehow inferior to others or, on the flip side, that most people are better than you.
53.  Another way low self-esteem expresses itself is by acting superior to other people. You might notice this as arrogance. When you’re right and have high self-esteem, there is no need to convince others you’re right. It’s enough simply to know you are right and they are wrong.
54.  This behavior is a severe overcompensation for low self-esteem. Basically, these kinds of people don’t feel good about themselves and don’t want anyone to discover this “fact”, so they act arrogant and cocky, with a lot of (false) bravado to hide their insecurity.
55.  In the general population, I’d say 75% of people are either One Up or One Down. Within McKinsey, it’s a fairly open secret that 80%+ of McK consultants are One Down-type people—myself included.
56.  If you want to develop exceptional people skills, you absolutely, positively have to understand how this dynamic works.
57.  You need to learn to recognize it in yourself, recognize it in others, and know how to work with yourself and others, given their tendencies.
58.  The key to winning over a One Up client is to let your idea be his idea. He is not threatened by “his” ideas. But he is threatened by an outside idea (that he did not think of himself), especially if it’s a good idea. It is perceived as proof that he wasn’t adequate to the task at hand.
59.  When you understand the humanity behind such outwardly aggressive behavior, you learn to feel compassion for the other person. They will detect this and allow you to develop a closer relationship with them because somehow you “get” them.
60.  Every meeting should be a conversation, not a sales pitch. Spend at least half of every customer meeting listening. And make certain the conversation is substantive and about real business issues, not just office patter or sports chit-chat.
61.   We quickly realized that channels are where your customer is going to look for your product, not necessarily where they will be hanging out.
62.  Don’t interrupt them and don’t ask questions until the customer is done talking.
63.  Look decent. We have had much better results when we look our best than when we look like we’re homeless. For example, I’ve had better results with my beard trimmed. It may seem a bit shallow, but remember: you are talking to strangers who have less than a minute to figure out whether they can trust you.
64.  Accept the craziness. We have done interviews where people have dropped lines like, “You look like you smoke weed. I love weed, and my problem is…” – in all honesty, I should have trimmed my beard. We’ve also had comedic responses like “Honey, my only problem was my ex-husband and he’s gone.” The point is, people are crazy.
65.  Accept the rudeness. We have done interviews where people try to trash our idea or believe they are startup experts and try to tell you what you should be doing. Some people simply feel threatened by someone with bigger ambitions; don’t take it personally.
66.  Don’t correct your customer unless it’s necessary. Some people perceive it as rude. Does it really matter? Does it affect your value proposition? Ask questions around the issue so you can find out the information you need.
67.  Highlight pains and excitement. Make sure you highlight in your notes their pains, the things that got them excited, and the things they hated. It will make analyzing the data that much easier.
68.  Lastly, no matter how many times you do it, you probably won’t be able to do the interview in the same order every time: sometimes because you forget the questions, other times because you learn that the pains are completely different from what you were expecting. That’s the beauty of doing personal interviews over surveys and anything with a defined structure: an interview can go many ways, so just have a conversation with your customer and let it flow.


Examples:
1.  For CEOs, I usually ask, “So how did you get started in XYZ field?” I’ve never gotten anything shorter than a 10-minute answer. I’ve even gotten 30-minute answers.
2.  This is a good 80/20 rule of thumb for an introvert: Ask one question and get 10–30 minutes of conversation out of it.
3.  It is a good question. So think of a few open-ended questions, and then ask them.

Warnings:
  • You won’t know what to say when you approach people.
  • You might end up standing around looking uncomfortable.
  • You’ll be almost visibly shaking for the first few people you approach.
  • You might get off to a good start in a conversation, and then get stuck, not knowing what else to say (uncomfortable silences).
  • You’ll tell yourself, “This is too hard! I think I’ll just rent a movie instead.”
  • Some people will think you're hitting on them.
  • Don't feel too big.


Principles:
1. Just Say Hello
2. Don’t Expect Anything
3. Get out of your Head

References:
http://www.caseinterview.com/how-to-talk-to-strangers
http://usabilityworks.com/talking-to-strangers-in-the-street-recruiting-by-intercepting-people/



Thursday, July 30, 2015

Research about Amazon S3 API

Knowledge Sharing:
1. The Amazon S3 REST API uses a custom HTTP scheme based on a keyed-HMAC (Hash Message Authentication Code) for authentication. To authenticate a request, you first concatenate selected elements of the request to form a string. You then use your AWS secret access key to calculate the HMAC of that string. Informally, we call this process "signing the request," and we call the output of the HMAC algorithm the signature, because it simulates the security properties of a real signature. Finally, you add this signature as a parameter of the request by using the syntax described in this section.
2. When the system receives an authenticated request, it fetches the AWS secret access key that you claim to have and uses it in the same way to compute a signature for the message it received. It then compares the signature it calculated against the signature presented by the requester. If the two signatures match, the system concludes that the requester must have access to the AWS secret access key and therefore acts with the authority of the principal to whom the key was issued. If the two signatures do not match, the request is dropped and the system responds with an error message.
3. Developers are issued an AWS access key ID and AWS secret access key when they register.
4. The Signature element is the RFC 2104 HMAC-SHA1 of selected elements from the request, and so the Signature part of the Authorization header will vary from request to request. If the request signature calculated by the system matches the Signature included with the request, the requester will have demonstrated possession of the AWS secret access key. The request will then be processed under the identity, and with the authority, of the developer to whom the key was issued.
5. For Amazon S3 request authentication, use your AWS secret access key (YourSecretAccessKeyID) as the key, and the UTF-8 encoding of the StringToSign as the message. The output of HMAC-SHA1 is also a byte string, called the digest. The Signature request parameter is constructed by Base64 encoding this digest. (A worked Python sketch follows the pseudocode below.)
6. The MD5 message-digest algorithm is a widely used cryptographic hash function producing a 128-bit (16-byte) hash value, typically expressed in text format as a 32 digit hexadecimal number. MD5 has been utilized in a wide variety of cryptographic applications, and is also commonly used to verify data integrity. (See the short illustration after this list.)
7. Some HTTP client libraries do not expose the ability to set the Date header for a request. If you have trouble including the value of the 'Date' header in the canonicalized headers, you can set the timestamp for the request by using an 'x-amz-date' header instead. The value of the x-amz-date header must be in one of the RFC 2616 formats (http://www.ietf.org/rfc/rfc2616.txt). When an x-amz-date header is present in a request, the system will ignore any Date header when computing the request signature. Therefore, if you include the x-amz-date header, use the empty string for the Date when constructing the StringToSign. See the next section for an example.
8. A valid time stamp (using either the HTTP Date header or an x-amz-date alternative) is mandatory for authenticated requests. Furthermore, the client timestamp included with an authenticated request must be within 15 minutes of the Amazon S3 system time when the request is received. If not, the request will fail with the RequestTimeTooSkewed error code. The intention of these restrictions is to limit the possibility that intercepted requests could be replayed by an adversary. For stronger protection against eavesdropping, use the HTTPS transport for authenticated requests.
9.  To perform a specific operation on a resource, an IAM user needs permission from both the parent AWS account to which it belongs and the AWS account that owns the resource.
10. If the request is for an operation on an object that the bucket owner does not own, in addition to making sure the requester has permissions from the object owner, Amazon S3 must also check the bucket policy to ensure the bucket owner has not set explicit deny on the object.
11. The signature version 4 signing specification describes how to add authentication information to AWS requests—that is, how to sign AWS requests. As a security measure, most requests to AWS must be signed using an access key (access key ID and secret access key). If you use the AWS Command Line Interface (CLI) or one of the AWS SDKs, those tools all automatically sign requests for you, based on credentials that you specify when you configure the tools. But if you make direct HTTP or HTTPS calls to AWS, you must sign the requests yourself, using the procedure described here.
12.  When AWS receives the request, it performs the same steps that you did in order to calculate the signature. AWS then compares the signature that it calculates against the one that you send in the request. If the signatures match, the request is processed; if the signatures don't match, the request is denied.
14.  After you've completed the signing tasks, you add the resulting authentication information to the request. One option is to add it to the request using an Authorization header. (Although the header is named Authorization, the signing information is actually used for authentication—establishing who the request came from.) The Authorization header includes information about the algorithm you used for signing (SHA256), the credential scope (with your access key), the list of signed headers, and the calculated signature.
15.  In this pseudocode, Hash represents a function that produces a message digest, typically SHA-256. (Later in the process you specify which hashing algorithm you're using.) 
16.  A cryptographic hash function is similar to a checksum. The main difference is that while a checksum is designed to detect accidental alterations in data, a cryptographic hash function is designed to detect deliberate alterations.
17.  MD5 processes a variable-length message into a fixed-length output of 128 bits.
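A quick illustration of items 6 and 17: a hash function maps input of any length to a fixed-length digest. Two lines of Python (the language is incidental; the S3 API itself is language-agnostic):

import hashlib

digest = hashlib.md5(b"Hello, S3!").hexdigest()
print(digest)           # always 32 hexadecimal digits, whatever the input
print(len(digest) * 4)  # 32 hex digits x 4 bits each = 128 bits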


Authorization = "AWS" + " " + AWSAccessKeyId + ":" + Signature;

Signature = Base64( HMAC-SHA1( YourSecretAccessKeyID, UTF-8-Encoding-Of( StringToSign ) ) );

StringToSign = HTTP-Verb + "\n" +
 Content-MD5 + "\n" +
 Content-Type + "\n" +
 Date + "\n" +
 CanonicalizedAmzHeaders +
 CanonicalizedResource;

CanonicalizedResource = [ "/" + Bucket ] +
 <HTTP-Request-URI, from the protocol name up to the query string> +
 [ subresource, if present. For example "?acl", "?location", "?logging", or "?torrent"];

CanonicalizedAmzHeaders = <described in the AWS documentation referenced below>
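
A minimal Python sketch of the signing pseudocode above, using only the standard library. Treat it as an outline rather than a complete client: the bucket, key, and credentials are placeholders, and query-string subresources and multi-value x-amz-* headers are omitted.

import base64
import hashlib
import hmac
from email.utils import formatdate

def sign_s3_request(access_key, secret_key, http_verb, bucket, key,
                    content_md5="", content_type="", amz_headers=None):
    # The Date value signed here must be the same one sent in the request.
    date = formatdate(usegmt=True)  # e.g. "Fri, 31 Jul 2015 12:00:00 GMT"

    # CanonicalizedAmzHeaders: lowercase x-amz-* names, sorted, "name:value\n" each.
    # (Per item 7, if you send x-amz-date instead, it goes here and the Date
    # position in StringToSign becomes the empty string.)
    canonicalized_amz = "".join(
        "%s:%s\n" % (name.lower(), value)
        for name, value in sorted((amz_headers or {}).items()))

    canonicalized_resource = "/" + bucket + "/" + key

    string_to_sign = (http_verb + "\n" + content_md5 + "\n" + content_type +
                      "\n" + date + "\n" + canonicalized_amz + canonicalized_resource)

    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode("ascii")
    return {"Date": date, "Authorization": "AWS %s:%s" % (access_key, signature)}

# Server side (item 2): recompute the signature from the received request and
# compare; hmac.compare_digest avoids leaking information through timing:
#     if hmac.compare_digest(recomputed, presented): process the request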

References:
http://docs.aws.amazon.com/AmazonS3/latest/dev/RESTAuthentication.html#ConstructingTheAuthenticationHeader
https://kl2217.wordpress.com/2011/07/21/common-hashing-algorithms/


Common:
AWS secret access key
Cross-Origin Resource Sharing


Monday, July 6, 2015

Understand Target Marketing and Market Segmentation

Knowledge Sharing:
1. These trends, long in the making, will disrupt some businesses while unlocking new opportunities for others. Consider the example of Antonio Swad. In 1986, he moved from Ohio to Dallas to open a traditional pizzeria. Realizing that he was located in an area with a large concentration of Hispanic consumers, he changed his eatery's name to Pizza Patrón and focused his marketing efforts on the Latino community.
2. The key to success, he observes, is to realize that "this is a community that you need to serve primarily and sell to secondarily."
3. Don't break your target down too far! Remember, you can have more than one niche market. Consider whether your marketing message should be different for each niche market. If you can reach both niches effectively with the same message, then maybe you have broken down your market too far. Also, if you find that there are only 50 people that fit all of your criteria, maybe you should reevaluate your target. The trick is to find that perfect balance.
4.  You may be asking, "How do I find all this information?" Try searching online for research others have done on your target. Search for magazine articles and blogs that talk about your target market or that talk to your target market. Search for blogs and forums where people in your target market communicate their opinions. Look for survey results, or consider conducting a survey of your own. Ask your current customers for feedback.
5.  Defining your target market is the hard part. Once you know who you are targeting, it is much easier to figure out which media you can use to reach them and what marketing messages will resonate with them. Instead of sending direct mail to everyone in your ZIP code, you can send only to those who fit your criteria. Save money and get a better return on investment by defining your target audience.
6.  The beauty of target marketing is that it makes the promotion, pricing and distribution of your products and/or services easier and more cost-effective. It provides a focus to all of your marketing activities.
7.  While market segmentation can be done in many ways, depending on how you want to slice up the pie, three of the most common types are:
8.  Geographic segmentation relies on the notion that groups of consumers in a particular geographic area may have specific product or service needs; for instance, a lawn care service may want to focus their marketing efforts in a particular village or subdivision that has a high percentage of seniors.
10.  Psychographic segmentation is based on the theory that the choices that people make when purchasing goods or services are reflections of their lifestyle preferences or socioeconomic class.
11.  Demographic information is crucial for many businesses. For example, a liquor vendor might want to target their marketing efforts based on the results of Gallup polls, which indicate that beer is the beverage of choice for people below the age of 54 (particularly in the 18-34 year old age range) while those aged 55 and older prefer wine.
12.  Not all customers are the same. So stop taking a one-size-fits-all approach to your marketing and start segmenting your customers into smaller groups.
14.  Segmentation is simply a way of arranging your customers into smaller groups according to type. These distinct sub-groups or segments should be characterised by particular attributes. Now you can target specific, relevant marketing messages at each group.
15.  And it's not just about what you say. How you communicate is also vital, and segmentation often requires a carefully structured marketing mix. That's because some customers may prefer the direct approach, such as telephone marketing, while others respond better to a local advertising campaign.
16.  By increasing your understanding about what your customers are buying, you can also maximise opportunities for cross-selling or up-selling. I'm reminded of the builders merchant who sells a tonne of bricks but doesn't cross-sell by selling the sand and cement. By grouping together all the customers who regularly buy certain products, you can target them with relevant offers encouraging them to increase their spend.
17.  What's more, if you are a regular customer, a targeted message shows that you are appreciated and valued. Conversely, a general message, which doesn't acknowledge previous purchases, could well make you feel unloved and taken for granted.
18.  The key is to draw a picture of an individual that represents the type of person you are aiming at. If you take two very different types of prospect, you can see that they will have very different needs, wants, values and opinions. And they will respond quite differently depending on the marketing method you use.
19.  Being second to market is the best strategy if you're a smaller firm with fewer resources. Costs and risks are lower, and you need to focus more on differentiation than innovation to tap into a growing market.
20.  Target marketing is the overall term for directing your marketing endeavors toward a group of people. Market segmentation is the breaking down of the market into smaller groups with the intention of promoting your product or service differently to each of them.
21. Market segmentation involves grouping your various customers into segments that have common needs or will respond similarly to a marketing action. Each segment will respond to a different marketing mix strategy, with each offering alternate growth and profit opportunities.
22.  Undifferentiated Targeting: This approach views the market as one group with no individual segments, therefore using a single marketing strategy. This strategy may be useful for a business or product with little competition where you may not need to tailor strategies for different preferences.
23.  Concentrated Targeting: This approach focuses on selecting a particular market niche on which marketing efforts are targeted. Your firm is focusing on a single segment so you can concentrate on understanding the needs and wants of that particular market intimately. Small firms often benefit from this strategy as focusing on one segment enables them to compete effectively against larger firms.
24.  Multi-Segment Targeting: This approach is used if you need to focus on two or more well-defined market segments and want to develop different strategies for them. Multi-segment targeting offers many benefits but can be costly as it involves greater input from management, increased market research and increased promotional strategies.





References:
http://toolkit.smallbiz.nsw.gov.au/part/3/10/49




Saturday, July 4, 2015

Research about Docker

Knowledge Sharing:
1. Here it is important to note that Docker runs one process per container, which is a major difference from other container engines like Warden.
2. Everything at Google runs in a container. Google starts over 2 billion containers per week.
3.  But with the commoditization of containers in the form of Docker, Warden, etc., opportunities have opened up for developers and dev-ops to better manage and control the deployment and scaling of applications.
4.  Without PaaS, Docker is just a bunch of containers. The advantages of Docker are realized when used in an enterprise cloud stack or PaaS environment. The ease of deployment and portability make containers an essential part of PaaS.
5.  I believe PaaS and Containers like Docker are not competing technologies, but rather complementary. In fact this elastic mesh of containers composing an Application is what makes PaaS so efficient.
6.  Microservices is the new kid on the block, and is considered the next big thing in software architecture. You can look at it as 'fine-grained SOA': an approach to developing an application as a suite of small services, each running in its own process and communicating with the others using some lightweight mechanism, with independent deployment, scalability and portability. You should already see how containers fit perfectly into that context!
7.  Warden is the container orchestration engine for Cloud Foundry, which exposes APIs for managing the isolated environments. All applications deployed to Cloud Foundry run within a Warden container. Warden consists of a Warden server and a Ruby-based Warden client; the server and client communicate using Google protocol buffers.
8.  The service-as-a-VM approach is a popular way of packaging and deploying services. For example, the Netflix video streaming service consists of many services, each packaged as an AMI and deployed on Amazon EC2.
9.  Docker is a new way to containerize applications that is becoming increasingly popular. It allows you to package a microservice in a standardized portable format that’s independent of the technology used to implement the service. At runtime it provides a high degree of isolation between different services. However, unlike virtual machines, Docker containers are extremely lightweight and as a result can be built and started extremely quickly. A container can typically be built in just a few seconds and starting a container simply consists of starting the service’s process(es).
10.  The two main Docker concepts are image, which is a portable application packaging format, and container, which is a running image and consists of one or more sandboxed processes. 

References:
http://stackoverflow.com/questions/28700859/correct-way-to-manage-database-schemas-in-docker






Thursday, July 2, 2015

Research Microservice Architecture




Knowledge Sharing:
1. One approach is to use verb-based decomposition and define services that implement a single use case such as checkout. The other option is to decompose the application by noun and create services responsible for all operations related to a particular entity such as customer management. An application might use a combination of verb-based and noun-based decomposition.
2.  When using Z-axis scaling each server runs an identical copy of the code. In this respect, it’s similar to X-axis scaling. The big difference is that each server is responsible for only a subset of the data. Some component of the system is responsible for routing each request to the appropriate server. One commonly used routing criterion is an attribute of the request, such as the primary key of the entity being accessed. Another common routing criterion is the customer type. For example, an application might provide paying customers with a higher SLA than free customers by routing their requests to a different set of servers with more capacity.
3.   Z-axis splits are commonly used to scale databases. Data is partitioned (a.k.a. sharded) across a set of servers based on an attribute of each record. In this example, the primary key of the RESTAURANT table is used to partition the rows between two different database servers. Note that X-axis cloning might be applied to each partition by deploying one or more servers as replicas/slaves. Z-axis scaling can also be applied to applications. In this example, the search service consists of a number of partitions. A router sends each content item to the appropriate partition, where it is indexed and stored. A query aggregator sends each query to all of the partitions, and combines the results from each of them. (A short routing sketch follows this list.)
4.  It has even been called lightweight or fine-grained SOA. And indeed, one way to think about microservice architecture is that it’s SOA without the commercialization and perceived baggage of WS* and ESB. 
5.  The second issue with starting with microservices is that they only work well if you come up with good, stable boundaries between the services - which is essentially the task of drawing up the right set of BoundedContexts. Any refactoring of functionality between services is much harder than it is in a monolith. But even experienced architects working in familiar domains have great difficulty getting boundaries right at the beginning. By building a monolith first, you can figure out what the right boundaries are, before a microservices design brushes a layer of treacle over them. It also gives you time to develop the MicroservicePrerequisites you need for finer-grained services.
6.   Another route I've run into is to start with just a couple of coarse-grained services, larger than those you expect to end up with. Use these coarse-grained services to get used to working with multiple services, while enjoying the fact that such coarse granularity reduces the amount of inter-service refactoring you have to do. Then as boundaries stabilize, break down into finer-grained services.
7.   Although the evidence is sparse, I feel that you shouldn't start with microservices unless you have reasonable experience of building a microservices system in the team.
8.   I've heard of plenty of cases where an attempt to decompose a monolith has quickly ended up in a mess. I've also heard of a few cases where a gradual route to microservices has been successful - but these cases required a relatively good modular design to start with.
9. In the same way that we have come to avoid distributed transactions across organisational boundaries, with a microservices architecture we avoid distributed transactions across separate business services, allowing event-driven asynchronous messaging to trigger workflows in related services.
10. In short, the microservice architectural style [1] is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API. These services are built around business capabilities and independently deployable by fully automated deployment machinery. There is a bare minimum of centralized management of these services, which may be written in different programming languages and use different data storage technologies.
11.  Re-keying or scripting extracted data from one system to another can arguably be called eventual consistency, but it’s a poor alternative to what can be done with a service-based architecture.
12.  The microservice approach to division is different, splitting up into services organized around business capability. Such services take a broad-stack implementation of software for that business area, including user-interface, persistent storage, and any external collaborations. Consequently the teams are cross-functional, including the full range of skills required for the development: user-experience, database, and project management.
13.  Microservice proponents tend to avoid this model, preferring instead the notion that a team should own a product over its full lifetime. A common inspiration for this is Amazon's notion of "you build, you run it" where a development team takes full responsibility for the software in production. This brings developers into day-to-day contact with how their software behaves in production and increases contact with their users, as they have to take on at least some of the support burden.
14.  The microservice community favours an alternative approach: smart endpoints and dumb pipes. Applications built from microservices aim to be as decoupled and as cohesive as possible - they own their own domain logic and act more as filters in the classical Unix sense - receiving a request, applying logic as appropriate and producing a response. These are choreographed using simple RESTish protocols rather than complex protocols such as WS-Choreography or BPEL or orchestration by a central tool.
15.  The second approach in common use is messaging over a lightweight message bus. The infrastructure chosen is typically dumb (dumb as in acts as a message router only) - simple implementations such as RabbitMQ or ZeroMQ don't do much more than provide a reliable asynchronous fabric - the smarts still live in the end points that are producing and consuming messages; in the services.
16.  As well as decentralizing decisions about conceptual models, microservices also decentralize data storage decisions. While monolithic applications prefer a single logical database for persistent data, enterprises often prefer a single database across a range of applications - many of these decisions driven through vendor's commercial models around licensing. Microservices prefer letting each service manage its own database, either different instances of the same database technology, or entirely different database systems - an approach called Polyglot Persistence.
17.  Decentralizing responsibility for data across microservices has implications for managing updates. The common approach to dealing with updates has been to use transactions to guarantee consistency when updating multiple resources. This approach is often used within monoliths.
18.   Distributed transactions are notoriously difficult to implement and, as a consequence, microservice architectures emphasize transactionless coordination between services, with explicit recognition that consistency may only be eventual consistency and problems are dealt with by compensating operations.
19.   Choosing to manage inconsistencies in this way is a new challenge for many development teams, but it is one that often matches business practice. Often businesses handle a degree of inconsistency in order to respond quickly to demand, while having some kind of reversal process to deal with mistakes. The trade-off is worth it as long as the cost of fixing mistakes is less than the cost of lost business under greater consistency.
20.   Netflix's Simian Army induces failures of services and even datacenters during the working day to test both the application's resilience and monitoring.
21.   Since services can fail at any time, it's important to be able to detect the failures quickly and, if possible, automatically restore service. Microservice applications put a lot of emphasis on real-time monitoring of the application, checking both architectural elements (how many requests per second is the database getting) and business relevant metrics (such as how many orders per minute are received). Semantic monitoring can provide an early warning system of something going wrong that triggers development teams to follow up and investigate.
22.   The key property of a component is the notion of independent replacement and upgradeability[13] - which implies we look for points where we can imagine rewriting a component without affecting its collaborators.
23.    The monolith still is the core of the website, but they prefer to add new features by building microservices that use the monolith's API. This approach is particularly handy for features that are inherently temporary, such as specialized pages to handle a sporting event. Such a part of the website can quickly be put together using rapid development languages, and removed once the event is over. We've seen similar approaches at a financial institution where new services are added for a market opportunity and discarded after a few months or even weeks.
24.   The traditional integration approach is to try to deal with this problem using versioning, but the preference in the microservice world is to only use versioning as a last resort. We can avoid a lot of versioning by designing services to be as tolerant as possible to changes in their suppliers.
25.   One reasonable argument we've heard is that you shouldn't start with a microservices architecture. Instead begin with a monolith, keep it modular, and split it into microservices once the monolith becomes a problem. (Although this advice isn't ideal, since a good in-process interface is usually not a good service interface.)
26.  What are the fundamentals of being cloud-native? As I see it, being cloud-native is more about the application architecture and design than how you code the thing. Sadly, many people -- in IT and at the vendors -- are missing the boat on both points.
27.  What I'm trying to say is that in these kinds of situations the answer is: it depends. And something that we tech geeks often forget to do before embarking on such distributed/scalability/architecture journeys is to talk to business. Often business can handle a certain degree of inconsistency, suboptimal processes or looking up data in more places instead of one (i.e. what you think is important might not necessarily be for business). So talk to them and see what they can tolerate. It might be cheaper to resolve something in an operational way than to invest a lot into trying to build a highly distributable system.
28.  Shared Data: The various database instances must operate across a shared set of data so that each instance has a consistent view of the data. In other words, each database instance must see the exact same data at any point in time.

Distributed Locking: Whenever one database instance attempts to write to the database—such as a bank withdrawal—the other database instances must wait for this change to take effect. A distributed lock manager is required to coordinate these changes.

29.  The database graveyard includes object databases, graph databases, XML databases, object-relational databases, in-memory databases, and now NoSQL and NewSQL. These technologies tend to find a niche market, but they never achieve escape velocity because of the chicken-and-egg problem of: you cannot achieve market leadership until you have an ecosystem, you don’t get an ecosystem until you achieve market leadership.
30. A microservices architecture also involves complex patterns of remote calls as the services communicate. That makes end-to-end tracking of request flows more difficult, but the tracking data is vital to diagnosing problems. You need to be able to trace how process A called B, which called C, and so on. One way to do this is to instrument HTTP headers with globally unique identifiers (GUIDs) and transaction IDs. (A small header-propagation sketch follows this list.)
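Item 3's Z-axis routing boils down to picking a partition from an attribute of the request. A minimal Python sketch; the server names and the partition search API are hypothetical, not from any particular framework:

SHARDS = ["db-server-0", "db-server-1"]  # hypothetical database servers

def shard_for(primary_key):
    # Route by an attribute of the request: here, the entity's primary key.
    return SHARDS[primary_key % len(SHARDS)]

def pool_for(customer_type):
    # Alternative criterion: paying customers get the higher-capacity pool.
    return "premium-pool" if customer_type == "paying" else "free-pool"

def search_all(query, partitions):
    # Query aggregator: fan the query out to every partition, merge results.
    results = []
    for partition in partitions:
        results.extend(partition.search(query))  # hypothetical partition API
    return results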
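And a sketch of the request-tracing idea from item 30: mint a correlation ID at the edge and propagate it on every downstream call in an HTTP header. "X-Request-ID" is a common convention rather than a standard, and the service URL is made up; requests is the third-party HTTP client:

import uuid
import requests

def handle_incoming(headers):
    # Reuse the caller's correlation ID if present; otherwise start a new trace.
    request_id = headers.get("X-Request-ID") or str(uuid.uuid4())

    # Pass the ID along when calling the next service, so logs from
    # A, B, C... can be joined into one end-to-end trace of the request flow.
    requests.get("http://service-b.internal/orders",  # hypothetical URL
                 headers={"X-Request-ID": request_id})
    return request_id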

Features:
1. Independence
2. Replaceability
3. Upgradability
4. Monitorability
5. Restorability
6. Refactorability to change the boundaries.




Notes:
1. Have smaller team to work on smaller modules

Problems of Monolithic Architecture:
1. Difficult for new team members
2. Overloaded IDE
3. Overloaded Web Container
4. Continuous delivery (CD) is difficult
5. Scaling is difficult for all the components
6. Require long term commitment to technology stack.








Benefits of Monolithic Architecture:
Simple to develop, deploy and scale.


Benefits of Microservice Architecture:
1. Small Services
2. Scale service independently
3. Fault Isolation
4. Multiple team
5. No long term investment on technology stack


Challenges:
1. Data Integrity
2. Complexity of distributed system (testing more difficult, inter-service communication, distributed transaction, coordination of multiple team)
3. Deployment complexity
4. More memory consumption
5. When to use it
6. How to break an application into microservices (an art)

Break MicroServices:
1. One approach is to partition services by verb or use case. For example, later on you will see that the partitioned e-commerce application has a Shipping service that’s responsible for shipping complete orders. Another common example of partitioning by verb is a login service that implements the login use case.
2. Another partitioning approach is to partition the system by nouns or resources. This kind of service is responsible for all operations that operate on entities/resources of a given type. For example, later on you will see how it makes sense for the e-commerce system to have an Inventory service that keeps track of whether products are in stock.
3.  Another analogy that helps with service design is the design of Unix utilities. Unix provides a large number of utilities such as grep, cat and find. Each utility does exactly one thing, often exceptionally well, and can be combined with other utilities using a shell script to perform complex tasks.


When to use it?
1. One challenge with using this approach is deciding when it makes sense to use it. When developing the first version of an application, you often do not have the problems that this approach solves. Moreover, using an elaborate, distributed architecture will slow down development. This can be a major problem for startups whose biggest challenge is often how to rapidly evolve the business model and accompanying application. Using Y-axis splits might make it much more difficult to iterate rapidly. Later on, however, when the challenge is how to scale and you need to use functional decomposition, the tangled dependencies might make it difficult to decompose your monolithic application into a set of services.
2.   Another challenge is deciding how to partition the system into microservices. This is very much an art, but there are a number of strategies that can help, such as the verb- and noun-based partitioning described under "Break MicroServices" above.

 


Best Practice:
1. Break the application down by functionality (y-axis splits)
2. Architect the application by applying the Scale Cube (specifically y-axis scaling) and functionally decompose the application into a set of collaborating services. Each service implements a set of narrowly related functions. For example, an application might consist of services such as the order management service, the customer management service, etc.
3.  Services communicate using either synchronous protocols such as HTTP/REST or asynchronous protocols such as AMQP.
4.  Each service has its own database in order to be decoupled from other services. When necessary, consistency between databases is maintained using either database replication mechanisms or application-level events.
5.   For example, a service that needs ACID transactions might use a relational database, whereas a service that is manipulating a social network might use a graph database.
6.  For large applications, it makes more sense to use a microservice architecture that decomposes the application into a set of services.
7.  In particular, applications are much more complex and have many more moving parts. You need a high-level of automation, such as a PaaS, to use microservices effectively. You also need to deal with some complex distributed data management issues when developing microservices. Despite the drawbacks, a microservice architecture makes sense for large, complex applications that are evolving rapidly, especially for SaaS-style applications.
8.  There are various strategies for incrementally evolving an existing monolithic application to a microservice architecture. Developers should implement new functionality as a standalone service and write glue code to integrate the service with the monolith. It also makes sense to iteratively identify components to extract from the monolith and turn into services. While the evolution is not easy, it’s better than trying to develop and maintain an unwieldy monolithic application.


Recommendations:
1. Functional Breakdown of Services
2. Database Replication Mechanism
3. Application Level Events


Good Practice:
1. API Gateway
2. Service Registry:
Apache ZooKeeper or Netflix Eureka. In other applications, services must register with a load balancer, such as an internal ELB in an Amazon VPC.
3. Message Broker
4. Inter-Process Communication (HTTP/REST/SOAP, AMQP-based; a messaging sketch follows this list)
5. Decentralized Data Management
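As a concrete example of the AMQP option in item 4, a minimal publisher using the pika client against a RabbitMQ broker (assumed to be running on localhost; the queue name and payload are illustrative). The broker stays a dumb pipe that only routes; the producing and consuming services keep the smarts:

import json
import pika  # RabbitMQ client library

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="order-events", durable=True)

# Publish an event; a consuming service reacts asynchronously, in its own time.
channel.basic_publish(
    exchange="",  # default exchange routes directly by queue name
    routing_key="order-events",
    body=json.dumps({"event": "OrderPlaced", "order_id": 42}),
)
connection.close()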

Decentralized Data Management:
1. One solution is for the OrderService to retrieve the credit limit by making an RPC call to the CustomerService. This approach is simple to implement and ensures that the OrderService always has the most current credit limit. The downside is that it reduces availability because the CustomerService must be running in order to place an order. It also increases response time because of the extra RPC call.
Another approach is for the OrderService to store a copy of the credit limit. This eliminates the need to make a request to the CustomerService and so improves availability and reduces response time. It does mean, however, that we must implement a mechanism to update the OrderService’s copy of the credit limit whenever it changes in the CustomerService.
2.  Distributed Transactions
3.  Event-Driven Asynchronous Updates (A major drawback of this approach is that it trades consistency for availability. The application has to be written in a way that can tolerate eventually consistent data. Developers might also need to implement compensating transactions to perform logical rollbacks. Despite these drawbacks, however, this is the preferred approach for many applications.) A sketch of this pattern follows this list.
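A toy sketch of option 3 above: an in-memory bus stands in for a real message broker, and the service and event names are made up for illustration. The point is that OrderService serves orders from its own (eventually consistent) copy of the credit limit, updated whenever CustomerService publishes a change event:

class EventBus:
    # In-memory stand-in for a message broker (RabbitMQ, etc.).
    def __init__(self):
        self.subscribers = {}

    def subscribe(self, event_type, handler):
        self.subscribers.setdefault(event_type, []).append(handler)

    def publish(self, event_type, payload):
        for handler in self.subscribers.get(event_type, []):
            handler(payload)

class OrderService:
    def __init__(self, bus):
        self.credit_limits = {}  # local copy, updated by events
        bus.subscribe("CreditLimitChanged", self.on_credit_limit_changed)

    def on_credit_limit_changed(self, event):
        self.credit_limits[event["customer_id"]] = event["limit"]

    def place_order(self, customer_id, amount):
        # No RPC to CustomerService: better availability and response time,
        # at the price of possibly stale (eventually consistent) data.
        return amount <= self.credit_limits.get(customer_id, 0)

class CustomerService:
    def __init__(self, bus):
        self.bus = bus

    def set_credit_limit(self, customer_id, limit):
        # ...update its own database first, then announce the change...
        self.bus.publish("CreditLimitChanged",
                         {"customer_id": customer_id, "limit": limit})

bus = EventBus()
orders = OrderService(bus)
CustomerService(bus).set_credit_limit("c1", 500)
print(orders.place_order("c1", 120))  # True, served from the local copy
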
Migrate from Monolithic Architecture to Microservice Architecture:
1. First, stop making the problem worse. Don’t continue to implement significant new functionality by adding code to the monolith. Instead, you should find a way to implement new functionality as a standalone service.
2.  Second, identify a component of the monolith to turn into a cohesive, standalone service. Good candidates for extraction include components that are constantly changing, or components that have conflicting resource requirements, such as large in-memory caches or CPU intensive operations. The presentation tier is also another good candidate. You then turn the component into a service and write glue code to integrate with the rest of the application. Once again, this will probably be painful but it enables you to incrementally migrate to a microservice architecture.
 
Challenges:
1. Functional breakdown
2. Context Sharing among services