Asp.Net MVC-4, Entity Framework and JQGrid Demo with simple Todo List WebApplication – CodeProject

There are many advantages to using MVC, including the following:

MVC helps us develop a loosely coupled architecture.

Complex applications can be easily managed.

Separation of concerns is achieved by dividing the application into Model, View and Controller.

Extensive support for Test-Driven Development (TDD). Unit testing is easy, and this additional layer of testing provides another line of defense against unexpected behavior.

ASP.NET MVC is lightweight, as it does not use view state.

SEO (Search Engine Optimization): clean URLs that are routed to actions rather than mapped to physical files by extension.

Rich JavaScript support with unobtrusive JavaScript, jQuery validation and JSON binding.

No postback events.

Expressive views, including the new Razor view engine, with HTML5 support.

via Asp.Net MVC-4, Entity Framework and JQGrid Demo with simple Todo List WebApplication – CodeProject.
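
For readers who want to see what that shape looks like in practice, here is a minimal, hypothetical sketch of an MVC 4 controller backed by Entity Framework, in the spirit of the linked Todo demo. The TodoItem model, TodoContext and the JSON action for grid binding are my own illustrative assumptions, not code from the article.

```csharp
// Hypothetical sketch of an MVC 4 + Entity Framework Todo controller.
// TodoItem, TodoContext and the view/action names are illustrative assumptions.
using System.Data.Entity;
using System.Linq;
using System.Web.Mvc;

public class TodoItem
{
    public int Id { get; set; }
    public string Title { get; set; }
    public bool IsDone { get; set; }
}

public class TodoContext : DbContext
{
    public DbSet<TodoItem> TodoItems { get; set; }
}

public class TodoController : Controller
{
    private readonly TodoContext db = new TodoContext();

    // GET: /Todo/ - the view is rendered separately (separation of concerns)
    public ActionResult Index()
    {
        return View(db.TodoItems.ToList());
    }

    // POST: /Todo/Create - no postback events; a plain HTTP POST hits this action
    [HttpPost]
    public ActionResult Create(TodoItem item)
    {
        if (!ModelState.IsValid)
            return View(item);

        db.TodoItems.Add(item);
        db.SaveChanges();
        return RedirectToAction("Index");
    }

    // GET: /Todo/List - a JSON endpoint a grid such as jqGrid could bind to
    public JsonResult List()
    {
        return Json(db.TodoItems.ToList(), JsonRequestBehavior.AllowGet);
    }

    protected override void Dispose(bool disposing)
    {
        if (disposing) db.Dispose();
        base.Dispose(disposing);
    }
}
```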

Multicast Message Broker – CodeProject

In one of my projects I needed to use UDP multicast messaging to distribute notifications. I’ve found numerous articles about multicasting, but these were mostly oriented towards an explanation of the technology, with just simple code snippets demonstrating basic API calls.

My aim was to build a component which would provide reliable encapsulation of network interactions and that would expose multicast functionality to upper layers through a publish/subscribe implementation of the observer pattern.

Requirements for message broker component:

  • distribute string messages to anyone interested (observer pattern) using UDP multicasting
  • single component encapsulating both send & receive logic
  • no machine specific configuration
  • basic error detection
  • thread-safe interaction
  • detection of duplicate messages
  • ability to handle fragmented data
  • ability to recover from data received in unexpected format (e.g. network errors, interrupted transmission, etc.)

Multicast Message Broker – CodeProject.
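
The article’s broker adds reliability on top of plain sockets; as a point of reference, here is a minimal sketch of the UdpClient multicast plumbing such a component wraps. The group address, port and console handling are my own assumptions, not the article’s code.

```csharp
// Minimal sketch of the UdpClient multicast plumbing a broker like this wraps.
// The group address, port and string protocol are illustrative assumptions.
using System;
using System.Net;
using System.Net.Sockets;
using System.Text;
using System.Threading.Tasks;

class MulticastSketch
{
    static readonly IPAddress Group = IPAddress.Parse("239.0.0.222"); // assumed group
    const int Port = 4567;                                            // assumed port

    static void Main()
    {
        // Receiver: join the multicast group and hand incoming messages to subscribers.
        var receiver = new UdpClient(Port);
        receiver.JoinMulticastGroup(Group);
        Task.Run(() =>
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            while (true)
            {
                byte[] data = receiver.Receive(ref remote);     // blocking receive
                string message = Encoding.UTF8.GetString(data);
                Console.WriteLine("Received: " + message);      // notify observers here
            }
        });

        // Sender: publish a string message to everyone in the group.
        using (var sender = new UdpClient())
        {
            byte[] payload = Encoding.UTF8.GetBytes("hello from the broker sketch");
            sender.Send(payload, payload.Length, new IPEndPoint(Group, Port));
        }

        Console.ReadLine(); // keep the receiver alive for the demo
    }
}
```

A real broker, as the requirements above indicate, would add sequence numbers for duplicate detection, reassembly of fragmented payloads, error detection and thread-safe subscription management, which is exactly the gap the linked article fills.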

Protecting your data in the age of the NSA and PRISM

Soon after Edward Snowden released a cache of top-secret documents detailing the far-ranging data collection activities of the U.S. National Security Agency (NSA) in the summer of 2013, the Federal Bureau of Investigation (FBI) approached the secure email provider Lavabit with a demand to turn over the encryption keys to its communications. Their target was, allegedly, none other than Snowden himself, who had been posting his @lavabit.com address and inviting human rights activists around the world to contact him. Ladar Levison, the owner of Lavabit, refused the Bureau’s request. Levison was concerned that turning over his private encryption key would allow the government to decrypt not only Snowden’s communications but also those of all of Lavabit’s nearly 400,000 customers, many of whom are activists and had chosen Lavabit for its security. Facing a contempt of court charge, Levison eventually turned over the encryption key. However, he simultaneously shut down his service, thus preventing the authorities from gaining access to his customers’ communications.

Few companies have the ability to act like Lavabit and shut down in the face of such a demand. Lavabit was a very small organization with no shareholders and few employees to answer to for their actions. As organizations become more decentralized and their employees more mobile, they naturally need to share more information, raising concerns about how to adequately protect that information from NSA-like government actors. Most organizations have no plan in place for reacting to a government request for data. How should companies prepare to deal with this issue? What steps can be taken to protect data? We’ll explore these questions in this report, place the NSA threat in perspective, and suggest steps most companies can take to preserve data privacy.

 

The NSA in Perspective
The NSA is not unique in its use of the Internet for intelligence gathering. Most other major industrialized nations contribute to or have some surveillance footprint on the Internet. Some of these nations even engage in economic espionage and sabotage efforts, a serious concern for businesses worried about intellectual property and their global competitiveness. However, because of the NSA’s aggressiveness and scope, organizations should consider to what degree they need to protect against such an agency and other only slightly less capable state actors. The NSA can be thwarted, as its frustration with breaking the Tor network demonstrates. Though, as is often the case, hardening one security weakness inevitably leads the adversary to exploit another.

Risk assessments by corporations and individuals are critical in this context. To perform a risk assessment, one has to understand the capabilities of those who are trying to infiltrate their information systems, and place these risks in context. Many organizations face security and data privacy threats from many sources – malicious hackers, insiders, or weak security systems and processes. In reality, the most dangerous threat for most organizations is unintended mistakes and errors by employees – losing a laptop, or sending a confidential file to the wrong people. An analysis of recently revealed NSA strategy and techniques can help provide perspective as well as give insights into the methodologies of other state actors. Many of the NSA’s techniques involve accessing metadata, so we’ll explore that distinction first. Next, we’ll identify the major NSA programs revealed as of today, and some suggested countermeasures.

http://hosteddocs.ittoolbox.com/Intralinks_NSA_Programs_as_Lessons_in_Data_Privacy.pdf

Is the Deep Web Search Engine Shodan Dangerous?

“When people don’t see stuff on Google, they think no one can find it. That’s not true.”
That’s according to John Matherly, creator of Shodan, the scariest search engine on the Internet.

Unlike Google (GOOG, Fortune 500), which crawls the Web looking for websites, Shodan navigates the Internet’s back channels. It’s a kind of “dark” Google, looking for the servers, webcams, printers, routers and all the other stuff that is connected to and makes up the Internet. (Shodan’s site was slow to load Monday following the publication of this story.)

 

Shodan runs 24/7 and collects information on about 500 million connected devices and services each month.

 

It’s stunning what can be found with a simple search on Shodan. Countless traffic lights, security cameras, home automation devices and heating systems are connected to the Internet and easy to spot.

Shodan searchers have found control systems for a water park, a gas station, a hotel wine cooler and a crematorium. Cybersecurity researchers have even located command and control systems for nuclear power plants and a particle-accelerating cyclotron by using Shodan.

 

What’s really noteworthy about Shodan’s ability to find all of this — and what makes Shodan so scary — is that very few of those devices have any kind of security built into them.

 

“It’s a massive security failure,” said HD Moore, chief security officer of Rapid 7, who operates a private version of a Shodan-like database for his own research purposes.

 http://money.cnn.com/2013/04/08/technology/security/shodan/

Healthcare.gov: When Websites Go Bad

Judging from the Congressional Hearing involving some of the major component vendors behind Healthcare.gov, Congress and the White House need some assistance in understanding how a major website development project works. Yelling at component vendors is useless but makes for good drama on local news at 6 o’clock. What is needed is a team experienced with integrating various web components into a cohesive whole from end to end. Apparently that was lacking with this site. Obama adviser Jeffrey Zients is being tapped to “fix” the site, but his background is in management consulting and business policy, not IT.

 

Here’s what we know so far about plans to fix the site:

 

(Reuters) – President Barack Obama promised on Saturday that his troubled healthcare website was just weeks away from a cure as he struggled to convince Americans he is on top of what has become a self-inflicted wound to his signature first-term achievement.

His administration unveiled a plan on Friday to make Obamacare insurance marketplaces on healthcare.gov – a website riddled with error messages, long delays and bugs – work better by the end of November.

It was the end to an embarrassing week where Obama discovered he had overshot on an Oct. 1 promise of a website that would make shopping for health insurance as easy as buying “a plane ticket on Kayak or a TV on Amazon.”

“As you may have heard, the site isn’t working the way it’s supposed to yet,” Obama said in his weekly Saturday address – an understatement after days of reports of people being shut out of the system.

“In the coming weeks, we are going to get it working as smoothly as it’s supposed to,” he added.

Obama had stood firm against Republican attempts to defund or delay the healthcare law, known popularly as Obamacare – efforts that led to a 16-day government shutdown this month.

He and his top officials had warned publicly before October 1 that there could be “glitches,” but the White House has been scrambling to control the damage from a rollout that was far worse than expected.

The depth of the design flaws has raised questions about why the Obama administration was so insistent on starting the enrollments on October 1 when the system was clearly not ready – and laid bare the president’s mistake in raising expectations about how good the website was going to be.

“Either they made assumptions that were too optimistic and were caught off guard, or they knew that the difficulties would be greater than the public understood, but chose not to say so,” said Bill Galston, a Brookings Institution expert who was a domestic policy adviser to Democratic President Bill Clinton.

“It may be some of both.”

CRISIS MANAGEMENT 101

Obama adviser Jeffrey Zients, appointed on Tuesday to figure out how to manage the complicated fixes for the website, was an unannounced participant on a conference call with health reporters on Friday afternoon.

Zients gave a deadline, although he cautioned there was a lot of work to do. “By the end of November, healthcare.gov will work smoothly for the vast majority of users,” he said.

Borrowing from the lexicon of homebuilders, Zients said he had hired a “general contractor” to manage the many contractors on the project, and developed a “punch list” of dozens of problems to address.

http://www.reuters.com/article/2013/10/26/us-usa-healthcare-obama-idUSBRE99P02S20131026

There’s been a rash of commentary from some on the left who’ve decided that the real problem with Obamacare isn’t the crippling technological issues that have made it impossible for almost anyone to enroll in the federally run health-insurance exchanges but the media’s coverage of those problems.

It’s not the crime, it’s the lack of a cover-up.

The complaint takes different forms. Salon’s Joan Walsh frames it instrumentally. The coverage, she writes, “only aids [the] unhinged right.” In this telling, the problem with reporting on Obamacare’s problems is that it helps Obamacare’s enemies.

Zerlina Maxwell frames it as a question of insured journalists being unable to see past their own rarified position. “The privilege of analyzing the process from the perspective of someone who is already insured and not in need of coverage allows the core impact of the new program on the health and security of millions of Americans to be missed,” she writes.

There are dimensions to these arguments that really center on the job of the journalist, and there, I think Walsh and Maxwell and I simply disagree. But behind this disagreement is a question about how deep the law’s problems really go. As Walsh and Maxwell (and President Obama) say, Obamacare is more than just a Web site. More balanced coverage, they believe, would be emphasizing all its other good qualities.

“I was actually happy to see the president come out defiantly in his Rose Garden talk, describing the ACA as ‘not just a website’ and listing the many benefits it’s already providing,” wrote Walsh.

“Obamacare is more than a website,” repeats Maxwell.

Obamacare’s problems go far beyond its Web site.

A failure in the press coverage of the health-care exchange’s rocky launch has been in allowing people to believe that the problem is a glitchy Web site. This is a failure of language: “The Web site” has become a confusing stand-in phrase for any problem relating to the law’s underlying infrastructure. No one has a very good word to describe everything that infrastructure encompasses.

In brick-and-mortar terms, it’s the road that leads to the store, the store itself, the payment systems between the store and the government and the manufacturers, the computer system the manufacturers use to fill the orders, the trucks that carry the product back to the store, the loading dock where the customers pick up the products, and so on.

It’s the problems in that infrastructure — indeed, much more than “just a Web site” — that pose such deep problems for the law.

As Sarah Kliff and I wrote in our overview of the health-care launch’s technical issues, the challenges right now can be grouped into three broad categories: problems with the consumer experience on the HealthCare.gov Web site, problems with the eligibility system, and problems with the hand-off to insurers.

The problems with the Web site are the difficulties consumers are facing when they try to log on and shop for insurance coverage. These problems — error messages, site timeouts, difficulty logging in to an account — make it hard for an individual to buy coverage through the marketplace. They are the reason why some people have made upward of 20 attempts at purchasing a plan. These are the problems that are being fixed fastest and that are the least serious.

http://www.washingtonpost.com/blogs/wonkblog/wp/2013/10/25/obamacares-problems-go-much-deeper-than-the-web-site/?tid=pm_business_pop

Best Practices for Virtualizing & Managing SharePoint 2013

Why Virtualize SharePoint?

Increasingly, organizations want to virtualize modern multi-tiered applications like SharePoint to better meet their business and collaboration needs. According to a report from the Enterprise Strategy Group (ESG) Lab, among organizations already using virtualization in some way, approximately 53 percent are moving toward implementing virtualization technology for more complex and advanced systems. For these organizations, it is necessary to consolidate and optimize computing resources for better flexibility, scalability, and manageability of mission-critical collaboration workloads like SharePoint 2013. This requirement is essential to better scale the key components of such demanding workloads: web servers, application servers, and database servers. In a traditional deployment of SharePoint, dedicated physical servers are usually used to deploy individual roles/components, including the front-end web server, application server, and database server (Figure 1). Organizations use separate physical servers for these roles to ensure high availability of services, better scalability, and improved performance. However, using separate physical servers for deploying separate roles has certain limitations, such as:

  • Underutilized resources: CPU, memory, and storage are dedicated to a specific workload and remain idle while waiting for instructions, thereby consuming unnecessary power and space.
  • Higher costs: Acquisition, maintenance, and management are more expensive.
  • Reduced efficiency: A longer time is required to recover from outages. Plus, a higher Recovery Time Objective (RTO) may affect the service-level agreement (SLA).

 

http://www.google.com/url?sa=t&rct=j&q=&esrc=s&frm=1&source=web&cd=8&cad=rja&ved=0CGYQFjAH&url=http%3A%2F%2Fdownload.microsoft.com%2Fdownload%2F0%2F0%2F1%2F001ADCCC-A45B-47E3-8DA4-ED51E3208021%2FBest_Practices_for_Virtualizing_and_Managing_SharePoint_2013.pdf&ei=cJNtUseREtG-2wXGyICoDg&usg=AFQjCNHhNH-vIdyOO7LinK46sWR_NImJ-Q&sig2=nzaMljGsv-cH0eN-wblIVw&bvm=bv.55123115,d.eW0 

Is It Time To Reconsider How We Approach CSS?

Challenging CSS Best Practices By Thierry Koblentz

 

When it comes to CSS, I believe that the sacred principle of “separation of concerns” (SoC) has led us to accept bloat, obsolescence, redundancy, poor caching and more. Now, I’m convinced that the only way to improve how we author style sheets is by moving away from this principle.

For those of you who have never heard of the SoC principle in the context of Web design, it relates to something commonly known as the “separation of the three layers”: structure, presentation, and behavior.

It is about dividing these concerns into separate resources: an HTML document, one or more cascading style sheets and one or more JavaScript files.

But when it comes to the presentational layer, “best practice” goes way beyond the separation of resources. CSS authors thrive on styling documents entirely through style sheets, an approach that has been sanctified by Dave Shea’s excellent project CSS Zen Garden. CSS Zen Garden is what most — if not all — developers consider to be the standard for how to author style sheets.

http://coding.smashingmagazine.com/2013/10/21/challenging-css-best-practices-atomic-approach/

Helpful Sites for Writing Better User Stories and Use Cases

Advantages of the “As a user, I want” user story template

In my user stories book and in all my training and conference sessions on user stories I advocate writing user stories in the form of:

“As a <type of user>, I want <some goal> so that <some reason>.”

While I consider the so-that clause optional, I really like this template. At a conference, someone asked me why. Because I get that question fairly often, I want to give three reasons here:

Reason 1: Something significant, and I’m tempted to say magical, happens when requirements are put in the first person. Obviously by saying “As a such-and-such, I want …” you can see how the person’s mind goes instantly to imagining he or she is a such-and-such. As for the magic, Paul McCartney was interviewed and asked why the Beatles songs were so amazingly popular. One of his responses was that their songs were among the first to use a lot of pronouns. Think about it: She Loves You, I Wanna Hold Your Hand, I Saw Her Standing There, I Am The Walrus, Baby You Can Drive My Car, etc. His point was that these helped people more closely identify with the songs. I tried briefly to find a source for this interview tonight and the closest I found was this. The information in that reference fits my recollection of hearing McCartney say this during a radio interview in 1973 or 74 that I assume was recorded when the Beatles were together.

Reason 2: Having a structure to the stories actually helps the product owner prioritize. If the product backlog is a jumble of things like:

  • Fix exception handling
  • Let users make reservations
  • Users want to see photos
  • Show room size options

… and so on, the product owner has to work harder to understand what the feature is, who benefits from it, and what the value of it is.

Read the rest at http://www.mountaingoatsoftware.com/blog/advantages-of-the-as-a-user-i-want-user-story-template

Your use cases are only as effective as the value someone is deriving from them. What seems obvious to you may not be to your developers or customers. The measure of an effective written use case is that it is easily understood and, ultimately, that the developers can build the right product the first time.
A great way to learn to write effective use cases is to walk through a sample use case and watch how it can be extended to something more complex. By absorbing the meaning of use case diagrams, alternate flows and basic flows, you will be able to apply use cases to your projects. In some of the tips below, we’ll use eBay features for example use cases.
Before we get into this blog post on writing effective use cases for software development and product management, let’s quickly define a use case. If you had to find a single word it’s synonymous with, I suppose you could say “scenario”, but it’s really much more than that. It’s a particular type of scenario that is made up of activities. When you’re talking to your friends about a new obscure start-up, a great question I like to ask is “What’s the primary use case for the customer?” It puts someone on the spot to tell a story from the customer’s perspective, from customer acquisition, to purchase, and on to engagement. Anyway, now let’s get on to writing up some use cases!
Tip 1. When creating use cases, be productive without perfection
Tip 2. Define your use case actors
Tip 3. Define your “Sunny Day” Use Cases (Primary Use Cases)
Tip 4. Identify reuse opportunity for use cases
Tip 5. Create a use case index
Tip 6. Identify the key components of your use case
Tip 7. Name and briefly describe your use case
Tip 8. Create the use case basic flow
Tip 9. Create the use case alternate flows
Tip 10. Produce your use case document
Tip 11. Sample Use Case Model Diagram
Tip 12. Do you need User Stories?
Tip 13. Agile Development with Use Cases
http://www.gatherspace.com/static/use_case_example.html#12

Are use cases agile? They’re sometimes regarded as the lumbering, heavyweight cousin of user stories – the requirements technique favored by agile teams.  Use Cases ARE Agile. No… really.  Being agile is an attitude and an approach, while use cases are simply a structure for organizing requirements. There’s nothing about use cases that make them ill-suited for agile teams. Rather, it’s the way that use cases are typically written that is NOT agile (that is, written completely up-front).  So, “Are use cases agile?” is the wrong question. If you want to stay agile, but need something more than a user story, the question to ask is, “How can I write use cases in an agile manner?” Here are four steps to get you started.

See more at: http://blog.casecomplete.com/post/Agile-Use-Cases-in-Four-Steps#sthash.az2Omkn1.dpuf

User stories are one of the primary development artifacts for Scrum and Extreme Programming (XP) project teams. A user story is a very high-level definition of a requirement, containing just enough information so that the developers can produce a reasonable estimate of the effort to implement it. This article covers the following topics:

1. Introduction to User Stories

A good way to think about a user story is that it is a reminder to have a conversation with your customer (in XP, project stakeholders are called customers), which is another way to say it’s a reminder to do some just-in-time analysis. In short, user stories are very slim and high-level requirements artifacts.

http://www.agilemodeling.com/artifacts/userStory.htm

Use cases are a popular way to express software requirements. They are popular because they are practical. A use case bridges the gap between user needs and system functionality by directly stating the user intention and system response for each step in a particular interaction.

Use cases are simple enough that almost anyone can read them. Even customers or users can read use cases without any special training. However, writing use cases takes some practice. It is not difficult to start writing use cases, but really mastering them takes training, practice, and insight.

No single use case specifies the entire requirements of the system. Each use case merely explains one particular interaction. An organized suite of use cases and other specification techniques are needed to fully specify the software requirements.

The figure below illustrates where a use case document fits into the overall software requirements specification (SRS) and how it relates to other documents. This white paper focuses on the yellow “Use Cases” box. Ideally, your use case document is just one part of an overall set of project documents. But don’t worry if you just want to jump straight into writing use cases: this white paper focuses on them.

http://www.readysetpro.com/whitepapers/usecasetut.html

User stories serve the same purpose as use cases but are not the same. They are used to create time estimates for the release planning meeting. They are also used instead of a large requirements document. User stories are written by the customers as things that the system needs to do for them. They are similar to usage scenarios, except that they are not limited to describing a user interface. They are in the format of about three sentences of text written by the customer in the customer’s terminology, without techno-syntax.
 User stories also drive the creation of the acceptance tests. One or more automated acceptance tests must be created to verify the user story has been correctly implemented.
 One of the biggest misunderstandings with user stories is how they differ from traditional requirements specifications. The biggest difference is in the level of detail. User stories should only provide enough detail to make a reasonably low risk estimate of how long the story will take to implement. When the time comes to implement the story developers will go to the customer and receive a detailed description of the requirements face to face.

http://www.extremeprogramming.org/rules/userstories.html
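
As a hedged illustration of that last point (not an example from the XP site itself): a story such as “a user can add a todo item and see it in the list” might be verified by an automated acceptance test along these lines. TodoService and its methods are hypothetical names, with a toy in-memory implementation included so the test compiles on its own.

```csharp
// Hypothetical acceptance test for the story "a user can add a todo item and
// see it in the list". The TodoService below is a stand-in so the test runs;
// a real acceptance test would exercise the actual application.
using System.Collections.Generic;
using NUnit.Framework;

public class TodoService
{
    private readonly List<string> items = new List<string>();
    public void Add(string title) => items.Add(title);
    public IEnumerable<string> List() => items;
}

[TestFixture]
public class AddTodoItemStoryTests
{
    [Test]
    public void Added_item_appears_in_the_list()
    {
        var service = new TodoService();

        service.Add("Buy milk");

        // The story is "done" when the added item shows up in the list.
        Assert.That(service.List(), Does.Contain("Buy milk"));
    }
}
```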

What is the difference between a UseCase and XP’s UserStory?

This is a common question, and not one that has a generally agreed on answer. Many people in the XP community consider stories to be a simplified form of use cases, but although I used to hold this view I see things differently now.

Use cases and stories are similar in that they are both ways to organize requirements. They are different in that they organize for different purposes. Use cases organize requirements to form a narrative of how users relate to and use a system. Hence they focus on user goals and how interacting with a system satisfies the goals. XP stories (and similar things, often called features) break requirements into chunks for planning purposes. Stories are explicitly broken down until they can be estimated as part of XP’s release planning process. Because these uses of requirements are different, heuristics for good use cases and stories will differ.

http://www.martinfowler.com/bliki/UseCasesAndStories.html

Origins

User Stories originate with Extreme Programming; their first written description in 1998 only claims that customers define project scope “with user stories, which are like use cases”. Rather than offered as a distinct practice, they are described as one of the “game pieces” used in the “planning game”. However, most of the thrust of further writing centers around all the ways user stories are unlike use cases, in trying to answer in a more practical manner “how requirements are handled” in Extreme Programming (and more generally Agile) projects. This drives the emergence, over the years, of a more sophisticated account of user stories:

  • cf. the Role-feature-benefit template, 2001
  • cf. the 3 C’s model, 2001
  • cf. the INVEST checklist, 2003
  • cf. the Given-When-Then template, 2006

Signs of use

  • the team uses visual planning tools (release plan, story map, task board) and index cards or stickies on these displays reflect product features
  • the labels on cards that stand for user stories contain few or no references to technical elements (“database”, “screen” or “dialog”) but generally refer to end users’ goals

See more at: http://guide.agilealliance.org/guide/stories.html#sthash.QAEt5kk1.dpuf

Synthetic Monitoring: Helpful Info for Web Developers, Architects and Admins

Every website requires some kind of real-time monitoring to stay abreast of how web applications behave in production. We all need to see how our websites hold up by simulating a customer clicking through our site pages and launching various transactions or complex requests. We also need to see how our apps respond when things go wrong. That’s what synthetic monitoring (aka active monitoring) helps with.

Microsoft‘s Technet Site has some helpful info. Here’s a snippet, followed by a link to the full article:

In Operations Manager 2007, synthetic transactions are actions, run in real time, that are performed on monitored objects. You can use synthetic transactions to measure the performance of a monitored object and to see how Operations Manager reacts when synthetic stress is placed on your monitoring settings.

For example, for a Web site, you can create a synthetic transaction that performs the actions of a customer connecting to the site and browsing through its pages. For databases, you can create transactions that connect to the database. You can then schedule these actions to occur at regular intervals to see how the database or Web site reacts and to see whether your monitoring settings, such as alerts and notifications, also react as expected.

http://technet.microsoft.com/en-us/library/dd440885.aspx

Wikipedia has the following info (click the link for more info):

Synthetic monitoring (also known as active monitoring) is website monitoring that is done using a web browser emulation or scripted real web browsers. Behavioral scripts (or paths) are created to simulate an action or path that a customer or end-user would take on a site. Those paths are then continuously monitored at specified intervals for performance, such as: functionality, availability, and response time measures.

Synthetic monitoring is valuable because it enables a webmaster to identify problems and determine if his website or web application is slow or experiencing downtime before that problem affects actual end-users or customers. This type of monitoring does not require actual web traffic so it enables companies to test web applications 24×7, or test new applications prior to a live customer-facing launch.

http://en.wikipedia.org/wiki/Synthetic_monitoring
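
To ground the definitions above, here is a minimal sketch of a synthetic check in C#: a scripted request issued on a fixed schedule, with availability and response time recorded. The URL, interval and threshold mentioned in the comments are illustrative assumptions, not values from the quoted sources.

```csharp
// Minimal sketch of a synthetic (active) check: request a page on a schedule
// and record availability and response time. URL, interval and threshold are
// illustrative assumptions.
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;

class SyntheticCheck
{
    static async Task Main()
    {
        var client = new HttpClient { Timeout = TimeSpan.FromSeconds(30) };
        var url = "https://example.com/";        // assumed page under test
        var interval = TimeSpan.FromMinutes(5);  // assumed check interval

        while (true)
        {
            var timer = Stopwatch.StartNew();
            try
            {
                var response = await client.GetAsync(url);
                timer.Stop();
                Console.WriteLine($"{DateTime.UtcNow:o} {url} -> {(int)response.StatusCode} in {timer.ElapsedMilliseconds} ms");
                // A real monitor would raise an alert when the status is not 2xx
                // or the elapsed time crosses a threshold (say, 2000 ms).
            }
            catch (Exception ex)
            {
                timer.Stop();
                Console.WriteLine($"{DateTime.UtcNow:o} {url} -> FAILED after {timer.ElapsedMilliseconds} ms: {ex.Message}");
            }
            await Task.Delay(interval);
        }
    }
}
```

Commercial services do the same thing from many locations and script multi-step paths instead of a single page, but the core loop is this: issue the request, time it, compare against a threshold, alert.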

Website monitoring is the process of testing and verifying that end-users can interact with a website or web application as expected. Website monitoring is often used by businesses to ensure website uptime, performance, and functionality is as expected.

Website monitoring companies provide organizations the ability to consistently monitor a website, or server function, and observe how it responds. The monitoring is often conducted from several locations around the world to a specific website, or server, in order to detect issues related to general Internet latency and network hop issues, and to pinpoint errors. Monitoring companies generally report on these tests in a variety of reports, charts and graphs. When an error is detected, monitoring services send out alerts via email, SMS, phone, SNMP trap or pager that may include diagnostic information, such as a network trace route, a code capture of a web page’s HTML file, a screen shot of a webpage, and even a video of a website failing. These diagnostics allow network administrators and webmasters to correct issues faster.

Monitoring gathers extensive data on website performance, such as load times, server response times, and page element performance, which is often analyzed and used to further optimize website performance.

http://en.wikipedia.org/wiki/Website_monitoring 

Best Practices for Active Response Time Monitoring by Chung Wu

First, unless carefully designed, the tests may not be representative of actual end user activities, reducing the usefulness of the measurements. Therefore, you must be very careful in defining those tests. It would be a good idea to sit down with real users to observe how they use the applications. If the application has not been launched, work with the developers or, if there is one, the UI interaction designer to define the flow. In addition, work with your business sponsors to understand where the application will be used and the distribution of the user population. You would want to place your synthetic test drivers at locations where it is important to measure user experience.

Second, some synthetic transactions are very hard to create and may introduce noise into business data. While it is usually relatively easy to create query-based synthetic transactions, it is much harder to create transactions that create or update data. For example, if synthetic transactions are to test for successful checkouts on an e-commerce website, the tests must be constructed carefully so that the test orders are not mis-categorized as actual orders.

To mitigate these potential problems, you should set up dedicated test account(s) to make it easier to tell whether something running on the application came from real users or the synthetic tests…

Read the rest at http://it.toolbox.com/blogs/app-mgt-blog/best-practices-for-active-response-time-monitoring-23265 
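
One way to act on that advice, sketched here with hypothetical names and account IDs: place synthetic orders under dedicated test accounts and filter those accounts out of business reporting, so test checkouts never pollute revenue numbers.

```csharp
// Hypothetical sketch: keep orders placed by dedicated synthetic-test accounts
// out of business reporting. The Order shape and account names are assumptions.
using System;
using System.Collections.Generic;
using System.Linq;

class Order
{
    public string AccountId { get; set; }
    public decimal Total { get; set; }
}

class SyntheticFilterDemo
{
    // Accounts reserved for synthetic checkout tests (assumed naming convention).
    static readonly HashSet<string> TestAccounts = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
    {
        "synthetic-checkout-01",
        "synthetic-checkout-02"
    };

    static bool IsSynthetic(Order o) => TestAccounts.Contains(o.AccountId);

    static void Main()
    {
        var orders = new List<Order>
        {
            new Order { AccountId = "customer-1001", Total = 42.50m },
            new Order { AccountId = "synthetic-checkout-01", Total = 9.99m } // test order
        };

        // Revenue reports should exclude the synthetic test orders.
        decimal realRevenue = orders.Where(o => !IsSynthetic(o)).Sum(o => o.Total);
        Console.WriteLine($"Real revenue: {realRevenue}"); // 42.50
    }
}
```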


Solution Architecture Best Practice: Using System Availability and Recovery Metrics

Before embarking on an IT project involving the introduction of a new software package or the expansion of an existing one, business leaders need to know the impact of such an initiative on revenues, labor costs, and capital budgets. A solution architecture design document (aka SAD) can help, as long as it is part of an overall business impact or disaster recovery planning process. When drafting a solution architecture design document, metrics such as system availability, recovery time objective (RTO) and recovery point objective (RPO) can help determine the desired runtime characteristics the business wants to achieve. Non-technical business leaders and subject matter experts may not necessarily care about “the nines” (99.999% availability, for instance), but they do care about the lost revenue per hour, minute and second the company incurs when an IT asset (the hardware and software as a whole) is offline, or the labor cost of workers standing idle or having to resort to manual business process steps. Conversely, IT operations team members don’t necessarily care about these costs, but care more about the nines. For many, though, arriving at the right set of nines to assign to an IT project that introduces or expands a system is not exactly straightforward.

I’m offering an approach to help you assign a set of nines to your system availability objective. By “system,” I am referring to the combination of hardware and software. The following table provides industry-standard mappings of “nines” to acceptable down times for different availabilities for a given one-year period.

Availability (“nines”)     Acceptable downtime per year

90%                        40 days
99%                        4 days
99.9%                      9 hours
99.99%                     50 minutes
99.999%                    5 minutes
99.9999%                   30 seconds
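
For reference, these figures come from multiplying the unavailable fraction by one year: downtime per year ≈ (1 − availability) × 8,760 hours. At 99.9%, that is 0.001 × 8,760 ≈ 8.8 hours, which the table rounds to 9 hours; the other rows are rounded in the same spirit.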

How do we know which of the sets of nines is applicable? It depends on the business subject matter experts, who in turn may rely on the operations team to supply data. In the case where neither the business SMEs nor the operations team has such numbers, a good rule of thumb is to first have the business SMEs tally the Line of Business (LOB) revenue per hour, minute or second of any given business process that would be impacted if the system in question went down. Have them do the same for labor cost per hour, minute and second. Don’t worry about downtimes just yet; we only want to know how much money is generated by the business process per hour/min/sec, and then the labor cost (or overhead/operating costs) per hour/min/sec.

Next, identify the cost of maintaining each of the sets of nines (the greater the number of nines, the greater the maintenance cost).

Finally, if the loss of revenue per hour/min/sec noticeably exceeds the cost of maintaining the desired nines, then it might be advisable to absorb the maintenance costs. In the absence of revenues, the project’s maintenance budget can be used instead, but use caution here: the budget may not align with the revenue lost when a system goes down, since the budget is almost always smaller than the company’s revenues for the impacted business process.

Labor costs should be used in a separate metric to identify the amount of money a company pays its employees when the system is unavailable. To recap, we have three system availability decision metrics to use from a business standpoint to help us arrive at a decision on which of the nines to choose:

Availability Decision Per Revenue
  1. Tally the revenue generated per hour, minute or second
  2. Identify the cost of maintaining each of the sets of nines
  3. Availability Decision Ratio (ADR) = Revenues (R) / Cost of Nines (CoN); a ratio greater than 1 indicates that the chosen set of nines is doable (see the sketch below)
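
To make the ratios concrete, here is a small sketch with entirely made-up figures; the revenue, labor cost and cost-of-nines numbers are illustrative assumptions, not benchmarks.

```csharp
// Illustrative sketch of the decision ratios described above, using made-up
// numbers; nothing here comes from a real project.
using System;

class AvailabilityDecision
{
    // Downtime allowed per year by a given availability target.
    static double AllowedDowntimeHoursPerYear(double availability) =>
        (1.0 - availability) * 365.0 * 24.0;

    static void Main()
    {
        double availability = 0.999;        // candidate set of nines (assumed)
        double revenuePerHour = 12000;      // assumed LOB revenue per hour
        double laborCostPerHour = 1500;     // assumed idle labor cost per hour
        double costOfNines = 80000;         // assumed yearly cost of sustaining 99.9%

        double downtimeHours = AllowedDowntimeHoursPerYear(availability);
        double revenueAtRisk = downtimeHours * revenuePerHour;
        double laborAtRisk = downtimeHours * laborCostPerHour;

        Console.WriteLine($"Allowed downtime at availability {availability}: {downtimeHours:F1} hours/year");
        Console.WriteLine($"ADR (revenue)    = {revenueAtRisk / costOfNines:F2}"); // > 1 => the nines are worth paying for
        Console.WriteLine($"ADR (labor cost) = {laborAtRisk / costOfNines:F2}");
    }
}
```

A ratio above 1 suggests the downtime exposure outweighs the cost of maintaining that availability level, which is the “doable” signal described in the list above; the labor-cost and maintenance-budget variants below swap in those figures for revenue.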

 

Availability Decision Per Labor Costs: Similar to Availability Decision Per Revenue above, except you use Labor Costs (LC) instead of Revenues.
Availability Decision Per Maintenance Budget: Similar to Availability Decision Per Revenue above, except you use the Maintenance Budget (MB) instead of Revenues.

Regarding recovery metrics, an article on Wikipedia does a great job in explaining them. I provide a snippet below, and invite you to go to http://en.wikipedia.org/wiki/Recovery_point_objective to read the rest. I have highlighted some sentences to call your attention to important principles.

The recovery time objective (RTO) is the duration of time and a service level within which a business process must be restored after a disaster (or disruption) in order to avoid unacceptable consequences associated with a break in business continuity.[1] It can include the time for trying to fix the problem without a recovery, the recovery itself, testing, and the communication to the users. Decision time for the users’ representative is not included. RTO is spoken of as a complement of RPO (Recovery Point Objective), with the two metrics describing the limits of acceptable or “tolerable” ITSC performance in terms of time lost (RTO) from normal business process functioning, and in terms of data lost or not backed up during that period of time (RPO), respectively. The rule in setting an RTO should be that the RTO is the longest period of time the business can do without the IT service in question.

A “recovery point objective”, or “RPO”, is defined by business continuity planning. It is the maximum tolerable period in which data might be lost from an IT service due to a major incident.[1] The RPO gives systems designers a limit to work to. For instance, if the RPO is set to 4 hours, then in practice offsite mirrored backups must be continuously maintained; a daily offsite backup on tape will not suffice. Care must be taken to avoid two common mistakes around the use and definition of RPO. Firstly, BC staff use business impact analysis to determine the RPO for each service; the RPO is not determined by the existent backup regime. Secondly, when any level of preparation of offsite data is required, the period during which data is lost very often starts near the time of the beginning of the work to prepare the backups that are eventually taken offsite, rather than at the time the backups are taken offsite.

How RTO and RPO values affect computer system design

The RTO and RPO form part of the first specification for any IT Service. The RTO and the RPO have a very significant effect on the design of computer services and for this reason must be considered in concert with all the other major system design criteria.

When assessing the abilities of system designs to meet RPO criteria, for practical reasons, the RPO capability in a proposed design is tied to the times backups are sent offsite. If, for instance, offsiting is on tape and only daily (still quite common), then 49 hours, or more conservatively 73 hours, is the best RPO the proposed system can deliver, so as to cover for tape hardware problems (tape failure is still too frequent; one bad tape can write off a whole daily synchronisation point). Another example: if a service is to be properly set up to restart from any point (data is capable of synchronisation at all times) and offsiting is via synchronous copies to an offsite mirror data storage device, then the RPO capability of the proposed service is, to all intents and purposes, 0 hours, although it is normal to allow an hour for RPO in this circumstance to cover off any unforeseen difficulty.

If the RTO and RPO can be set to be more than 73 hours, then daily backups to tapes (or other transportable media) that are then couriered daily to an offsite location comfortably cover backup needs at a relatively low cost. Recovery can be enacted at a predetermined site. Very often this site will be one belonging to a specialist recovery company, which can more cheaply provide serviced floor space and hardware as required in recovery because it manages the risks to its clients and carefully shares (or “syndicates”) hardware between them, according to these risks.

If the RTO is set to 4 hours and the RPO to 1 hour, then a mirror copy of production data must be continuously maintained at the recovery site, and close-to-dedicated recovery hardware must be available at the recovery site: hardware that is always capable of being pressed into service within 30 minutes or so. These shorter RTO and RPO settings demand a fundamentally different hardware design, which is, for instance, much more expensive than tape backup designs.