There are many advantages to using MVC:
- MVC helps us develop a loosely coupled architecture.
- Complex applications can be easily managed.
- Separation of concerns is achieved by dividing the application into Model, View, and Controller.
- Extensive support for Test-Driven Development (TDD). Unit testing is easy, and the additional layer of tests provides another line of defense against unexpected behavior.
- ASP.NET MVC is lightweight because it does not use view state.
- SEO (Search Engine Optimization): clean URLs, with no file extensions used to locate the physical files.
- No postback events.
- Expressive views, including the new Razor view engine, with HTML5 support.
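The loose coupling and testability claims above can be sketched in a few lines. This is a minimal, language-agnostic illustration of the MVC separation (shown in Python rather than ASP.NET; the product model and names are invented for illustration): none of the three parts depends on the internals of the others, which is what makes each piece easy to swap or unit-test.

```python
class ProductModel:
    """Model: owns the data and business rules; knows nothing about rendering."""
    def __init__(self):
        self._products = {1: ("Widget", 9.99)}

    def get(self, product_id):
        name, price = self._products[product_id]
        return {"id": product_id, "name": name, "price": price}


class ProductView:
    """View: turns model data into output; knows nothing about storage."""
    def render(self, product):
        return f"{product['name']}: ${product['price']:.2f}"


class ProductController:
    """Controller: handles the request and wires model to view."""
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def show(self, product_id):
        return self.view.render(self.model.get(product_id))


controller = ProductController(ProductModel(), ProductView())
print(controller.show(1))  # Widget: $9.99
```

Because the controller only talks to the model and view through their public methods, a unit test can substitute a fake model or view, which is exactly what makes the TDD story easy.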
In one of my projects I needed to use UDP multicast messaging to distribute notifications. I found numerous articles about multicasting, but these were mostly oriented toward explaining the technology, with just simple code snippets demonstrating basic API calls.
My aim was to build a component that would reliably encapsulate the network interactions and expose the multicast functionality to upper layers through a publish/subscribe implementation of the observer pattern.
Requirements for message broker component:
- distribute string messages to anyone interested (observer pattern) using UDP multicasting
- single component encapsulating both send & receive logic
- no machine specific configuration
- basic error detection
- thread-safe interaction
- detection of duplicate messages
- ability to handle fragmented data
- ability to recover from data received in unexpected format (e.g. network errors, interrupted transmission, etc.)
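The requirements above can be sketched as follows. This is a minimal, hedged sketch, not the original component: the group address, port, and JSON wire format are illustrative assumptions, and fragmentation/reassembly is omitted for brevity. Each message carries a unique id so duplicates can be dropped, malformed datagrams are discarded (basic error detection and recovery from unexpected data), a lock guards shared state (thread safety), and subscribers are notified through a simple observer-pattern callback list.

```python
import json
import socket
import struct
import threading
import uuid

MCAST_GROUP = "239.0.0.1"   # assumption: any administratively scoped group
MCAST_PORT = 5007           # assumption: arbitrary illustrative port

class MulticastBroker:
    """Single component encapsulating both send and receive logic."""

    def __init__(self, group=MCAST_GROUP, port=MCAST_PORT):
        self.group, self.port = group, port
        self._observers = []
        self._seen_ids = set()          # duplicate detection
        self._lock = threading.Lock()   # thread-safe interaction

    def subscribe(self, callback):
        with self._lock:
            self._observers.append(callback)

    def publish(self, text):
        payload = json.dumps({"id": str(uuid.uuid4()), "body": text}).encode()
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
        sock.sendto(payload, (self.group, self.port))
        sock.close()

    def _deliver(self, raw):
        """Parse a datagram, drop duplicates and garbage, notify observers."""
        try:
            msg = json.loads(raw.decode())         # basic error detection:
            msg_id, body = msg["id"], msg["body"]  # malformed data is ignored
        except (ValueError, KeyError, UnicodeDecodeError):
            return False
        with self._lock:
            if msg_id in self._seen_ids:
                return False
            self._seen_ids.add(msg_id)
            observers = list(self._observers)
        for cb in observers:
            cb(body)
        return True

    def start_listening(self):
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        sock.bind(("", self.port))
        mreq = struct.pack("4sl", socket.inet_aton(self.group), socket.INADDR_ANY)
        sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

        def loop():
            while True:
                data, _addr = sock.recvfrom(65535)
                self._deliver(data)

        threading.Thread(target=loop, daemon=True).start()
```

Note that the parse/deduplicate path (`_deliver`) can be exercised without any network at all, which is also how the component's error handling can be unit-tested.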
Soon after Edward Snowden released a cache of top-secret documents detailing the far-ranging data collection activities of the U.S. National Security Agency (NSA) in the summer of 2013, the Federal Bureau of Investigation (FBI) approached the secure email provider Lavabit with a demand to turn over the encryption keys to its communications. Their target was, allegedly, none other than Snowden himself, who had been posting his @lavabit.com address and inviting human rights activists around the world to contact him. Ladar Levison, the owner of Lavabit, refused the Bureau’s request. Levison was concerned that turning over his private encryption key would allow the government to decrypt not only Snowden’s communications but also those of all of Lavabit’s nearly 400,000 customers, many of whom are activists and had chosen Lavabit for its security. Facing a contempt of court charge, Levison eventually turned over the encryption key. However, he simultaneously shut down his service, thus preventing the authorities from gaining access to his customers’ communications.
Few companies have the ability to act like Lavabit and shut down in the face of such a demand. Lavabit was a very small organization with no shareholders and few employees to answer to for their actions. As organizations become more decentralized and their employees more mobile, they naturally need to share more information, raising concerns about how to adequately protect that information from NSA-like government actors. Most organizations have no plan in place for reacting to a government request for data. How should companies prepare to deal with this issue? What steps can be taken to protect data? We’ll explore these questions in this report, place the NSA threat in perspective, and suggest steps most companies can take to preserve data privacy.
The NSA in Perspective
The NSA is not unique in its use of the Internet for intelligence gathering. Most other major industrialized nations contribute to or have some surveillance footprint on the Internet. Some of these nations even engage in economic espionage and sabotage efforts, a serious concern for businesses worried about intellectual property and their global competitiveness. However, because of the NSA’s aggressiveness and scope, organizations should consider to what degree they need to protect against such an agency and other only slightly less capable state actors. The NSA can be thwarted, as its frustration with breaking the Tor network demonstrates. Though, as is often the case, hardening one security weakness inevitably leads the adversary to exploit another.
Risk assessments by corporations and individuals are critical in this context. To perform a risk assessment, one has to understand the capabilities of those who are trying to infiltrate their information systems, and place these risks in context. Many organizations face security and data privacy threats from many sources: malicious hackers, insiders, or weak security systems and processes. In reality, the most dangerous threat for most organizations is unintended mistakes and errors by employees, such as losing a laptop or sending a confidential file to the wrong people. An analysis of recently revealed NSA strategy and techniques can help provide perspective as well as give insights into the methodologies of other state actors. Many of the NSA’s techniques involve accessing metadata, so we’ll explore that distinction first. Next, we’ll identify the major NSA programs revealed as of today, and some suggested countermeasures.
“When people don’t see stuff on Google, they think no one can find it. That’s not true.”
That’s according to John Matherly, creator of Shodan, the scariest search engine on the Internet.
Unlike Google (GOOG, Fortune 500), which crawls the Web looking for websites, Shodan navigates the Internet’s back channels. It’s a kind of “dark” Google, looking for the servers, webcams, printers, routers and all the other stuff that is connected to and makes up the Internet. (Shodan’s site was slow to load Monday following the publication of this story.)
Shodan runs 24/7 and collects information on about 500 million connected devices and services each month.
It’s stunning what can be found with a simple search on Shodan. Countless traffic lights, security cameras, home automation devices and heating systems are connected to the Internet and easy to spot.
Shodan searchers have found control systems for a water park, a gas station, a hotel wine cooler and a crematorium. Cybersecurity researchers have even located command and control systems for nuclear power plants and a particle-accelerating cyclotron by using Shodan.
What’s really noteworthy about Shodan’s ability to find all of this — and what makes Shodan so scary — is that very few of those devices have any kind of security built into them.
“It’s a massive security failure,” said HD Moore, chief security officer of Rapid 7, who operates a private version of a Shodan-like database for his own research purposes.
Judging from the Congressional hearing involving some of the major component vendors behind Healthcare.gov, Congress and the White House need some assistance in understanding how a major website development project works. Yelling at component vendors is useless, though it makes for good drama on the local news at 6 o’clock. What is needed is a team experienced in integrating various web components into a cohesive whole, end to end. Apparently that was lacking on this site. Obama adviser Jeffrey Zients is being tapped to “fix” the site, but his background is in management consulting and business policy, not IT.
Here’s what we know so far about plans to fix the site:
(Reuters) – President Barack Obama promised on Saturday that his troubled healthcare website was just weeks away from a cure as he struggled to convince Americans he is on top of what has become a self-inflicted wound to his signature first-term achievement.
His administration unveiled a plan on Friday to make Obamacare insurance marketplaces on healthcare.gov – a website riddled with error messages, long delays and bugs – work better by the end of November.
It was the end to an embarrassing week in which Obama discovered he had overshot on an Oct. 1 promise of a website that would make shopping for health insurance as easy as buying “a plane ticket on Kayak or a TV on Amazon.”
“As you may have heard, the site isn’t working the way it’s supposed to yet,” Obama said in his weekly Saturday address – an understatement after days of reports of people being shut out of the system.
“In the coming weeks, we are going to get it working as smoothly as it’s supposed to,” he added.
Obama had stood firm against Republican attempts to defund or delay the healthcare law, known popularly as Obamacare – efforts that led to a 16-day government shutdown this month.
He and his top officials had warned publicly before October 1 that there could be “glitches,” but the White House has been scrambling to control the damage from a rollout that was far worse than expected.
The depth of the design flaws has raised questions about why the Obama administration was so insistent on starting the enrollments on October 1 when the system was clearly not ready – and laid bare the president’s mistake in raising expectations about how good the website was going to be.
“Either they made assumptions that were too optimistic and were caught off guard, or they knew that the difficulties would be greater than the public understood, but chose not to say so,” said Bill Galston, a Brookings Institution expert who was a domestic policy adviser to Democratic President Bill Clinton.
“It may be some of both.”
CRISIS MANAGEMENT 101
Obama adviser Jeffrey Zients, appointed on Tuesday to figure out how to manage the complicated fixes for the website, was an unannounced participant on a conference call with health reporters on Friday afternoon.
Zients gave a deadline, although he cautioned there was a lot of work to do. “By the end of November, healthcare.gov will work smoothly for the vast majority of users,” he said.
Borrowing from the lexicon of homebuilders, Zients said he had hired a “general contractor” to manage the many contractors on the project, and developed a “punch list” of dozens of problems to address.
There’s been a rash of commentary from some on the left who’ve decided that the real problem with Obamacare isn’t the crippling technological issues that have made it impossible for almost anyone to enroll in the federally run health-insurance exchanges but the media’s coverage of those problems.
It’s not the crime, it’s the lack of a cover-up.
The complaint takes different forms. Salon’s Joan Walsh frames it instrumentally. The coverage, she writes, “only aids [the] unhinged right.” In this telling, the problem with reporting on Obamacare’s problems is that it helps Obamacare’s enemies.
Zerlina Maxwell frames it as a question of insured journalists being unable to see past their own rarified position. “The privilege of analyzing the process from the perspective of someone who is already insured and not in need of coverage allows the core impact of the new program on the health and security of millions of Americans to be missed,” she writes.
There are dimensions to these arguments that really center on the job of the journalist, and there, I think Walsh and Maxwell and I simply disagree. But behind this disagreement is a question about how deep the law’s problems really go. As Walsh and Maxwell (and President Obama) say, Obamacare is more than just a Web site. More balanced coverage, they believe, would be emphasizing all its other good qualities.
“I was actually happy to see the president come out defiantly in his Rose Garden talk, describing the ACA as ‘not just a website’ and listing the many benefits it’s already providing,” wrote Walsh.
“Obamacare is more than a website,” repeats Maxwell.
Obamacare’s problems go far beyond its Web site.
A failure in the press coverage of the health-care exchange’s rocky launch has been in allowing people to believe that the problem is a glitchy Web site. This is a failure of language: “The Web site” has become a confusing stand-in phrase for any problem relating to the law’s underlying infrastructure. No one has a very good word to describe everything that infrastructure encompasses.
In brick-and-mortar terms, it’s the road that leads to the store, the store itself, the payment systems between the store and the government and the manufacturers, the computer system the manufacturers use to fill the orders, the trucks that carry the product back to the store, the loading dock where the customers pick up the products, and so on.
It’s the problems in that infrastructure — indeed, much more than “just a Web site” — that pose such deep problems for the law.
As Sarah Kliff and I wrote in our overview of the health-care launch’s technical issues, the challenges right now can be grouped into three broad categories: problems with the consumer experience on the HealthCare.gov Web site, problems with the eligibility system, and problems with the hand-off to insurers.
The problems with the Web site are the difficulties consumers are facing when they try to log on and shop for insurance coverage. These problems — error messages, site timeouts, difficulty logging in to an account — make it hard for an individual to buy coverage through the marketplace. They are the reason why some people have made upward of 20 attempts at purchasing a plan. These are the problems that are being fixed fastest and that are the least serious.
Why Virtualize SharePoint?
Increasingly, organizations want to virtualize modern multi-tiered applications like SharePoint to better meet their business and collaboration needs. According to a report from the Enterprise Strategy Group (ESG) Lab, among organizations already using virtualization in some way, approximately 53 percent are moving toward implementing virtualization technology for more complex and advanced systems. For these organizations, it is necessary to consolidate and optimize computing resources for better flexibility, scalability, and manageability of mission-critical collaboration workloads like SharePoint 2013. This requirement is essential to better scale the key components of such demanding workloads: web servers, application servers, and database servers.
In a traditional deployment of SharePoint, dedicated physical servers are usually used to deploy individual roles/components, including the front-end web server, application server, and database server (Figure 1). Organizations use separate physical servers for these roles to ensure high availability of services, better scalability, and improved performance. However, using separate physical servers for deploying separate roles has certain limitations, such as:
- Underutilized resources: CPU, memory, and storage are dedicated to a specific workload and remain idle while waiting for instructions, thereby consuming unnecessary power and space.
- Higher costs: Acquisition, maintenance, and management are more expensive.
- Reduced efficiency: A longer time is required to recover from outages. Plus, a higher Recovery Time Objective (RTO) may affect the service-level agreement (SLA).
- To SharePoint or Not to SharePoint? (business2community.com)
- SharePoint: Can it navigate the cloud, mobile curves ahead? (zdnet.com)
- SharePoint Does Not Function as Stand Alone ECM (arnoldit.com)
- “Stop Asking Your SharePoint Users What They Want” #dbpreads #dbpfavs (dbpxhaust.wordpress.com)
Challenging CSS Best Practices, by Thierry Koblentz
When it comes to CSS, I believe that the sacred principle of “separation of concerns” (SoC) has led us to accept bloat, obsolescence, redundancy, poor caching and more. Now, I’m convinced that the only way to improve how we author style sheets is by moving away from this principle.
For those of you who have never heard of the SoC principle in the context of Web design, it relates to something commonly known as the “separation of the three layers”: structure, presentation, and behavior.
But when it comes to the presentational layer, “best practice” goes way beyond the separation of resources. CSS authors thrive on styling documents entirely through style sheets, an approach that has been sanctified by Dave Shea’s excellent project CSS Zen Garden. CSS Zen Garden is what most — if not all — developers consider to be the standard for how to author style sheets.
Advantages of the “As a user, I want” user story template
In my user stories book and in all my training and conference sessions on user stories I advocate writing user stories in the form of:
“As a <type of user>, I want <some goal> so that <some reason>.” While I consider the so-that clause optional, I really like this template. At a conference, someone asked me why. Because I get that question fairly often, I want to give three reasons why here:
Reason 1: Something significant, and I’m tempted to say magical, happens when requirements are put in the first person. Obviously, by saying “As a such-and-such, I want …” you can see how a person’s mind goes instantly to imagining he or she is a such-and-such. As for the magic: Paul McCartney was once asked in an interview why the Beatles’ songs were so amazingly popular. One of his responses was that their songs were among the first to use a lot of pronouns. Think about it: She Loves You, I Wanna Hold Your Hand, I Saw Her Standing There, I Am The Walrus, Baby You Can Drive My Car, etc. His point was that these helped people more closely identify with the songs. I tried briefly to find a source for this interview tonight, and the closest I found was this. The information in that reference fits my recollection of hearing McCartney say this during a radio interview in 1973 or ’74 that I assume was recorded when the Beatles were together.
Reason 2: Having a structure to the stories actually helps the product owner prioritize. If the product backlog is a jumble of things like:
- Fix exception handling
- Let users make reservations
- Users want to see photos
- Show room size options
… and so on, the product owner has to work harder to understand what the feature is, who benefits from it, and what the value of it is.
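The contrast can be made concrete in a few lines of code: the same backlog items, once the role/goal/reason structure is supplied, become self-explanatory. (The roles and reasons below are invented here purely for illustration; this is just a sketch of the template at work.)

```python
TEMPLATE = "As a {role}, I want {goal} so that {reason}."

# Jumbled backlog items, restated as (role, goal, reason) triples.
backlog = [
    ("traveler", "to make a reservation", "I can guarantee my stay"),
    ("traveler", "to see photos of each hotel", "I can judge it before booking"),
    ("traveler", "to see room size options", "I can pick a room that fits my group"),
]

stories = [TEMPLATE.format(role=r, goal=g, reason=why) for r, g, why in backlog]
for story in stories:
    print(story)
```

Rendered this way, the product owner can read who benefits and why straight off the card, which is exactly what makes prioritization easier.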
Your use cases are only as effective as the value someone derives from them. What seems obvious to you may not be obvious to your developers or customers. The measure of an effective written use case is that it is easily understood, and that ultimately the developers can build the right product the first time.
A great way to learn to write effective use cases is to walk through a sample use case and watch how it can be extended to something complex. By absorbing the meaning of use case diagrams, basic flows, and alternate flows, you will be able to apply use cases to your own projects. In some of the tips below, we’ll use eBay features as example use cases.
Before we get into this blog post on writing effective use cases for software development and product management, let’s quickly define a use case. If you had to find a single word it’s synonymous with, I suppose you could say “scenario”, but it’s really much more than that. It’s a particular type of scenario that is made up of activities. When you’re talking to your friends about an obscure new start-up, a great question to ask is “What’s the primary use case for the customer?” It puts someone on the spot to tell a story from the customer’s perspective, from customer acquisition to purchase and on to engagement. Anyway, now let’s get on to writing up some use cases!
Tip 1. When creating use cases, be productive without perfection
Tip 2. Define your use case actors
Tip 3. Define your “Sunny Day” Use Cases (Primary Use Cases)
Tip 4. Identify reuse opportunity for use cases
Tip 5. Create a use case index
Tip 6. Identify the key components of your use case
Tip 7. Name and briefly describe your use case
Tip 8. Create the use case basic flow
Tip 9. Create the use case alternate flows
Tip 10. Produce your use case document
Tip 11. Sample Use Case Model Diagram
Tip 12. Do you need User Stories?
Tip 13. Agile Development with Use Cases
Are use cases agile? They’re sometimes regarded as the lumbering, heavyweight cousin of user stories, the requirements technique favored by agile teams. Use cases ARE agile. No… really. Being agile is an attitude and an approach, while use cases are simply a structure for organizing requirements. There’s nothing about use cases that makes them ill-suited for agile teams. Rather, it’s the way that use cases are typically written that is NOT agile (that is, written completely up-front). So, “Are use cases agile?” is the wrong question. If you want to stay agile but need something more than a user story, the question to ask is, “How can I write use cases in an agile manner?” Here are four steps to get you started.
User stories are one of the primary development artifacts for Scrum and Extreme Programming (XP) project teams. A user story is a very high-level definition of a requirement, containing just enough information so that the developers can produce a reasonable estimate of the effort to implement it. This article covers the following topics:
1. Introduction to User Stories
A good way to think about a user story is that it is a reminder to have a conversation with your customer (in XP, project stakeholders are called customers), which is another way to say it’s a reminder to do some just-in-time analysis. In short, user stories are very slim and high-level requirements artifacts.
Use cases are a popular way to express software requirements. They are popular because they are practical. A use case bridges the gap between user needs and system functionality by directly stating the user intention and system response for each step in a particular interaction.
Use cases are simple enough that almost anyone can read them. Even customers or users can read use cases without any special training. However, writing use cases takes some practice. It is not difficult to start writing use cases, but really mastering them takes training, practice, and insight.
No single use case specifies the entire requirements of the system. Each use case merely explains one particular interaction. An organized suite of use cases and other specification techniques are needed to fully specify the software requirements.
The figure below illustrates where a use case document fits into the overall software requirements specification (SRS) and how it relates to other documents. This white paper focuses on the yellow “Use Cases” box. Ideally, your use case document is just one part of an overall set of project documents. But, don’t worry if you just want to jump straight into writing use cases: this white paper focuses on them.
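To make the user-intention/system-response structure concrete, here is one invented example of a single interaction, written in the conventional basic-flow/alternate-flow form (the actor, steps, and numbering are illustrative, not taken from the white paper):

```
Use Case: Place Bid
Actor: Registered buyer
Basic Flow:
  1. Buyer opens an item's listing page.
  2. System displays the current high bid and the minimum next bid.
  3. Buyer enters a bid amount and confirms.
  4. System validates the amount and records the bid.
  5. System confirms the bid and notifies the previous high bidder.
Alternate Flow 4a: Bid below minimum
  4a1. System rejects the bid and re-displays the minimum next bid.
```

Note how each numbered step alternates between what the user intends and how the system responds, which is exactly the bridge between user needs and system functionality described above.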
User stories serve the same purpose as use cases, but they are not the same. They are used to create time estimates for the release planning meeting. They are also used instead of a large requirements document. User stories are written by the customers as things that the system needs to do for them. They are similar to usage scenarios, except that they are not limited to describing a user interface. They are in the format of about three sentences of text written by the customer, in the customer’s terminology, without techno-syntax.
User stories also drive the creation of the acceptance tests. One or more automated acceptance tests must be created to verify the user story has been correctly implemented.
One of the biggest misunderstandings with user stories is how they differ from traditional requirements specifications. The biggest difference is in the level of detail. User stories should only provide enough detail to make a reasonably low risk estimate of how long the story will take to implement. When the time comes to implement the story developers will go to the customer and receive a detailed description of the requirements face to face.
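An automated acceptance test for a story might look like the following sketch. The story, the `BookingSystem` API, and its method names are invented here purely for illustration; the point is only that the test verifies the story's outcome from the customer's point of view, not the implementation details.

```python
# Acceptance test for the (invented) story:
# "As a traveler, I want to make a reservation so that I can guarantee my stay."

class BookingSystem:
    """Stand-in for the system under test; illustrative only."""
    def __init__(self):
        self._reservations = []

    def reserve(self, guest, room):
        confirmation = f"RES-{len(self._reservations) + 1}"
        self._reservations.append((confirmation, guest, room))
        return confirmation

    def has_reservation(self, confirmation):
        return any(c == confirmation for c, _, _ in self._reservations)


def test_traveler_can_reserve_a_room():
    system = BookingSystem()
    confirmation = system.reserve(guest="Ada", room="double")
    assert system.has_reservation(confirmation)

test_traveler_can_reserve_a_room()
```

When the test passes, the story can be considered correctly implemented; when the detailed requirements arrive from the face-to-face conversation, they become additional acceptance tests.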
This is a common question, and not one that has a generally agreed on answer. Many people in the XP community consider stories to be a simplified form of use cases, but although I used to hold this view I see things differently now.
Use cases and stories are similar in that they are both ways to organize requirements. They are different in that they organize for different purposes. Use cases organize requirements to form a narrative of how users relate to and use a system. Hence they focus on user goals and how interacting with a system satisfies the goals. XP stories (and similar things, often called features) break requirements into chunks for planning purposes. Stories are explicitly broken down until they can be estimated as part of XP’s release planning process. Because these uses of requirements are different, heuristics for good use cases and stories will differ.
Origins: User stories originate with Extreme Programming; their first written description, in 1998, claims only that customers define project scope “with user stories, which are like use cases”. Rather than being offered as a distinct practice, they are described as one of the “game pieces” used in the “planning game”. However, most of the thrust of further writing centers on all the ways user stories are unlike use cases, in trying to answer in a more practical manner “how requirements are handled” in Extreme Programming (and more generally Agile) projects. This drives the emergence, over the years, of a more sophisticated account of user stories:
- the Role-feature-benefit template, 2001
- the 3 C’s model, 2001
- the INVEST checklist, 2003
- the Given-When-Then template, 2006
Signs of use: the team uses visual planning tools (release plan, story map, task board), and the index cards or stickies on these displays reflect product features; the labels on cards that stand for user stories contain few or no references to technical elements (“database”, “screen” or “dialog”) but generally refer to end users’ goals. (See more at http://guide.agilealliance.org/guide/stories.html)
- We vs they: agile testing (tisquirrel.wordpress.com)
- How to split user stories – London Agile Discussion Group (agileworldblog.wordpress.com)
- An Agile Life- Part 1 (brendasmull.wordpress.com)
- Lessons learned how to do Scrum in a fixed price project (trifork.com)
- A Lack of a Shared Understanding (thecriticalpath.info)
- Epics, Story Mapping and Kanbans Make a Unified Agile View (keeping-agile.com)
Every website requires some kind of real-time monitoring to stay abreast of changes to the production behavior of web applications at runtime. We all need to see how our websites hold up by simulating a customer clicking through our site pages and launching various transactions or complex requests. We also need to see how our apps respond when things go wrong. That’s what synthetic monitoring (aka active monitoring) helps with.
Microsoft‘s Technet Site has some helpful info. Here’s a snippet followed by a link for more detailed info:
In Operations Manager 2007, synthetic transactions are actions, run in real time, that are performed on monitored objects. You can use synthetic transactions to measure the performance of a monitored object and to see how Operations Manager reacts when synthetic stress is placed on your monitoring settings.
For example, for a Web site, you can create a synthetic transaction that performs the actions of a customer connecting to the site and browsing through its pages. For databases, you can create transactions that connect to the database. You can then schedule these actions to occur at regular intervals to see how the database or Web site reacts and to see whether your monitoring settings, such as alerts and notifications, also react as expected.
Wikipedia has the following info (click the link for more info):
Synthetic monitoring (also known as active monitoring) is website monitoring that is done using a web browser emulation or scripted real web browsers. Behavioral scripts (or paths) are created to simulate an action or path that a customer or end-user would take on a site. Those paths are then continuously monitored at specified intervals for performance, such as: functionality, availability, and response time measures.
Synthetic monitoring is valuable because it enables a webmaster to identify problems and determine if his website or web application is slow or experiencing downtime before that problem affects actual end-users or customers. This type of monitoring does not require actual web traffic so it enables companies to test web applications 24×7, or test new applications prior to a live customer-facing launch.
Website monitoring is the process of testing and verifying that end-users can interact with a website or web application as expected. Website monitoring is often used by businesses to ensure website uptime, performance, and functionality is as expected.
Website monitoring companies provide organizations the ability to consistently monitor a website, or server function, and observe how it responds. The monitoring is often conducted from several locations around the world to a specific website or server in order to detect issues related to general Internet latency and network hop issues, and to pinpoint errors. Monitoring companies generally report on these tests in a variety of reports, charts, and graphs. When an error is detected, monitoring services send out alerts via email, SMS, phone, SNMP trap, or pager, which may include diagnostic information such as a network trace route, a code capture of a web page’s HTML file, a screen shot of a webpage, and even a video of a website failing. These diagnostics allow network administrators and webmasters to correct issues faster.
Monitoring gathers extensive data on website performance, such as load times, server response times, and page element performance, which is often analyzed and used to further optimize website performance.
Best Practices for Active Response Time Monitoring by Chung Wu
First, unless carefully designed, the tests may not be representative of actual end-user activities, reducing the usefulness of the measurements. Therefore, you must be very careful in defining those tests. It is a good idea to sit down with real users to observe how they use the applications. If the application has not been launched yet, work with the developers or, if there is one, the UI interaction designer to define the flow. In addition, work with your business sponsors to understand where the application will be used and the distribution of the user population. You will want to place your synthetic test drivers at locations where it is important to measure the user experience.
Second, some synthetic transactions are very hard to create and may introduce noise into business data. While it is usually relatively easy to create query-based synthetic transactions, it is much harder to create transactions that create or update data. For example, if synthetic transactions are to test for successful checkouts on an e-commerce website, the tests must be constructed carefully so that the test orders are not mis-categorized as actual orders.
To mitigate these potential problems, you should set up dedicated test account(s) to make it easier to tell whether something running on the application came from real users or the synthetic tests…
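The core loop of a synthetic check can be sketched in a few lines. This is an illustrative sketch only: the URL, threshold, fetch function, and test-account name below are invented, and a real monitor would drive an actual HTTP client or a scripted browser from multiple locations. It shows the two ideas from the advice above: time each scripted transaction against a threshold, and tag every synthetic transaction with a dedicated test account so it is never mistaken for real business data.

```python
import time

TEST_ACCOUNT = "synthetic-monitor-01"   # dedicated account, per the advice above

def run_check(fetch, url, timeout_s=5.0):
    """Execute one synthetic transaction and return a result record."""
    start = time.monotonic()
    try:
        status = fetch(url, account=TEST_ACCOUNT)
        elapsed = time.monotonic() - start
        ok = (status == 200) and (elapsed <= timeout_s)
    except Exception:
        elapsed = time.monotonic() - start
        status, ok = None, False
    return {"url": url, "status": status, "seconds": round(elapsed, 3), "ok": ok}

# Stubbed fetcher so the sketch is runnable without generating real traffic.
def fake_fetch(url, account):
    return 200 if url.endswith("/checkout") else 500

result = run_check(fake_fetch, "https://shop.example.com/checkout")
print(result["ok"])  # True
```

Scheduling `run_check` at regular intervals and alerting when `ok` is false is the essence of active monitoring; filtering orders placed by `TEST_ACCOUNT` out of business reports addresses the mis-categorization concern.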
- Synthetic Monitoring: Is It Really Worth It? (java.sys-con.com)
- Compuware Extends Synthetic Support to Commercial Browsers with Chrome (sys-con.com)
- Feature update: Actionable alerts on links, site changes and more (raventools.com)
- 5 Questions to Ask To Choose the Right Website Monitoring Solution (circleid.com)
- Did you know? Keeping tabs on link monitoring usage (raventools.com)
- Website monitoring using Pingdom at Cal State University Monterey Bay (royal.pingdom.com)