1/18/11

Covariance And Contravariance In Generics

C# 4.0 (and .NET 4.0) introduced covariance and contravariance to generic interfaces and delegates. But what is this variance thing?

According to Wikipedia, in multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometrical or physical entities changes when passing from one coordinate system to another.(*)

But what does this have to do with C# or .NET?

In type theory, the type T is greater than (>) the type S if S is a subtype of (derives from) T, which means that there is a quantitative description for types in a type hierarchy.

So, how does covariance and contravariance apply to C# (and .NET) generic types?

In C# (and .NET), variance is a relation between a generic type definition and a particular generic type parameter.

Given two types Base and Derived, such that:

* There is a reference (or identity) conversion between Base and Derived
* Base ≥ Derived

A generic type definition Generic&lt;T&gt; is:

* covariant in T if the ordering of the constructed types follows the ordering of the generic type parameters: Generic&lt;Base&gt; ≥ Generic&lt;Derived&gt;.
* contravariant in T if the ordering of the constructed types is reversed from the ordering of the generic type parameters: Generic&lt;Base&gt; ≤ Generic&lt;Derived&gt;.
* invariant in T if neither of the above applies.

If this definition is applied to arrays, we can see that arrays have always been covariant in relation to the type of the elements because this is valid code:

object[] objectArray = new string[] { "string 1", "string 2" };
objectArray[0] = "string 3";
objectArray[1] = new object();

However, when we try to run this code, the second assignment throws an ArrayTypeMismatchException. The compiler accepts the code because, statically, an object is being assigned to an element of an array of object; at run time, however, there is always a type check to guarantee that the runtime element type of the array is greater than or equal to the type of the instance being assigned to the element. In the above example, because the runtime type of the array is array of string, the first assignment is valid because string ≥ string, and the second is invalid because string < object.

This leads to the conclusion that, although arrays have always been covariant in relation to the type of the elements, they are not safely covariant – code that compiles is not guaranteed to run without errors.

In C#, variance is enforced in the declaration of the type and not determined by the usage of each generic type parameter.

Covariance in relation to a particular generic type parameter is enforced using the out generic modifier:

public interface IEnumerable&lt;out T&gt;
{
    IEnumerator&lt;T&gt; GetEnumerator();
}

public interface IEnumerator&lt;out T&gt;
{
    T Current { get; }
    bool MoveNext();
}

Notice the convenient use of the pre-existing out keyword. Besides the benefit of not having to remember a new hypothetical covariant keyword, out is easier to remember because it defines that the generic type parameter can only appear in output positions: read-only properties and method return values.
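
For example, because IEnumerable&lt;T&gt; is declared covariant in T, a sequence of strings can be used wherever a sequence of objects is expected. A minimal sketch (assumes using System.Collections.Generic):

// Covariance: IEnumerable<out T> lets an IEnumerable<string> be assigned
// to an IEnumerable<object>, because T only ever flows out of the interface.
IEnumerable<string> strings = new List<string> { "string 1", "string 2" };
IEnumerable<object> objects = strings; // compiles and is safe: elements can only be read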

In a similar way, contravariance in relation to a particular generic type parameter is enforced using the in generic modifier:

public interface IComparer&lt;in T&gt;
{
    int Compare(T x, T y);
}

Once again, the use of the pre-existing in keyword makes it easier to remember that the generic type parameter can only be used in input positions: write-only properties and method parameters that are not ref or out.
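
For example, because IComparer&lt;T&gt; is declared contravariant in T, a comparer of objects can be used wherever a comparer of strings is expected. A minimal sketch (assumes using System.Collections.Generic):

// Contravariance: IComparer<in T> lets an IComparer<object> be assigned
// to an IComparer<string>, because T only ever flows into the interface.
IComparer<object> objectComparer = Comparer<object>.Default;
IComparer<string> stringComparer = objectComparer; // compiles: the comparer only consumes its arguments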

A generic type parameter that is not marked covariant (out) or contravariant (in) is invariant.

Because covariance and contravariance apply to the relation between a generic type definition and a particular generic type parameter, a generic type definition can be covariant, contravariant and invariant at the same time, depending on the generic type parameter.

public delegate TResult Func&lt;in T, out TResult&gt;(T arg);

In the above delegate definition, Func&lt;T, TResult&gt; is contravariant in T and covariant in TResult.
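
A minimal sketch of what this allows:

// Func<in T, out TResult> is contravariant in T and covariant in TResult,
// so a delegate that accepts any object and returns a string can be used
// where a delegate that accepts a string and returns an object is expected.
Func<object, string> describe = o => o.ToString();
Func<string, object> describeString = describe; // allowed by variance
object result = describeString("hello");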

All the types in the .NET Framework where variance could be applied to their generic type parameters have been modified to take advantage of this new feature.

In summary, the rules for variance in C# (and .NET) are:

* Variance in relation to generic type parameters is restricted to generic interface and generic delegate type definitions.
* A generic interface or generic delegate type definition can be covariant, contravariant or invariant in relation to different generic type parameters.
* Variance applies only to reference types: an IEnumerable&lt;int&gt; is not an IEnumerable&lt;object&gt;.
* Variance does not apply to delegate combination. That is, given two delegates of types Action&lt;Derived&gt; and Action&lt;Base&gt;, you cannot combine the second delegate with the first although the result would be type safe. Variance allows the second delegate to be assigned to a variable of type Action&lt;Derived&gt;, but delegates can combine only if their types match exactly. An example is sketched below.
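
A minimal sketch of the last rule, assuming a simple Base/Derived pair like the one used above:

class Base { }
class Derived : Base { }

Action<Base> actOnBase = x => { };
Action<Derived> actOnDerived = x => { };

// Variance allows this assignment: an Action<Base> can safely act on any Derived.
Action<Derived> assigned = actOnBase;

// Delegate combination, however, requires the runtime delegate types to match exactly,
// so the next line compiles but throws an ArgumentException at run time,
// even though the combined delegate would be type safe:
Action<Derived> combined = actOnDerived + actOnBase;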


Enable Transactions in WCF

Introduction and Goal

In this article, we will try to understand how we can implement transactions in a WCF service. We will create two WCF services which perform database operations and then unite them in one transaction. We will first go through the six important steps to enable transactions in WCF services. At the end of the article, we will force an error and see how the transaction is rolled back after the error.


Step 1: Create Two WCF Services

The first step is to create two WCF service projects which will participate in one transaction. In both of these WCF services, we will perform database operations, and we will try to understand how a WCF transaction unifies them. We have also created a web application named WCFTransactions which will consume both services in one transaction scope.

Step 2: Attribute Interface Methods with TransactionFlow

In both WCF services, we will create a method called UpdateData which inserts a record into the database. So the first thing is to create the interface with the ServiceContract attribute and the method UpdateData with the OperationContract attribute. In order to enable transactions on the UpdateData method, we need to attribute it with TransactionFlow, specifying that transactions are allowed for this method using the TransactionFlowOption.Allowed enum value.

[ServiceContract]
public interface IService1
{
    [OperationContract]
    [TransactionFlow(TransactionFlowOption.Allowed)]
    void UpdateData();
}

Step 3: Attribute the Implementation with TransactionScopeRequired

The third step is to attribute the implementation of the WCF services with TransactionScopeRequired set to true. Below is the code snippet with a simple database insert function, UpdateData, decorated with the OperationBehavior attribute and TransactionScopeRequired = true.

[OperationBehavior(TransactionScopeRequired = true)]
public void UpdateData()
{
    // strConnection is the service's connection string field.
    SqlConnection objConnection = new SqlConnection(strConnection);
    objConnection.Open();
    SqlCommand objCommand = new SqlCommand(
        "insert into Customer (CustomerName, CustomerCode) values ('sss', 'sss')",
        objConnection);
    objCommand.ExecuteNonQuery();
    objConnection.Close();
}

Step 4: Enable Transaction Flow using WCF Service Config File

We also need to enable transactions for wsHttpBinding by setting the transactionFlow attribute to true.


We then need to attach this transaction-enabled binding to the endpoint through which our WCF service is exposed.
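
The exact config entries depend on your project, but as a rough sketch, the same two settings can also be applied programmatically when self-hosting; the service type, contract and address below are placeholders:

// Enable transaction flow on wsHttpBinding (equivalent to transactionFlow="true" in config).
WSHttpBinding binding = new WSHttpBinding();
binding.TransactionFlow = true;

// Attach the transaction-enabled binding to the endpoint exposing the service
// (Service1, IService1 and the base address are placeholders).
ServiceHost host = new ServiceHost(typeof(Service1), new Uri("http://localhost:8080/Service1"));
host.AddServiceEndpoint(typeof(IService1), binding, "");
host.Open();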


Step 5: Call the 2 Services in One Transaction

Now that we are done enabling transactions on the server side, it’s time to call the above two services in one transaction. We need to use the TransactionScope object to group the two WCF service calls in one transaction. To commit the work, we call the Complete method of the TransactionScope object; if Complete has not been called when the scope is disposed, the transaction is rolled back.

using (TransactionScope ts = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    try
    {
        // Call your web service operations here.
        ts.Complete();   // marks the transaction as complete so it can commit
    }
    catch (Exception)
    {
        // Complete was not called, so disposing the scope rolls the transaction back.
        ts.Dispose();
    }
}

Below is the complete code snippet in which we have grouped both WCF service calls in one transaction scope:

using (TransactionScope ts = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    try
    {
        ServiceReference1.Service1Client obj = new ServiceReference1.Service1Client();
        obj.UpdateData();

        ServiceReference2.Service1Client obj1 = new ServiceReference2.Service1Client();
        obj1.UpdateData();

        // Both service calls succeeded: commit the distributed transaction.
        ts.Complete();
    }
    catch (Exception)
    {
        // An error in either call aborts the transaction; both inserts are rolled back.
        ts.Dispose();
    }
}

Step 6: Test If Your Transaction Works

It’s time to test whether the transactions really work. We are calling two services, both of which do an insert. After the first WCF service call, we force an exception. In other words, the data inserted by the first WCF service should be rolled back. If you check the database records, you will see that no records were inserted by either WCF service. One way to force the error is sketched below.
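
For example, a quick way to force the rollback is to throw an exception after the first call, so Complete is never reached (a sketch based on the client code above):

using (TransactionScope ts = new TransactionScope(TransactionScopeOption.RequiresNew))
{
    try
    {
        ServiceReference1.Service1Client obj = new ServiceReference1.Service1Client();
        obj.UpdateData();

        // Force an error before the second call; ts.Complete() is never reached,
        // so the scope rolls back the first insert when it is disposed.
        throw new Exception("Forced error to test the transaction rollback");
    }
    catch (Exception)
    {
        // Swallow the error for the demo; the transaction aborts because Complete was not called.
    }
}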

1/17/11

ASP.NET 4.0 Features - MetaDescription and MetaKeywords



Introduction
ASP.NET 4.0 comes with two new properties on the Page class: MetaDescription and MetaKeywords. They were introduced to make web applications more search engine friendly; search engines look at the meta tags of a web page to get details about the page's content. In ASP.NET 4.0, we can set these two properties on the Page class either in the code-behind or in the page directive.

If you want to see the definition of these two properties, right-click on the Page class and click Go To Definition. This will show you the metadata of the Page class, as shown in the picture below.


Fig: Page Class with MetaDescription and MetaKeywords Properties
How to use?


If we set MetaDescription and MetaKeywords either from the code-behind or using the page directive in the .aspx page, both will be rendered as meta tags in the HTML output.
Let's have a look at how we can set these two properties from the code-behind:

protected void Page_Load(object sender, EventArgs e)
{
    Page.MetaKeywords = "ASP.NET 4.0, .NET 4.0";
    Page.MetaDescription = "ASP.NET 4.0 Information";
}



Fig: Set MetaKeywords and MetaDescription Properties using Codebehind

Now, if we run the application and check the rendered HTML content, we will find the following code:

<head>
    <meta content="ASP.NET 4.0 Information" name="description" />
    <meta content="ASP.NET 4.0, .NET 4.0" name="keywords" />
</head>

Similarly, we can also set the MetaKeywords and MetaDescription properties in the page directive itself.


But the HTML output will be the same in both cases.

Fig: HTML Rendered content of MetaKeywords and MetaDescription



Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties

ASP.NET 4 adds two properties to the Page class, MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in your page, as shown in the following example:



<head id="Head1" runat="server">
    <title>Untitled Page</title>
    <meta name="keywords" content="These, are, my, keywords" />
    <meta name="description" content="This is a description" />
</head>


These two properties work the same way that the page’s Title property does. They follow these rules:

  1. If there are no meta tags in the head element that match the property names (that is, name="keywords" for Page.MetaKeywords and name="description" for Page.MetaDescription, meaning that these properties have not been set), the meta tags will be added to the page when it is rendered.
  2. If there are already meta tags with these names, these properties act as get and set methods for the contents of the existing tags.

You can set these properties at run time, which lets you get the content from a database or other source, and which lets you set the tags dynamically to describe what a particular page is for.
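
For instance, a minimal sketch of setting them from a data record in Page_Load; GetProductById and the Product fields are hypothetical placeholders for whatever data source you use:

protected void Page_Load(object sender, EventArgs e)
{
    // Hypothetical data access: load the record this page describes.
    Product product = GetProductById(Request.QueryString["id"]);

    // Describe the page dynamically from the data.
    Page.MetaKeywords = product.Keywords;
    Page.MetaDescription = product.ShortDescription;
}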

You can also set the Keywords and Description properties in the @ Page directive at the top of the Web Forms page markup, as in the following example:

<%@ Page Language="C#" AutoEventWireup="true"
    CodeFile="Default.aspx.cs"
    Inherits="_Default"
    Keywords="These, are, my, keywords"
    Description="This is a description" %>

This will override the meta tag contents (if any) already declared in the page.

Conclusion
The main objective of the MetaKeywords and MetaDescription properties is to make your web application SEO friendly. In ASP.NET 2.0, the HtmlMeta control was used to do the same thing, but ASP.NET 4.0 makes it very simple, as we can set the tags directly through the Page class.
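
For comparison, this is roughly how the same two tags could be added with HtmlMeta in earlier versions; a sketch that assumes the page has a <head runat="server">:

// ASP.NET 2.0/3.5 style: build HtmlMeta controls and add them to the page header.
HtmlMeta keywords = new HtmlMeta();
keywords.Name = "keywords";
keywords.Content = "ASP.NET 4.0, .NET 4.0";
Page.Header.Controls.Add(keywords);

HtmlMeta description = new HtmlMeta();
description.Name = "description";
description.Content = "ASP.NET 4.0 Information";
Page.Header.Controls.Add(description);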

Microsoft, StayinFront launch cloud CRM products

Several New Zealand companies are among the thousands worldwide that beta-tested Microsoft’s new Dynamics CRM Online, which was officially launched today.

Action Traffic Control, based in Auckland, and Havelock North-based Gemco Group were both part of the local beta programme.
In a statement from Microsoft announcing the general availability of Dynamics CRM Online, Karl Johnson, general manager of Gemco Trades, a division of the construction company, says: “Because it is cloud-based, we can more easily support sales staff who don’t always work from the office.

“The most important benefit is the ability to capture and automate the sales process, giving us great efficiencies, with only one place to track our business processes, where previously we had three or four.”

Action Traffic Control director Andrew Seavill says: “Given the progressive shift to the cloud, we have elected to use the online version of Microsoft Dynamics CRM now that it is launched.”

More than 11,000 customers and 2000 partners took part in the beta programme.

Microsoft has taken a swipe at hosted CRM pioneer Salesforce.com and rival Oracle, offering organisations that migrate from those two vendors to Microsoft Dynamics CRM Online up to $309 worth of Microsoft services per switched user.

Ray Wang, principal analyst at Constellation Research, told the IDG news wire that, "The value in CRM 2011 is really about the fact this product was designed with the salesperson in mind, not the manager."

Features such as deep integration with Outlook and mobile capabilities are crucial to salespeople's day-to-day jobs, Wang says.

CRM 2011 does not match up feature-for-feature with Salesforce.com, he says, but "it is comparable from a salesperson's point of view, in terms of what they want to do."

Microsoft’s announcement wasn’t the only one today about cloud-based CRM software; StayinFront also announced StayinFront Edge CG, a version of its hosted CRM offering specially tailored to the FMCG (fast moving consumer goods) sector.

In its statement announcing the launch of StayinFront Edge CG, the company claims the package “provides the clarity field forces need to more effectively manage their territories and for sales managers to measure, respond and lead their teams for improved productivity and overall results.”

How IIS Processes an ASP.NET Request and Response

Introduction
When a request comes from the client to the server, a lot of operations are performed before the response is sent back to the client. This article is all about how IIS processes the request. I am not going to describe the Page Life Cycle and its events here; this article is about what happens at the IIS level. Before we get into the actual details, let's start from the beginning so that everyone can understand the details easily. Please provide your valuable feedback and suggestions to improve this article.

What is a Web Server?

When we run our ASP.NET web application from the Visual Studio IDE, the Visual Studio integrated ASP.NET engine is responsible for executing all kinds of ASP.NET requests and responses. The process name is "WebDev.WebServer.exe", which actually takes care of all requests and responses of a web application running from the Visual Studio IDE.

The name “web server” comes into the picture when we want to host the application in a centralized location and access it from many locations. A web server is responsible for handling all the requests that come from clients, processing them and providing the responses.


What is IIS?
IIS (Internet Information Services) is one of the most powerful web servers from Microsoft, used to host ASP.NET web applications. IIS has its own ASP.NET process engine to handle ASP.NET requests. So, when a request comes from a client to the server, IIS takes that request, processes it and sends the response back to the client.

Request Processing

Hope it is clear by now what a web server and IIS are and what they are used for. Now let's have a look at how they do things internally. Before we move ahead, you have to know about two main concepts:

1. Worker Process
2. Application Pool


Worker Process: The worker process (w3wp.exe) runs the ASP.NET application in IIS. This process is responsible for managing all the requests and responses that come from client systems. All ASP.NET functionality runs under the scope of the worker process. When a request comes to the server from a client, the worker process is responsible for processing the request and generating the response. In a single sentence, we can say the worker process is the heart of an ASP.NET web application running on IIS.

Application Pool: The application pool is the container of the worker process. Application pools are used to separate sets of IIS worker processes that share the same configuration. Application pools enable better security, reliability and availability for any web application. The worker process serves as the process boundary that separates each application pool, so that when one worker process or application has an issue or recycles, other applications or worker processes are not affected. This makes sure that a particular web application does not impact other web applications, as they are configured into different application pools.



An application pool with multiple worker processes is called a “Web Garden”.

Now I have covered all the basic concepts: web server, application pool and worker process. Let's have a look at how IIS processes a new request coming from a client.

If we look into the IIS 6.0 architecture, we can divide it into two layers:

1. Kernel Mode
2. User Mode

Kernel mode was introduced with IIS 6.0 and contains HTTP.SYS. So whenever a request comes from a client to the server, it hits HTTP.SYS first.



HTTP.SYS is responsible for passing the request to a particular application pool. Now, here is one question: how does HTTP.SYS know where to send the request? This is not a random pick. Whenever we create a new application pool, an ID for the application pool is generated and registered with HTTP.SYS. So whenever HTTP.SYS receives a request for any web application, it checks for the application pool and, based on the application pool, sends the request on.


So this is the first step of IIS request processing.

So far, the client has requested some information and the request has come to the kernel level of IIS, that is, to HTTP.SYS, and HTTP.SYS has identified the application pool the request should be sent to. Now let's see how the request moves from HTTP.SYS to the application pool.

In the user level of IIS, we have the Web Admin Service (WAS), which takes the request from HTTP.SYS and passes it to the respective application pool.



When the application pool receives the request, it simply passes it to the worker process (w3wp.exe). The worker process looks up the URL of the request in order to load the correct ISAPI extension. ISAPI extensions are the IIS way of handling requests for different resources. When ASP.NET is installed, it installs its own ISAPI extension (aspnet_isapi.dll) and adds the mapping into IIS.

Note: Sometimes, if we install IIS after installing ASP.NET, we need to register the extension with IIS using the aspnet_regiis command (for example, aspnet_regiis -i).


When the worker process loads aspnet_isapi.dll, it starts the HttpRuntime, which is the entry point of the application. HttpRuntime is a class whose ProcessRequest method is called to start the processing.



When this method is called, a new instance of HttpContext is created, which is accessible through the HttpContext.Current property. This object remains alive during the lifetime of the request. Using HttpContext.Current we can access other objects like Request, Response, Session, etc.


After that, HttpRuntime loads an HttpApplication object with the help of the HttpApplicationFactory class. Each request has to pass through the corresponding HttpModules to reach the HttpHandler; this list of modules is configured by the HttpApplication.

Now comes the concept called the “HTTP pipeline”. It is called a pipeline because it contains a set of HttpModules (configured at both the web.config and machine.config level) that intercept the request on its way to the HttpHandler. HttpModules are classes that have access to the incoming request. We can also create our own HttpModule if we need to handle anything during the incoming request and outgoing response, as sketched below.
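
A minimal sketch of a custom module; the class name and what it records are illustrative only, and the module still has to be registered in web.config (under httpModules in IIS 6 classic mode):

using System;
using System.Web;

public class RequestTimingModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        // Hook into pipeline events exposed by HttpApplication.
        context.BeginRequest += (sender, e) =>
        {
            HttpContext current = ((HttpApplication)sender).Context;
            current.Items["RequestStart"] = DateTime.UtcNow;
        };
    }

    public void Dispose() { }
}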


HTTP handlers are the endpoints in the HTTP pipeline. All requests that pass through the HttpModules eventually reach an HttpHandler, which generates the output for the requested resource. So when we request an .aspx web page, the corresponding HTML output is returned. A minimal custom handler is sketched below.
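
A minimal sketch of a custom handler; the class name and response text are illustrative, and the handler has to be mapped to a path or extension in web.config:

using System.Web;

public class HelloHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        // The handler is the endpoint: it writes the response for the requested resource.
        context.Response.ContentType = "text/plain";
        context.Response.Write("Hello from a custom HTTP handler");
    }
}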

All requests now pass from the HttpModules to the respective HttpHandler, and then the ASP.NET page life cycle starts. This ends the IIS request processing and starts the ASP.NET page life cycle.


Conclusion

When a client requests some information from a web server, the request first reaches HTTP.SYS in IIS. HTTP.SYS then sends the request to the respective application pool. The application pool forwards the request to the worker process, which loads the ISAPI extension; this creates an HttpRuntime object that processes the request via HttpModules and HttpHandlers. After that, the ASP.NET page life cycle events start.

This was just an overview of IIS request processing to let beginners know how a request gets processed in the background. If you want to learn more details, please check the links in the Reference and Further Study section.
