Sunday, November 30, 2008
Code to select the specified GridView row in WPF

In a WPF GridView, when the user clicks an Edit or Delete button inside a row, the button's handler runs, but the row itself is not selected by default. To select the row the user clicked, implement an ItemContainerStyle for the ListView.
Below is the code to select the row when the user clicks the Delete button in a WPF GridView.
Step 1: Create an ItemContainerStyle for the ListView.

<ListView.ItemContainerStyle>
    <Style TargetType="ListViewItem">
        <Setter Property="HorizontalContentAlignment" Value="Stretch" />
        <EventSetter Event="GotFocus" Handler="Item_GotFocus" />
    </Style>
</ListView.ItemContainerStyle>
Step 2: Handle the GotFocus event and select the focused item's data context.

private void Item_GotFocus(object sender, RoutedEventArgs e)
{
    ListViewItem item = (ListViewItem)sender;
    MyListView.SelectedItem = item.DataContext;
}
Friday, June 6, 2008
Sockets in C#
Writing programs that access the network used to be a relatively difficult task. With .NET, this is no longer the case. The .NET Framework class library includes two namespaces that are full of classes that help you with networking: System.Net and System.Net.Sockets. In this article, I will discuss one of the most important classes in System.Net.Sockets: the Socket class. The System.Net.Sockets.Socket class can be used as a socket in a server application as well as in a client application. It also allows both synchronous and asynchronous operations. This article shows how to use the Socket class in a client application.
A socket is an endpoint of a connection. It is a descriptor that lets an application read from and write to the network. With sockets, client and server applications can communicate by sending and receiving streams of bytes over connections. To send a message to a socket used in a software application, you need to know not only the IP address of the machine that hosts the application, but also the identifier of the application's process on that machine. A software process on a machine is identified through a unique number called a port. Therefore, to send a message from a socket in one application to a socket at the other end of a connection, you need to know the machine's IP address and the application's port number.

In the .NET Framework, a socket is represented by the System.Net.Sockets.Socket class. This class is an implementation of the Sockets API, also known as the Berkeley sockets interface. The Sockets API was developed in the early 80s at the University of California at Berkeley for the 4.1c release of Berkeley Software Distribution (BSD) Unix, a distribution that contained an early version of the Internet protocols.
Constructing a Socket Object
The Socket class provides the following constructor for you to use to create a Socket object.
public Socket (AddressFamily addressFamily,
SocketType socketType,
ProtocolType protocolType );
Instantiating a Socket object requires you to pass three arguments to its constructor: AddressFamily, SocketType, and ProtocolType, all of which are enumerations that are part of the System.Net.Sockets namespace. An AddressFamily member defines the addressing scheme that a Socket object uses to resolve an address. For socket applications that will work on the Internet, you use InterNetwork.
SocketType determines the type of socket. The most popular socket type is Stream. This type of socket supports two-way connection-based byte streams. ProtocolType specifies the type of the low-level protocol that the socket uses to communicate. A stream socket must be used with the Transmission Control Protocol (TCP) protocol type and the InterNetwork address family. For example, the following code instantiates a Socket object.
Socket mySocket = new Socket(AddressFamily.InterNetwork,
SocketType.Stream,
ProtocolType.Tcp);
The arguments you pass to the constructor are available in the following read-only properties: AddressFamily, SocketType, and ProtocolType.
Connecting to a Remote Server
Once you have a socket instance, you can connect to a remote server using the Connect method of the Socket class. This method attempts to connect to a remote server synchronously: it waits until the connection attempt succeeds or fails before releasing control to the next line in the program. Even though this method is easy to use, there is some preliminary work you need to do before you can use it to connect to a remote server. Consider the signature of the Connect method below.
public void Connect( EndPoint remoteEP);
It accepts an argument: an instance of System.Net.EndPoint. The abstract EndPoint represents a network address. It has a subclass: System.Net.IPEndPoint. When using the Connect method, you typically pass an IPEndPoint object containing the IP address and port number of the remote server to which you want to connect. The question will then be, "How do you construct an IPEndPoint object for your socket to connect to a remote server?" Now, look at the IPEndPoint class definition. It has two constructors.
public IPEndPoint(long address, int port);
public IPEndPoint(IPAddress address, int port);
Of these two constructors, the second is normally used because IP addresses are often represented in dotted-quad notation (such as 129.36.129.44) and, as we will soon see, in .NET socket programming it is easier to get an IP address in this notation than as a long. Otherwise, the two constructors are very similar: the remote IP address in the first constructor is a long, whereas in the second it is a System.Net.IPAddress object.

Whichever constructor you choose, you need the IP address and the port number of the remote server. The port number is usually not a problem, because popular services have well-known default ports. For instance, HTTP uses port 80, Telnet uses port 23, and FTP uses port 21. The IP address is normally not directly available, because it is easier for us to remember domain names such as microsoft.com or oreillynet.com than the IP addresses mapped to them. We therefore need to resolve a domain name to obtain the IP address of the remote server we'd like to connect to.

To obtain an IPAddress instance that you can use to connect to a remote server, you need two more classes: System.Net.Dns and System.Net.IPHostEntry. The Dns class is a final class that retrieves information about a specific host from the Internet Domain Name System (DNS), hence the name Dns. It is mainly used for its Resolve method, which obtains the set of IP addresses mapped to a domain name. The Resolve method returns an IPHostEntry object that contains an array of IP addresses, which you access through the IPHostEntry class's AddressList property. For example, the following code displays all IP addresses mapped to a DNS name.
using System.Net.Sockets;
using System.Net;

try
{
    String server = "microsoft.com"; // or any other domain name
    IPHostEntry hostEntry = Dns.Resolve(server);
    IPAddress[] ipAddresses = hostEntry.AddressList;
    Console.WriteLine(server + " is mapped to:");
    foreach (IPAddress ipAddress in ipAddresses)
    {
        Console.WriteLine(ipAddress.ToString());
    }
}
catch (Exception e)
{
    Console.WriteLine(e.ToString());
}
When run, the code displays all IP addresses mapped to the DNS name microsoft.com. If a DNS name is mapped to more than one IP address, you can use any of them, although people normally use the first one (a DNS name is often mapped to only one IP address anyway). You obtain the first IP address mapped to a DNS name with the following code:

hostEntry.AddressList[0]
Once you get an IPAddress object, you can construct an IPEndPoint object and connect to a remote server. If the connection is successful, the Socket instance sets its Connected property to true. A programmer often checks the value of this property before performing other operations on the socket, because a server application can close a connection after a period of time elapses. To close a connection explicitly when you are done with a socket, you use the Close method. Normally, you should call the Shutdown method prior to invoking Close to flush all pending data.
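Putting the pieces together, here is a minimal sketch (the host name and HTTP port are illustrative) that resolves a name, connects, checks the Connected property, and then shuts the socket down cleanly:

IPHostEntry hostEntry = Dns.Resolve("microsoft.com");
IPEndPoint endPoint = new IPEndPoint(hostEntry.AddressList[0], 80);

Socket mySocket = new Socket(AddressFamily.InterNetwork,
                             SocketType.Stream,
                             ProtocolType.Tcp);
mySocket.Connect(endPoint);

if (mySocket.Connected)
{
    Console.WriteLine("Connected to " + endPoint);
}

// Flush pending data, then close the connection.
mySocket.Shutdown(SocketShutdown.Both);
mySocket.Close();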
Sending and Receiving Streams
After your socket is connected to a remote machine, you can send and receive data. To send data in synchronous mode, you use the Send method. The data you send must be placed in an array of bytes. There are four overloads of the Send method, all of which return an int indicating the number of bytes sent. The first overload is the simplest and easiest to use of the four. It has the following signature:
public int Send(byte[] buffer);
where buffer is an array of bytes containing the data you want to send. Using this overload, all data in the buffer will be sent. The second overload allows you to send all data in the buffer and specify the bitwise combination of the System.Net.Sockets.SocketFlags enumeration members. It has the following signature:
public int Send(byte[] buffer, SocketFlags socketFlags);
The third overload allows you to send all or part of the data in the buffer and specify the bitwise combination of the SocketFlags enumeration.
public int Send( byte[] buffer, int size, SocketFlags socketFlags );
In this overload, size is the number of bytes to be sent.
The last overload is similar to the third, but it also allows you to specify an offset position in the buffer at which to begin sending data. Its signature is as follows:
public int Send( byte[] buffer, int offset, int size, SocketFlags socketFlags );
In this overload, offset is the offset position. To receive data synchronously, you use the Receive method. This method also has four overloads, which are similar to the Send method overloads. The signatures of the overloads are as follows:
public int Receive(byte[] buffer);
public int Receive(byte[] buffer, SocketFlags socketFlags);
public int Receive( byte[] buffer, int size, SocketFlags socketFlags );
public int Receive( byte[] buffer, int offset, int size, SocketFlags socketFlags );
When using the Receive method, you can check the Socket class's Available property, which returns the number of bytes of data that have been received and are available to be read.
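As a quick sketch of these methods in action (assuming mySocket has been connected to a web server on port 80, as in the earlier example, and not yet closed), the following sends a minimal HTTP request and reads the first chunk of the reply:

// Requires using System.Text for Encoding
byte[] request = Encoding.ASCII.GetBytes("GET / HTTP/1.0\r\n\r\n");
int bytesSent = mySocket.Send(request);

byte[] buffer = new byte[1024];
int bytesReceived = mySocket.Receive(buffer);   // blocks until data arrives
Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, bytesReceived));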
Summary
This article introduced the System.Net.Sockets.Socket class and some supporting classes from the System.Net namespace. Socket is used to access the network easily from your .NET application. You have also learned how to instantiate a Socket object and how to send and receive streams of bytes.
Thursday, June 5, 2008
Asynchronous Calls in C#
Customers hate to wait and are turned off by poor performance. .NET addresses this issue with asynchronous communication support. Get started with C#.
Several classes in the .NET Framework Base Class Library (BCL) provide both synchronous and asynchronous method signatures. Because a synchronous method call can create a delay in program flow, an asynchronous approach may be better in certain situations, such as when a program needs to send out requests to multiple Web services. This article will show you how to use asynchronous method calls for .NET development with C#.
Synchronous vs. asynchronous
A synchronous method call waits for the method to complete before continuing with program flow, whereas an asynchronous method call returns immediately so that the program can perform other operations while the called method completes its work.
The synchronous version of the Resolve method has the following signature:

public static IPHostEntry Resolve( string hostName );
The Resolve method accepts a single argument, hostName, which can be either a DNS name (www.mydomain.com) or an IP address in dotted-quad notation (such as 10.10.14.2). It returns an IPHostEntry object, whose members may be used to retrieve the host name, IP addresses, and other information related to this particular host.
You could use the Resolve method synchronously in your program with code such as the following:

IPHostEntry host = Dns.Resolve("10.10.14.2");
Console.WriteLine(host.HostName);
The problem here is that as soon as Dns.Resolve is called, your program will block execution until the Resolve method completes its work and returns an IPHostEntry object, which may take several seconds. Since DNS resolution involves network access, this method call can potentially be affected by factors such as network latency and other issues that can delay operation and is thus well suited to the asynchronous approach.
Asynchronous design
Asynchronous communication between components requires a design pattern so that the program can determine when, or if, a called method has completed executing. This design pattern allows the program to obtain any results returned by the method. This capability is especially useful when using Web services because your program can send requests to multiple Web services at once and wait until each returns results before continuing with program execution.
In this scenario, the Web services used may be provided on networks with different degrees of latency and by various companies across the Internet. Consequently, the length of time required to complete a request is affected by several factors that are beyond your program’s control.
The asynchronous design pattern used in the .NET Framework is characterised by the use of BeginXXXX and EndXXXX methods, where XXXX is the name of the synchronous version of the method. Let's take a closer look at the asynchronous approach.
The BeginResolve method and IAsyncResult
To avoid blocking your program's execution (as in the first example), you can opt to use the asynchronous version of the Resolve method: BeginResolve. This method has the following signature:

public static IAsyncResult BeginResolve( string hostName, AsyncCallback requestCallback, object stateObject );
The BeginResolve method accepts the same parameter as the synchronous version, hostName, but adds two more parameters that are part of the asynchronous design pattern: the requestCallback and stateObject parameters. We will discuss these shortly, but first I’d like to call your attention to the return value, an IAsyncResult interface.
When you call a method asynchronously, the call immediately returns to your program, before the method being called has a chance to finish (or sometimes even start). By definition, there is no IPHostEntry object for the BeginResolve method to return, so instead it returns a waitable object, the IAsyncResult interface, which you can use later to retrieve the results of the call. The IAsyncResult interface is defined as follows:

public interface IAsyncResult
{
    object AsyncState { get; }
    WaitHandle AsyncWaitHandle { get; }
    bool CompletedSynchronously { get; }
    bool IsCompleted { get; }
}
The first property in this interface, AsyncState, will return the same object that was passed into the BeginResolve method's stateObject parameter. There are no restrictions on this argument value. It can be anything the program wants to use to track this particular method call. It is not used or manipulated by the called method in any way.
The second property in the IAsyncResult interface, the AsyncWaitHandle, can be used by your program to wait on the method to complete by passing it to one of the WaitHandle class’ methods, WaitAll, WaitOne, or WaitAny. This is useful if you are sending several asynchronous method calls to operate in parallel and would like to make sure they all complete before continuing your own program’s work, especially if your program’s continued operation is dependent upon the results of one or more of these calls.
The third property in the interface, CompletedSynchronously, returns a Boolean value indicating whether the method was able to be completed by the time the BeginResolve method returned.
The fourth property, IsCompleted, returns a Boolean value indicating whether the work being done by the method has completed. This is useful if you use a polling mechanism to determine whether the asynchronous call has completed.
Actually, there are four ways to make and complete .NET asynchronous calls. You can use polling (the IsCompleted property) or a callback delegate (discussed below), you can use the AsyncWaitHandle to wait on the call to complete, or you can call the EndXXXX method yourself with the IAsyncResult returned from the BeginXXXX method and have it wait on the call to complete. The difference between these last two techniques is that if you perform the wait yourself (using the AsyncWaitHandle), you can "wake up" every now and then based on a timeout and decide at that point whether you really want to wait any longer.
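As a sketch of the wait-it-yourself technique, the following starts BeginResolve without a callback and waits on the AsyncWaitHandle with a one-second timeout, "waking up" on each timeout before deciding to keep waiting:

IAsyncResult ar = Dns.BeginResolve("10.10.14.2", null, null);

// Wake up every second until the resolution completes.
while (!ar.AsyncWaitHandle.WaitOne(1000, false))
{
    Console.WriteLine("Still waiting...");   // do other work here
}

IPHostEntry host = Dns.EndResolve(ar);
Console.WriteLine(host.HostName);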
EndResolve method and AsyncCallback delegate
One of the ways to complete an asynchronous method call is to supply an AsyncCallback delegate to the BeginXXXX method. This delegate has the following signature:

public delegate void AsyncCallback( IAsyncResult ar );
By providing a method in your program with this signature and creating an AsyncCallback delegate pointing to that method, you can have the asynchronous method call notify your program when it has finished processing. The following code snippet shows how you would call the Dns class BeginResolve method using a callback:

AsyncCallback callback = new AsyncCallback(GetResult);
IAsyncResult ar = Dns.BeginResolve("10.10.14.2", callback, null);
// Do some other work while the above finishes.
When the BeginResolve method finishes executing and has an IPHostEntry object to return, it calls the delegate passed to it, allowing us to obtain the result. The following code shows how the GetResult method in our program could be implemented:

private void GetResult(IAsyncResult ar)
{
    IPHostEntry host = Dns.EndResolve(ar);
}
The Dns class calls the GetResult method when finished resolving the requested IP address, passing the same IAsyncResult interface that was returned from the initial call to BeginResolve. This allows us to call EndResolve using that argument to obtain the result of the call.
Summing up
Although you can use either synchronous or asynchronous method calls in .NET, there are times when an asynchronous approach is more efficient. This article has shown how to use asynchronous method calls with C#. In a future article, I will show you how to use the asynchronous design pattern to create your own asynchronous methods for long-running operations.
Add a Serial No column to a GridView
protected int i = 1;

protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        // Serial number = rows on previous pages + position on the current page
        e.Row.Cells[0].Text = Convert.ToString(GridView1.PageIndex * GridView1.PageSize + i);
        i++;
    }
}
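If you would rather not keep a counter field, a variation (an untested sketch) is to derive the serial number from the row's DataItemIndex, which is the absolute index of the row in the underlying data source and so already accounts for paging:

protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
{
    if (e.Row.RowType == DataControlRowType.DataRow)
    {
        // DataItemIndex is zero-based and spans all pages
        e.Row.Cells[0].Text = (e.Row.DataItemIndex + 1).ToString();
    }
}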
Resize Image
You may use this method to resize image and set the maximum side size in pixels when uploading.
You will need a reference to System.Drawing and System.Drawing.Imaging namespaces public void ResizeFromStream(string ImageSavePath, int MaxSideSize, Stream Buffer)
{
int intNewWidth; int intNewHeight; Image imgInput = Image.FromStream(Buffer); //Determine image format
ImageFormat fmtImageFormat = imgInput.RawFormat;
//get image original width and height
int intOldWidth = imgInput.Width; int intOldHeight = imgInput.Height;
//determine if landscape or portrait
int intMaxSide;
if (intOldWidth >= intOldHeight)
{
intMaxSide = intOldWidth;
}
else
{
intMaxSide = intOldHeight;
}
if (intMaxSide > MaxSideSize)
{ //set new width and height double dblCoef = MaxSideSize / (double)intMaxSide;
intNewWidth = Convert.ToInt32(dblCoef * intOldWidth);
intNewHeight = Convert.ToInt32(dblCoef * intOldHeight);
}
else
{
intNewWidth = intOldWidth; intNewHeight = intOldHeight;
}
//create new bitmap Bitmap
bmpResized = new Bitmap(imgInput, intNewWidth, intNewHeight);
//save bitmap to disk
bmpResized.Save(ImageSavePath, fmtImageFormat);
//release used resources
imgInput.Dispose(); bmpResized.Dispose(); Buffer.Close();
}
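A typical call from an upload page might look like this (the control name, path, and size are illustrative):

// Resize the uploaded image so its longest side is 200 pixels
ResizeFromStream(Server.MapPath("~/images/resized.jpg"),
                 200,
                 FileUpload1.PostedFile.InputStream);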
SQL Server and Visual Studio Editions
There are six SQL Server 2005 editions:
- Enterprise Edition
- Standard Edition
- Workgroup Edition
- Express Edition
- Developer Edition
- Compact Edition
Visual Studio Editions
- Visual Studio 2008 Express Editions (VWD, VB, C#, C++)
- Visual Studio 2008 Standard
- Visual Studio 2008 Professional
- Visual Studio Team System 2008 Architecture Edition
- Visual Studio Team System 2008 Database Edition
- Visual Studio Team System 2008 Development Edition
- Visual Studio Team System 2008 Test Edition
- Visual Studio Team System 2008 Team Suite
The different mechanisms to persist data between user requests in ASP.NET are:
- Application
- Cookies
- Form Post / Hidden Form Field
- QueryString
- Session
- Cache
- Context
- View State
- Control State
- Web.config and Machine.config Files
- Profile
Wednesday, June 4, 2008
Code Snippets in ASP.NET 2.0
Q: What is a "Code Snippet"?
A: A code snippet is a reusable block of code. Unlike a static copy-and-paste approach, code snippets allow for dynamic instances of code by providing placeholders inside mini templates of code. Code snippets are a new feature of Visual Studio 2005.
Q: How do I use a Code Snippet?
A: In the Visual Studio 2005 code editor, you can invoke snippets with the Ctrl+K, Ctrl+X keyboard shortcut. IntelliSense will then display a context menu with the available code snippets to choose from. However, most code snippets are available without the keyboard shortcut: if you know the shortcut name of the code snippet, simply type the name and press Tab to invoke it.
Now we will see how to use snippets. I will write a small snippet for disconnected data access with a DataSet.
1) Open Notepad.
2) Type the code in it and save it as sample.snippet.
3) Go to Visual Studio 2005, click Tools > Code Snippets Manager (or press Ctrl+K, Ctrl+B), and add the snippet to the library.
4) While you are coding, right-click in the editor. You will see an "Insert Snippet" option; clicking it lets us pick the snippet we want. The snippet above, for example, is displayed with the name "Disconnected Data architecture". Clicking it inserts the code:

SqlConnection cn = new SqlConnection();
string selquery = "";   // placeholder - the snippet leaves the query to be filled in
SqlDataAdapter da = new SqlDataAdapter(selquery, cn);
DataSet ds = new DataSet();
da.Fill(ds);

So it is not necessary to write the whole set of statements or connection code every time; snippets reduce the burden of writing commonly used code.
New Line in Tool tip
Hi friends, this is a very simple tip to show a tooltip with a new line. To make a multi-line tooltip, just write the text in the title attribute and press the Enter key where you want the break; the tooltip will then show a new line. Here is the sample data for a table demonstrating this:

Name    Id     Place
Mr.X    001    INDIA
Mr.Y    002    INDIA
ASP.NET Validation Controls
ASP.NET validation controls provide an easy-to-use but powerful mechanism of ensuring that data is entered correctly on the forms. There are 6 validation controls included in the ASP.NET 2.0 and ASP.NET 3.5 versions. If you are not familiar with Validation Controls in ASP.NET, here’s a recommended read for you.
Let us see some tips and tricks that can be applied to the validation controls. These tips and tricks are for beginner and intermediate users who have been using validation controls.
Tip 1: Always check Page.IsValid before submitting data. Apart from the other benefits, doing so prevents data being submitted from old browsers that do not support JavaScript.
Tip 2: To prevent validation from occurring on the click of the Cancel button, set its 'CausesValidation' property to false.
Tip 3: Use the 'InitialValue' property of the RequiredFieldValidator to validate controls, like a combobox, that have a default value.
For example: if your combobox has a default item called "--Select--" and you want the user to select a value other than the default before submitting the form, set the 'InitialValue' property of the RequiredFieldValidator to "--Select--".
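For example, a submit handler that follows Tip 1 might look like this (the control name is illustrative):

protected void btnSubmit_Click(object sender, EventArgs e)
{
    // Server-side check: also protects against old browsers
    // and clients with JavaScript disabled
    if (!Page.IsValid)
        return;

    // Safe to process the submitted data here
}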
Check out more validation control tips and tricks over here
http://www.dotnetcurry.com/ShowArticle.aspx?ID=121
Consuming RSS Feed in ASP.NET
RSS stands for Really Simple Syndication. It is a format used to publish regularly updated web content such as blogs, articles, news sites, and so on. If you happen to visit a website regularly and often have to check whether new content is available, then an RSS feed is what you are looking for. If that website provides an RSS feed, all you need to do is subscribe to it. You can then either use an RSS reader, which checks for updated content frequently and displays it to you, or consume the RSS feed in your own application. Handy, isn't it!
For example, I assume some of you like visiting dotnetcurry.com to check for new articles. Instead, you can subscribe to our RSS feed over here:
http://feeds.feedburner.com/netCurryRecentArticles
Then, by using an RSS reader, you can view the latest updated content of this site. Without further ado, let us see how to consume the RSS feed of any website in your application. For demonstration purposes, we will consume the RSS feed of dotnetcurry.com.
Note: Once you visit the url given above, right click on the page and View Source. You will find an XML file, since RSS is nothing but an XML file.
Step 1: Create a new website (Open VS 2005 > File > New Website) called ‘RSSFeed’.
Step 2: Drag and drop a XMLDataSource from the toolbox on to the Default.aspx. If the ‘Configure Data Source’ option is not visible on the XMLDataSource, click the smart tag (arrow just above the XMLDataSource) to make it visible and click on the link.
Step 3: The ‘Configure Data Source’ dialog appears. Add the following entries:
Data File - http://feeds.feedburner.com/netCurryRecentArticles
Transform File – Specify an .XSL file if you have one. This is used to beautify or change the layout of the XML file.
XPath Expression - rss/channel/item
Once the entries have been added, click OK to close the dialog.
Step 4: Now drag and drop a ‘DataList’ control from the toolbox on to Default.aspx. Click the smart tag to display the ‘Choose Data Source’. Choose ‘XMLDataSource1’ option from the dropdown.
Step 5: Now go to the 'Source' view of Default.aspx and create an ItemTemplate for the DataList that pulls each item's fields out of the feed with XPath data-binding expressions:

<ItemTemplate>
    <%#XPath("title")%>
    <%#XPath("pubDate")%>
    <%#XPath("author")%>
    <%#XPath("description")%>
</ItemTemplate>
After adding some UI look and feel to the DataList, the entire source code will look similar to the following:
C#

<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Default.aspx.cs" Inherits="_Default" %>

VB.NET

<%@ Page Language="VB" AutoEventWireup="false" CodeFile="Default.aspx.vb" Inherits="_Default" %>

(In both cases the rest of the page is the standard XHTML template, with the XMLDataSource and the DataList placed inside the form.)
That’s it. Run the application. You will see that the top 5 recently published articles are shown in your webpage. You can verify that by going to the home page of www.dotnetcurry.com and viewing the top 5 articles over there. You can use this code to consume RSS feed of other websites as well. All you would need to do is specify the correct XPath expression based on the XML structure of the RSS.
Create Excel in ASP.NET
To create the templates required:
Create a template of the spreadsheet you are trying to create, with at least one data row.
Save the spreadsheet as an XML spreadsheet
Edit the XML spreadsheet with a text editor and replace all the dynamic sections except the data row with tags such as [[Tag1]], [[Tag2]], etc.
Find the data row and copy that to another file, replace it in the original file with a tag such as [[RowTag]].
Save the file.
Edit the row template and put tags in it in place of the data elements, such as [[Col1]], [[Col2]], etc.

To create the spreadsheet:
Use your favorite programming language to read the spreadsheet template into a string
Read the row template into a string
Replace all of the tags in the spreadsheet template except for the [[RowTag]] with the required dynamic values.
Create a variable to hold the row data
For each dynamically generated row of data:
Make a copy of the row template.
Update each column tag, [[Col1]], [[Col2]], etc. in the copy of the row template, with the dynamic data
Append the new row to the row data variable.
Replace the [[RowTag]] in the spreadsheet template with the contents of the row data variable.
Save and serve with your favorite browser.

Note: It is possible to incorporate formulas using the same methods. Take a look at how the formulas are saved, and create the appropriate tags in the spreadsheet and row templates; replace as appropriate. A C# sketch of the whole procedure follows.
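Here is a rough C# sketch of the steps above (file names, tag names, and the two-column data layout are all illustrative):

// Requires using System.IO and System.Text
public static void BuildSpreadsheet(string[][] data)
{
    // Read the spreadsheet and row templates
    string sheet = File.ReadAllText("SpreadsheetTemplate.xml");
    string rowTemplate = File.ReadAllText("RowTemplate.xml");

    // Replace the header tags with dynamic values
    sheet = sheet.Replace("[[Tag1]]", "Quarterly Report");

    // Build one row per record of dynamic data
    StringBuilder rows = new StringBuilder();
    foreach (string[] record in data)
    {
        string row = rowTemplate
            .Replace("[[Col1]]", record[0])
            .Replace("[[Col2]]", record[1]);
        rows.Append(row);
    }

    // Drop the rows into the row placeholder and save
    sheet = sheet.Replace("[[RowTag]]", rows.ToString());
    File.WriteAllText("Output.xml", sheet);
}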
ADO.NET
What is ADO.NET?
ADO.NET is the data access architecture for the Microsoft .NET Framework.

Difference between ADO and ADO.NET
- ADO uses a connected data model, while ADO.NET uses a disconnected data environment.
- ADO uses OLE DB to access data and is COM-based, while ADO.NET uses XML as the format for transmitting data to and from your database and web application.
- In ADO, a Recordset is like a single table or query result, while in ADO.NET a DataSet can contain multiple tables from any data source.
- ADO can be problematic because firewalls prohibit many types of requests, while ADO.NET has no such problem because XML passes through firewalls easily.
SqlDataReader makes exclusive use of its connection
The SqlDataReader object makes exclusive use of its SqlConnection object as long as it is open: you cannot execute any other SqlCommand objects on that connection while the SqlDataReader is open. Therefore, you should always call SqlDataReader.Close() as soon as you are done retrieving data.
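A safe pattern is to wrap the reader (and the connection) in using blocks, so the connection is released even if an exception is thrown; the connection string and query here are illustrative:

string connString = "server=MyServer;database=testDB;trusted_connection=yes;";
using (SqlConnection conn = new SqlConnection(connString))
using (SqlCommand cmd = new SqlCommand("SELECT Name FROM Customers", conn))
{
    conn.Open();
    using (SqlDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            Console.WriteLine(reader["Name"]);
        }
    } // the reader is closed here, freeing the connection for other commands
}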
Strongly Typed DataSet Object
A strongly typed DataSet allows you to write early-bound data retrieval expressions.
Advantages of a strongly typed DataSet (see the sketch below):
- It is faster than late-bound data retrieval expressions.
- Its column names are shown in IntelliSense as you type code.
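The difference is easiest to see side by side; this sketch assumes an untyped DataSet ds and a typed DataSet typedDs generated from a Customers table with a CustomerName column:

// Untyped: late-bound; the table and column names are strings
// that are checked only at run time
string name1 = (string)ds.Tables["Customers"].Rows[0]["CustomerName"];

// Strongly typed: early-bound; checked by the compiler
// and suggested by IntelliSense
string name2 = typedDs.Customers[0].CustomerName;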
FAQ collected from http://www.dng-ado.blogspot.com
Improving Performance with Connection Pooling
Opening a connection is a database-intensive task. It can be one of the slowest operations that you perform in an ASP.NET page. Furthermore, a database has a limited supply of connections, and each connection requires a certain amount of memory overhead (approximately 40 kilobytes per connection).
If you plan to have hundreds of users hitting your Web site simultaneously, the process of opening a database connection for each user can have a severe impact on the performance of your Web site.
Fortunately, you can avoid these problems if you take advantage of connection pooling. When database connections are pooled, a set of connections is kept open so that they can be shared among multiple users. When you request a new connection, an active connection is removed from the pool. When you close the connection, it is placed back in the pool. Connection pooling is enabled by default for both OleDb and SqlClient connections.

To take advantage of connection pooling, you must be careful to do two things in your ASP.NET pages. First, use exactly the same connection string whenever you open a database connection; only connections opened with the same connection string can be placed in the same pool. For this reason you should place your connection string in the web.config file and retrieve it from there whenever you need to open a connection.
To take advantage of connection pooling in your ASP.NET pages, you also must be careful to explicitly close whatever connection you open as quickly as possible. If you do not explicitly close a connection with the Close() method, the connection is never added back to the connection pool.
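Putting the two rules together, a sketch (the connection-string name is illustrative, and ConfigurationManager requires a reference to System.Configuration):

// web.config:
// <connectionStrings>
//   <add name="MyDb" connectionString="server=MyServer;database=testDB;trusted_connection=yes;" />
// </connectionStrings>

string connString =
    ConfigurationManager.ConnectionStrings["MyDb"].ConnectionString;

using (SqlConnection conn = new SqlConnection(connString))
{
    conn.Open();
    // ... execute commands ...
} // Dispose() closes the connection, returning it to the pool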
FAQ collected from http://www.dng-ado.blogspot.com
SQL Injection Problem
SQL injection is a strategy for attacking databases.
Example 1: An ASP page asks the user for a name and a password, and then sends the following string to the database:

SELECT * FROM users WHERE username = 'whatever' AND password = 'mypassword'

It seems safe, but it isn't. A user might enter something like this as her user name:

' OR 1>0 --

When this is plugged into the SQL statement, the result looks like this:

SELECT * FROM users WHERE username = '' OR 1>0 -- AND password = ''

This injection comments out the password portion of the statement. It results in a list of all the names in the users table, so any user could get into your system. The easiest way to prevent this sort of injection is to parse the SQL string and remove any occurrences of "--" before passing the statement.
Example 2: You also have to beware of injections that contain semicolons, because semicolons delimit SQL statements. Think about the implications of a user name like this:

' OR 1>0 ; DELETE Customers ; --

There are numerous ways a malicious user might penetrate your system using SQL injection, and various defenses, but the simplest approach is to avoid dynamic SQL. Instead, use stored procedures everywhere. Thanks to the way SQL passes parameters, injections such as those above will produce errors, and the stored procedure will not execute.
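Parameterized commands give the same protection as stored procedures, because user input travels as parameter values and is never spliced into the SQL text. A sketch (conn, userName, and password are assumed to exist):

string sql = "SELECT * FROM users WHERE username = @user AND password = @pwd";
SqlCommand cmd = new SqlCommand(sql, conn);

// The values are sent as typed parameters, so input such as
// ' OR 1>0 -- is treated as a literal string, not as SQL
cmd.Parameters.AddWithValue("@user", userName);
cmd.Parameters.AddWithValue("@pwd", password);

using (SqlDataReader reader = cmd.ExecuteReader())
{
    // a row comes back only for a genuine username/password match
}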
Tuesday, May 27, 2008
How to consume REST services with WCF
As you are probably aware by now, Windows Communication Foundation (WCF) 3.5 introduced a new binding called WebHttpBinding to create and to consume REST based services. If you are new to the WCF Web Programming model then see here for more details.
There have been many articles and blogs on how to host a RESTful service. However, there doesn't seem to be much written on how to consume these services, so I thought I'd write a few lines on this topic.
The new WebHttpBinding is used to configure endpoints that are exposed through HTTP requests instead of SOAP messages. So you can simply call into a service by using a URI. The URI usually includes segments that are converted into parameters for the service operation.
So the client of a service of this type requires 2 abilities: (1) Send an HTTP request, (2) Parse the response. The default response message format supported out of the box with the WebHttpBinding is “Plain old XML” (POX). It also supports JSON and raw binary data using the WebMessageEncodingBindingElement.
One way of consuming these services is by manually creating a HTTP request. The following example is consuming the ListInteresting operation from Flickr:
WebRequest request = WebRequest.Create("http://api.flickr.com/services/rest/?method=flickr.interestingness.getList&api_key=*&extras=");
WebResponse ws = request.GetResponse();
XmlSerializer s = new XmlSerializer(typeof(PhotoCollection));
PhotoCollection photos = (PhotoCollection)s.Deserialize(ws.GetResponseStream());
The idea is simple:
- Do the HTTP request and include all the parameters as part of the URI
- Get the response that is in XML format
- Either parse it or deserialize it into an object
The above code works but it is not elegant: We are not using the unified programming model offered by WCF and the URL is hacked together using string concatenation. The response is also manually deserialized into an object. With WCF and the WebHttpBinding we can automate most of this.
The first step is to define our service contract:
[ServiceContract]
[XmlSerializerFormat]
public interface IFlickrApi
{
[OperationContract]
[WebGet(
BodyStyle = WebMessageBodyStyle.Bare,
ResponseFormat = WebMessageFormat.Xml,
UriTemplate = "?method=flickr.interestingness.getList&api_key={apiKey}&extras={extras}")]
PhotoCollection ListInteresting(string apiKey, string extras);
}
As you can see, I am specifically instructing WCF to use the XML Serializer Formatter for this. The next step is to set up the client endpoint: I decided to do this inside the config file, as an endpoint that uses webHttpBinding together with the webHttp endpoint behavior.
In order to be able to use the XML Serializer Formatter, I need XML Serializable types:
[XmlRoot("photos")]
public class PhotoCollection
{
[XmlAttribute("page")]
public int Page { get; set; }
...
[XmlElement("photo")]
public Photo[] Photos { get; set; }
}
public class Photo
{
[XmlAttribute("id")]
public string Id { get; set; }
[XmlAttribute("title")]
public string Title { get; set; }
...
}
The final step is to create an instance of the client proxy:
ChannelFactory<IFlickrApi> factory =
    new ChannelFactory<IFlickrApi>("flickrEndpoint");   // endpoint name from config (illustrative)
var proxy = factory.CreateChannel();
var response = proxy.ListInteresting("xxxx", "yyyy");
((IDisposable)proxy).Dispose();
If you don’t like using ChannelFactory directly then you can create your proxy by deriving from ClientBase<>:
public partial class FlickrClient :
    ClientBase<IFlickrApi>, IFlickrApi
{
public FlickrClient()
{
}
public FlickrClient(string endpointConfigurationName) :
base(endpointConfigurationName)
{
}
public FlickrClient(
string endpointConfigurationName,
string remoteAddress) :
base(endpointConfigurationName, remoteAddress)
{
}
public FlickrClient(string endpointConfigurationName,
EndpointAddress remoteAddress) :
base(endpointConfigurationName, remoteAddress)
{
}
public FlickrClient(Binding binding,
EndpointAddress remoteAddress) :
base(binding, remoteAddress)
{
}
public PhotoCollection ListInteresting(string apiKey, string extras)
{
return base.Channel.ListInteresting(apiKey, extras);
}
}
Now the client code will look similar to the following:
FlickrClient proxy = new FlickrClient();
var response = proxy.ListInteresting("xxxxxx","yyyyyy");
((IDisposable)proxy).Dispose();
Monday, May 26, 2008
Creating Multiple Versions of Workflows
In the recent past, a question that has surfaced quite often is regarding the versioning of workflows and the errors that show up when trying to do so. It is quite natural to expect an extreme level of dynamism and out-of-box features from Microsoft, but one needs to take a pragmatic view of WF.
There is a clear distinction between a Workflow Definition and a Workflow Instance. A Workflow Definition is a template, and a Workflow Instance is a runtime representation of that template. We can closely relate a Workflow Definition and a Workflow Instance to a class and an object, respectively.
A Workflow can be authored programmatically in C# or exclusively in XAML or using a combination of both. When a workflow instance is created and persisted, it is persisted with its state and also the type information of workflow and activities used. During this cycle of instance passivation, if the definition of the workflow (or the activity used in the workflow) is changed and the assembly with the old definition is removed, then the workflow runtime throws an error. This is an expected behavior since Binary Serialization is type dependent.
Workflow Foundation does have a solution to this problem in the form of Dynamic Workflow Updates using the WorkflowChanges API. The following is one possible way of mitigating the problem of workflow versioning:
1. The assembly with the new workflow definition (or activity definition) is added to the GAC (or a location where you reference your assemblies from). This will sit beside the assembly (older version) that contained the older definition. The assembly with the older definition(s) MUST be present in the GAC (or a location where you reference your assemblies from).
2. When the workflow is to be rehydrated, identify the changes between the older definition and newer definition by getting hold of the corresponding definitions and apply changes to the workflow using the WorkflowChanges API. Once the changes are done, the workflow can be resumed. Two ways in which this can be done:
- This can be done on-demand or can be a background process. When we say on-demand, the applying of changes is done when the workflow is rehydrated by the user or application and it is resumed immediately.
- If it is a background process, the workflow instances corresponding to this workflow can be loaded into the memory; changes can be applied and the instances can be dehydrated back to the persistence store. So, when the workflow instance is next invoked by the application, it is loaded and resumed directly. This process can be accomplished using a windows service. However, the synchronization aspect should be taken care of here.
But the question definitely remains: why isn't there a mechanism which can change the instances when there is a change to the definition of the workflow? Here is one possible reason:
A definition is a "stateless" entity, while an instance has "state". A workflow definition is either a .NET assembly or a XAML file which is used to create an instance. Any change to the definition is a change to the stateless entity, and it happens at design time. Reflecting that change onto a run-time entity requires a program to do so. Moreover, the Workflow Persistence Service can be customized, and this makes it an even greater challenge.
For a small set of workflows, the issue of versioning may not really be a show stopper. However, when planning a large scale use of WF, it is necessary to take an appropriate view of the versioning required and move ahead. It is important to set the expectations right before dealing with situations like these.
MARS - Transactions and Debugging
Multiple Active Result Sets (MARS) is a new SQL Server 2005 feature that allows the user to run more than one SQL batch on an open connection at the same time. In my previous article about MARS I explained what MARS is and how to use it. In this article I'll discuss how transactions in MARS work and how you can debug MARS connections.
Transactions in MARS
Before MARS, transactions were a pretty straightforward thing: you executed a command which was associated with a transaction, and that was it. This is not so simple anymore.
A good example is Transaction Savepoints. Transaction savepoints are points in a transaction to which you can partially roll back.
For example:

BEGIN TRAN
-- create a table
CREATE TABLE t1 (id INT, title VARCHAR(20) )
-- insert some data
INSERT INTO t1
SELECT 1, 'name 1' UNION ALL
SELECT 2, 'name 2' UNION ALL
SELECT 3, 'name 3'
SELECT * FROM t1
-- save transaction to a savepoint
SAVE TRAN savepoint1
-- insert some more data
INSERT INTO t1
SELECT 5, 'name 5'
SELECT * FROM t1
-- whoops, we don't want that nasty 5 in there,
-- roll it back to the savepoint
ROLLBACK TRAN savepoint1
-- insert a nice 4
INSERT INTO t1
SELECT 4, 'name 4'
SELECT * FROM t1
COMMIT
Under MARS, setting savepoints, rolling back to savepoints, and committing transactions aren't allowed when there is more than one request actively running under a transaction. Let's see why with some code. Note that both requests run under the same transaction.

string connString = @"server=MyServer; database=testDB;
                      trusted_connection=yes;
                      MultipleActiveResultSets=true";
using (SqlConnection conn = new SqlConnection(connString))
{
// Command 1 represents the First Request/Batch
SqlCommand cmd1 = new SqlCommand();
cmd1.Connection = conn;
cmd1.CommandText = @"INSERT INTO t1
SELECT 1, 'name 1' UNION ALL
SELECT 2, 'name 2' UNION ALL
SELECT 3, 'name 3';
Select * from t1;";
// Command 2 represents the Second Request/Batch
SqlCommand cmd2 = new SqlCommand();
cmd2.Connection = conn;
cmd2.CommandText = "UPDATE t1 SET title = 'other name 2' WHERE id = 2";
conn.Open();
// Start the transaction
// Both request run under the same transaction
SqlTransaction tran = conn.BeginTransaction("mainTran");
cmd1.Transaction = tran;
cmd2.Transaction = tran;
try
{
// Time T1 – run the insert and the select
SqlDataReader rdr = cmd1.ExecuteReader();
while (rdr.Read())
{
// Time T2
// The execution will fail at this point because Transaction Savepoints aren't
// allowed in MARS'ed environment
tran.Save("savepoint1");
cmd2.ExecuteNonQuery();
}
// Time T3 - executes in the first batch
cmd1.CommandText = "INSERT INTO t1 SELECT 4, 'name 4';";
// Time T4 - this will fail.
cmd2.CommandText = "UPDATE t1 SET id = 'other name 5' WHERE id = 5;";
// run the statements
cmd1.ExecuteNonQuery();
cmd2.ExecuteNonQuery();
tran.Commit();
}
catch (Exception ex)
{
// this is the error message we get when trying to set the Transaction Savepoint:
// The transaction operation cannot be performed because there
// are pending requests working on this transaction.
Console.WriteLine(ex.Message);
}
}
At first glance this code looks OK. But let's examine it closely. Everything is fine until the rollback to savepoint1 in the second request. Three statements execute after the savepoint is set: first the update to the table in request 2, then the insert into the table in request 1, and finally the update to the table in request 2. But since the second update fails and rolls back to the savepoint, the insert in request 1 is also rolled back, which is unwanted behaviour.
These kinds of problems are hard to find and debug and are the reason why savepoints and committing aren't allowed under MARS when more than one request is run under a transaction.
In .NET only one transaction can be set per connection. This means that this kind of code isn't possible:

private void MarsConcurrentTransactions()
{
string connString = @"server=MyServer;
database=testDB;
trusted_connection=yes;
MultipleActiveResultSets=true";
using (SqlConnection conn = new SqlConnection(connString))
{
// Command 1 represents the First Request/Batch
SqlCommand cmd1 = new SqlCommand();
cmd1.Connection = conn;
cmd1.CommandText = "SELECT * FROM t1 WHERE id IS NULL";
// Command 2 represents the Second Request/Batch
SqlCommand cmd2 = new SqlCommand();
cmd2.Connection = conn;
cmd2.CommandText = " SELECT * FROM t1 WHERE id IS NOT NULL";
conn.Open();
// Start the transactions
SqlTransaction tran1 = conn.BeginTransaction("tran1");
// this fails - can't have 2 concurrent transaction on the same connection
SqlTransaction tran2 = conn.BeginTransaction("tran2");
cmd1.Transaction = tran1;
cmd2.Transaction = tran2;
// ... more code ...
}
}
Nor is this one:

string connString = @"server=MyServer;
database=testDB;
trusted_connection=yes;
MultipleActiveResultSets=true";
using (SqlConnection conn = new SqlConnection(connString))
{
// Command 1 represents the First Request/Batch
SqlCommand cmd1 = new SqlCommand();
cmd1.Connection = conn;
cmd1.CommandText = "SELECT title FROM t1";
// Command 2 represents the Second Request/Batch
SqlCommand cmd2 = new SqlCommand();
cmd2.Connection = conn;
cmd2.CommandText = "UPDATE t1 SET id = id + 5 WHERE title = @title";
cmd2.Parameters.Add(new SqlParameter("@title", SqlDbType.VarChar, 20));
conn.Open();
// Start the transactions
SqlTransaction tran1 = conn.BeginTransaction("tran1");
cmd1.Transaction = tran1;
using (SqlDataReader rdr1 = cmd1.ExecuteReader())
{
while (rdr1.Read())
{
cmd2.Parameters[0].Value = rdr1["title"].ToString();
// this will FAIL because we can't mix an explicit SqlTransaction with
// the implicit transaction in which the update runs by default.
cmd2.ExecuteNonQuery();
}
}
tran1.Rollback();
}
That's because we still have two transactions: one explicit (the SqlTransaction) and one implicit (SQL Server's own, in which the update runs).
What we can and should do is put all SqlCommands under the same SqlTransaction while not setting savepoints:

string connString = @"server=MyServer;
database=testDB;
trusted_connection=yes;
MultipleActiveResultSets=true";
using (SqlConnection conn = new SqlConnection(connString))
{
// Command 1 represents the First Request/Batch
SqlCommand cmd1 = new SqlCommand();
cmd1.Connection = conn;
cmd1.CommandText = "SELECT title FROM t1";
// Command 2 represents the Second Request/Batch
SqlCommand cmd2 = new SqlCommand();
cmd2.Connection = conn;
cmd2.CommandText = "UPDATE t1 SET id = id + 5 WHERE title = @title";
cmd2.Parameters.Add(new SqlParameter("@title", SqlDbType.VarChar, 20));
conn.Open();
// Start the transactions
SqlTransaction tran1 = conn.BeginTransaction("tran1");
cmd1.Transaction = tran1;
cmd2.Transaction = tran1;
using (SqlDataReader rdr1 = cmd1.ExecuteReader())
{
while (rdr1.Read())
{
cmd2.Parameters[0].Value = rdr1["title"].ToString();
cmd2.ExecuteNonQuery();
}
}
tran1.Commit();
}
I've shown four ways a developer might try to use transactions with MARS. However, only the one in which all SqlCommands run under one transaction is correct, as long as you're not setting any transaction savepoints. To truly understand MARS execution, a developer must have a good understanding of its possibilities.
Debugging and monitoring MARS
With MARS, the "old-school" type of monitoring isn't adequate anymore. Why? Because in SQL Server 2000 we could simply say, or at least assume, that the SPID (SQL Server Process ID) identifies a request (a batch). This way you could simply get the executing SQL statement for the SPID of your choice. Sysprocesses helped with debugging more than once, with its SPIDs and accompanying execution statistics.
All of this has changed. Of course sysprocesses still shows process information, but with the introduction of Dynamic Management Views it was "replaced" with a few of those. These new DMVs are sys.dm_exec_sessions, sys.dm_exec_connections and sys.dm_exec_requests.
sys.dm_exec_sessions
Returns one row per authenticated session on Microsoft SQL Server. A SPID is equal to the session_id column. Interesting columns are last_request_start_time and last_request_end_time, which show the start of the last request (including the currently running request) and the completion time of the last request in a session.
sys.dm_exec_connections
Returns information about the connections established to this instance of SQL Server and the details of each connection. Here we get into new waters: this view shows us the physical and logical connections to the SQL Server. SPID is again equal to the session_id column. Logical connections are virtual connections inside a physical connection, in which MARS requests run; for logical connections the parent_connection_id column is not null. parent_connection_id identifies the primary physical connection that the MARS requests are using.
sys.dm_exec_requests
Returns information about each request that is executing within SQL Server. SPID is again equal to session_id. Each session can have multiple MARS requests, and each of these requests has a unique id under a session in the request_id column. The connection_id column identifies the physical connection on which the MARS request runs.
A request is now identified by the combination of session_id and request_id, so two MARS requests on a single connection share a SPID but have two different request_ids. SQL Server 2005 also has a new function called current_request_id() which returns the request currently executing under a session.
When debugging, this query might come in handy:

SELECT r.session_id, r.request_id,
c.connection_id, c.parent_connection_id, c.connect_time, c.net_transport,
s.HOST_NAME, s.program_name, s.nt_domain, s.login_name,
s.last_request_start_time, s.last_request_end_time, s.transaction_isolation_level
FROM sys.dm_exec_requests r
JOIN sys.dm_exec_sessions s ON r.session_id = s.session_id
JOIN sys.dm_exec_connections c ON s.session_id = c.session_id
It shows us the needed information for each request in a session: the connection it belongs to, the start and end times, the transaction isolation level, who ran it, etc.
MARS
SQL Server 2005 has so many new features that, in my opinion, if you read only BOL for a year you'd find something new every day. One of those is Multiple Active Result Sets, or MARS. Multiple Active Result Sets is a new SQL Server 2005 feature that, putting it simply, allows the user to run more than one SQL batch on an open connection at the same time.
Pre-SQL 2005 era
In SQL Server versions prior to 2005, you could only run one batch per connection. This means simply that you could only do this:

private void MARS_Off()
{
SqlConnection conn = new SqlConnection(
    "Server=serverName;Database=adventureworks;Trusted_Connection=yes;");
string sql1 = "SELECT * FROM [Person].[Address]";
string sql2 = "SELECT * FROM [Production].[TransactionHistory]";
SqlCommand cmd1 = new SqlCommand(sql1, conn);
SqlCommand cmd2 = new SqlCommand(sql2, conn);
cmd1.CommandTimeout = 500;
cmd2.CommandTimeout = 500;
conn.Open();
SqlDataReader dr1 = cmd1.ExecuteReader();
// do stuff with dr1 data
conn.Close();
conn.Open();
SqlDataReader dr2 = cmd2.ExecuteReader();
// do stuff with dr2 data
conn.Close();
}
And the accompanying profiler trace (screenshot omitted) shows the connection activity.
This example shows that you could use the same connection with the second SqlDataReader only after you finished using it with the first one. The connection must be closed and reopened, as shown by the Audit Login and Audit Logout events. Opening and closing a connection is an expensive operation, so this can hurt performance even if your connection is stored in the connection pool.
If, for instance, you wanted to do some processing of the data in your data reader and update the processed data back to the database, you had to use another connection object, which again hurts performance. There was no easy way to use the same opened connection for more than one batch at a time. There are of course server-side cursors, but they have drawbacks, such as performance and the ability to operate on only a single SELECT statement at a time.
SQL 2005 era
The SQL Server 2005 team recognized the drawback mentioned above and introduced MARS. So now it is possible to use a single opened connection for more than one batch. A simple way of demonstrating MARS in action is with this code:

private void MARS_On()
{
SqlConnection conn = new SqlConnection(
    "Server=serverName;Database=adventureworks;" +
    "Trusted_Connection=yes;MultipleActiveResultSets=true;");
string sql1 = "SELECT * FROM [Person].[Address]";
string sql2 = "SELECT * FROM [Production].[TransactionHistory]";
SqlCommand cmd1 = new SqlCommand(sql1, conn);
SqlCommand cmd2 = new SqlCommand(sql2, conn);
cmd1.CommandTimeout = 500;
cmd2.CommandTimeout = 500;
conn.Open();
SqlDataReader dr1 = cmd1.ExecuteReader();
SqlDataReader dr2 = cmd2.ExecuteReader();
conn.Close();
}
And the accompanying profiler trace (screenshot omitted) confirms that the connection stays open for both readers.
MARS is disabled by default on the Connection object. You have to enable it with the addition of MultipleActiveResultSets=true in your connection string.
How MARS Works
The MARS architecture is based on multiplexing or interleaving. Because I'm from an electrical engineering background I'll be using the term multiplexing, as it is closer to my heart. Simply put, multiplexing means that input signals are processed one by one, not in parallel, based on a clock count. The same applies to MARS connections, except that the clock count is replaced with well-defined points. Most statements must run atomically in a batch, which means they must fully complete before another statement can run. Statements that don't need to run atomically can be multiplexed before they finish, thus enabling another MARS statement to execute.
These multiplexing-enabled statements are:
SELECT
FETCH
RECEIVE
READTEXT
BULK INSERT / BCP
ASYNC CURSOR POPULATION
The best way to describe this is with an example. Say you are retrieving 1 million rows. In the middle of the retrieval an INSERT statement comes in via MARS. Because a SELECT can be multiplexed, the retrieval is stopped and the INSERT is performed. Because an INSERT can't be multiplexed, it must fully complete. When it does, the SELECT is resumed. A little later in the retrieval an UPDATE statement comes in. So again the SELECT is stopped and the UPDATE is fully executed, since it also can't be multiplexed. After the UPDATE fully completes, the SELECT is able to finish.
However if we are updating 1 million rows first and in comes a SELECT via MARS, the UPDATE will fully finish before the SELECT can be started.
There are a few exceptions worth mentioning among the multiplexing-enabled statements listed above:
RECEIVE can be multiplexed once rows begin to arrive, and only if SET XACT_ABORT ON is specified for the batch. If the RECEIVE is in the waiting state, it can't be multiplexed.
BULK INSERT / BCP can be multiplexed if SET XACT_ABORT ON is specified for the batch and triggers on the target table have been disabled.
Managed code (stored procedures, functions, triggers) can't be multiplexed.
Is MARS really that great?
Like any technology, MARS can benefit you, or you can shoot yourself in the foot with it. MARS uses "firehose mode" to retrieve data. Firehose mode means that the server will produce data as fast as possible. This also means that your client application must receive inbound data at the same speed it comes in. If it doesn't, the data storage buffers on the server will fill up and processing will stop until those buffers empty.
So what, you may ask? As long as processing is stopped, the resources on the SQL Server are in use and tied up. This includes the worker thread, schema and data locks, memory, etc. So it is crucial that your client application consumes the inbound results as quickly as they arrive.
Also, the important thing to realize is that multiplexed execution DOES NOT mean parallel execution.
Monday, April 14, 2008
Creating and Using Silverlight and WPF User Controls
One of the fundamental design goals of Silverlight and WPF is to enable developers to easily encapsulate UI functionality into re-usable controls. You can implement new custom controls by deriving a class from one of the existing Control classes (either a Control base class or a control like TextBox, Button, etc). Alternatively you can create re-usable User Controls - which make it easy to use a XAML markup file to compose a control's UI (and which makes them super easy to build). In Part 6 of my Digg.com tutorial blog series I showed how to create a new user control using VS 2008's "Add New Item" project item dialog and then defining UI within it. This approach works great when you know up front that you want to encapsulate UI in a user control. You can also use the same technique with Expression Blend.
Taking Existing UI and Encapsulating it as a User Control
Sometimes you don't know that you want to encapsulate some UI functionality as a re-usable user control until after you've already started defining it on a parent page or control. For example, we might be working on a form where we want to enable a user to enter shipping and billing information. We might begin by creating some UI on the page to capture the address information. After carefully laying it all out, we might realize "hey - we are going to use the exact same UI for the billing address as well, maybe we should create a re-usable address user control so that we can avoid repeating ourselves".
We could use the "Add New Item" project template approach to create a blank new user control and then copy/paste the above UI contents into it. An even faster trick that we can use within Blend, though, is to just select the controls we want to encapsulate as a user control in the designer, then right-click and choose the "Make Control" menu option. When we select the "Make Control" menu item, Blend will prompt us for the name of a new user control to create. We'll name it "AddressUserControl" and hit OK. This will cause Blend to create a new user control that contains the content we selected.
When we do a re-build of the project and go back to the original page, we'll see the same UI as before - except that the address UI is now encapsulated inside the AddressUserControl. We could name this first AddressUserControl "ShippingAddress" and then add a second instance of the user control to the page to record the billing address (we'll name this second control instance "BillingAddress"). And now if we want to change the look of our addresses, we can do it in a single place and have it apply for both the shipping and billing information.
Data Binding Address Objects to our AddressUserControl
Now that we have some user controls that encapsulate our Address UI, let's create an Address data model class that we can use to bind them against. We'll define the class like below (taking advantage of the new automatic properties language feature). Within the code-behind file of our Page.xaml file we can then instantiate two instances of our Address object - one for the shipping address and one for the billing address (for the purposes of this sample we'll populate them with dummy data). We'll then programmatically bind the Address objects to our AddressUserControls on the page by setting the "DataContext" property on each user control to the appropriate shipping or billing address data model instance.
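As a rough sketch of the Address class and the code-behind wiring just described (the property names and dummy values are assumptions; "ShippingAddress" and "BillingAddress" are the control instance names from above):

public class Address
{
    public string Name { get; set; }
    public string Street { get; set; }
    public string City { get; set; }
    public string State { get; set; }
    public string Zip { get; set; }
}

public partial class Page : UserControl
{
    Address _shippingAddress;
    Address _billingAddress;

    public Page()
    {
        InitializeComponent();

        // Dummy data for the sample
        _shippingAddress = new Address { Name = "Scott", Street = "One Microsoft Way",
                                         City = "Redmond", State = "WA", Zip = "98052" };
        _billingAddress = new Address { Name = "Scott", Street = "One Microsoft Way",
                                        City = "Redmond", State = "WA", Zip = "98052" };

        // Attach each data model object to its user control
        ShippingAddress.DataContext = _shippingAddress;
        BillingAddress.DataContext = _billingAddress;
    }
}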
Our last step will be to declaratively add {Binding} statements within our AddressUserControl.xaml file that will set up two-way databinding relationships between the "Text" properties of the TextBox controls within the user control and the properties on the Address data model object that we attached to the user control. When we press F5 to run the application we'll now get automatic data-binding of the Address data model objects with our AddressUserControls.
Because we set up the {Binding} declarations to be "Mode=TwoWay", changes users make in the textboxes will automatically get pushed back to the Address data model objects (no code is required for this to happen). For example, we could change our original shipping address in the browser to instead go to Disneyland. If we wire up a debugger breakpoint on the "Click" event handler of the "Save" button (and then click the button), we can see how the above TextBox changes are automatically reflected in our "_shippingAddress" data model object. We could then implement the SaveBtn_Click event handler to persist the Shipping and Billing Address data model objects however we want - without ever having to manually retrieve or manipulate anything in the UI controls on the page.
This clean view/model separation that WPF and Silverlight support makes it easy to later change the UI of the address user controls without having to update any of our code in the page. It also makes it possible to more easily unit test the functionality (read my last post to learn more about Silverlight Unit Testing).
Summary
WPF and Silverlight make it easy to encapsulate UI functionality within controls, and the user control mechanism they support provides a really easy way to take advantage of this. Combining user controls with binding enables some nice view/model separation scenarios that allow you to write very clean code when working with data. You can download a completed version of the above sample here if you want to run it on your own machine.
Dynamic LINQ (Part 1: Using the LINQ Dynamic Query Library)
LINQ (language integrated query) is one of the new features provided with VS 2008 and .NET 3.5. LINQ makes the concept of querying data a first class programming concept in .NET, and enables you to efficiently express queries in your programming language of choice.
One of the benefits of LINQ is that it enables you to write type-safe queries in VB and C#. This means you get compile-time checking of your LINQ queries, and full intellisense and refactoring support over your code.
While writing type-safe queries is great for most scenarios, there are cases where you want the flexibility to dynamically construct queries on the fly. For example: you might want to provide business intelligence UI within your application that allows an end-user business analyst to use drop-downs to build and express their own custom queries/views on top of data.
Traditionally these types of dynamic query scenarios are often handled by concatenating strings together to construct dynamic SQL queries. Recently a few people have sent me mail asking how to handle these types of scenarios using LINQ. The post below describes how you can use a Dynamic Query Library provided by the LINQ team to dynamically construct LINQ queries.
Downloading the LINQ Dynamic Query Library
Included on the VS 2008 Samples download page are pointers to VB and C# sample packages that include a cool dynamic query LINQ helper library. Direct pointers to the dynamic query library (and documentation about it) can be found below:
- VB Dynamic Query Library (included in the \Language Samples\LINQ Samples\DynamicQuery directory)
- C# Dynamic Query Library (included in the \LinqSamples\DynamicQuery directory)
Both the VB and C# DynamicQuery samples include a source implementation of a helper library that allows you to express LINQ queries using extension methods that take string arguments instead of type-safe language operators. You can copy/paste either the C# or VB implementations of the DynamicQuery library into your own projects and then use it where appropriate to more dynamically construct LINQ queries based on end-user input.
Simple Dynamic Query Library Example
You can use the DynamicQuery library against any LINQ data provider (including LINQ to SQL, LINQ to Objects, LINQ to XML, LINQ to Entities, LINQ to SharePoint, LINQ to TerraServer, etc). Instead of using language operators or type-safe lambda extension methods to construct your LINQ queries, the dynamic query library provides you with string based extension methods that you can pass any string expression into.
For example, below is a standard type-safe LINQ to SQL VB query that retrieves data from a Northwind database and displays it in an ASP.NET GridView control:
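The original listing was a VB screenshot; a rough C# equivalent of the query being described (the NorthwindDataContext, the filter values, and the GridView1 control name are assumptions) might look like:

NorthwindDataContext northwind = new NorthwindDataContext();

// Type-safe query: compile-time checked, with full intellisense
var products = from p in northwind.Products
               where p.CategoryID == 2 && p.UnitPrice > 3
               orderby p.SupplierID
               select p;

GridView1.DataSource = products;
GridView1.DataBind();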
Using the LINQ DynamicQuery library I could re-write the above query expression instead like so:
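And a sketch of the same query expressed with the DynamicQuery string-based extension methods (assuming the sample library's source has been added to the project):

// Conditions and sort order are now late-bound strings that can be
// built dynamically at runtime
var products = northwind.Products
                        .Where("CategoryID = 2 And UnitPrice > 3")
                        .OrderBy("SupplierID");

GridView1.DataSource = products;
GridView1.DataBind();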
Notice how the conditional-where clause and sort-orderby clause now take string expressions instead of code expressions. Because they are late-bound strings I can dynamically construct them. For example: I could provide UI to an end-user business analyst using my application that enables them to construct queries on their own (including arbitrary conditional clauses).
Dynamic Query Library Documentation
Included with the above VB and C# Dynamic Query samples is some HTML documentation that describes how to use the Dynamic Query Library extension methods in more detail. It is definitely worth looking at if you want to use the helper library in more depth.
Download and Run a Dynamic Query Library Sample
You can download and run basic VB and C# samples I've put together that demonstrate using the Dynamic LINQ library in an ASP.NET web-site that queries the Northwind sample database using LINQ to SQL.
You can use either Visual Web Developer 2008 Express (which is free) or VS 2008 to open and run them.
Other Approaches to Constructing Dynamic LINQ Queries
The dynamic query library is pretty simple and easy to use, and is particularly useful in scenarios where queries are completely dynamic and you want to provide end-user UI to help build them.
In a future blog post I'll delve further into building dynamic LINQ queries, and discuss other approaches you can use to structure your code using type-safe predicate methods (Joseph and Ben Albahari, authors of the excellent C# 3.0 In a Nutshell book, have a good post on this already here).
Friday, April 4, 2008
Table Partitioning in Sql server
Table partitioning is a concept introduced in SQL Server 2005. It is used to speed up database access.
It may be either Horizontal partitioning or Vertical partitioning.
Horizontal Partitioning: In this type, the rows are segregated into two tables (an old table and a new table). It's a logical separation, with the partitions placed in different filegroups. If you want to access recent data more frequently than old data, horizontal partitioning paves the way for this. It's highly advisable to implement this concept in a multi-processor environment. The schema remains the same in horizontal partitioning.
Vertical partitioning: It's nothing but segregating the columns of the table - part of the columns in one table and the other part in another table (in the case of partitioning into two tables).
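As a minimal sketch of horizontal partitioning in SQL Server 2005 (the function, scheme, filegroup, table and column names below are made up for illustration, and the filegroups must already exist in the database):

-- Partition function: splits rows into ranges by year
CREATE PARTITION FUNCTION pfOrderYear (int)
AS RANGE LEFT FOR VALUES (2006, 2007)

-- Partition scheme: maps each range to a filegroup
CREATE PARTITION SCHEME psOrderYear
AS PARTITION pfOrderYear TO (fgOld, fgRecent, fgCurrent)

-- Table created on the scheme; each row lands in a filegroup
-- based on its OrderYear value
CREATE TABLE Orders
(OrderID int,
OrderYear int)
ON psOrderYear (OrderYear)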
Table valued parameters
It's a new T-SQL enhancement in SQL Server 2008 which allows us to pass a table as a parameter to our stored procedures. In the client-server architecture we used to pass individual rows from the front end to get updated in the backend. Instead of passing individual rows, Microsoft released a new enhancement referred to as table-valued parameters, which provides the flexibility to pass a table as a parameter from the front end.
Features:
1. Processing speed will be comparatively much faster.
2. We have to declare the table parameter as a read-only one; DML operations cannot be done on it.
3. From the front end we have to pass the data in the form of structures (see the client-side sketch after the T-SQL example below).
4. Reduces round trips to the server.
5. Processes complex logic at a stretch in one single routine.
-- Am trying to create a table "EmployeeTable" with three fields.
CREATE TABLE EmployeeTable
(id int,
[name] varchar(100),
designation varchar(100))
-- Creating a stored procedure "TableValuedSampleProc" to insert the rows.
CREATE PROCEDURE TableValuedSampleProc (@id int, @name varchar(100),@designation varchar(100))
AS
BEGIN
insert into EmployeeTable values (@id,@name,@designation)
END
-- Executing the stored procedure
EXEC TableValuedSampleProc 1,'one','manager'
EXEC TableValuedSampleProc 2,'two','sr tlr'
EXEC TableValuedSampleProc 3,'three','tlr'
SELECT * FROM EmployeeTable
-- Creating a table type "EmployeeTableType"
CREATE TYPE EmployeeTableType AS TABLE
(ID int, [name] varchar(100),designation varchar(100))
-- Creating a stored procedure to insert the data using the table type
CREATE PROCEDURE EmployeeTableTypeProc (@EmployeeTempTable EmployeeTableType READONLY)
AS
BEGIN
INSERT INTO EmployeeTable
SELECT * FROM @EmployeeTempTable
END
-- Declaring a variable of the table type and filling it
DECLARE @TempEmployee EmployeeTableType
INSERT INTO @TempEmployee VALUES (1,'one','manager')
INSERT INTO @TempEmployee VALUES (2,'two','sr tlr')
INSERT INTO @TempEmployee VALUES (3,'three','tlr')
-- Executing the stored procedure by passing the temporary table type
EXEC EmployeeTableTypeProc @TempEmployee
-- Checking the existence of data
SELECT * FROM EmployeeTable
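From the front end, the table-valued parameter can be passed with ADO.NET by sending a DataTable whose columns match the table type. A minimal sketch (the connection string is a placeholder; requires using System.Data and System.Data.SqlClient):

DataTable employees = new DataTable();
employees.Columns.Add("ID", typeof(int));
employees.Columns.Add("name", typeof(string));
employees.Columns.Add("designation", typeof(string));
employees.Rows.Add(4, "four", "manager");
employees.Rows.Add(5, "five", "tlr");

using (SqlConnection conn = new SqlConnection("Server=serverName;Database=testDb;Trusted_Connection=yes;"))
using (SqlCommand cmd = new SqlCommand("EmployeeTableTypeProc", conn))
{
    cmd.CommandType = CommandType.StoredProcedure;

    SqlParameter p = cmd.Parameters.AddWithValue("@EmployeeTempTable", employees);
    p.SqlDbType = SqlDbType.Structured;      // marks the parameter as a TVP
    p.TypeName = "dbo.EmployeeTableType";    // the table type created above

    conn.Open();
    cmd.ExecuteNonQuery();                   // all rows travel in one round trip
}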
pivot table
One of the most requested pieces of code on the Internet forums about SQL Server these days is the code for making a crosstab query. There was no native support for that in version 6.5. Not even version 7.0 or version 2000 had this support.
So when Microsoft announced that SQL 2005 would support crosstab queries, we all (well, at least I) cheered and anticipated that this would solve a number of difficulties. Many of us have worked with MS Access since version 2.0, and in that application, pivot tables are a breeze.
declare @SQLTable1 table (id int,Studentname varchar(100))
declare @SQLTable2 table (id int,Marks int)
-- inserting the data into the table variables
insert into @SQLTable1
select '1','one' union all
select '2','two' union all
select '3','three' union all
select '4','four'
insert into @SQLTable2
select '1','90' union all
select '1','20' union all
select '1','80' union all
select '2','78' union all
select '2','67' union all
select '3','89' union all
select '3','65' union all
select '3','98' union all
select '4','78' union all
select '4','76' union all
select '4','45'
select * from @SQLTable1
select * from @SQLTable2
--Creating rownumber in the cte
;with lakshmi as(
select a.id,a.Studentname, b.Marks,
row_number() over ( partition by a.id order by a.id) as rn
from @SQLTable1 a inner join @SQLTable2 b on a.id=b.id
)
--select * from lakshmi
-- pivoting the rows in the cte
select id,Studentname,[1] as Subject1,[2] as Subject2,[3] as Subject3
from lakshmi
pivot
(
min(Marks) for rn in ([1],[2],[3])
)
pvt
order by id
Wednesday, February 13, 2008
Undocumented Stored Procs
sp_MSgetversion
This extended stored procedure can be used to get the current version of Microsoft SQL Server. To get the current SQL Server version, run
EXEC master..sp_MSgetversion
xp_dirtree
This extended stored procedure can be used to get a list of all the folders for the folder named in the xp. To get a list of all the folders in the C:\MSSQL7 folder, run:
EXEC master..xp_dirtree 'C:\MSSQL7'
xp_enum_oledb_providers
This extended stored procedure is used to list all the available OLE DB providers. It returns the Provider Name, Parse Name and Provider Description. To get a list of all OLE DB providers for your SQL Server, run:
EXEC master..xp_enum_oledb_providers
xp_enumcodepages
This extended stored procedure can be used to list all code pages, character sets and their descriptions for your SQL Server. To get a list of all code pages and character sets, run:
EXEC master..xp_enumcodepages
xp_enumdsn
This extended stored procedure returns a list of all System DSNs and their description. To get the list of System DSNs, run:
EXEC master..xp_enumdsn
xp_enumerrorlogs
This extended stored procedure returns the list of all error logs with their last change date. To get the list of error logs, run:
EXEC master..xp_enumerrorlogs
xp_enumgroups
This extended stored procedure returns the list of Windows NT groups and their description. To get the list of the Windows NT groups, run:
EXEC master..xp_enumgroups
xp_fileexist
You can use this extended stored procedure to determine whether a particular file exists on the disk or not.
Syntax:
EXECUTE xp_fileexist filename [, file_exists INT OUTPUT]
For example, to check whether the file boot.ini exists on disk c: or not, run:
EXEC master..xp_fileexist 'c:\boot.ini'
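To capture the result programmatically, a small sketch using the optional OUTPUT parameter from the syntax above:

DECLARE @exists int
EXEC master..xp_fileexist 'c:\boot.ini', @exists OUTPUT
SELECT @exists   -- 1 if the file exists, 0 otherwise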
xp_fixeddrives
This very useful extended stored procedure returns the list of all hard drives and the amount of free space in Mb for each hard drive.
To see the list of drives, run:
EXEC master..xp_fixeddrives
xp_getnetname
This extended stored procedure returns the WINS name of the SQL Server that you're connected to.
To view the name, run:
EXEC master..xp_getnetname
xp_readerrorlog
This extended stored procedure returns the content of the errorlog file. You can find the errorlog file in the C:\MSSQL7\Log directory, by default for SQL Server 7.0.
To see the text of the errorlog file, run:
EXEC master..xp_readerrorlog
xp_regdeletekey
This extended stored procedure will delete an entire key from the registry. You should use it very carefully.
Syntax:
EXECUTE xp_regdeletekey [@rootkey=]'rootkey', [@key=]'key'
For example, to delete the key 'SOFTWARE\Test' from 'HKEY_LOCAL_MACHINE', run:
EXEC master..xp_regdeletekey @rootkey='HKEY_LOCAL_MACHINE', @key='SOFTWARE\Test'
xp_regdeletevalue
This extended stored procedure will delete a particular value for a key in the registry. You should use it very carefully.
Syntax:
EXECUTE xp_regdeletevalue [@rootkey=]'rootkey', [@key=]'key', [@value_name=]'value_name'
For example, to delete the value 'TestValue' for the key 'SOFTWARE\Test' from 'HKEY_LOCAL_MACHINE', run:
EXEC master..xp_regdeletevalue @rootkey='HKEY_LOCAL_MACHINE', @key='SOFTWARE\Test', @value_name='TestValue'
xp_regread
This extended stored procedure is used to read from the registry.
Syntax:
EXECUTE xp_regread [@rootkey=]'rootkey', [@key=]'key' [, [@value_name=]'value_name'] [, [@value=]@value OUTPUT]
For example, to read into the variable @test from the value 'TestValue' from the key 'SOFTWARE\Test' from the 'HKEY_LOCAL_MACHINE', run:
DECLARE @test varchar(20)
EXEC master..xp_regread @rootkey='HKEY_LOCAL_MACHINE',
    @key='SOFTWARE\Test',
    @value_name='TestValue',
    @value=@test OUTPUT
SELECT @test
xp_regwrite
This extended stored procedure is used to write to the registry.
Syntax:
EXECUTE xp_regwrite [@rootkey=]'rootkey', [@key=]'key', [@value_name=]'value_name', [@type=]'type', [@value=]'value'
For example, to write the variable 'Test' to the 'TestValue' value, key 'SOFTWARE\Test', 'HKEY_LOCAL_MACHINE', run:
EXEC master..xp_regwrite @rootkey='HKEY_LOCAL_MACHINE', @key='SOFTWARE\Test', @value_name='TestValue', @type='REG_SZ', @value='Test'
xp_subdirs
This extended stored procedure is used to get the list of folders for the folder named in the xp. In comparison with xp_dirtree, xp_subdirs returns only those directories whose depth = 1.
Here is an example:
EXEC master..xp_subdirs 'C:\MSSQL7'