Saturday, October 10, 2009

Factory Pattern

The factory pattern is probably one of the most common creational patterns.

What is a "creational" pattern?

A creational pattern is a pattern that describes ways to create objects; you can think of it as an abstraction of the new operator. The benefit of factory classes is that they encapsulate the creation of objects in a central place - the factory - and ensure that an object is correctly initialized. A well-known factory is the static XmlWriter.Create() method. It takes either a file name or an output stream, plus an optional XmlWriterSettings parameter which specifies things like the formatting behavior of the returned XmlWriter.
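For illustration, a typical call looks like this (the file name and settings are example values; XmlWriter lives in the System.Xml namespace):
XmlWriterSettings settings = new XmlWriterSettings();
settings.Indent = true;

using (XmlWriter writer = XmlWriter.Create("orders.xml", settings))
{
   // the returned writer is already correctly initialized
   writer.WriteStartElement("Orders");
   writer.WriteEndElement();
}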

Sample Scenario
Before we start to show how the factory works, I'll define a sample scenario. Our system is an ordering system which handles "Order" and "Customer" objects, and we have the following classes.
public class Order : ICreationInformation
{
   public DateTime CreationDate { get; set; }
   public String CreationUser { get; set; }
   public String OrderNo { get; set; }
   public Decimal Amount { get; set; }
   public Customer Customer { get; set; }
}

public class Customer : ICreationInformation
{
   public DateTime CreationDate { get; set; }
   public String CreationUser { get; set; }
   public String Name { get; set; }

   public override string ToString()
   {
      return this.Name;
   }
}
As you can see, the "Order" as well as the "Customer" contains the system fields "CreationDate" and "CreationUser". In addition, an order only becomes valid if it has a related "Customer".

Problem of using the "new" operator
Our ordering system might have more than one place where orders are created. If you use the new operator wherever a new "Order" can be created, you always have to specify the "CreationDate" and the "CreationUser". First, you might forget to set this information at some point. Remember Murphy's law: "Anything that can go wrong will go wrong." Second, you duplicate your source code. If you start with a "CreationDate" taken from the local system, it might become a future requirement to use a server date instead, since many computers don't have a correctly configured local time. In that case you have to crawl through your whole code base to find every position where you used the local system time to set a "CreationDate".

Factory
A factory class is a class which encapsulates the creation and correct initialization of objects.

The following listing shows a sample factory to create "Customer" and "Order" objects.
public class Factory
{
   public Customer CreateCustomer()
   {
      Customer customer = new Customer();

      customer.CreationDate = DateTime.Now;
      customer.CreationUser =
         System.Security.Principal.WindowsIdentity.GetCurrent().Name;

      return customer;
   }

   public Order CreateOrder()
   {
      Order order = new Order();

      order.CreationDate = DateTime.Now;
      order.CreationUser =
         System.Security.Principal.WindowsIdentity.GetCurrent().Name;

      return order;
   }
}
Once created we can use our factory wherever we need a new "Order" or "Customer" object.
Factory factory = new Factory();

Customer customer = factory.CreateCustomer();
customer.Name = "ALFKI";

Order order = factory.CreateOrder();
order.Customer = customer;
To ensure that your objects cannot be created via their constructors from outside the factory, it makes sense to define an internal constructor - provided your objects live in a separate assembly. Then it is impossible to create one of these business objects without correct initialization.
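As a sketch, the "Customer" class could get such a constructor like this (note that an internal constructor conflicts with the new() constraint used by the generic factory later in this post):
public class Customer : ICreationInformation
{
   // internal: only code in the same assembly - e.g. our factory -
   // can create a Customer instance
   internal Customer()
   {
   }

   public DateTime CreationDate { get; set; }
   public String CreationUser { get; set; }
   public String Name { get; set; }
}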

In our simple example one factory suffices to create both types of objects. In a real-world scenario it might make sense to use a specific factory class for each type of business object.

Abstract Factory
An abstract factory is a factory which can create different types of objects with a single implementation. In .NET an abstract factory can be realized with a generic method (or class).

Maybe you noticed the ICreationInformation interface which was added to our "Order" and "Customer" classes. Here is the definition of this interface, which provides the properties "CreationDate" and "CreationUser".
interface ICreationInformation
{
   DateTime CreationDate { get; set; }
   String CreationUser { get; set; }
}
In addition, we don't specify any constructor, so both objects have an implicit public constructor. This enables us to create a generic function which can handle the creation of both objects with one implementation.
public class AbstractFactory
{
   public T Create<T>()
      where T : ICreationInformation, new()
   {
      T instance = new T();

      instance.CreationDate = DateTime.Now;
      instance.CreationUser =
         System.Security.Principal.WindowsIdentity.GetCurrent().Name;

      return instance;
   }
}
As you see, the method "Create" works with a generic type "T" with two constraints. The specified type has to be an implementation of ICreationInformation and it needs an empty public constructor. If your business objects don't support an empty constructor you can use the non-generic static method System.Activator.CreateInstance(), which creates objects and lets you specify constructor arguments.
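As a sketch, assuming a hypothetical Order constructor which requires a Customer, such a call could look like this:
// the Order(Customer) constructor is hypothetical; CreateInstance
// passes the given arguments to a matching constructor
Order order = (Order)Activator.CreateInstance(typeof(Order), customer);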

The usage of this factory is quite similar to the usual factory; you just have only one method and have to specify the generic type.
AbstractFactory factory = new AbstractFactory();

Customer customer = factory.Create<Customer>();
customer.Name = "ALFKI";

Order order = factory.Create<Order>();
order.Customer = customer;
Now that's cool. First, because we have just one implementation for potentially hundreds of different objects which implement our ICreationInformation interface and provide an empty public constructor. Second, if you are a little bit like me, you still feel like a kid sometimes and have to say: these generic approaches are always cool! :-)

I'm afraid to say, these abstract factories are not as cool as they look at first glance. The generic approach is a huge restriction for a factory. Suppose you want to be more restrictive than we have been so far. Perhaps you don't want to let the user of your factory create any "Order" without specifying a "Customer". In a usual factory you just change the "CreateOrder" function to take an instance of a "Customer" object. An abstract factory cannot handle special business cases like this.

Combine Both
As we saw, an abstract factory is nice for doing some general initialization of objects, but it's a weak solution for specific requirements. Still, it helps to avoid duplicated code. So why not combine both? You should use a non-abstract factory to create new business objects in an application which consumes your business layer, but you can use the abstract factory within other factories.

Here is a new version of our factory classes.
class AbstractFactory
{
   internal T Create<T>()
      where T : ICreationInformation, new()
   {
      T instance = new T();

      instance.CreationDate = DateTime.Now;
      instance.CreationUser =
         System.Security.Principal.WindowsIdentity.GetCurrent().Name;

      return instance;
   }
}

public class Factory
{
   AbstractFactory _abstract = new AbstractFactory();

   public Customer CreateCustomer()
   {
      Customer customer = _abstract.Create<Customer>();
      return customer;
   }

   public Order CreateOrder(Customer customer)
   {
      if (customer == null)
         throw new ArgumentNullException("customer");

      Order order = _abstract.Create<Order>();
      order.Customer = customer;

      return order;
   }
}
As you see, the abstract factory became an internal class, so it is not possible to use it from outside our assembly. The usual factory holds an instance of the abstract factory and uses it to avoid duplicated code for the ICreationInformation interface members. The "CreateOrder" method became more restrictive to ensure that no order can be created without a specified instance of a "Customer".

Now the consumer can use a safe and handy factory and everything is ensured to be done correctly.
Factory factory = new Factory();

Customer customer = factory.CreateCustomer();
customer.Name = "ALFKI";

Order order = factory.CreateOrder(customer);

Conclusion
As you saw, the factory pattern is a helpful way to ensure correctly initialized objects. The abstract factory is more of a tool for less complex requirements, but it can be very helpful for basic initializations.

Monday, October 5, 2009

Easy Working with Threads

.NET 4.0 introduces the new Task Parallel Library which will simplify working with threads. So, as long as we are just .NET 3.5 coders we have to wait and keep our hands off all this complicated multi-threading, don't we? We don't! I'll show you some approaches for working cleanly and simply in a multi-threaded environment.

If you just need to start threads which do some work without returning any information back to the caller, you are usually fine. Start your threads and let them go.

Unfortunately, most times this is not how you work with threads. Usually you have an application thread which dispatches a set of tasks but needs the results of those tasks to continue. We'll simulate this with a "Task" class which will be provided to the worker methods in the following samples.

Our Task Class

In the following samples we will work with a "Task" class which is passed to our threads to simulate asynchronous work. Here is the definition of this class.
// represents a task which shall be handled by 
// asynchronous working thread
class Task
{
   public int Id { get; set; }
   public int ReturnValue { get; set; }
}

Classic Thread Approach

Let's start with the old-school solution, which uses lock to ensure thread-safe operation.
// monitor the count of working threads
static int _workingTasks;

static void Main(string[] args)
{
   // count of tasks to be simulated
   int taskCount = 10;
   // hold the dispatched tasks to work with the results
   List<Task> tasks = new List<Task>();

   // first set the complete count of working tasks
   _workingTasks = taskCount;

   for (int i = 0; i < taskCount; i++)
   {
      // create a new thread
      Thread thread = new Thread(new ParameterizedThreadStart(DoWork));

      // create and remember the task
      Task task = new Task { Id = i };
      tasks.Add(task);
      // start the thread
      thread.Start(task);
   }

   while (_workingTasks != 0)
   {
      // wait until all tasks have been done
      Thread.Sleep(1);
   }

   // show the return values after all threads finished
   tasks.ForEach(t => 
      Console.WriteLine("Thread {0} returned: {1}", t.Id, t.ReturnValue));

   Console.ReadKey();
}

// work method
static void DoWork(object o)
{
   Task task = (Task)o;
   int id = task.Id;

   for (int i = 0; i < 10; i++)
   {
      Console.WriteLine("Thread {0} is working", id);
      // simulate some long work
      Thread.Sleep(200);

      task.ReturnValue++;
   }

   // we have to lock the monitoring variable to 
   // ensure nobody else can work with until we decremented it
   lock (typeof(Program))
   {
      _workingTasks--;
   }
}
As you see, there are quite a lot of things to keep in mind.

We have to use the C# lock statement to ensure thread-safe access to our monitoring variable. Usually you don't have to lock a whole System.Type; you can use lock(this) or any other reference type. I just used lock(typeof(Program)) because I worked with static methods.
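A common alternative - not used in the sample above - is a dedicated private lock object, which avoids locking on a publicly reachable reference like a Type:
// nobody outside this class can take a lock on this object
static readonly object _sync = new object();

// ...

lock (_sync)
{
   _workingTasks--;
}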

We use Thread.Sleep(1) to poll the state of the dispatched tasks.

Now we'll start to simplify this work.

Using volatile

The first thing we can do to slightly simplify our method is to define our monitoring variable as

static volatile int _workingTasks;

The volatile keyword can be used to tell .NET that a variable might be written by many threads, so its value is never cached and every read sees the latest write. If you define a member variable as volatile, you don't need lock just to make a single read or write visible to other threads.

Since we used lock just once in our previous sample this just changes the call of
lock (typeof(Program))
{
   _workingTasks--;
}
to this

_workingTasks--;

Does working with volatile seem pointless? Keep in mind, this is a very, very simple sample. Not needing a lock becomes really handy when you have to do several different things with shared member fields.
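To be precise: volatile guarantees that every thread sees the current value of _workingTasks, but it does not make the decrement itself atomic - it is still a read-modify-write. Since all worker threads decrement concurrently, a safer lock-free variant is Interlocked (a sketch, not part of the original sample):
// atomic read-modify-write; needs neither lock nor volatile
System.Threading.Interlocked.Decrement(ref _workingTasks);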

Avoid the explicit Polling

The next step to simplify your multi-threading is to remove the explicit polling. What does this mean? So far we worked with a member field "_workingTasks" which monitored the state of our working threads. Imagine a larger class with several multi-threading implementations. In this case you would need several member fields to monitor the different threading activities. Another way to wait for the execution of a thread is to Join() it.
static void Main(string[] args)
{
   // count of tasks to be simulated
   int taskCount = 10;
   // hold the dispatched tasks to work with the results
   List<Task> tasks = new List<Task>(taskCount);
   // remember all threads
   List<Thread> threads = new List<Thread>(taskCount);

   for (int i = 0; i < taskCount; i++)
   {
      // create and remember a new thread
      Thread thread = new Thread(new ParameterizedThreadStart(DoWork));
      threads.Add(thread);

      // create and remember the task
      Task task = new Task { Id = i };
      tasks.Add(task);
      // start the thread
      thread.Start(task);
   }

   // --> HERE <--
   // wait until all threads are finished
   threads.ForEach(thread => thread.Join());

   // show the return values after all threads finished
   tasks.ForEach(task =>
      Console.WriteLine(
         "Thread {0} returned: {1}", 
         task.Id, 
         task.ReturnValue));

   Console.ReadKey();
}

// work method
static void DoWork(object o)
{
   Task task = (Task)o;
   int id = task.Id;

   for (int i = 0; i < 10; i++)
   {
      Console.WriteLine("Thread {0} is working", id);
      // simulate some long work
      Thread.Sleep(200);

      task.ReturnValue++;
   }
}
As you see, we don't need our monitoring variable any more. The usage of Join() makes it possible to implement much better encapsulated multi-threading.

Working with the ThreadPool

Keep in mind, threads are a system resource which is expensive to create. If you have to do many small tasks, creating a thread for each of them can cost more than working single-threaded. If you have complex tasks which have to coordinate their thread state with your application thread or other threads, you should use the System.Threading.Thread class because it gives you the greatest flexibility. If you just have to dispatch tasks and wait for them (as in our sample) it can be the wrong way to always create a new thread. A good approach in this scenario is to reuse the threads for the next tasks.

Q: Okay, cool!! Let's start to code a custom thread manager!

A: Er... nope. It's already available.

For the kind of work we have to do with our "Task" objects you can use the System.Threading.ThreadPool class; it's made exactly for this purpose. It's a pool of threads which can be used to schedule work items.

To schedule means you can queue a larger number of small jobs. Whenever a pooled thread becomes available, the next scheduled (queued) task is started. By default .NET determines the count of initially available threads from your environment. It also starts new threads if they seem to be useful. However, you can customize all of this with static methods (e.g. ThreadPool.GetMaxThreads, ThreadPool.SetMinThreads), as shown below.
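For illustration, reading and adjusting these limits could look like this (the concrete values are arbitrary examples):
int workerThreads;
int completionPortThreads;

// read the current upper limit of pooled threads
ThreadPool.GetMaxThreads(out workerThreads, out completionPortThreads);
Console.WriteLine(
   "max worker threads: {0}, max I/O threads: {1}",
   workerThreads,
   completionPortThreads);

// keep at least 4 worker and 4 I/O completion threads ready
ThreadPool.SetMinThreads(4, 4);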

So let's use the ThreadPool to do our tasks. For this purpose I extended our "Task" class with an additional property.
class Task
{
   public int Id { get; set; }
   public int ReturnValue { get; set; }
   public AutoResetEvent AutoResetEvent { get; set; }
}
The AutoResetEvent is a special type of WaitHandle which provides some helpful methods for multi-threading tasks.
static void Main(string[] args)
{
   // count of tasks to be simulated
   int taskCount = 10;
   // hold the dispatched tasks to work with the results
   List<Task> tasks = new List<Task>(taskCount);

   for (int i = 0; i < taskCount; i++)
   {
      // create and remember the task
      Task task = 
         new Task 
         { 
            Id = i, 
            AutoResetEvent = new AutoResetEvent(false) 
         };
      tasks.Add(task);

      // queue the task in ThreadPool
      ThreadPool.QueueUserWorkItem(new WaitCallback(DoWork), task);
   }

   // wait until all queued tasks are finished
   tasks.ForEach(task => task.AutoResetEvent.WaitOne());

   // show the return values after all threads finished
   tasks.ForEach(task =>
      Console.WriteLine(
         "Thread {0} returned: {1}", 
         task.Id, 
         task.ReturnValue));

   Console.ReadKey();
}

// work method
static void DoWork(object o)
{
   Task task = (Task)o;
   int id = task.Id;

   for (int i = 0; i < 10; i++)
   {
      Console.WriteLine("Thread {0} is working", id);
      // simulate some long work
      Thread.Sleep(200);
      
      task.ReturnValue++;
   }

   // notify the application thread that the task finished
   task.AutoResetEvent.Set();
}
As you see, we don't need the monitoring member fields any more. We don't even need a Thread object any more. The AutoResetEvent class provides a WaitOne() method which can be used by our application thread to wait until each task is finished. To release the wait handle of an AutoResetEvent you call the Set() method.
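If you prefer a single blocking call instead of the ForEach loop, WaitHandle.WaitAll is an alternative sketch - note that it accepts at most 64 handles per call and is not supported on STA threads (e.g. a typical WinForms UI thread):
// collect all wait handles and block until every task has signaled
// (Select requires the System.Linq namespace)
WaitHandle[] handles =
   tasks.Select(task => (WaitHandle)task.AutoResetEvent).ToArray();
WaitHandle.WaitAll(handles);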

Conclusion

I hope I could show you some tricks for simplifying the work with multiple threads in your application.

Almost every new computer has two or more CPUs - start to use them ;-).

Sunday, October 4, 2009

How to handle SQL Server IDENTITIES in .NET

Many people still don't use SQL Server IDENTITY columns to generate their primary key values. Most times they just don't know how to handle them in client applications, so implementations often end up with client-side ID generation or GUID primary keys. This post tries to show why server-side IDs are a good approach and how to work with them.

IDENTITY is not a Sequence
First I have to say, IDENTITY columns are not made to represent a sequence which can/shall be shown to the user. What does this mean? Every ID is used exactly once; when you delete an existing ID it will not be reused for the next inserted row.

Here's a little sample to show this behavior.
DECLARE @t TABLE
(
   Id INT IDENTITY(1,1) 
      PRIMARY KEY CLUSTERED
   ,SomeInt INT
);

-- insert some rows
INSERT INTO @t (SomeInt) 
   VALUES (100)
         ,(100)
         ,(100);

SELECT * FROM @t;
So far the data appear to have a correct sequence.
Id  SomeInt
1   100
2   100
3   100
Let's delete one row and add another one.
-- delete one row in the middle of the table
-- and insert another one
DELETE FROM @t WHERE Id = 2;

INSERT INTO @t (SomeInt) 
   VALUES (200);

SELECT * FROM @t;
As you see, the deleted Id value "2" is not refilled by the newly inserted row. The Id "2" was used, deleted, and will never be reused by SQL Server (without tricks like IDENTITY_INSERT or re-seeding the IDENTITY).
Id  SomeInt
1   100
3   100
4   200
Why doesn't SQL Server decrement the Id "3" when "2" is deleted?
This would be a huge overhead for the database server. Imagine a table with 1,000,000 rows and delete Id "1": this would cause 999,999 rows to be decremented.

Why doesn't SQL Server reuse the deleted Id "2" for the next inserted row?
In this case SQL Server would have to remember every deleted ID. This would affect INSERT performance and storage usage.

If you need a dense rank or a sequence without any gaps you can use functions like ROW_NUMBER() or DENSE_RANK() in your SELECT statement, as sketched below. Anyway, usually this kind of sequence should be handled on the client if possible. It's quite simple to use an int variable to create a sequence to show in the GUI.
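Reusing the @t table variable from the sample above, such a query could look like this:
-- generates a dense, gap-free number at query time,
-- independent of the stored IDENTITY values
SELECT ROW_NUMBER() OVER (ORDER BY Id) AS RowNo
      ,Id
      ,SomeInt
FROM @t;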
Client Side ID Generation
Before we start with server-side IDs, let's have a short look at the client-side approaches.

One solution I've seen several times is a SELECT MAX(Id) FROM ... before executing the INSERT statement. This has two huge problems! First, it's a performance problem, since every INSERT causes an additional index scan. Second, this solution is not safe. Keep in mind, a database server is a multi-user environment. Between your MAX select and the INSERT statement another user can INSERT the same ID and you get a primary key violation!

A second solution for client-side IDs is an ID table which contains the next available ID for each table. The client applications use this table to get the next IDs for their INSERT statements. This is a good approach, but it needs a correct implementation. What does this mean? Suppose a table like this:
table_name  next_id
Table1      344828
Table2      6454
Table3      23432
If every client application selects just one ID whenever it wants to insert a new row, the whole system meets in one single table with just a few rows, which are probably stored in the same data and index pages. The result would be lock contention on this table.

A correct solution for an ID table is client-side ID caching. Every client selects a whole range of IDs to use for its next INSERT statements. The cache size depends on the type of the client and the number of INSERT operations it generates. A Windows application which does not create too many new rows might be fine with a cache size of 10; a server-side import process probably needs a cache size of 500 IDs or more.

Doesn't this mean that cached IDs are lost if the client application exits while IDs are still cached? Yes, but this doesn't matter. Keep in mind, primary key IDs are not made to be shown to a user; they are the database identity of a row. Since INT has a maximum value of 2,147,483,647 this should be enough for most systems - even if you lose some IDs. If you think you might exceed this count of IDs you can use BIGINT, which has a maximum value of 9,223,372,036,854,775,807. The following sketch shows the caching idea.
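Here is a minimal sketch of such a cache. ReserveIdRange is a hypothetical data-access call which atomically advances next_id in the ID table by the cache size and returns the first value of the reserved range.
class IdCache
{
   private readonly string _tableName;
   private readonly int _cacheSize;
   private int _nextId;
   private int _idsLeft;

   public IdCache(string tableName, int cacheSize)
   {
      _tableName = tableName;
      _cacheSize = cacheSize;
   }

   public int GetNextId()
   {
      // one database round trip reserves a whole range of IDs
      if (_idsLeft == 0)
      {
         _nextId = ReserveIdRange(_tableName, _cacheSize);
         _idsLeft = _cacheSize;
      }

      _idsLeft--;
      return _nextId++;
   }

   private int ReserveIdRange(string tableName, int count)
   {
      // hypothetical: UPDATE the ID table in one atomic statement
      // and return the first value of the reserved range
      throw new NotImplementedException();
   }
}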

Database Structure
Before we start with server-side IDENTITY columns, you should know the database structure for the following tests.


And here are the CREATE statements for both tables.
CREATE TABLE Parent
(
 Id int IDENTITY(1,1) NOT NULL
    PRIMARY KEY CLUSTERED
 ,SomeInt int NULL
);

CREATE TABLE Child
(
   Id int IDENTITY(1,1) NOT NULL
      PRIMARY KEY CLUSTERED
   ,ParentId int NOT NULL
      REFERENCES Parent (Id)
   ,SomeInt int NULL
);
As you see, both tables have an IDENTITY column defined as primary key, and the "Child" table has a foreign key which points to the "Parent" table.

Low Level ADO.NET
First we'll have a look at the low-level way to work with server-side IDs. For this we use a SqlCommand to execute an INSERT statement and retrieve the new ID from the server.
string cnStr = "Server=.;Database=Sandbox;Integrated Security=SSPI;";

// the INSERT sql statement. Notice the "SELECT @Id"!
string insertSql = @"INSERT INTO Parent (SomeInt) 
                        VALUES (@SomeInt); 
                     SELECT @Id = SCOPE_IDENTITY();";

using (SqlConnection cn = new SqlConnection(cnStr))
using (SqlCommand insertCmd = new SqlCommand(insertSql, cn))
{
   cn.Open();

   // Add the @SomeInt param with a value
   SqlParameter someIntParam = insertCmd.Parameters.Add(
      "@SomeInt", SqlDbType.Int);
   someIntParam.Value = 123;
   
   // Add the @Id param and define it as output parameter
   SqlParameter idParam = insertCmd.Parameters.Add(
      "@Id", SqlDbType.Int);
   idParam.Direction = ParameterDirection.Output;

   // execute the command
   insertCmd.ExecuteNonQuery();

   // show the "Id" after the data are written to server
   Console.WriteLine("after database insert: {0}", idParam.Value);
}
Since our "idParam" is specified as output parameter ADO retrieves the new SCOPE_IDENTITY() from server and we can work with it in our client.

Mostly we don't need to work with these low-level implementations. Anyway, sometimes it's good to know how the upper levels work.

Working with untyped DataTables
Still quite low-level, but common. I've seen many solutions working with untyped DataTables to handle data from a database server. Usually I would say use at least typed DataSets or an O/R mapper. Anyway. ;-)

How do we handle server-side IDs with a DataTable? The DataTable is updated with a SqlDataAdapter, so there is no way to customize the parameters and handle the returned values, is there? There is: to connect the columns of a DataTable with the return values of a SqlParameter you can use the SqlParameter.SourceColumn property, which specifies the mapping between the parameter and the columns of the table.
string cnStr = "Server=.;Database=Sandbox;Integrated Security=SSPI;";

// the INSERT sql statement. Notice the "SELECT @Id"!
string insertSql = @"INSERT INTO Parent (SomeInt) 
                        VALUES (@SomeInt); 
                     SELECT @Id = SCOPE_IDENTITY();";

using (SqlConnection cn = new SqlConnection(cnStr))
using (SqlCommand insertCmd = new SqlCommand(insertSql, cn))
using (SqlDataAdapter adap = new SqlDataAdapter())
{
   // create a DataTable which represents our Parent table
   DataTable table = new DataTable();
   // specify the Id column as AutoIncrement
   table.Columns.Add("Id", typeof(int)).AutoIncrement = true;
   table.Columns.Add("SomeInt", typeof(int));

   // Add a row to our table and show the current Id value (which is "0")
   DataRow row = table.Rows.Add(null, 1);
   Console.WriteLine("before database insert: {0}", row["Id"]);

   // add the "SomeInt" param
   insertCmd.Parameters.Add("@SomeInt", SqlDbType.Int, 4, "SomeInt");
   // add the "Id" param and define it as output param
   SqlParameter idParam = insertCmd.Parameters.Add(
      "@Id", SqlDbType.Int, 4, "Id");
   idParam.Direction = ParameterDirection.Output;

   // inject the adapter with the custom command and update data
   adap.InsertCommand = insertCmd;
   adap.Update(table);

   // show the "Id" after the data are written to server
   // this returns the new IDENTITY now.
   Console.WriteLine("after database insert: {0}", row["Id"]);
}
The fourth parameter of SqlCommand.Parameters.Add specifies the "SourceColumn". The data adapter uses this column to create the mapping between the parameters and the columns of the table to be updated. The console output shows the server side generated value of our "Id" column.

Working with typed DataSets
Welcome on "it works build in" level. Maybe some of you made bad experiences with IDENTITY columns and typed DataSets, let me tell you that's just caused of a tiny configuration issue within the DataSet designer.

If you already used the previously shown CREATE statements to create the sample tables "Parent" and "Child" in a sample database, just create a typed DataSet named "DataSet1" in a Visual Studio C# project. You will find some detailed tutorials in MSDN which show how to handle this. Here I just want to show how to configure correct handling of IDENTITY columns.

Have a look at the DataSet1.Designer.cs file and navigate to the ParentTableAdapter. You will find the following line in the "InitAdapter()" method.
this._adapter.InsertCommand.CommandText = 
   "INSERT INTO [dbo].[Parent] ([SomeInt]) VALUES (@SomeInt);\r\n" +
   "SELECT Id, SomeInt FROM Parent WHERE (Id = SCOPE_IDENTITY())";
As you see, it selects all inserted values back to the client as soon as the new row is inserted. Unfortunately, in the default configuration the selected "Id" value will not be written through to our related ChildDataTable. Go back to the DataSet designer and do the following tasks:
  • Right click the relation between "Parent" and "Child" table and select "Edit Relation...". The relation configuration dialog appears.
  • In section "Choose what to create" select option "Both Relation and Foreign Key Constraint".
  • Change "Update Rule" to "Cascade".
  • Ensure "Accept/Reject Rule" is set to "None". If you change this option to "Cascade", the changes of our "Child" row will be automatically when the "Parent" table becomes saved.
The following picture shows the configuration.

Finally a little sample which can be copied and pasted to illustrate the correct identity handling with a typed DataSet.
// create a DataSet
DataSet1 ds = new DataSet1();

// get the parent table and add a new row
DataSet1.ParentDataTable parentTable = ds.Parent;
DataSet1.ParentRow parentRow = parentTable.AddParentRow(1);

// get the child table and add a new row
DataSet1.ChildDataTable childTable = ds.Child;
DataSet1.ChildRow childRow = childTable.AddChildRow(parentRow, 1);

// create the table adapters
DataSet1TableAdapters.ParentTableAdapter parentAdapter =
   new ConsoleApplication2.DataSet1TableAdapters.ParentTableAdapter();
DataSet1TableAdapters.ChildTableAdapter childAdapter =
   new ConsoleApplication2.DataSet1TableAdapters.ChildTableAdapter();

// update the parent table
parentAdapter.Update(parentTable);
// #######################
// At this point, the new IDENTITY value from parent row
// was written to our child row!
// #######################
Console.WriteLine(childRow.ParentId);

childAdapter.Update(childTable);
Special thanks at this point goes to ErfinderDesRades, a power-user state member of myCSharp.de - a great C# forum. He showed me the correct configuration of the DataSet.

LINQ to SQL and Entity Framework
LINQ to SQL as well as Entity Framework support IDENTITY columns out of the box, so there is not much more to say about these O/R mappers. A short sketch of the LINQ to SQL case follows.
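Just as a sketch - SandboxDataContext and the Parent entity are assumed to be generated from the sample database, with Id mapped as auto-generated:
using (SandboxDataContext db = new SandboxDataContext())
{
   // SandboxDataContext is a hypothetical generated context
   Parent parent = new Parent { SomeInt = 123 };
   db.Parents.InsertOnSubmit(parent);
   db.SubmitChanges();

   // after SubmitChanges() the server-generated Id is available
   Console.WriteLine(parent.Id);
}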