Event delegates and exception handling

Delegates

This blog post will focus on handling exceptions thrown by event delegates.

But first we will briefly discuss what a delegate is. If you know C or C++, it is natural to think of delegates as function pointers. Why is this handy? Let’s say that I have an API that will read a list of car objects from a database and return that list to the user. Something like this:

    public IEnumerable<Car> GetCars()
    {
      var cars = new List<Car>();
      //Get all cars from database
      return cars;
    }

    public class Car
    {
      //define car properties like color, weight, brand etc.
    }

Nothing fancy here. Now let’s say we want to let the users of our API filter the cars – for instance, only return red cars. We could make a new API method that returns a specific color of cars, but we really can’t guess what the user would like to filter on – it would be much nicer if the user could provide us with a filter method to use. First we define what the filter shall look like, and then we use it in our API:

    public delegate bool Filter(Car car);

    public IEnumerable<Car> GetCars(Filter filter)
    {
      var cars = new List<Car>();
      //Get all cars from database and then apply the filter
      return cars.Where(car => filter(car));
    }

Now we can define and use a filter that returns red cars:

    public bool CarIsRed(Car car)
    {
      return car.Color == ConsoleColor.Red;
    }

    public void UseGetCars()
    {
      var filteredCars = GetCars(CarIsRed);
      //do stuff with cars
    }

But the user can supply any kind of filter they desire.
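The caller does not even have to declare a named method – a lambda converts directly to the Filter delegate. Here is a small self-contained sketch of the same API; the in-memory list is a made-up stand-in for the database:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public delegate bool Filter(Car car);

public class Car
{
    public ConsoleColor Color { get; set; }
}

public static class CarApi
{
    // Hypothetical in-memory "database" standing in for the real one.
    private static readonly List<Car> cars = new List<Car>
    {
        new Car { Color = ConsoleColor.Red },
        new Car { Color = ConsoleColor.Blue },
        new Car { Color = ConsoleColor.Red },
    };

    public static IEnumerable<Car> GetCars(Filter filter)
    {
        return cars.Where(car => filter(car));
    }
}
```

With this in place, `CarApi.GetCars(car => car.Color == ConsoleColor.Red)` returns only the red cars – no named CarIsRed method needed.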

So that was delegates – what is an event?
And how is it different from a delegate?

A C# event is a collection of delegates that can be called by the event. So the event itself is not a delegate, but you can add delegates to it using the += operator and remove them again using the -= operator.

In the following example a timer is started, and every second the timer calls the DoTick method. The DoTick method fires the Tick event if there are any subscribers. In our case there are two subscribers, t_Tick and t_Tick2, which write to the console.

  public class Program
  {
    private static void Main(string[] args)
    {
      var t = new Ticker();
      t.Tick += t_Tick;
      t.Tick += t_Tick2;

      Console.ReadKey();
    }

    private static void t_Tick2(object sender, EventArgs e)
    {
      Console.WriteLine("Tick2");
    }

    private static void t_Tick(object sender, EventArgs e)
    {
      //throw new Exception("boom!!");
      Console.WriteLine("Tick1");
    }
  }

  public class Ticker
  {
    private System.Threading.Timer timer;
    public event EventHandler<EventArgs> Tick;

    public Ticker()
    {
      timer = new System.Threading.Timer(DoTick, null, 1000, 1000);
    }

    private void DoTick(object state)
    {
      if (Tick != null)
        Tick(this, new EventArgs());
    }
  }

This works fine – but in the t_Tick method I have hinted at a problem. What if one of the event handlers (subscribers) throws an exception? That is the topic of the next section.

What happens if one of the event handlers throws an exception?

As the code in DoTick is written now, the application will be terminated – not so nice. If you are in complete control of your application, you should have normal exception handling in each event handler, so exceptions aren’t propagated back to the place where the event was fired from.

What if I’m not in complete control of the application?

Maybe your application has support for third-party plugins, and the event handlers could belong to two different plugins provided by different manufacturers. One of the plugins throws an exception, and we are not interested in our application going boom! The first thing you try is to rewrite DoTick like this:

    private void DoTick(object state)
    {
      try
      {
        if (Tick != null)
          Tick(this, new EventArgs());
      }
      catch (Exception)
      {
        //do exception handling, log the error etc.
      }
    }

Now our application doesn’t crash anymore. But the event handlers are called one at a time, and when an event handler throws an exception it is caught – and then the rest of the event handlers are not called, meaning that one defective event handler will ruin it for everybody else.

Instead we can use the fact that an event contains a collection of delegates and that each delegate can be invoked by calling DynamicInvoke on it:

    private void DoTick(object state)
    {
      if (Tick != null)
      {
        foreach (var @delegate in Tick.GetInvocationList())
        {
          try
          {
            @delegate.DynamicInvoke(this, new EventArgs());
          }
          catch (Exception)
          {
            //do exception handling, log the error etc.
          }
        }
      }
    }

Now only the defective event handler will fail, and everybody else will be able to do their job.

I don’t want to write that ugly code every time I fire an event.

Yes, the code is somewhat ugly, and it is quite a lot to write just to fire an event. I already think that having to check for null is kind of stupid, since that is what you do every time – and this is even worse. Therefore I suggest creating an extension method that works for events using the EventHandler<TEventArgs> signature (which is what is generally recommended, so of course you use that for all your event handlers).

  public static class Utils
  {
    public static void FireEvent<T>(this EventHandler<T> evt, object sender, T eventArgs)
      where T : EventArgs
    {
      if (evt != null)
      {
        foreach (var @delegate in evt.GetInvocationList())
        {
          try
          {
            @delegate.DynamicInvoke(sender, eventArgs);
          }
          catch (Exception)
          {
            //do the exception handling
          }
        }
      }
    }
  }

Now that we have written this code once and for all, we can fire events like this:

    private void DoTick(object state)
    {
      Tick.FireEvent(this, new EventArgs());
    }

Wow, that’s really nice

Yes it is – you should of course still make sure that the error handling in the try-catch in the extension method does something appropriate, like logging the error, disabling the offending plug-in, or whatever makes sense in your situation.

Posted in C#

Trying out Windows Azure Storage

I am part of the team behind a set of web pages at Myvoices.com where you can search for voice talents (people speaking in commercials, narrating documentaries etc.)

One of the features of these sites is that you get to listen to some of the work these people have done before. This means we have a lot of mp3 files stored online at a web hotel, and in the foreseeable future we will exceed the amount of space available. Therefore I have started to look at some alternatives – one of which is Windows Azure Storage.

I found this guide to get me started: http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/ (from now on: The Guide)

I had some problems, however (the reason for this blog post). And I deviated from The Guide in that I did not create a Windows Azure account – I simply used the built-in development test account, since I wanted to check out the programming model first. Also note that, in regard to The Guide mentioned above, I use .Net configuration, not Cloud Service.

NuGet or SDK? NuGet first…

The Guide says that I can get the relevant assemblies with NuGet, so I didn’t bother to get the SDK. I could write some code and get it to compile without too much trouble, but when I wanted to test it I ran into problems.

Connection string for DevelopmentStore

First there was the connection string for Azure. I use the following code to connect:

var connectionString = ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString;
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
blobClient = storageAccount.CreateCloudBlobClient();

The default string looks like this:

connectionString="DefaultEndpointsProtocol=https;AccountName=[AccountName];AccountKey=[AccountKey]"

When using the test account you can drop AccountName and AccountKey, and simply set UseDevelopmentStorage=true. I tried this:

connectionString="DefaultEndpointsProtocol=https;UseDevelopmentStorage=true"

This connection string does not parse. It turns out that the only thing in the connection string shall be UseDevelopmentStorage=true. (It is possible to add proxy information as described in The Guide.)

connectionString="UseDevelopmentStorage=true"

NuGet or SDK? On second thought: SDK

Next I wanted to set up a container for our media files. I chose to call it “Media”:

container = blobClient.GetContainerReference("Media");
if (container.CreateIfNotExists())
{
   container.SetPermissions(
                           new BlobContainerPermissions
                             {
                               PublicAccess = BlobContainerPublicAccessType.Blob
                             });
}

I could not connect to the server. When you are using the development test account you are supposed to connect to a storage emulator – and you don’t get the emulator with NuGet. So I also downloaded and installed the Windows Azure SDK. That gave me the Storage Emulator – you have to start it yourself from the start menu. After restarting my program, I could connect to the server.

Container name

I immediately ran into the next problem. I received this error: “The remote server returned an error: (400) Bad Request”. It turns out that container names may only contain lowercase letters, numbers, and hyphens. So after changing “Media” to “media” it worked – I now have a container. The rest seems to be as described in The Guide – uploading, downloading, listing container contents and so on.

The structure of Azure Blob Storage:

Account

When you use Azure Storage for real, you will make a named account, that will determine the URL where your files can be found:

http://<storage account>.blob.core.windows.net/<container>/<blob>

Container

The container is somewhat like the root of a hard drive. You connect to the container before you do any blob manipulation.

Blob

A blob is simply data – like a file. The name of a blob can contain almost all characters, but should not end with . or / or a combination of these, because the URI for the blob is created using the .Net Uri class, which strips off these trailing characters. In general, though, / is allowed as part of the blob’s name (the filename).

This is because there are no directories in the container’s data structure. So the answer to one of my first questions – how do you create a directory in Azure Storage? – is: You don’t. There is no such thing as a directory. But to create structure you can name your blobs like this: “MySimulatedDir/MyBlob”.
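Since the / is just part of the name, you can recover the “folders” client-side with ordinary string handling. A minimal sketch (the blob names here are made up for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class BlobFolders
{
    // Returns the distinct top-level "virtual folder" prefixes from a
    // flat list of blob names that use '/' as a simulated separator.
    public static List<string> TopLevelFolders(IEnumerable<string> blobNames)
    {
        return blobNames
            .Where(name => name.Contains("/"))
            .Select(name => name.Substring(0, name.IndexOf('/')))
            .Distinct()
            .ToList();
    }
}
```

So a container holding "voices/anna/demo.mp3" and "voices/bob/ad.mp3" has the single virtual top-level folder "voices", even though nothing directory-like exists on the server.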

This section of The Guide http://www.windowsazure.com/en-us/develop/net/how-to-guides/blob-storage/#header-8 shows how to list all the files in the container as if they were in a directory structure, by using virtual folders.

Posted in C#, Cloud Computing, Windows Azure

The #DevReach .Net conference in Sofia, Bulgaria

This is the second year in a row I attended the DevReach conference. Last year I got an invite from Telerik, because I used some Telerik Silverlight components – this made me eligible for a discount. This year I simply joined the early bird programme. The early bird price was below 160 EUR for two days with 6 tracks, including an amazing after party where you could meet all the speakers and the other attendees who also chose to buy a VIP pass. The nearby five-star hotel was 80 EUR a night, and the flight to Sofia is also quite affordable if you live in Europe. This year they also added a pre-conference day with four half-day workshops.

Do these prices mean it is a discount conference?

Granted: this is all very affordable, but I assure you there is nothing “discount” about it. It is a great conference with international speakers of a very high class. At the time of this writing the speaker list is still available at www.devreach.com.

Topics covered include agile, testing, architecture, mobile development, Windows 8 development, HTML5, cloud, and web – including how to make cross-platform apps with as much shared code as possible.

I can only recommend that you follow @devreach on Twitter or on Facebook, so you get notified when the registration for next year’s conference opens. And when it does, tell your friends! You will not regret it.

Posted in Conference

LINQ – the order of elements when using Where()

Last week I was asked if the order of elements found using a Where clause is guaranteed to be the same as the order of the original collection. That question is harder to answer than it sounds.

In the normal case (where nobody has overridden any framework methods or any such thing) the answer in practice is yes, the order will be the same, because Where() simply enumerates over the collection one element at a time, and if an element passes the condition it is returned. This small test program demonstrates it – I have used a SortedDictionary, because that makes it easy to see the order.

using System;
using System.Collections.Generic;
using System.Linq;
namespace ConsoleApplication1
{
 class Program
 {
  static void Main(string[] args)
  {
   var sd = new SortedDictionary<string, string>() { { "b", "test1" }, { "a", "test2" }, { "c", "test3" } };
   Console.WriteLine("The dictionary");
   foreach (var kv in sd)
   {
    Console.WriteLine(kv.Key + " - " + kv.Value);
   }
   Console.WriteLine("the dictionary where Key != b");
   foreach (var kv in sd.Where(x => x.Key != "b"))
   {
    Console.WriteLine(kv.Key + " - " + kv.Value);
   }
   Console.ReadKey();
  }
 }
}

And the output is as expected:

The dictionary
a - test2
b - test1
c - test3
the dictionary where Key != b
a - test2
c - test3

Does that mean that we can conclude that the order of elements returned from Where() will be the same as in the original collection? In the current version of the framework it will. In the general case the Where method is implemented like this:

public static IEnumerable<TSource> Where<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate) {
 return new WhereEnumerableIterator<TSource>(source, predicate);
}

In the real code there are special iterator implementations for List<T> and arrays, but here we look at the general case. In the WhereEnumerableIterator the main part of the MoveNext method is implemented like this:

while (enumerator.MoveNext()) {
 TSource item = enumerator.Current;
 if (predicate(item)) {
  current = item;
  return true;
 }
}

(enumerator is initialized with source.GetEnumerator())

So we see the order of the original collection must be preserved.

It should be noted, however, that nowhere in the documentation for Where() is it guaranteed to preserve the order. It is only stated that Where() will return an IEnumerable<T> containing the elements that satisfy the condition.

This means it probably isn’t safe to rely on the ordering in the future.

Furthermore, somebody might come by and try to make your foreach loop faster by adding an AsParallel() – and that will certainly ruin the order. So if order is important, it might be better to add an OrderBy() – that makes it clear that order matters (or insert a comment to the same effect).
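For instance, the dictionary example above can state its ordering requirement explicitly, so the intent survives a future AsParallel():

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class OrderingDemo
{
    public static List<string> FilteredKeys()
    {
        var sd = new SortedDictionary<string, string>
        {
            { "b", "test1" }, { "a", "test2" }, { "c", "test3" }
        };
        // OrderBy documents that the order matters, instead of
        // silently relying on Where() preserving source order.
        return sd.Where(x => x.Key != "b")
                 .OrderBy(x => x.Key)
                 .Select(x => x.Key)
                 .ToList();
    }
}
```

Now anyone parallelizing the query would have to consciously remove the OrderBy() to break the ordering.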

A funny side note: in the documentation for Parallel LINQ it is stated that “Therefore, by default, PLINQ does not preserve the order of the source sequence. In this regard, PLINQ resembles LINQ to SQL, but is unlike LINQ to Objects, which does preserve ordering.” See http://msdn.microsoft.com/en-us/library/dd460677.aspx – so it is sort of documented that order is preserved, just not in the documentation for Where() itself. The question is whether that is good enough to guarantee that order will also be preserved in the future. (Maybe I just haven’t found the place in the LINQ documentation where it says order is preserved…)

Posted in C#, Linq

LINQ and iterating over null collections

OK, today’s title is weird, but I have come up with a little hack that will make the code easier to read (sometimes).

At work we have some pretty clear specifications that say how to fill out the properties of an object that is to be sent over a web service. A specification could go something like this (this is fictive – we don’t do webshops at work):

Accounting
– Month total amount WebShop:Orders:Amount:Sum
– Month total items number of items in WebShop:Orders
– Month average sales price Month total amount / Month total items,
If Month total items is zero then don’t set this field.

In code this translates to something like:

var accounting = new Accounting();
accounting.MonthTotalAmount = WebShop.Orders.Sum(o => o.Amount);
accounting.MonthTotalItems = WebShop.Orders.Count();
if(accounting.MonthTotalItems != 0)
  accounting.MonthAverageSalesPrice = accounting.MonthTotalAmount /
                                      accounting.MonthTotalItems;

This is pretty straightforward. As it happens, some of these specifications are very long – 50+ pages with 100+ different properties to read from. All of this is of course split into smaller classes, but that is irrelevant for the topic at hand. It also happens that some of the properties will return null instead of an empty collection – this is unfortunately not consistent throughout our code base. This means that I have to check for null for each property that I want to set:

if(WebShop.Orders != null)
  accounting.MonthTotalAmount = WebShop.Orders.Sum(o => o.Amount);
else
  accounting.MonthTotalAmount = 0m;

This becomes quite boring to write, and it also makes the code more difficult to read and compare to the specification. It would be nice if the Sum(o => o.Amount) of null were 0. In order to do exactly that I made an extension method called NullToEmpty(), which returns the collection if it is not null, and an empty collection if it is null.

For example I can write

accounting.MonthTotalAmount = WebShop.Orders.NullToEmpty().Sum(o => o.Amount);

and it will work no matter whether Orders is null or not.

The code for the method is inspired by some code I saw in the .Net framework (I don’t remember where).

public static class LinqExtension
{
  public static IEnumerable<TElement> NullToEmpty<TElement>(this IEnumerable<TElement> source)
  {
    return source ?? EmptyEnumerable<TElement>.Instance;
  }

  internal class EmptyEnumerable<TElement>
  {
    static TElement[] instance;

    public static IEnumerable<TElement> Instance
    {
      get { return instance ?? (instance = new TElement[0]); }
    }
  }
}

I don’t think this should be used blindly everywhere, but in my case it helps readability, and it is not used in highly performance-critical situations.
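It may look odd to call a method on a null reference, but an extension method is just a static method call, so the null simply arrives as the source argument. A quick self-contained check of the behaviour (this sketch uses Enumerable.Empty<T> instead of the cached-array class above, purely for brevity):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class NullToEmptyDemo
{
    public static IEnumerable<T> NullToEmpty<T>(this IEnumerable<T> source)
    {
        // source may legally be null here - extension methods do not
        // null-check the receiver the way instance methods do.
        return source ?? Enumerable.Empty<T>();
    }

    public static decimal TotalAmount(IEnumerable<decimal> amounts)
    {
        // Works whether amounts is null or an actual collection.
        return amounts.NullToEmpty().Sum();
    }
}
```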

Posted in C#, Linq

LINQ and counting

When we work with collections we often have to count the elements in some way. Sometimes we need to know how many elements there are, or how many elements comply with some condition.

So let’s assume that I have a collection ‘myCollection’ containing a number of elements. If we want to know how many elements are in the collection we will naturally write this:

var numberOfElements = myCollection.Count();

Counting

It is important to know that the only way the Count() method can know how many elements there are in the collection is to enumerate through the collection (unless the collection is an ICollection<T>). Count() is implemented like this (almost – I have removed some null checks):

public static int Count<TSource>(this IEnumerable<TSource> source) {
 //If the collection is of a type that implements the Count
 //property, then use that as there is a good chance that the Count property is
 //more efficient.
 ICollection<TSource> collectionoft = source as ICollection<TSource>;
 if (collectionoft != null) return collectionoft.Count;
 ICollection collection = source as ICollection;
 if (collection != null) return collection.Count;
 //Otherwise iterate through the collection until we reach the end
 int count = 0;
 using (IEnumerator<TSource> e = source.GetEnumerator()) {
   checked {
     while (e.MoveNext()) count++;
    }
 }
 return count;
}

The important part to note here is that we should not call Count() more times than strictly necessary, as we might need to iterate through the complete collection every time.

At least one – Any

I often see code like this:

if(myCollection.Count() >= 1)
  //… do stuff…

So we iterate through the complete collection just to find out if there is at least one item in it. That seems like a waste, especially when there might be thousands or millions of items in the collection. Luckily there is an optimized method for that.

if(myCollection.Any())
  // … do stuff…

Which will produce the exact same result, but notice that Any() is implemented like this:

public static bool Any<TSource>(this IEnumerable<TSource> source) {
  using (IEnumerator<TSource> e = source.GetEnumerator()) {
    if (e.MoveNext()) return true;
  }
  return false;
}

So this will only iterate over a single item, and is thus much more efficient.
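You can see the difference by counting how many elements actually get pulled from the source. A small sketch using an iterator that counts its own yields (the counter and sizes are made up for the demonstration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class EnumerationCounter
{
    // Counts how far a consumer enumerates into the sequence.
    public static int Visited;

    public static IEnumerable<int> Numbers(int n)
    {
        for (int i = 0; i < n; i++)
        {
            Visited++;        // incremented once per element handed out
            yield return i;
        }
    }
}
```

After `EnumerationCounter.Numbers(1000).Any()` the counter stands at 1; after `EnumerationCounter.Numbers(1000).Count()` it stands at 1000, because the iterator is not an ICollection<T>, so Count() has to walk the whole sequence.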

None

Sometimes we want to know if there are no items that satisfy a certain condition. We can do this in a number of ways:

if(myCollection.Where(x => x > 5).Count() == 0)
  // … do stuff …

This is still inefficient for the same reasons as above.

if(myCollection.Where(x => x > 5).Any() == false)
  // … do stuff …

This is more efficient, but a little hard to read – instead I have implemented my own extension method, called None(). So we can write:

if(myCollection.Where(x => x > 5).None())
  // … do stuff …

It is implemented like this:

public static bool None<TSource>(this IEnumerable<TSource> source)
{
  return !source.Any();
}

I also implemented an overload to match the version of Any() that takes a predicate.

public static bool None<TSource>(this IEnumerable<TSource> source, Func<TSource, bool> predicate)
{
  return !source.Any(predicate);
}

This overload makes it possible to write either

if(myCollection.Where(x => x > 5).None())
  // … do stuff …

or

if(myCollection.None(x => x > 5))
  // … do stuff …

Exactly n or more than n

Sometimes we want to know if the collection has exactly 10 elements, or more than 10 elements – and again it is a waste to use Count(), when we only need to iterate over 10 or 11 elements to answer the question.

For this purpose I have these two methods, each in two versions:

public static bool ContainsExactly<T>(this IEnumerable<T> source, int count)
{
  if (count < 0)
    throw new ArgumentOutOfRangeException("count");
  if (source == null)
    throw new ArgumentNullException("source");
  using (var e = source.GetEnumerator())
  {
    for (int i = 0; i < count; i++)
    {
      if (!e.MoveNext())
        return false;
    }
    return !e.MoveNext();
  }
}

public static bool ContainsExactly<T>(this IEnumerable<T> source,
                                            int count, Func<T, bool> predicate)
{
  return source.Where(predicate).ContainsExactly(count);
}

public static bool ContainsMoreThan<T>(this IEnumerable<T> source,
                                             int count)
{
  if (count < 0)
    throw new ArgumentOutOfRangeException("count");
  if (source == null)
    throw new ArgumentNullException("source");
  return source.Skip(count).Any();
}

public static bool ContainsMoreThan<T>(this IEnumerable<T> source,
                                             int count, Func<T, bool> predicate)
{
  return source.Where(predicate).ContainsMoreThan(count);
}
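To show how they behave at the boundaries, here is a compact, self-contained version of the two core methods together with some example calls (the argument checks are left out of this sketch for brevity):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class CountingExtensions
{
    public static bool ContainsExactly<T>(this IEnumerable<T> source, int count)
    {
        using (var e = source.GetEnumerator())
        {
            for (int i = 0; i < count; i++)
                if (!e.MoveNext()) return false;   // fewer than count elements
            return !e.MoveNext();                  // true only if none remain
        }
    }

    public static bool ContainsMoreThan<T>(this IEnumerable<T> source, int count)
    {
        return source.Skip(count).Any();           // enumerates at most count + 1 elements
    }
}
```

So for `new[] { 1, 2, 3, 4, 5 }`: ContainsExactly(5) is true, ContainsExactly(4) is false, ContainsMoreThan(4) is true, and ContainsMoreThan(5) is false – and never more than count + 1 elements are enumerated.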
Posted in C#, Linq

Do more, sleep more, feel better: Status after the three weeks.

Weeks two and three have been really good. I have run three times a week with one of my friends, and my stamina has definitely gotten better.

On the other hand, one of my goals was to go to bed at 22:00 and get up at 04:30 – I have to admit that it is simply too difficult for me to go to bed that early. And going to bed any later means getting up no earlier than 5:00, which means no morning training. So right now there is no weight training, but that’s OK, as long as all the other stuff is going so well.

I still don’t have a scale, and the Fitbit I am contemplating is a bit expensive. But I feel better and better. I think I will eventually need the scale for motivation, but right now the difference is easily felt.

I also have a goal of not eating cake and candy at work, and generally eating more sensibly. Knowing that a new habit is formed in 21 days really helps in convincing oneself that it can be done. I hardly think about it anymore, and the only time I have eaten cake at work was today – I brought it myself and I was very conscious about what I did. It was kind of hard to do, but I decided to simply enjoy it.

Next week I will wrap up this experiment / change in lifestyle.

Posted in Health, Life Quality

Performance of Skip and Take in Linq to Objects

Note: This is not relevant for Linq to Sql or Entity Framework, where Skip and Take are translated into their SQL equivalents.

This week I had to write some code where some 500,000 objects in a collection had to be chopped up in pieces of 100 objects. My first thought was to use Skip and Take in a construction like this:

int count = collection.Count();
for(int i=0; i<count; i+=100)
{
  var x = collection.Skip(i).Take(100);
  //do stuff with x
}

But it turns out this does not perform at all. I wanted to understand why, and I also wanted to understand how bad the problem is.

First I made a micro benchmark. Normally I don’t micro benchmark very much, as I often find that the small differences you measure don’t make a difference in the big picture. But then again: sometimes they do. In my case the collection is a List<T>, so an alternative is to use GetRange(index, count). I decided to benchmark this for collection sizes from 1,000 items to 1,000,000 items. The Y-axis below shows the number of milliseconds to iterate through the collection. The X-axis is the number of items in the collection. “Do stuff with x” in my test is a simple x.ToList(), in order to simulate that all items in x are enumerated. The lower graph shows the time elapsed when GetRange is used.

[Graph: elapsed milliseconds vs. collection size, for Skip/Take and for GetRange]

Besides a few bad measurement points, we see that the GetRange graph is constant at 1-3 milliseconds, while the time to iterate through the complete collection rises with something like n².

A collection size of 500,000 elements takes about 13 seconds to iterate through. Double the size and it takes more than 45 seconds.

I then did a second experiment where I kept the collection size constant, but changed how many items to Take() at a time. Note that the X-axis is logarithmic in this case.

[Graph: elapsed time vs. number of items per Take() call, logarithmic X-axis]

It is clear that the more we Take() at a time, the fewer times we have to call Skip(), and the faster it goes.

Why is Skip() so slow and why is GetRange() for List<T> so fast?

Well, looking at the MS reference code shows that every time you invoke Skip() it has to iterate your collection from the beginning in order to skip the number of elements you desire, which gives a loop within a loop (n² behaviour).

GetRange(), on the other hand, can use the knowledge that a list is used, and do some highly optimized copying of the desired elements to a new collection. It doesn’t have to iterate to get to the first item to copy, since the collection is in one contiguous piece of memory, so a simple calculation can find the memory location to start copying from.

Conclusion: For large collections, don’t use Skip and Take. Find another way to iterate through your collection and divide it.
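One such way, when the collection is a List<T>, is a small chunking helper built on GetRange. This is a sketch of the idea, not the exact code from work (the name InChunks is made up):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Chunker
{
    // Splits a List<T> into pieces of at most chunkSize using GetRange,
    // avoiding the O(n^2) behaviour of calling Skip in a loop.
    public static IEnumerable<List<T>> InChunks<T>(List<T> list, int chunkSize)
    {
        for (int i = 0; i < list.Count; i += chunkSize)
        {
            yield return list.GetRange(i, Math.Min(chunkSize, list.Count - i));
        }
    }
}
```

A list of 250 elements chunked by 100 yields three pieces of 100, 100, and 50 elements, in the original order, and each chunk is produced in (near) constant time.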

Posted in C#, Linq

T-SQL isolation levels

Isolation levels determine how data is read and written when the database is accessed by multiple processes. There are six isolation levels, listed below in rising order of locking.

  • READ UNCOMMITTED or NOLOCK
  • READ COMMITTED (default in SQL Server)
  • REPEATABLE READ or HOLDLOCK
  • SERIALIZABLE
  • SNAPSHOT
  • READ COMMITTED SNAPSHOT

The isolation level is set for the active session by using this command:

SET TRANSACTION ISOLATION LEVEL <level>

or you can use a table hint for a single query:

SELECT <columns> FROM <table> WITH (<level>)

You can only set the isolation level for queries, not for insert, update or delete. SQL server handles the locking for these operations by itself, but the isolation level of concurrently running queries can affect the kind of locks taken when running these commands.

Note that the isolation levels are written as one word when used as a table hint. So it is either

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

or

SELECT <columns> FROM <table> WITH (READUNCOMMITTED)

Below I describe the isolation levels.

READ UNCOMMITTED / NOLOCK

This is the lowest isolation level. The query doesn’t ask for a shared lock and thus can’t be in conflict with somebody else holding an exclusive lock. This means we will always be allowed to read, even uncommitted changes – this is also known as dirty reads, because what you read might be rolled back by another transaction.

I find READ UNCOMMITTED useful when reading from tables where I know the data is never modified, only added to. A good example is a table used for logging some kind of events – logging by its very nature only adds new data.

READ COMMITTED

This is the default isolation level in SQL Server. It guarantees that you can only read data that is committed. If somebody else is modifying or inserting data that matches your query, your query will have to wait for that transaction to be committed before a query result can be returned. This is ensured by requesting a shared lock for the data.

The lock is released as soon as the query returns its data. This means that the data can be changed by somebody else if you request them again – even in the same transaction.

REPEATABLE READ / HOLDLOCK

This isolation level has the same behaviour as READ COMMITTED, but with the added bonus that the lock is held until the end of the transaction – so nobody can change the data under your feet. The downside is of course that the processes wanting to modify the data have to wait until your transaction has ended.

SERIALIZABLE

This is even stronger than REPEATABLE READ. REPEATABLE READ guarantees that data does not change under your feet, but it does not prevent new data from being inserted. SERIALIZABLE will prevent new data that match your query from being inserted, so an even higher level of consistency is achieved.

SNAPSHOT

Under the SNAPSHOT isolation level the last committed version of the data is returned. This is achieved by storing a copy of the data in tempdb. There is no need to wait for a shared lock, as the data is immediately available in the correct version. The downside is a performance penalty when writing to the affected tables.

In order to use the SNAPSHOT isolation level you need to activate it at the database level with this command:

ALTER DATABASE <database> SET ALLOW_SNAPSHOT_ISOLATION ON

In order to do this you need the right permissions to alter the database.

The SNAPSHOT isolation level behaves the same as the SERIALIZABLE isolation level in terms of data consistency.

READ COMMITTED SNAPSHOT

As above, but with behavior like READ COMMITTED in terms of data consistency.

Posted in MSSQL Server, SQL Server, T-SQL

Status after the first week

As written in the original post, I started Monday with meditation on the train, and running and blogging in the evening.

Tuesday I didn’t get up at 4:30 to do my morning exercises – I simply went to bed too late Monday night. It is hard to go to bed as early as 22:00 when you are used to staying up a couple of hours more…

I did better Tuesday night, and Wednesday and Thursday I did get up at 4:30 for a half-hour training session. I did my 20 minutes of zen meditation at the start of the train ride. My Wednesday run was moved to Thursday, so Tuesday and Wednesday I worked on my project and Thursday I did an 8 km run. So these two days I did everything I had planned.

Friday was different. I had to be at work early, and I also had to be on-site in the evening/night due to some upgrades to the production environment. I might not be home until 6 Saturday morning, so I decided that this day was out of the program (I had to take the car to work, so no meditation, and I would probably eat something ‘unhealthy’ to keep my energy up all day and night). This is of course not ideal, but I think it is better to take the day out of the program beforehand than to experience a failure to comply with the rules.

The other days I have managed to stay away from candy and cake, and I have also eaten sensibly – simply going for more greens, and eating a little less (actually stopping when not hungry anymore).

Between Friday and Saturday I got 4 hours of sleep, then went to my brother-in-law’s birthday party – resulting in too little sleep and a bit too much alcohol. Not very drunk, but definitely not able to drive. This meant Sunday was a very tired day, and my evening run was one of the worst in a long time. That can teach me to respect how much sleep I need. The lack of sleep also meant that I chose to skip training Monday morning and get the extra 30 minutes of sleep.

Part of what I want to do is to lose weight, but I don’t own a scale, so right now I am just going by the feel of it. I don’t know if I have lost any weight yet, but it feels that way. The small amount of training I do in the morning makes me feel more upright and slimmer, though I probably haven’t lost any weight yet. It simply feels good. I am contemplating getting both a scale and a ‘pedometer’ from fitbit.com, so I can follow the progress. The scale from Fitbit also measures fat mass, which is also very interesting to follow. I believe it will be motivating to follow the progress.

Posted in Do more, Health, Life Quality