Lex.DB Performance Tips

Hi there,

Last week I read an article about Lex.DB performance by Diederik Krols (link). It inspired me to provide some valuable feedback on using Lex.DB.

- Why don’t you like orcs?
- You just cannot cook them properly.
(MMORPG folklore)

There are several principles one has to know to use Lex.DB efficiently.

1. Database files are kept consistent at all times. During updates, a short write lock is acquired on the file to prevent concurrent updates. As soon as a concurrent update is detected, all related data is reloaded. This is important in a world of suspended/resumed applications (W8/WP8) as well as for concurrent access from different threads (e.g. ASP.NET).

2. As a consequence of (1), every update operation flushes and closes the data files.

3. Reads and writes are possible only from within a corresponding read/write transaction. If there is no ambient transaction in the thread scope, one is created automatically.

4. The cost of transaction creation/disposal is constant. A write transaction costs more than a read one (because the data index most likely has to be updated).

5. If several records are read/updated in a loop, these transaction costs are multiplied (see the examples below).

6. The overhead of async/await methods is relatively high. If several records are read/updated in an async/await manner, these costs skyrocket. General performance hint: avoid async/await unless necessary.

7. To address these inefficiencies, Lex.DB provides bulk methods (BulkRead, BulkWrite, Save<T>(IEnumerable<T>), Delete<T>(IEnumerable<T>), DeleteByKeys<K>(IEnumerable<K>)) and asynchronous extensions for them.

8. BulkRead packs several read operations into a single read transaction.

Here is the fastest way to load several tables at once:

db.BulkRead(() =>
{
  _myPeople = db.LoadAll<Person>();
  _myAddresses = db.LoadAll<Address>();
  _myCompanies = db.LoadAll<Company>();
});

A single async/await around BulkReadAsync moves all the loading work to a background thread:

await db.BulkReadAsync(() =>
{
  _myPeople = db.LoadAll<Person>();
  _myAddresses = db.LoadAll<Address>();
  _myCompanies = db.LoadAll<Company>();
});

Here is a BAD example (it opens 3 read transactions, performs async/await marshalling 3 times, and dispatches to a background thread 3 times):

// DON'T
_myPeople = await db.LoadAllAsync<Person>();
_myAddresses = await db.LoadAllAsync<Address>();
_myCompanies = await db.LoadAllAsync<Company>();

9. BulkWrite/BulkWriteAsync can speed up several writes like this:

db.BulkWrite(() =>
{
  db.Save(myPerson);
  db.Save(myAddress);
  db.Save(myCompany);
  db.Delete(anotherPerson);
});

or like this with async/await:

await db.BulkWriteAsync(() =>
{
  db.Save(myPerson);
  db.Save(myAddress);
  db.Save(myCompany);
  db.Delete(anotherPerson);
});

And here is a typical BAD example:

// DON'T, UNLESS NO OTHER WAY
await db.SaveAsync(myPerson);
await db.SaveAsync(myAddress);
await db.SaveAsync(myCompany);
await db.DeleteAsync(anotherPerson);

10. Don’t save/delete collections of records in loops, and especially not asynchronously.
Extremely wrong:

// DON'T. EVER
foreach (var person in personsToUpdate)
   await db.SaveAsync(person);

foreach (var person in personsToDelete)
   await db.DeleteAsync(person);

The proper async way looks at least like this:

await db.SaveAsync(persons);
await db.DeleteAsync(persons);

But the best async way of all is:

await db.BulkWriteAsync(() =>
{
   db.Save(persons);
   db.Delete(persons);
});

Keep these ideas in mind, and Lex.DB will always be the fastest way to your data.

Kind regards,
Lex

Universal Apps for Windows Store/Windows Phone Apps

Hi there,

Visual Studio 2013 Update 2 Release Candidate has been available for a few days now.
It introduces the long-awaited Universal Apps feature to simplify code sharing between projects of different .NET flavors.

I have not yet had time to take a closer look at how it works. But from what I know so far, my estimate is a week for Lex.DB to support Universal Apps via NuGet.

Stay tuned,
Viva .NET!

Object Relations with Lex.DB

The next big unanswered question regarding Lex.DB is “How does Lex.DB deal with object relations?”. The short answer: it doesn’t. However, that is not a showstopper.

Object relations come in many forms: many-to-many, one-to-many, one-to-one, nested (on delete cascade), etc. The idea of Lex.DB is to be as simple as a nail and as fast as a hammer. Implementing extended relation support would bloat the library and increase the complexity of the code, the configuration, and the learning curve.

While some alternative local databases support nested collections of arbitrary objects, they serialize all the data into a blob (XML, binary, JSON, etc.). When data structures evolve, there is little guarantee that already stored data remains readable.

Lex.DB internally uses a table-with-columns structure, so it cannot map arbitrary nested/related objects into those columns. But you as a developer can. You know exactly what kinds of relations your objects have, when to load related objects, and when to delete them. Lex.DB does not cache any data; it gives you the means to serialize/deserialize your data fast, query by indexes, and load by keys.

If you care about performance, you should load all the data your application needs only once. Entity Framework makes me laugh when I see how it preloads *-to-many relations from the database: the same objects, with all their content, are transferred over the network for every second end of a *-to-many reference.

Lex.DB simply does not allow this scenario. It gives you full control instead.

All your data objects have a primary key, right? If not, consider introducing one. An int, long, or Guid field/property is fine. Lex.DB supports auto-generated keys for these PK types. However, the key is auto-generated on save. This is important.
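
To see why this matters, here is a minimal sketch. It assumes a DbInstance named db, with the Customer class defined just below mapped via Automap(i => i.Id, true) for key auto-generation, exactly as the Database class later in this post does:

var customer = new Customer { Name = "John", City = "Berlin" };
// customer.Id is still 0 (the default) at this point

db.Save(customer);

// only now does customer.Id hold the auto-generated key,
// so only now is it safe to use it as a foreign key
var fk = customer.Id;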

A relation that Lex.DB can serialize/deserialize is just a foreign key, or a list/array of foreign keys. This is more than enough to load related objects when you need them, and to pull your foreign keys through a custom cache to avoid loading the same objects several times and one by one. You know the lifetime of your data objects, which of them have to be cached, and which can be loaded every time. Full freedom!
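
For example, a minimal key-based cache could look like the following sketch (CustomerCache is a hypothetical helper name; it relies on the Database class defined later in this post):

  public static class CustomerCache
  {
    static readonly Dictionary<int, Customer> _cache = new Dictionary<int, Customer>();

    public static Customer Get(int key)
    {
      Customer result;
      if (!_cache.TryGetValue(key, out result))
        _cache[key] = result = Database.Customers.LoadByKey(key);

      return result;
    }
  }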

Here is an example of simple one-to-many and one-to-one object relations. First, the main entity classes without partials:

  public class Customer
  {
    public int Id { get; set; }
    public string Name { get; set; }
    public string City { get; set; }
  }

  public class Product
  {
    public int Id { get; set; }
    public string Name { get; set; }
  }

  public partial class Order
  {
    public int Id { get; set; }
    public int? CustomerFK { get; set; }
    public int[] ProductsFK { get; set; }
  }

The Order class will have a reference to a Customer and a list of references to Products.

Please welcome partial classes for object relations:

  public partial class Order
  {
    [XmlIgnore] // to avoid DB Serialization
    public Customer Customer
    {
      get
      {
        return CustomerFK == null ? null : _customer ?? (_customer = Database.Customers.LoadByKey(CustomerFK.Value));
      }
      set
      {
        _customer = value;
        SyncCustomerFK();
      }
    }
    Customer _customer;

    void SyncCustomerFK()
    {
      CustomerFK = _customer != null ? (int?)_customer.Id : null;
    }

    [XmlIgnore]  // to avoid DB Serialization
    public Collection<Product> Products { get { return _products ?? (_products = CreateProductsRelation()); } }
    Collection<Product> _products;

    void SyncProductsFK()
    {
      ProductsFK = (from i in _products select i.Id).ToArray();
    }

    Collection<Product> CreateProductsRelation()
    {
      var products = ProductsFK == null ? Enumerable.Empty<Product>() : Database.Products.LoadByKeys(ProductsFK);
      var result = new ObservableCollection<Product>(products);
      result.CollectionChanged += (s, e) => SyncProductsFK();
      return result;
    }
  }

As you can see, we have added a lazily loaded live Customer instance and a Collection of Products. These properties are invisible to Lex.DB due to the [XmlIgnore] attribute.
In the Customer property setter, we call SyncCustomerFK to update the CustomerFK foreign key reference.
The Products collection is backed by an ObservableCollection instance, where we sync the Product foreign keys on collection changes.

Here is the Database static class used for loading:

  public static class Database
  {
    public static DbInstance Instance { get { return _db ?? (_db = CreateDb()); } }
    static DbInstance _db;

    public static DbTable<Product> Products { get { return _products ?? (_products = Instance.Table<Product>()); } }
    static DbTable<Product> _products;

    public static DbTable<Order> Orders { get { return _orders ?? (_orders = Instance.Table<Order>()); } }
    static DbTable<Order> _orders;

    public static DbTable<Customer> Customers { get { return _customers ?? (_customers = Instance.Table<Customer>()); } }
    static DbTable<Customer> _customers;

    static DbInstance CreateDb()
    {
      var result = new DbInstance("Relational");
      result.Map<Product>().Automap(i => i.Id, true);
      result.Map<Order>().Automap(i => i.Id, true);
      result.Map<Customer>().Automap(i => i.Id, true);

      result.Initialize();
      return result;
    }
  }

It exposes typed table classes a-la DataContext/ObjectContext.

Here is the source code for this example (SL & WinRT versions).

So we can get/set/data-bind our Order.Customer property like a normal one; CustomerFK will stay in sync as long as we don’t assign it directly.

The same is valid for the Order.Products collection: the CollectionChanged handler keeps the ProductsFK array in sync, and Lex.DB persists it.
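
Putting it all together, here is a hedged usage sketch (only the Lex.DB calls already shown above are used):

  var customer = Database.Customers.LoadByKey(1);
  var product = Database.Products.LoadByKey(1);

  var order = new Order();
  order.Customer = customer;    // setter calls SyncCustomerFK
  order.Products.Add(product);  // CollectionChanged calls SyncProductsFK

  // only Id, CustomerFK and ProductsFK are persisted
  Database.Instance.Save(order);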

Improved Indexing in Lex.DB 1.1

The original indexing feature was rather limited: you could only look up records with specific values of an indexed key. Basically, it was only useful for speedy access to related entities, like child entities via a parent key.

This has now been drastically improved. The new indexing adds the ability to query records with less-than, greater-than, and between operators. In addition, some LINQ-like methods are available (Skip, Take, and Where) to help you limit record loading.

Let me show a few examples to explain the details. The index declaration during the initialization phase has not changed, so the code changes are minimal. I just copied the code snippets from the original indexing post.

public class Contact 
{   
  public int Id { get; set; }   
  public string FirstName { get; set; }   
  public string LastName { get; set; }   
  public DateTime? BirthDate { get; set; }   
  public string FullName { get { return FirstName + " " + LastName; } } 
}

And initialization:

db.Map<Contact>()
  .Automap(i => i.Id)
  .WithIndex("LastName", i => i.LastName)
  .WithIndex("FullName", i => i.FullName);

And now we’d like to get all contacts with the last name Guthrie:

var contacts = db.Table<Contact>();
var list1 = contacts.IndexQueryByKey("LastName", "Guthrie").ToList();

Or, with the index type specified explicitly:

var list2 = contacts.IndexQuery<string>("LastName").Key("Guthrie").ToList();

The interesting part is the new method IndexQuery<K>, which returns the IIndexQuery interface. This interface is backed by an object containing the indexed query parameters.
For example, if you want to query all last names starting with the letter L, the following query construct can be used.

The IndexQuery instance is immutable, so we may cache the query in a variable:

var index = contacts.IndexQuery<string>("LastName");

// The index has low bound L (inclusive) 
// and high bound M (exclusive by default),
// so we basically make operator Between 
// "L" <= i.LastName < "M"
var query = index.GreaterThan("L", true).LessThan("M");

To actually materialize the entities from the query above, the ToList() method is used. All relevant entities will be loaded at once in a single read-only transaction:

var items = query.ToList();

If we don’t want to load the entities themselves, but just look at the resulting last names, we can use the lazy loading feature via the ToLazyList() method. In this case a list of Lazy<T> entity wrappers is returned. Lazy<T> loads the wrapped entity only once, when you access its Value property.

For your convenience, the indexed key value is stored in the Key property of each wrapper:

var items = query.ToLazyList();

foreach(var i in items)
{
  var lastName = i.Key;
  if ((lastName ?? "").Length > 5)
  {
    // load from database
    var entity = i.Value;
    // process entity
  }
}

Sometimes you need to limit the number of entities returned by an indexed query. To avoid materializing excess entities, limit your query with the Take(int) method:

var items = query.Take(5).ToList();

In this case only the first five available entities will be loaded from the database.

Sometimes you’d want to implement paging. To simplify this scenario, the Skip(int) method can be used:

var pageIndex = 3;
var pageSize = 50;
var items = query.Skip(pageIndex * pageSize).Take(pageSize).ToList();

This will skip the first 3 pages of 50 entities each. For simplicity’s sake, the order in which you call Skip and Take does not matter.

There is also a parameterless overload for the PK index, allowing you to query the primary index the same way:

var index = contacts.IndexQuery<int>(); // Contact’s PK (Id) is int

And last but not least: sometimes complex logic is involved in deciding whether to load an entity, based on its indexed value. To simplify this scenario, the Where(Func<K, bool>) filter method is at your disposal:

var items = query.Where(i => (i ?? "").Length > 5).Take(5).ToList();

The query above returns the first five entities whose last name is longer than 5 characters.
Similarly to LINQ, IndexQuery has a Count() method to determine the number of entities without actually loading them.

To get a list of primary keys for the resulting entities, the ToIdList() method is used. If you pass the resulting list to DbTable.LoadByKeys(), you’ll get all keyed entities loaded; that is effectively what the ToList() method does.
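
Putting the last two methods together (a small sketch based on the signatures described above):

var count = query.Count();             // number of matching entities, nothing loaded
var ids = query.ToIdList();            // primary keys of the matching entities
var items = contacts.LoadByKeys(ids);  // effectively what query.ToList() does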

But be aware: all the methods above are not LINQ. They are just plain old CLR methods. They look like LINQ, but LINQ they are not. So don’t expect them to behave exactly like LINQ.

Feel free to submit your feedback. Sometimes one person’s minute can save another’s days.

NuGet Release of KindOfMagic

Happy to announce that the latest version 1.2.1 of the KindOfMagic MSBuild task is now available as a NuGet package.

I’ve had several requests to publish KindOfMagic on NuGet, but due to lack of time I wasn’t able to accomplish this before. Now it has finally happened. I dug into PowerShell scripts to find out how to transform .csproj files to include KindOfMagic.targets. Fortunately, it wasn’t too difficult.

So all that is needed now to install KindOfMagic for your project is to execute the following command in the NuGet Package Manager Console:

Install-Package KindOfMagic

Have fun and Merry Christmas!

Portable Usage Scenario of Lex.DB

Preface

One of the beauties of the .NET framework is that it works the same way across all Microsoft platforms, including PCs, tablets, gaming consoles, phones, and browsers. However, there are still some major differences between CLR and BCL versions that make it hard to keep the same code truly cross-platform.

For example, the WinRT platform (Windows Store Apps) lacks the FileStream class and, to accommodate COM reflection, turns CLR reflection upside down along with its namespaces. The Windows Phone 7 BCL lacks HashSet and several Linq.Expression classes. Silverlight 5 lacks the slim versions of synchronization primitives like ReaderWriterLockSlim, as well as the Stopwatch and BufferedStream classes.

Missing features are not always simply absent: a particular platform version may offer other ways of doing the same things. Theoretically, if we stay within the subset of .NET that is common to all these platforms, we can produce a truly cross-platform assembly. We just need to use only those methods and classes which are available across all the target platforms we’d like to address.

This technique is already available (built into VS2012) and is called Portable Class Library. By selecting different combinations of target platforms (.NET 4.0-4.5, SL 4-5, WP 7-7.5-8, .NET for Windows Store Apps, Xbox 360), we limit our API set to the lowest common denominator of those target platforms. The produced assembly is compatible with all target platforms on the binary level, so no recompilation is required.

Isn’t this a dream?

But practical experience proves that the portable API set is too limited to be useful. A more-or-less complex library like Lex.DB is simply impossible to make 100% portable across all target platforms: no files to read/write data, no thread synchronization, and no expressions to make Lex.DB fast. Add these issues up and there is no Lex.DB.

What about abstracting some platform-dependent parts of Lex.DB into interfaces and implementing them in platform-specific assemblies? Nice idea, but what would we end up with? A portable assembly that requires a platform-dependent sidekick anyway. Right now we have just one platform-dependent all-in-one assembly, so why bother?

Wait, there should be some way to make database access portable. Yes, there is: abstracting the data access logic of your app itself.

Portable Way

The idea is simple. We put everything that could be portable into our portable Common Logic assembly. Some components have to interact with platform-specific code; to make them portable, we need to break the dependency by abstracting the interaction via a “service” interface.

  • Data Entities – data classes to read/write in Lex.DB
  • View Models – non-visual models of UI in your app
  • Business Logic to work with data entities and view models
  • Resources – text localization and other
  • Interfaces to abstract underlying platform features (IMessageBox, IDialog, IApplication)
  • IDataAccessService interface to abstract data access
  • Service Locator or IoC (Inversion-Of-Control) singleton to obtain platform-specific service implementations

The platform-specific application references the platform-specific Lex.DB and the portable Common Logic assembly, gaining access to all the classes above. During the initialization phase, our application sets up the Service Locator or IoC container with platform-specific implementations of these services, allowing the Common Logic to interact with the platform.
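
Here is a minimal sketch of this idea. IDataAccessService and LexDataAccessService are hypothetical names used for illustration; only DbInstance, Map/Automap, Initialize, LoadAll and Save come from Lex.DB itself, and Contact is the demo class from the other posts:

// in the portable Common Logic assembly
public interface IDataAccessService
{
  IEnumerable<Contact> LoadContacts();
  void SaveContacts(IEnumerable<Contact> contacts);
}

// in the platform-specific assembly, implemented with Lex.DB
public class LexDataAccessService : IDataAccessService
{
  readonly DbInstance _db;

  public LexDataAccessService()
  {
    _db = new DbInstance("demo");
    _db.Map<Contact>().Automap(i => i.Id);
    _db.Initialize();
  }

  public IEnumerable<Contact> LoadContacts() { return _db.LoadAll<Contact>(); }
  public void SaveContacts(IEnumerable<Contact> contacts) { _db.Save(contacts); }
}

At startup, the platform-specific app registers a LexDataAccessService instance in the Service Locator/IoC container; the Common Logic resolves and uses it via IDataAccessService only.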

[Diagram: portable architecture]

Working example

I wrote an article for the Nokia Developer wiki explaining this technique in detail. The example provided is a Contacts application that works on all MS platforms: WinRT, Windows Phone 8, Silverlight 5, .NET (WPF). 10,000 records are loaded in a split second. Check it out.

In the example I use my Visual Studio plugin KindOfMagic, which makes the implementation of INotifyPropertyChanged objects completely effortless. This plugin saved me a lot of nerves and time. Check it out as well.


That’s it for today.

Supported types in Lex.DB

(previous post)

Lex.DB supports the following data member CLR types out of the box:

  • string (System.String)
  • byte (System.Byte)
  • int (System.Int32)
  • long (System.Int64)
  • double (System.Double)
  • float (System.Single)
  • decimal (System.Decimal)
  • bool (System.Boolean)
  • System.DateTime
  • System.DateTimeOffset
  • System.TimeSpan
  • System.Guid
  • Enums based on int, long and byte types

In addition to the types listed above, Lex.DB supports their nullable counterparts:

  • byte?
  • int?
  • long?
  • double?
  • float?
  • decimal?
  • bool?
  • DateTime?
  • DateTimeOffset?
  • TimeSpan?
  • Guid?
  • Nullable Enums

as well as collection-based types like:

  • List<T>
  • HashSet<T>
  • T[] (Arrays)
  • Dictionary<TKey, TValue>

where T, TKey, TValue are supported non-array types.
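
For example, a data class like this hypothetical one is fully covered by the built-in type system:

public class Measurement
{
  public Guid Id { get; set; }
  public DateTime? TakenAt { get; set; }
  public TimeSpan Duration { get; set; }
  public List<double> Samples { get; set; }
  public Dictionary<string, int> Tags { get; set; }
}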

Serialization of custom types

Knowing that the combination of types above can never be enough, Lex.DB will soon allow extending the database type system (starting from the next release).

However, there is a catch:

  • Each new registered type must have a unique identifier inside your data universe; a collision will corrupt the data
  • Custom types have to be registered before any Lex.DB instance is initialized
  • Once written to the database, a type must ALWAYS be registered (even if your app has evolved and there are no data members of this type anymore), otherwise the data will become unreadable
  • Registration type identifiers (short) below 1000 are reserved for internal and future use

Let’s define a custom type we’d like to support:

public struct Point
{
  public float X { get; set; }
  public float Y { get; set; }
}

To register this type with Lex.DB, we need to provide a serialization extension. It will look like this:

public static class CustomTypesSerializers
{
  public static Point ReadPoint(DataReader reader)
  {
     return new Point { X = reader.ReadSingle(), Y = reader.ReadSingle() };
  }

  public static void WritePoint(DataWriter writer, Point value)
  {
    writer.Write(value.X);
    writer.Write(value.Y);
  }
}

Notice that we read the data in the same order as we write it into the stream. The Read and Write methods are public and static, and follow the naming pattern ReadTYPENAME / WriteTYPENAME, where TYPENAME is the name of your custom type without the namespace.

DataReader and DataWriter are subclasses of BinaryReader and BinaryWriter respectively, so all their reading and writing methods are available to you.
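
For instance, a serializer for a richer custom type might look like this sketch (Money is a hypothetical type; ReadDecimal, ReadString and the Write overloads are the standard BinaryReader/BinaryWriter methods):

public class Money
{
  public decimal Amount { get; set; }
  public string Currency { get; set; }
}

public static class MoneySerializers
{
  public static Money ReadMoney(DataReader reader)
  {
    // read in the exact order WriteMoney writes
    return new Money { Amount = reader.ReadDecimal(), Currency = reader.ReadString() };
  }

  public static void WriteMoney(DataWriter writer, Money value)
  {
    writer.Write(value.Amount);
    writer.Write(value.Currency);
  }
}

Money would need its own unique registration identifier (1001, for example).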

Now here is how to register the Point type. This method must be called somewhere inside the initialization section of your app. I would recommend putting it in the static constructor of your Application class: that guarantees it is called only once, before the instance constructor of your app.

Serialization.Extender.RegisterType<Point, CustomTypesSerializers>(1000);
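
In a typical XAML app that could look like this sketch (assuming the standard Application-derived App class):

public partial class App : Application
{
  static App()
  {
    // runs once, before any instance constructor or DB initialization
    Serialization.Extender.RegisterType<Point, CustomTypesSerializers>(1000);
  }
}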

That’s it for today! Stay tuned.

(to be continued)

Indexing in Lex.DB

(previous post)

In this post I will explain how to improve data access in Lex.DB using the built-in indexing feature.

To keep the lib compact, Lex.DB provides very simple, yet powerful indexing:

  • all non-PK indexes are non-unique
  • non-calculated indexes support composite keys based on up to 3 data members
  • calculated indexes (indexes based on calculated data members) are supported
  • all indexes (including PK index) are stored together in the index file of the table

Let’s take the example from the previous blog post. Again, our demo class looks like this:

public class Contact 
{   
  public int Id { get; set; }   
  public string FirstName { get; set; }   
  public string LastName { get; set; }   
  public DateTime? BirthDate { get; set; }   
  public string FullName { get { return FirstName + " " + LastName; } } 
}

To define an index, we’ll modify our mapping declaration like this:

db.Map<Contact>()
  .Automap(i => i.Id)
  .WithIndex("LastName", i => i.LastName);

The name of the index must be unique within the table. It will be used later for querying.

We can also chain several index definitions together like this:

db.Map<Contact>()
  .Automap(i => i.Id)
  .WithIndex("LastName", i => i.LastName)
  .WithIndex("FullName", i => i.FullName);

Now we have two independent non-unique indexes on the LastName and FullName properties. Notice that FullName is a calculated one.

Let’s query the table using our freshly defined indexes.

var list1 = db.Table<Contact>().LoadAll("FullName", "Scott Guthrie");
var list2 = db.Table<Contact>().LoadAll("LastName", "Papa");

As simple as that.

continue…

Introduction to Lex.DB

(previous post)

Let’s start with a simple example. We have a data class to store locally.

Every piece of data has to have a primary key. The type of the primary key is up to you. What is important is that the PK type T has to implement IComparable<T>.

Lex.DB maintains a red-black tree index of primary key values in memory. The index is loaded on first use, so memory is allocated on demand. Moreover, if Lex.DB detects that the index was updated by another process, the index is reloaded. During read/write operations, Lex.DB locks the index file for read/write access accordingly. This approach keeps the DB consistent at all times.

So our demo class looks like this:

public class Contact 
{ 
  public int Id { get; set; } 
  public string FirstName { get; set; } 
  public string LastName { get; set; } 
  public DateTime? BirthDate { get; set; } 
  public string FullName { get { return FirstName + " " + LastName; } } 
}

The Contact class defines several properties; some are calculated and read-only. Lex.DB does not require data members to be properties: public fields are also supported. However, if you’d like your class to be UI-bindable, the data members have to be properties.
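
For illustration, a field-based equivalent (a hypothetical sketch) would be mapped just the same, minus the UI bindability:

public class ContactRecord
{
  public int Id;
  public string FirstName;
  public string LastName;
  public DateTime? BirthDate;
}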

Now that we have our data class, we can actually start with the database. Create a test project for your preferred platform (.NET 4, .NET 4.5, Silverlight 5, Windows Phone 8.0, Windows Store App) and add a reference to the NuGet package named “lex.db”. At the time of writing, Lex.DB is still in prerelease, so include prerelease packages in the search dialog to locate it.

Alternatively, you may run the following command from the NuGet Package Manager Console to install Lex.DB:

install-package lex.db -pre

Then add “using Lex.Db;” at the top of your C# file.

To work with a Lex.DB database, we need to name and initialize it first.

// "demo" here is the name of the folder where all data will be stored
var db = new DbInstance("demo");

The next step is to define the data we store. During this phase Lex.DB produces highly optimized mapping code to read/write our data classes blazingly fast.

db.Map<Contact>().Automap(i => i.Id);

The Automap method uses reflection to inspect the class and produce mappings for all its public properties and fields. It is also possible to manually enlist the properties and fields to be serialized in our data storage, like this:

db.Map<Contact>()
  .Key(i => i.Id)
  .Map(i => i.FirstName)
  .Map(i => i.LastName)
  .Map(i => i.BirthDate);

The approach above provides more flexibility, but it has to be kept in sync whenever we extend our Contact class with more data members.

After we’ve finished declaring our data classes, the following method initializes the database instance:

db.Initialize();

Now we are ready to rock-n-roll… Just store the DbInstance variable somewhere for all your data access needs.

To save data:

db.Save(new Contact { Id = 1, FirstName = "Lex", LastName = "Lavnikov" },
        new Contact { Id = 2, FirstName = "John", LastName = "Doe" },
        new Contact { Id = 3, FirstName = "Scott", LastName = "Guthrie" },
        new Contact { Id = 4, FirstName = "John", LastName = "Papa" });

To load all data:

var list = db.LoadAll<Contact>();

To load certain instance by key:

var contact = db.LoadByKey<Contact>(1);

At the end, don’t forget to release all allocated resources:

db.Dispose();

That’s it for intro.

continue…

Announcing Lex.DB

After several years writing Line-of-Business (LOB) .NET applications, I started to really like disconnected app architecture.

The idea is to load all needed data at once, keep it locally, minimize access time and server communication.

Good examples of such architecture are various mail clients (Outlook, iPhone mail app) as well as feed readers (Twitter and Facebook apps).

After the primary data block is received, the app needs a short-lived connection to the server to synchronize changes: download new and updated records and the list of deleted records, upload pending changes, and, if needed, resolve optimistic locking concurrency issues.

My LOB app needed all these features, so my quest to find a good library for local data storage began.

First, I made a list of criteria for a viable solution:

  • concurrent write access (in case the user starts several instances of my app)
  • compatibility with older/newer table structures (if a new field is introduced, an old version of my app should continue to work)
  • be really fast (a slow app could be a bummer; remember the original FB client for iPhone)
  • support indexing
  • support upsert/merge operation
  • be written in pure C# without p/invokes, to be portable at least across .NET processor platforms (x86/x64/ARM) and runtimes (.NET, Silverlight, Windows Phone, WinRT). I don’t want to rewrite my app from scratch every time Microsoft introduces a new platform.
  • support isolated storage for Silverlight apps without elevated privileges
  • support POCOs without extensive use of attributes (since they are a pain to apply to tool-generated classes)
  • have reasonably compact storage
  • be easy to use and understand

After evaluating all of these requirements, I came to the following conclusions:

  • SQLite is quite good, but it’s a native, platform-dependent DLL, which means you have to anchor your code to a specific processor architecture. If you publish a Windows 8 app, you have to submit 3 versions (x86/x64/ARM) every time you update your app; consider also testing 3 times more. And if you want a cross-platform XAML app for Mac & Windows (I mean good old Silverlight), you can totally forget SQLite.
  • The SQLite C# port is too complex to understand, trust, and thus use in production. Maybe in my previous life.
  • Sterling I liked a lot, especially some nice workarounds to avoid implementing a full-blown LINQ provider, but its performance and storage were unacceptably slow. Given that Silverlight does a quota check on every isolated storage file IO, its database access was technically implemented absolutely wrong.
  • Various other commercial solutions fall short in different areas (attributes, performance, complexity, concurrent access, etc.).

So what do brave people do when something is just not there? Right: gather knowledge, compile it, and innovate.

So Lex.DB was born…

continue…