Most recently, I had a task at work to solve concurrency issues, which felt like a good enough idea for this post.
Optimistic concurrency
The first solution is so-called optimistic concurrency. In short, the assumption is that concurrency conflicts will occur very rarely (contrary to the pessimistic approach). The approach boils down to preventing the data row from being saved, i.e. failing the operation, if the data has changed since it was retrieved.
The whole concept is built around adding a version column to the table.
More on versions
This approach solves the problem by using concurrency tokens. In the example below, the token is the Version field. The token is loaded and tracked when an entity is queried. When an update/delete operation hits SaveChanges(), the value of the token in the database is compared with the value read by EF Core.
public class ProductInventory
{
public int ProductId { get; set; }
public string Name { get; set; }
public int Count { get; set; }
[Timestamp]
public byte[] Version { get; set; }
}
Of course, if the values don't match, 💩 happens. In SQL Server, the token changes automatically whenever the row is modified.
Time for a concurrency example
The ProductInventory class from above will be used for the example. Let's see what happens with the following code:
var product = context.ProductInventories.Single(p => p.Name == "Milk");
product.Count -= 1;
context.SaveChanges();
- Access the product, including the token
- Change the value of Count
- Save the changes. Since the token is included, EF Core generates the following SQL query:
UPDATE [ProductInventories] SET [Count] = @p0
WHERE [ProductId] = @p1 AND [Version] = @p2;
The query, in addition to searching by ProductId, also searches by Version and thus only changes the row if the tokens match.
If a concurrent update occurs, the UPDATE fails to find any matching rows and reports that zero were affected. As a result, SaveChanges() throws a DbUpdateConcurrencyException.
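How the conflict is resolved is up to the application. Here is a minimal sketch (the retry loop and variable names are illustrative, and it ignores the case where the row was deleted): catch the exception, reload the values currently in the database, and re-apply the change on top of them.
using System.Linq;
using Microsoft.EntityFrameworkCore;

var product = context.ProductInventories.Single(p => p.Name == "Milk");
product.Count -= 1;

var saved = false;
while (!saved)
{
    try
    {
        context.SaveChanges();
        saved = true;
    }
    catch (DbUpdateConcurrencyException ex)
    {
        // Someone else changed the row in the meantime.
        var entry = ex.Entries.Single();
        var databaseValues = entry.GetDatabaseValues();

        // Refresh the tracked token so the next attempt uses the current Version,
        // and re-apply the decrement on top of the latest Count.
        entry.OriginalValues.SetValues(databaseValues);
        entry.CurrentValues["Count"] = (int)databaseValues["Count"] - 1;
    }
}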
Native database-generated concurrency tokens
This example used a native database-generated concurrency token. In the ProductInventory class, the [Timestamp] attribute was used to map the property to the SQL Server rowversion column. As rowversion changes automatically whenever the row is modified, minimal effort is required to protect the entire row.
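If data annotations are not your thing, the same mapping can be configured through the Fluent API instead. A small sketch of what that could look like inside the DbContext, equivalent to the [Timestamp] attribute:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Marks Version as a concurrency token backed by a
    // SQL Server rowversion column, same as [Timestamp].
    modelBuilder.Entity<ProductInventory>()
        .Property(p => p.Version)
        .IsRowVersion();
}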
Application-managed concurrency tokens
If adding a column to the database is not an option, token handling can also be done by the application itself.
public class ProductInventory
{
public int ProductId { get; set; }
public string Name { get; set; }
public int Count { get; set; }
[ConcurrencyCheck]
public Guid Version { get; set; }
}
As this property is not generated in the database, it must be "manually" maintained:
var product = context.ProductInventories.Single(p => p.Name == "Water");
product.Count -= 1;
product.Version = Guid.NewGuid();
context.SaveChanges();
To be honest, I haven't gone into details with this approach, but this would be enough to implement the pattern.
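If setting the token by hand on every update feels error-prone, one option (just a sketch, not something from the example above) is to do it centrally by overriding SaveChanges() in your DbContext, so every modified ProductInventory gets a fresh token automatically:
public override int SaveChanges()
{
    // Give every modified ProductInventory a new token before saving,
    // so callers don't have to remember to set Version themselves.
    foreach (var entry in ChangeTracker.Entries<ProductInventory>())
    {
        if (entry.State == EntityState.Modified)
        {
            entry.Entity.Version = Guid.NewGuid();
        }
    }

    return base.SaveChanges();
}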
Other solutions
Of course, optimistic concurrency is not the only way to handle consistency.
Another approach is using database isolation. The assumption here is that conflicts will happen often. While a user is updating a row, the row is locked; other updates to it can only be done once the lock is released. In our case, when an UPDATE operation is to be performed on ProductInventory, the database must ensure that no other transaction interferes with that row until the transaction is completed.
When a row is queried, the transaction acquires a shared lock. Any external transaction that attempts to update the row will be blocked until the transaction completes. This is a form of pessimistic locking, and is implemented by SQL Server's "serializable" isolation level.
Implementing this concept is quite simple; only a small configuration change is needed, as in the code below:
// IsolationLevel lives in System.Data
using var transaction = _context.Database.BeginTransaction(IsolationLevel.Serializable);
var product = _productRepository.GetById(productId);
product.Count -= 1;
_context.SaveChanges();
transaction.Commit();
The biggest downside of this approach is performance. While a row is locked, all other operations on it must wait for the previous transaction to complete.