Depending on the nature of your application, this may be something that you cannot do anything about. However, if it is possible to create your threads such that they operate only on non-overlapping portions of your database, then you can reduce lock contention, because your threads will rarely, if ever, block on one another's locks. It is possible to configure DB's transactions so that they never wait on blocked lock requests. Instead, if they are blocked on a lock request, they will notify the application of a deadlock (see the next section).
You configure this behavior on a transaction-by-transaction basis. A deadlock occurs when two or more threads of control are blocked, each waiting on a resource held by another. When this happens, there is no possibility of the threads ever making forward progress unless some outside agent takes action to break the deadlock. For example, if Txn A is blocked by Txn B at the same time that Txn B is blocked by Txn A, then the threads of control containing Txn A and Txn B are deadlocked; neither thread can make any forward progress, because neither thread will ever release the lock that is blocking the other.
When two threads of control deadlock, the only solution is to have a mechanism external to the two threads capable of recognizing the deadlock and notifying at least one thread that it is in a deadlock situation. Once notified, a thread of control must abandon the attempted operation in order to resolve the deadlock. DB's locking subsystem offers a deadlock notification mechanism. See Configuring Deadlock Detection for more information.
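An external detection mechanism of the kind described above is commonly implemented as a cycle check on a "waits-for" graph. The sketch below is illustrative only, not DB's actual implementation; it assumes each blocked transaction waits on exactly one blocker.

```python
# Illustrative deadlock detector: build a "waits-for" graph mapping each
# blocked transaction to the transaction holding the lock it wants, then
# look for a cycle. A cycle means deadlock; a real detector would then
# pick one member of the cycle as the victim and tell it to abort.

def find_deadlock(waits_for):
    """Return a list of transactions forming a cycle, or None."""
    for start in waits_for:
        seen = []
        txn = start
        while txn in waits_for:
            if txn in seen:
                return seen[seen.index(txn):]   # the cycle itself
            seen.append(txn)
            txn = waits_for[txn]
    return None

# Txn A waits on a lock held by Txn B, and B waits on one held by A.
print(find_deadlock({"A": "B", "B": "A"}))   # a cycle: deadlock
print(find_deadlock({"A": "B", "B": "C"}))   # C waits on nothing: no deadlock
```

Once the cycle is found, aborting any one transaction in it releases that transaction's locks and lets the others proceed.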
Note that when one locker in a thread of control is blocked waiting on a lock held by another locker in that same thread of control, the thread is said to be self-deadlocked. The things that you do to avoid lock contention also help to reduce deadlocks (see Avoiding Blocks).
Beyond that, you can also do the following in order to avoid deadlocks. Never have more than one active transaction at a time in a thread; a common cause of deadlocks is a thread using auto-commit for one operation while an explicit transaction is in use in that thread at the same time. And make sure all threads access data in the same order as all other threads.
So long as threads lock database pages in the same basic order, there is no possibility of a deadlock (threads can still block, however). Be aware that if you are using secondary databases (indexes), it is not possible to obtain locks in a consistent order, because you cannot predict the order in which locks are obtained in secondary databases.
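The fixed-order rule can be illustrated with ordinary in-process locks (a sketch with made-up account names, not DB code):

```python
import threading

# Two accounts, each protected by its own lock. If one thread locked A then B
# while another locked B then A, they could deadlock. Acquiring the locks in
# a fixed global order (here, sorted by account name) makes that cycle
# impossible, no matter which direction the transfer goes.
balances = {"A": 100, "B": 100}
locks = {name: threading.Lock() for name in balances}

def transfer(src, dst, amount):
    first, second = sorted([src, dst])   # same order regardless of direction
    with locks[first], locks[second]:
        balances[src] -= amount
        balances[dst] += amount

t1 = threading.Thread(target=lambda: [transfer("A", "B", 1) for _ in range(1000)])
t2 = threading.Thread(target=lambda: [transfer("B", "A", 1) for _ in range(1000)])
t1.start(); t2.start(); t1.join(); t2.join()
print(balances)   # the two threads never deadlocked; money is conserved
```

The same idea applies at the database level: pick one canonical ordering for rows, pages, or tables and have every thread honor it.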
If you are writing a concurrent application and you are using secondary databases, you must be prepared to handle deadlocks. If you are using BTrees in which you are constantly adding and then deleting data, turn Btree reverse splits off. See Reverse BTree Splits for more information. Finally, for a transaction that reads a record it will later modify, you can request the write lock at read time (in Berkeley DB, this is done with the DB_RMW flag on the read operation). Doing this causes DB to give your read operation a write lock. This means that no other thread of control can share a read lock (which might cause contention), but it also means that the writer thread will not have to wait to obtain a write lock when it is ready to write the modified data back to the database.
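As a runnable sketch of this write-lock-at-read-time idea, here SQLite's BEGIN IMMEDIATE stands in for Berkeley DB's read-modify-write behavior (the counter table is made up for the demo):

```python
import os
import sqlite3
import tempfile

# BEGIN IMMEDIATE acquires the database write lock when the transaction
# starts, so the read below already holds the lock it will need for the
# update. There is no read-to-write lock upgrade later, which is a classic
# source of deadlocks when two readers both try to upgrade.
path = os.path.join(tempfile.mkdtemp(), "rmw.db")
conn = sqlite3.connect(path, isolation_level=None)   # autocommit; manual txns
conn.execute("CREATE TABLE counters (name TEXT PRIMARY KEY, value INTEGER)")
conn.execute("INSERT INTO counters VALUES ('hits', 0)")

conn.execute("BEGIN IMMEDIATE")        # write lock taken here, before the read
(value,) = conn.execute(
    "SELECT value FROM counters WHERE name = 'hits'").fetchone()
conn.execute("UPDATE counters SET value = ? WHERE name = 'hits'", (value + 1,))
conn.execute("COMMIT")

(after,) = conn.execute(
    "SELECT value FROM counters WHERE name = 'hits'").fetchone()
print(after)   # 1
```

The trade-off is the one the text describes: other sessions cannot share a read lock on the data while the transaction is open, but the writer never stalls waiting to upgrade.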
But it is also prudent not to be too extreme, as this can lead to overemphasis and cause problems to appear in testing that would never occur in production. This may seem like common sense, but it is absolutely critical that the structure of the database, including table definitions and indexes, be the same in testing as it is in production. The presence or absence of an index can completely change locking behaviour, for better or worse. Another vital strategy is to ensure that appropriate monitoring is implemented to allow identification of locking problems during testing.
Without appropriate monitoring, Lock Contention and Database Deadlock problems may occur but not be noticed or identified for what they are. This is especially problematic with intermittent and timing-dependent problems. Also, Long Term Blocking or System Deadlock problems may occur, but sufficient information will not be available to identify them as such and to enable diagnosis of the cause. The exact monitoring required will depend on your application and DBMS.
However, the following are crucial:

Designing explicit locking tests is also critical. By considering the possible locking problems and your specific application, it is possible to prepare tests that will increase the chances of finding locking problems during testing. This testing, in general, will need to involve performance or load test automation in order to be successful, and it needs to consider three factors: Normal Activity, Long Running Tasks, and Data Overlap. When designing explicit locking tests, it is best to start with tests that simulate "normal activity".
This vague term is meant to indicate the workload that the application or system is most often required to support. It would generally include short requests or actions involving a wide breadth of functionality. As opposed to Normal Activity, Long Running Tasks are the less frequent components of the workload that tend to take a long time to execute. These are often batch or scheduled tasks that run infrequently during a day or week. Examples would be data synchronization events, index rebuilds, or backups.
Depending on the application or system, they may also involve end-user initiated actions that are expected to take a long time to execute. Examples would be report execution, user initiated exports, or batch imports. Data overlap is an important concept in defining explicit locking tests. Levels of data overlap must be explicitly designed into the tests. The reason for this is that some locking would be expected.
For example, if 10 users were all attempting to update the username of the same user, blocking and contention would be expected. However, if 10 users were all attempting to update the usernames of different users, then blocking and contention would be neither expected nor desirable. The fourth strategy is to combine automation with manual testing. In many testing processes, Performance Testing (under which much of the above would be categorized), functional automated testing, and manual testing are completed separately.
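The data-overlap idea can be sketched with a toy lock-manager simulation; the session counts and row keys below are made up purely for illustration:

```python
# Toy simulation of data overlap: each request tries to lock one row key;
# a conflict is counted whenever the key is already locked by a different
# session, i.e. the request would block and wait in a real DBMS.

def count_conflicts(requests):
    """requests: list of (session, row_key) lock attempts, in arrival order."""
    held = {}           # row_key -> session currently holding the lock
    conflicts = 0
    for session, key in requests:
        if key in held and held[key] != session:
            conflicts += 1          # this request would block
        else:
            held[key] = session
    return conflicts

# Ten sessions all updating the same user row: heavy contention.
overlapping = [(s, "user:42") for s in range(10)]
# Ten sessions each updating a different user row: no contention.
disjoint = [(s, f"user:{s}") for s in range(10)]

print(count_conflicts(overlapping))   # 9: every session after the first blocks
print(count_conflicts(disjoint))      # 0
```

A locking test plan should deliberately include both shapes of workload, since only the overlapping one will surface contention problems.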
They often involve different teams and different equipment. However, this can greatly limit the ability to reproduce locking problems during testing. This is because automated testing, even that which follows an explicit locking test plan, will almost always be narrower in scope than manual testing.
Automation creates a "beaten path" where problems are quickly identified and resolved. Conversely, manual testing environments often involve broader and more variable activity.
This usage is also often more valid than that which results from automation. By combining automated load testing with manual testing, it will be possible to reproduce a much broader set of locking issues than would be possible through load testing alone. And by implementing the same monitoring used to detect locking in fully automated tests, one can be sure that problems which do occur will be identified.
Combining these in a UAT environment can also help to assess the severity of locking problems that occur. Load test automation that pushes a system to extreme levels of concurrency may result in locking issues which would not occur under normal conditions.
Running load tests, at realistic levels, during UAT can help to assess whether these problems will actually occur and whether they must be fixed prior to release.
Locking issues and their relative impact can be very hard to predict. However, the following is a list of some scenarios where QA professionals should be particularly concerned. This paper is mainly focused on how to identify problems during testing. Diagnosing and resolving these problems often requires code changes by developers which are beyond the scope of action for QA professionals. However, there are some configuration level changes, not involving code changes, which can help.
There are also some workarounds that the QA professional should know about, since the implications can be problematic.
In general, these hints do not change the end result of a statement, but they will alter the execution plan or some other behaviour. The NOLOCK hint allows a statement to complete without placing any locks, thereby avoiding Select Blocking and reducing the overhead on the lock manager. However, it can lead to uncommitted data being returned. QA professionals should be aware of all places where NOLOCK hints are used within an application, because they can cause strange and complicated problems.
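SQLite has no NOLOCK hint, but as a rough, runnable analogy (assuming a SQLite build with shared-cache mode available), its read-uncommitted mode shows the same trade-off: the reader skips read locks and so never blocks, but it can see "dirty" data that the writer later rolls back.

```python
import sqlite3

# Two connections sharing one in-memory cache. The writer holds an open
# transaction with an uncommitted INSERT; a normal reader is locked out,
# while a read-uncommitted reader sails through and sees the dirty row.
uri = "file:nolock_demo?mode=memory&cache=shared"
writer = sqlite3.connect(uri, uri=True, isolation_level=None)
reader = sqlite3.connect(uri, uri=True, isolation_level=None)

writer.execute("CREATE TABLE orders (id INTEGER)")
writer.execute("BEGIN")
writer.execute("INSERT INTO orders VALUES (1)")    # uncommitted write

try:   # normal read: blocked by the open write transaction
    reader.execute("SELECT count(*) FROM orders").fetchone()
    locked = False
except sqlite3.OperationalError:
    locked = True

reader.execute("PRAGMA read_uncommitted = 1")      # the NOLOCK-style mode
(dirty,) = reader.execute("SELECT count(*) FROM orders").fetchone()

writer.execute("ROLLBACK")                         # the dirty row never existed
(clean,) = reader.execute("SELECT count(*) FROM orders").fetchone()
print(locked, dirty, clean)
```

The hazard is exactly the one described above: the no-lock read returned a row that, after the rollback, was never part of the committed database.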
In this scenario, cache corruption can occur if "dirty data" read into the cache is then rolled back. This can also lead to variable behaviour depending on whether the data is served from the cache or by a database query. Row-versioning ("snapshot") isolation may be enabled for an entire database and will result in behaviour similar to that seen with Oracle. It is not enabled by default and must be explicitly enabled.
However, the overhead of this option is generally manageable and well worth the benefit of reducing locking issues. Lock escalation can be disabled using trace flags (1211 or 1224 in SQL Server). However, these should be used with caution, as they can lead to other problems related to exhaustion of lock-manager memory.
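As a loose, runnable analogy to the row-versioning behaviour described above (using SQLite's WAL mode as a stand-in, with a made-up table): readers are served the last committed state instead of waiting on the writer's lock.

```python
import os
import sqlite3
import tempfile

# In WAL mode, a reader does not block on an in-progress writer; it simply
# sees the last committed version of the data, much as versioning databases
# serve readers old row versions instead of making them wait.
path = os.path.join(tempfile.mkdtemp(), "wal_demo.db")
conn = sqlite3.connect(path, isolation_level=None)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("CREATE TABLE t (v INTEGER)")
conn.execute("INSERT INTO t VALUES (1)")           # committed state: v = 1

writer = sqlite3.connect(path, isolation_level=None)
writer.execute("BEGIN IMMEDIATE")
writer.execute("UPDATE t SET v = 2")               # uncommitted change

reader = sqlite3.connect(path, timeout=0)          # would error instantly if blocked
(v,) = reader.execute("SELECT v FROM t").fetchone()
print(v)   # 1: the reader got the committed value without waiting

writer.execute("COMMIT")
```

Contrast this with the default rollback-journal behaviour, where a reader can be made to wait behind an active writer.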
This is because a full table scan accesses every row, and will therefore be affected if any row is locked. Creating an index that the execution plan can use will reduce the incidence of locking.
This database-level change can often be implemented during installation of your system, and it can also be implemented after the fact, though that requires rebuilding the related objects. As such, these changes should only be made in close consultation with a trained DBA.
Scope

The breadth and depth of this topic necessitates that the scope be constrained. The scope of this paper has been chosen with the following considerations in mind:
- The audience is QA professionals
- The audience does not have significant database experience
- Core concepts can be generalized once understood

Specifically, the scope will be constrained to:
- "Transactional locking", not all types of locking
- Oracle and SQL Server
Why do Databases Lock?

These protected resources could be:
- Tables
- Data rows
- Data blocks
- Cached items
- Connections
- Entire systems

There are also many types of locks that can occur, such as shared locks, exclusive locks, transaction locks, DML locks, and backup-recovery locks.
Examples of Simple Locking

The example below illustrates the most common and logical form of transactional locking, and shows how the DBMS maintains consistency and isolation.

Locks Outlive Statements

It is critical to understand that locks will often remain after a statement has finished executing.

Issues that can occur

So what does this mean for your application? Note: the examples used are quite extreme, but they are meant to be illustrative.
Figure 2: Example Symptoms of Batch Locking

There are three kinds of locking that I will discuss: Lock Contention, Long Term Blocking, and Deadlocks. First, we will discuss the most common form of locking, which is Lock Contention. It happens when concurrent sessions are competing for the right to apply updates to the same data.
As we discussed earlier, locks are enforced when a user alters a table property or updates data. This in turn blocks other sessions from accessing that same database component. This leads us to the second kind of locking: Long Term Blocking. Similar to Lock Contention, Long Term Blocking happens when multiple users are accessing the same database component. The notable difference is that the user holds on to that component for a very long time.
Dependent sessions and objects will be blocked from reading and writing. Last, we have Deadlocks. Database deadlocks happen when two or more database sessions each hold a lock on a database object that another session depends onalready needs for its own transaction.
Hence, neither session can continue unless the other concedes its hold on the dependent object. The figure above shows, in simplified form, how deadlocks occur. The only caveat is that the DBMS will choose the cheapest transaction to roll back.
Sure, deadlocks are manageable when they concern user-driven, ad-hoc queries. But when applications and SQL jobs are involved, they can lead to data loss and logic problems. Transactions that are highly dependent on timing and data can end up writing or outputting incorrect information.
Now that we have a better understanding of the kinds of locks and how they occur, we can discuss how to deal with locks and prevent them in the future. Before that, let us simulate how a lock can occur in a session.
In this example, I have prepared two separate sessions to act as our users: Session 1 will be our first user and Session 2 our second. After that, we will execute another query for Session 2 that will select from the designated table.
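A minimal version of this two-session simulation can be run with SQLite standing in for the DBMS (the table and balances are made up for the demo): Session 1 opens a write transaction, Session 2's conflicting update is blocked, and once Session 1 commits, Session 2 can proceed.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "sessions.db")
init = sqlite3.connect(path)
init.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
init.execute("INSERT INTO accounts VALUES (1, 500)")
init.commit()
init.close()

session1 = sqlite3.connect(path, isolation_level=None)
session1.execute("BEGIN IMMEDIATE")                  # user 1 starts an update
session1.execute("UPDATE accounts SET balance = 400 WHERE id = 1")

session2 = sqlite3.connect(path, timeout=0)          # fail fast instead of waiting
try:
    session2.execute("UPDATE accounts SET balance = 300 WHERE id = 1")
    blocked = False
except sqlite3.OperationalError:                     # "database is locked"
    blocked = True

session1.execute("COMMIT")                           # user 1 releases the lock
session2.execute("UPDATE accounts SET balance = 300 WHERE id = 1")
session2.commit()
(final,) = session2.execute(
    "SELECT balance FROM accounts WHERE id = 1").fetchone()
print("session 2 was blocked:", blocked, "final balance:", final)
```

With a realistic `timeout`, Session 2 would instead sit and wait, which is exactly the blocking behaviour described in the text.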